v0.10.9.x Help test RC2 forking issues

Status
Not open for further replies.

chaeplin

Active Member
Core Developer
Mar 29, 2014
749
356
133
This is the nomp log, 6/4 ~ 6/5.
There were 104 nodes.

Code:
2014-06-04 15:17:52 ProcessBlock: ACCEPTED
~~~
2014-06-05 00:08:41 Shutdown : done

grep 'version message' nomp_debug.log | grep send |awk '{print $11}' | sed -e 's/peer=//' -e 's/:.*//' | sort -n | uniq | wc -l
104
The nomp Subver list is here: http://54.183.73.24:8080/subver.txt
(2 min update, 76 addnode)
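For reference, the shell pipeline above can be mirrored in Python (my own sketch - the `peer=IP:port` token position is an assumption about the debug.log line format):

```python
import re

def distinct_send_peers(lines):
    """Count distinct peer IPs on 'send ... version message' debug.log lines,
    mirroring the grep/awk/sed/sort/uniq pipeline above."""
    ips = set()
    for line in lines:
        if "version message" in line and "send" in line:
            match = re.search(r"peer=(\d+\.\d+\.\d+\.\d+):", line)
            if match:
                ips.add(match.group(1))   # keep the IP, drop the port
    return len(ips)
```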
 

flare

Administrator
Dash Core Team
Moderator
May 18, 2014
2,286
2,404
1,183
Germany
This is the nomp log, 6/4 ~ 6/5.
There were 104 nodes.

Code:
2014-06-04 15:17:52 ProcessBlock: ACCEPTED
~~~
2014-06-05 00:08:41 Shutdown : done

grep 'version message' nomp_debug.log | grep send |awk '{print $11}' | sed -e 's/peer=//' -e 's/:.*//' | sort -n | uniq | wc -l
104
The nomp Subver list is here: http://54.183.73.24:8080/subver.txt
(2 min update, 76 addnode)
This is the result of one of my masternodes, uptime 27h
Code:
~/.darkcoin$ grep 'version message' ~/.darkcoin/testnet3/debug.log | grep send | awk '{print $11}' | sed -e 's/peer=//' -e 's/:.*//' | sort -n | uniq  | wc -l
74
Code:
~/.darkcoin$ grep 'version message' ~/.darkcoin/testnet3/debug.log | grep receive | awk '{print$6}' | sort -V | uniq
/Satoshi:0.9.2.2/:
/Satoshi:0.9.3.3/:
/Satoshi:0.9.3.4/:
/Satoshi:0.9.4.2/:
/Satoshi:0.9.4.11/:
/Satoshi:0.9.5.8/:
/Satoshi:0.9.5.9/:
/Satoshi:0.9.5.11/:
/Satoshi:0.10.7.3/:
/Satoshi:0.10.8.2/:
/Satoshi:0.10.8.11/:
/Satoshi:0.10.9.4/:
/Satoshi:0.10.9.9/:
/Satoshi:0.10.9.10/:
/Satoshi:0.10.9/:
So my masternode has seen connection attempts from 74 distinct IPs with 15 different client version strings during the last 27 hours - I wonder who is running the rather ancient v0.9.2.2 :)
 

jimbit

Well-known Member
Foundation Member
May 23, 2014
229
103
203
Code:
~/.darkcoin$  grep 'version message' testnet3/debug.log | grep send | awk '{print $11}' | sed -e 's/peer=//' -e 's/:.*//' | sort -n | uniq  | wc -l
35
Code:
~/.darkcoin$ grep 'version message' ~/.darkcoin/testnet3/debug.log | grep receive | awk '{print$6}' | sort -V | uniq
/Satoshi:0.9.2.2/:
/Satoshi:0.9.4.2/:
/Satoshi:0.9.4.11/:
/Satoshi:0.9.5.3/:
/Satoshi:0.9.5.8/:
/Satoshi:0.9.5.9/:
/Satoshi:0.10.7.3/:
/Satoshi:0.10.8.2/:
/Satoshi:0.10.9.4/:
/Satoshi:0.10.9.6/:
/Satoshi:0.10.9.7/:
/Satoshi:0.10.9.8/:
/Satoshi:0.10.9.9/:
/Satoshi:0.10.9.10/:
 

flare

Administrator
Dash Core Team
Moderator
May 18, 2014
2,286
2,404
1,183
Germany
The masternode count looks bogus to me: from 18 to 71 in 3 minutes, whereas the overall connection count is more or less the same as before. We have seen this plateau behaviour several times now (unfortunately chaeplin reset the charts) - and I don't believe this is the usual P2P ramp-up behaviour.

How likely is it to have 4 times more masternodes than client connections in the network?
My math lectures were a while ago now, and I forgot how to calculate the average edge count per vertex in a P2P mesh whose vertices have the default of 8 outgoing and 8 incoming connections. Can anyone figure out how realistic the 4:1 ratio is?
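For what it's worth, the average-degree question has a short answer: if every one of the N nodes opens 8 outbound connections, the graph has 8N directed edges, so the average in-degree is also 8 and the average total degree per vertex is 16, independent of N. A tiny sketch (my own arithmetic, not from the client code):

```python
def average_degree(n_nodes, outbound_per_node=8):
    """Average total degree (in + out) when every node initiates
    `outbound_per_node` connections: every edge touches two vertices."""
    edges = n_nodes * outbound_per_node
    return 2 * edges / n_nodes  # simplifies to 2 * outbound_per_node

print(average_degree(104))  # 16.0 regardless of network size
```

Since the average degree is constant in network size, the 4:1 masternode-to-connection ratio does not look like an ordinary mesh-topology effect.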

My three nodes have reduced ban time and allow 256 incoming connections, so I guess they will harvest any connection they can get over time. And I have never seen more than 50 connections on testnet (currently 22, 22 and 45).

Maybe it's possible to spoof the masternode discovery messages and poison the list with non-existent nodes? @Evan: What takes precedence: the node-message broadcasts, or the local lastSeen pinging used to maintain the local masternode list?
I did some further investigations, and my claim stands: the masternode list is bogus.

I ran a script using tcpping/tcptraceroute against the list of active IPs to check whether port 19999 is open on those machines. The results are as follows:

Code:
pinging 23.20.78.184 -->seq 0: no response (timeout)
pinging 23.22.13.31 -->seq 0: no response (timeout)
pinging 23.23.55.5 -->seq 0: no response (timeout)
pinging 23.242.106.27 -->seq 0: tcp response from cpe-23-242-106-27.socal.res.rr.com (23.242.106.27) [open]  102.207 ms
pinging 37.187.47.129 -->seq 0: tcp response from 129.ip-37-187-47.eu (37.187.47.129) [open]  80.862 ms
pinging 37.187.216.11 -->seq 0: tcp response from 11.ip-37-187-216.eu (37.187.216.11) [closed]  86.208 ms
pinging 50.16.10.185 -->seq 0: no response (timeout)
pinging 50.16.13.19 -->seq 0: no response (timeout)
pinging 50.16.159.173 -->seq 0: no response (timeout)
pinging 50.17.9.225 -->seq 0: no response (timeout)
pinging 50.17.56.71 -->seq 0: no response (timeout)
pinging 50.17.76.216 -->seq 0: no response (timeout)
pinging 54.76.47.232 -->seq 0: no response (timeout)
pinging 54.80.118.25 -->seq 0: no response (timeout)
pinging 54.80.186.130 -->seq 0: no response (timeout)
pinging 54.80.217.163 -->seq 0: no response (timeout)
pinging 54.80.221.149 -->seq 0: no response (timeout)
pinging 54.80.248.174 -->seq 0: no response (timeout)
pinging 54.81.150.16 -->seq 0: no response (timeout)
pinging 54.81.244.239 -->seq 0: no response (timeout)
pinging 54.82.237.183 -->seq 0: no response (timeout)
pinging 54.83.151.118 -->seq 0: no response (timeout)
pinging 54.86.103.191 -->seq 0: tcp response from ec2-54-86-103-191.compute-1.amazonaws.com (54.86.103.191) [open]  16.433 ms
pinging 54.87.121.203 -->seq 0: no response (timeout)
pinging 54.197.213.80 -->seq 0: no response (timeout)
pinging 54.198.109.108 -->seq 0: no response (timeout)
pinging 54.198.145.174 -->seq 0: no response (timeout)
pinging 54.198.191.99 -->seq 0: no response (timeout)
pinging 54.198.252.150 -->seq 0: no response (timeout)
pinging 54.203.217.224 -->seq 0: no response (timeout)
pinging 54.211.202.218 -->seq 0: no response (timeout)
pinging 54.224.28.113 -->seq 0: no response (timeout)
pinging 54.224.75.195 -->seq 0: no response (timeout)
pinging 54.224.75.199 -->seq 0: no response (timeout)
pinging 54.225.6.1 -->seq 0: no response (timeout)
pinging 54.226.159.60 -->seq 0: no response (timeout)
pinging 54.227.104.145 -->seq 0: no response (timeout)
pinging 54.227.119.61 -->seq 0: no response (timeout)
pinging 54.234.251.170 -->seq 0: no response (timeout)
pinging 54.237.191.208 -->seq 0: no response (timeout)
pinging 54.242.110.149 -->seq 0: no response (timeout)
pinging 54.242.139.236 -->seq 0: no response (timeout)
pinging 54.242.152.165 -->seq 0: no response (timeout)
pinging 54.255.159.230 -->seq 0: no response (timeout)
pinging 84.25.161.117 -->seq 0: tcp response from 5419A175.cm-5-2c.dynamic.ziggo.nl (84.25.161.117) [open]  97.837 ms
pinging 93.188.161.89 -->seq 0: tcp response from 93.188.161.89 [open]  18.385 ms
pinging 104.33.210.231 -->seq 0: tcp response from cpe-104-33-210-231.socal.res.rr.com (104.33.210.231) [open]  97.893 ms
pinging 107.20.4.185 -->seq 0: no response (timeout)
pinging 107.20.28.12 -->seq 0: no response (timeout)
pinging 107.20.38.255 -->seq 0: no response (timeout)
pinging 107.20.122.204 -->seq 0: no response (timeout)
pinging 107.21.138.152 -->seq 0: no response (timeout)
pinging 107.22.42.148 -->seq 0: no response (timeout)
pinging 108.61.199.47 -->seq 0: tcp response from 108.61.199.47.vultr.com (108.61.199.47) [open]  83.854 ms
pinging 184.73.34.180 -->seq 0: no response (timeout)
pinging 184.73.48.127 -->seq 0: no response (timeout)
pinging 184.73.179.148 -->seq 0: tcp response from ec2-184-73-179-148.compute-1.amazonaws.com (184.73.179.148) [open]  1.544 ms
pinging 184.73.179.187 -->seq 0: tcp response from ec2-184-73-179-187.compute-1.amazonaws.com (184.73.179.187) [open]  61.965 ms
pinging 184.73.179.196 -->seq 0: tcp response from ec2-184-73-179-196.compute-1.amazonaws.com (184.73.179.196) [open]  29.792 ms
pinging 188.226.133.22 -->seq 0: tcp response from 188.226.133.22 [closed]  93.537 ms
pinging 204.236.211.102 -->seq 0: no response (timeout)
You will notice that most IPs are not open on port 19999 - only a meager 12(!) IPs are at least listening on port 19999.

I don't know how these masses of dead IPs got onto the list, or why they have stayed there for almost 20 hours while providing no service to the network, but my fear is that they take part in the voting too. The code that determines whether a node is alive and providing Darksend service needs to be reworked - proof of service needs to be PROOF of service!

If somebody wants to verify my findings:

Code:
$ sudo apt-get install tcptraceroute
$ wget http://www.vdberg.org/~richard/tcpping
$ chmod 755 tcpping
$ ./tcpping -w1 -x 1 184.73.179.148 19999
http://xmodulo.com/2013/01/how-to-install-tcpping-on-linux.html
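If tcpping is not available, a plain TCP connect with a short timeout gives roughly the same open/closed/timeout verdict (my own sketch; host and port are whatever you are probing):

```python
import socket

def port_open(host, port=19999, timeout=1.0):
    """True if a TCP handshake to host:port completes within the timeout -
    roughly what `tcpping -w1 -x 1 host port` reports as [open]."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, host unreachable, or timeout
        return False
```

Note this only distinguishes "handshake completed" from "anything else"; tcpping additionally separates [closed] (RST received) from silent timeouts.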

EDIT: I ran my script against mainnet as well - only 173 out of 355 (49%) masternodes have port 9999 open, the rest are dead :sad: please prove me wrong...

Code:
pinging 5.254.116.232 --> seq 0: tcp response from lh25767.voxility.net (5.254.116.232) [closed]  131.646 ms
pinging 23.20.8.141 --> seq 0: no response (timeout)
pinging 23.20.40.89 --> seq 0: no response (timeout)
pinging 23.20.86.159 --> seq 0: no response (timeout)
pinging 23.20.120.177 --> seq 0: no response (timeout)
pinging 23.20.183.122 --> seq 0: no response (timeout)
pinging 23.20.251.111 --> seq 0: no response (timeout)
pinging 23.98.71.17 --> seq 0: no response (timeout)
pinging 37.59.168.129 --> seq 0: tcp response from masternode.darkcoin.fr (37.59.168.129) [open]  84.389 ms
pinging 37.187.43.227 --> seq 0: tcp response from 227.ip-37-187-43.eu (37.187.43.227) [open]  80.885 ms
pinging 37.187.47.129 --> seq 0: tcp response from 129.ip-37-187-47.eu (37.187.47.129) [open]  80.621 ms
pinging 37.187.244.244 --> seq 0: tcp response from 244.ip-37-187-244.eu (37.187.244.244) [open]  93.088 ms
pinging 37.187.244.247 --> seq 0: tcp response from 247.ip-37-187-244.eu (37.187.244.247) [open]  90.011 ms
pinging 46.22.128.38 --> seq 0: tcp response from 128-38.colo.sta.blacknight.ie (46.22.128.38) [open]  92.735 ms
pinging 46.22.128.38 --> seq 0: tcp response from 128-38.colo.sta.blacknight.ie (46.22.128.38) [open]  95.496 ms
pinging 46.28.206.33 --> seq 0: tcp response from 46.28.206.33 [closed]  100.029 ms
pinging 46.240.170.28 --> seq 0: tcp response from 46.240.170.28 [open]  126.888 ms
pinging 50.16.136.92 --> seq 0: no response (timeout)
pinging 50.16.161.222 --> seq 0: no response (timeout)
pinging 50.17.41.33 --> seq 0: no response (timeout)
pinging 50.17.52.166 --> seq 0: no response (timeout)
pinging 50.17.176.215 --> seq 0: no response (timeout)
pinging 50.19.68.175 --> seq 0: no response (timeout)
[...]
 
Last edited by a moderator:

eduffield

Core Developer
Mar 9, 2014
1,084
5,320
183
***** PLEASE UPDATE TO 9.5.11 OR 10.9.11 ******

Yesterday I evaluated the checkpoint-enforcing mechanism for detecting abuse of the voting system and decided it didn't really work very well. I didn't like the way the network was acting, and we were getting lots of orphans. It worked, but it would have created a bunch of problems on mainnet. So I stayed up most of the night hunting down the underlying problems behind the masternode forking issues and found them! After that I went back and removed the whole checkpointing system.

Binaries (stable)
http://www.darkcoin.io/downloads/forkfix/darkcoin-qt
http://www.darkcoin.io/downloads/forkfix/darkcoind

RC3 Binaries ( masternodes )
http://www.darkcoin.io/downloads/rc3/darkcoin-qt
http://www.darkcoin.io/downloads/rc3/darkcoind
 

eduffield

Core Developer
Mar 9, 2014
1,084
5,320
183
I did some further investigations, and my claim stands: the masternode list is bogus.

I ran a script using tcpping/tcptraceroute against the list of active IPs to check whether port 19999 is open on those machines. The results are as follows:

Code:
pinging 23.20.78.184 -->seq 0: no response (timeout)
[... identical testnet list as quoted in the previous post ...]
pinging 204.236.211.102 -->seq 0: no response (timeout)
You will notice that most IPs are not open on port 19999 - only a meager 12(!) IPs are at least listening on port 19999.

I don't know how these masses of dead IPs got onto the list, or why they have stayed there for almost 20 hours while providing no service to the network, but my fear is that they take part in the voting too. The code that determines whether a node is alive and providing Darksend service needs to be reworked - proof of service needs to be PROOF of service!

If somebody wants to verify my findings:

Code:
$ sudo apt-get install tcptraceroute
$ wget http://www.vdberg.org/~richard/tcpping
$ chmod 755 tcpping
$ ./tcpping -w1 -x 1 184.73.179.148 19999
http://xmodulo.com/2013/01/how-to-install-tcpping-on-linux.html

EDIT: I ran my script against mainnet as well - only 173 out of 355 (49%) masternodes have port 9999 open, the rest are dead :sad: please prove me wrong...

Code:
pinging 5.254.116.232 --> seq 0: tcp response from lh25767.voxility.net (5.254.116.232) [closed]  131.646 ms
[... identical mainnet list as quoted in the previous post ...]
The list isn't really bogus; those were EC2 instances with port 9999 blocked. I haven't actually gotten around to implementing the true proof-of-service part of the masternode system yet. Maybe I can fit that into RC4? Currently, if a client can't connect it just skips them and moves on to the next, so it's not a huge deal.

The mainnet version of the software has the ghost masternode problem, so 70+% of that list is not really there.
 

flare

Administrator
Dash Core Team
Moderator
May 18, 2014
2,286
2,404
1,183
Germany
So I stayed up most of the night hunting down the underlying problems with the masternode forking issues and found them! After that I went back and removed the whole checkpointing system.
Nice work Evan! Hunting the forking bug down instead of introducing a system that merely "masks" the problem is the better approach :)
 

flare

Administrator
Dash Core Team
Moderator
May 18, 2014
2,286
2,404
1,183
Germany
The list isn't really bogus, those were EC2 instances with port 9999 blocked. I haven't actually gotten around to implementing the true proof-of-service part of the masternode system yet.
But how is a masternode able to be placed on the list with port 19999/9999 blocked? No client can connect to them, neither for blockchain download nor for Darksend service. Were the ports closed after startup?

Maybe I can fit that into RC4? Currently, if a client can't connect it just skips them and moves on to the next, so it's not a huge deal.
Let me check first whether dead/blocked IPs received payments/votes - if we ensure that payments only go out when the client is connectable on port 9999/19999, real proof of service can surely be pushed to RC4.

The mainnet version of the software has the ghost masternode problem, so 70+% of that list is not really there.
Yeah, I remember that.
 
  • Like
Reactions: archLinuxUser

chaeplin

Active Member
Core Developer
Mar 29, 2014
749
356
133
But how is a masternode able to be placed on the list with port 19999/9999 blocked? No client can connect to them, neither for blockchain download nor for Darksend service. Were the ports closed after startup?
An outbound-connection-only client ><.
It's like a node behind NAT without port forwarding, or with listen=0.
 

flare

Administrator
Dash Core Team
Moderator
May 18, 2014
2,286
2,404
1,183
Germany
An outbound-connection-only client ><.
It's like a node behind NAT without port forwarding, or with listen=0.
Yeah, but why do the outbound-connection-only clients all spike in at once (remember your graph showing the rise from 17 --> 68)? Did someone restart his dialup 40 times to get new IPs? And why does the TTL code not remove the nodes after 30 minutes?
 

daaarkcoins

Member
May 21, 2014
95
40
68
0.10.9.11 keeps crashing on my machine after a couple of minutes. Neither reindexing nor completely purging the testnet3 folder helps. I tried everything at least twice. The last log line is always of the "CheckBlock() : pindexPrev->GetBlockHash()" kind.
 

flare

Administrator
Dash Core Team
Moderator
May 18, 2014
2,286
2,404
1,183
Germany
0.10.9.11 keeps crashing on my machine after a couple of minutes. Neither reindexing nor completely purging the testnet3 folder helps. I tried everything at least twice. The last log line is always of the "CheckBlock() : pindexPrev->GetBlockHash()" kind.
Evan, does the client take into account that there are at least 2 valid payment ratios (10% and 20%) when rescanning the whole blockchain? Or do the payment checks only start at block 17275+ on testnet?
 

chaeplin

Active Member
Core Developer
Mar 29, 2014
749
356
133
Code:
   "height" : 17276,
    "votes" : [
        "7c430000000000001976a9146724d59ce01e66a0dee6019f697116c0ebfa184f88ac01000000"
    ],
    "payee" : "",
    "masternode_payments" : false
 
  • Like
Reactions: flare

eduffield

Core Developer
Mar 9, 2014
1,084
5,320
183
0.10.9.11 keeps crashing on my machine after a couple of minutes. Neither reindexing nor completely purging the testnet3 folder helps. I tried everything at least twice. The last log line is always of the "CheckBlock() : pindexPrev->GetBlockHash()" kind.
Try downloading the binary again. Masternode payments haven't started yet, so you shouldn't be seeing "CheckBlock() : pindexPrev->GetBlockHash()".
 

flare

Administrator
Dash Core Team
Moderator
May 18, 2014
2,286
2,404
1,183
Germany
Let me check first whether dead/blocked IPs received payments/votes - if we ensure that payments only go out when the client is connectable on port 9999/19999, real proof of service can surely be pushed to RC4.
Evan, I did some digging in chaeplin's stats and indeed found addresses/IPs which got paid while blocking port 19999. I don't think that is the idea of proof of service :D

Example#1
Code:
54.227.104.145:19999 1 201372494a05b71984ee84f68bc42b06d5a29fa91e7caa98aec93677070ab696 msMrvVzPQtXNCZBuMdQEkaAHxVGbKp4Ltj 1000.0
[...]
pinging 54.227.104.145 -->seq 0: no response (timeout)
[...]
      4 msMrvVzPQtXNCZBuMdQEkaAHxVGbKp4Ltj
[...]
http://23.23.186.131:1234/address/msMrvVzPQtXNCZBuMdQEkaAHxVGbKp4Ltj

Example#2
Code:
54.234.251.170:19999 1 f432ffdf0242b30283d89e00eaf8821f6597978693063353c0ed0b621b7076dc n1HRzCV9RBxVJoC5noKVNBCx4gAbTZPyPZ 1000.0
[...]
pinging 54.234.251.170 -->seq 0: no response (timeout)
[...]
      3 n1HRzCV9RBxVJoC5noKVNBCx4gAbTZPyPZ
[...]
http://23.23.186.131:1234/address/n1HRzCV9RBxVJoC5noKVNBCx4gAbTZPyPZ
Currently, if a client can't connect it just skips them and moves on to the next, so it's not a huge deal.
Could you please review the code in the masternode that checks whether MNs are connectable on port 9999/19999, to confirm it really moves on to the next candidate? As far as I can see, the normal client/miner does not care whether a node is connectable or not - it just retrieves a winner from the masternode list and pushes the pubkey onto the voting list.

https://github.com/darkcoinproject/...d5d4713b1c1b07f095e515ec0a/src/main.cpp#L5746
https://github.com/darkcoinproject/...d5d4713b1c1b07f095e515ec0a/src/main.cpp#L5789

The enabled property of CMasterNode is always set to 1, and the Check() method is not implemented in the client/miner.

https://github.com/darkcoinproject/...7fd5d4713b1c1b07f095e515ec0a/src/main.h#L2437

EDIT: Meanwhile the masternode count is back to 14 and only 2 have a tcpping timeout - I wonder when/if the 50 zombies will return :D

Code:
pinging 184.73.179.148 --> seq 0: tcp response from ec2-184-73-179-148.compute-1.amazonaws.com (184.73.179.148) [closed]  3.371 ms
pinging 184.73.179.187 --> seq 0: tcp response from ec2-184-73-179-187.compute-1.amazonaws.com (184.73.179.187) [closed]  1.461 ms
pinging 188.226.243.116 --> seq 0: tcp response from 188.226.243.116 [closed]  87.879 ms
pinging 37.187.47.129 --> seq 0: tcp response from 129.ip-37-187-47.eu (37.187.47.129) [open]  80.518 ms
pinging 54.203.217.224 --> seq 0: no response (timeout)
pinging 93.188.161.89 --> seq 0: tcp response from 93.188.161.89 [closed]  17.269 ms
pinging 54.76.47.232 --> seq 0: tcp response from ec2-54-76-47-232.eu-west-1.compute.amazonaws.com (54.76.47.232) [closed]  97.727 ms
pinging 54.86.103.191 --> seq 0: tcp response from ec2-54-86-103-191.compute-1.amazonaws.com (54.86.103.191) [closed]  2.757 ms
pinging 84.25.161.117 --> seq 0: tcp response from 5419A175.cm-5-2c.dynamic.ziggo.nl (84.25.161.117) [closed]  101.059 ms
pinging 54.255.159.230 --> seq 0: no response (timeout)
pinging 23.242.106.27 --> seq 0: tcp response from cpe-23-242-106-27.socal.res.rr.com (23.242.106.27) [closed]  104.185 ms
pinging 188.226.248.36 --> seq 0: tcp response from 188.226.248.36 [open]  92.434 ms
pinging 107.170.200.102 --> seq 0: tcp response from drk01.cryptomix.net (107.170.200.102) [closed]  78.664 ms
pinging 184.73.179.196 --> seq 0: tcp response from ec2-184-73-179-196.compute-1.amazonaws.com (184.73.179.196) [closed]  1.497 ms
 
Last edited by a moderator:

eduffield

Core Developer
Mar 9, 2014
1,084
5,320
183
Evan, I did some digging in chaeplin's stats and indeed found addresses/IPs which got paid while blocking port 19999. I don't think that is the idea of proof of service :D

Example#1
Code:
54.227.104.145:19999 1 201372494a05b71984ee84f68bc42b06d5a29fa91e7caa98aec93677070ab696 msMrvVzPQtXNCZBuMdQEkaAHxVGbKp4Ltj 1000.0
[...]
pinging 54.227.104.145 -->seq 0: no response (timeout)
[...]
      4 msMrvVzPQtXNCZBuMdQEkaAHxVGbKp4Ltj
[...]
http://23.23.186.131:1234/address/msMrvVzPQtXNCZBuMdQEkaAHxVGbKp4Ltj

Example#2
Code:
54.234.251.170:19999 1 f432ffdf0242b30283d89e00eaf8821f6597978693063353c0ed0b621b7076dc n1HRzCV9RBxVJoC5noKVNBCx4gAbTZPyPZ 1000.0
[...]
pinging 54.234.251.170 -->seq 0: no response (timeout)
[...]
      3 n1HRzCV9RBxVJoC5noKVNBCx4gAbTZPyPZ
[...]
http://23.23.186.131:1234/address/n1HRzCV9RBxVJoC5noKVNBCx4gAbTZPyPZ


Could you please review the code in the masternode that checks whether MNs are connectable on port 9999/19999, to confirm it really moves on to the next candidate? As far as I can see, the normal client/miner does not care whether a node is connectable or not - it just retrieves a winner from the masternode list and pushes the pubkey onto the voting list.

https://github.com/darkcoinproject/...d5d4713b1c1b07f095e515ec0a/src/main.cpp#L5746
https://github.com/darkcoinproject/...d5d4713b1c1b07f095e515ec0a/src/main.cpp#L5789

The enabled property of CMasterNode is always set to 1, and the Check() method is not implemented in the client/miner.

https://github.com/darkcoinproject/...7fd5d4713b1c1b07f095e515ec0a/src/main.h#L2437

EDIT: Meanwhile the masternode count is back to 14 and only 2 have a tcpping timeout - I wonder when/if the 50 zombies will return :D

Code:
pinging 184.73.179.148 --> seq 0: tcp response from ec2-184-73-179-148.compute-1.amazonaws.com (184.73.179.148) [closed]  3.371 ms
pinging 184.73.179.187 --> seq 0: tcp response from ec2-184-73-179-187.compute-1.amazonaws.com (184.73.179.187) [closed]  1.461 ms
pinging 188.226.243.116 --> seq 0: tcp response from 188.226.243.116 [closed]  87.879 ms
pinging 37.187.47.129 --> seq 0: tcp response from 129.ip-37-187-47.eu (37.187.47.129) [open]  80.518 ms
pinging 54.203.217.224 --> seq 0: no response (timeout)
pinging 93.188.161.89 --> seq 0: tcp response from 93.188.161.89 [closed]  17.269 ms
pinging 54.76.47.232 --> seq 0: tcp response from ec2-54-76-47-232.eu-west-1.compute.amazonaws.com (54.76.47.232) [closed]  97.727 ms
pinging 54.86.103.191 --> seq 0: tcp response from ec2-54-86-103-191.compute-1.amazonaws.com (54.86.103.191) [closed]  2.757 ms
pinging 84.25.161.117 --> seq 0: tcp response from 5419A175.cm-5-2c.dynamic.ziggo.nl (84.25.161.117) [closed]  101.059 ms
pinging 54.255.159.230 --> seq 0: no response (timeout)
pinging 23.242.106.27 --> seq 0: tcp response from cpe-23-242-106-27.socal.res.rr.com (23.242.106.27) [closed]  104.185 ms
pinging 188.226.248.36 --> seq 0: tcp response from 188.226.248.36 [open]  92.434 ms
pinging 107.170.200.102 --> seq 0: tcp response from drk01.cryptomix.net (107.170.200.102) [closed]  78.664 ms
pinging 184.73.179.196 --> seq 0: tcp response from ec2-184-73-179-196.compute-1.amazonaws.com (184.73.179.196) [closed]  1.497 ms
The current implementation has a few requirements for being paid as a masternode:

1.) You have an unspent 1000 DRK. This is uniquely kept track of and can't be forged.
2.) You use "masternode start" to initiate the startup sequence.
3.) Your client must ping the network every 30 minutes.

These checks are implemented here, in CMasterNode::Check:
https://github.com/darkcoinproject/...d5d4713b1c1b07f095e515ec0a/src/main.cpp#L5807

This does NOT currently include any checks to make sure the port is open, but I'll add that to the startup sequence and error out if the client can't reach itself.

The "proof-of-service" part of this hasn't been implemented at all. So far we just have a solid foundation for communicating the list, the proof of 1000 DRK, and the pinging system (this stuff didn't even work before RC3).
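The ping requirement in point 3 implies a simple lastSeen/TTL rule. A toy model of that rule (my own Python sketch - the real CMasterNode::Check is C++ and may differ in detail):

```python
import time

MASTERNODE_EXPIRATION = 30 * 60  # seconds; the 30-minute ping window above

class MasternodeEntry:
    """Toy stand-in for a masternode list entry (not the real CMasterNode)."""

    def __init__(self, address):
        self.address = address
        self.last_seen = time.time()
        self.enabled = True

    def ping(self):
        """Record a ping broadcast from the masternode."""
        self.last_seen = time.time()

    def check(self, now=None):
        """Disable the entry if no ping arrived within the window."""
        if now is None:
            now = time.time()
        self.enabled = (now - self.last_seen) < MASTERNODE_EXPIRATION
        return self.enabled
```

Note this models only liveness of the ping, which is exactly flare's complaint: nothing here proves the node's port is reachable or that it performs any service.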
 

flare

Administrator
Dash Core Team
Moderator
May 18, 2014
2,286
2,404
1,183
Germany
This does NOT currently include any checks to make sure the port is open, but I'll add that to the startup sequence and error out if the client can't reach itself.

The "proof-of-service" part of this hasn't been implemented at all. So far we just have a solid foundation for communicating the list, the proof of 1000 DRK, and the pinging system (this stuff didn't even work before RC3).
Good stuff! Keep in mind that the usual client is capable of reaching itself through loopback - so a successful connection to itself does not necessarily mean it is reachable through its public IP.

And sorry for being a PITA the last few days, but I think exactly this kind of scrutiny will improve the architecture of Darkcoin :)

regards,
Holger
 

chaeplin

Active Member
Core Developer
Mar 29, 2014
749
356
133
Good stuff! Keep in mind that the usual client is capable of reaching itself through loopback - so a successful connection to itself does not necessarily mean it is reachable through its public IP.

And sorry for being a PITA the last few days, but I think exactly this kind of scrutiny will improve the architecture of Darkcoin :)

regards,
Holger
I think passive checking is more appropriate: checking the connection status of the port (whether my port 9999/19999 was connected from outside), or inspecting peers.dat regularly.
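The passive approach can be sketched against `getpeerinfo` output, which already carries an `inbound` flag per peer (my own sketch with made-up sample data; in practice you would fetch the real list from the daemon's RPC interface):

```python
def saw_inbound(peerinfo):
    """True if at least one connected peer is inbound, i.e. somebody
    out there actually reached our listening port from outside."""
    return any(peer.get("inbound", False) for peer in peerinfo)

# Hypothetical fragment of getpeerinfo output:
sample = [
    {"addr": "37.187.47.129:19999", "inbound": False},  # we dialed out
    {"addr": "93.188.161.89:51034", "inbound": True},   # they dialed in
]
print(saw_inbound(sample))  # True
```

An outbound-only node behind NAT would never accumulate inbound peers, so this check would catch exactly the zombie class flare's tcpping sweep found.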
 

Kai

Member
Apr 6, 2014
110
56
78
I think passive checking is more appropriate: checking the connection status of the port (whether my port 9999/19999 was connected from outside), or inspecting peers.dat regularly.
Maybe a MN owner can tamper with the port check (if it's done locally).
 

flare

Administrator
Dash Core Team
Moderator
May 18, 2014
2,286
2,404
1,183
Germany
I think passive checking is more appropriate: checking the connection status of the port (whether my port 9999/19999 was connected from outside), or inspecting peers.dat regularly.
Yes, good idea! The info on whether the incoming port is reachable is already in peers.dat / getpeerinfo - Evan might use this :)
 

flare

Administrator
Dash Core Team
Moderator
May 18, 2014
2,286
2,404
1,183
Germany
Maybe a MN owner can tamper with the port check (if it's done locally).
Sure, but until we have full proof of service implemented in RC4 - by, let's say, relaying a test Darksend message - we can start with this basic check.
 
  • Like
Reactions: Kai

JGCMiner

Moderator
Moderator
Jun 8, 2014
360
211
113
Good stuff! Keep in mind that the usual client is capable of reaching itself through loopback - so a successful connection to itself does not necessarily mean it is reachable through its public IP.

And sorry for being a PITA the last few days, but I think exactly this kind of scrutiny will improve the architecture of Darkcoin :)

regards,
Holger
As an investor, and as someone who doesn't have enough time to do much more than post complaints about X or Y :cool: - I want to say thank you, Flare, for your time, effort, and hard work.

Also, @ Evan...
a lot of people really don't like it when their baby is poked at, but your responsiveness to the issues that Flare and others have raised is refreshing and really instills confidence in me as an investor.

Keep it up guys. :)
 

Kai

Member
Apr 6, 2014
110
56
78
Sure, but until we have full proof of service implemented in RC4 - by, let's say, relaying a test Darksend message - we can start with this basic check.
Each masternode could check the others to build a common list of valid MNs.
 
  • Like
Reactions: flare

flare

Administrator
Dash Core Team
Moderator
May 18, 2014
2,286
2,404
1,183
Germany
As an investor, and as someone who doesn't have enough time to do much more than post complaints about X or Y :cool: - I want to say thank you, Flare, for your time, effort, and hard work.
Thanks, you are welcome :)

Also, @ Evan...
a lot of people really don't like it when their baby is poked at, but your responsiveness to the issues that Flare and others have raised is refreshing and really instills confidence in me as an investor.
I could not have said it better! Evan is taking my QA bugging like a champion; it's tough to get your work reviewed and audited like that. Others would say "STFU!", but Evan always comes back with an elaborate and polite answer/solution. I very much appreciate working with him on the Darkcoin project. :)
 
S

snogcel

Guest
Hey guys,

I'd like to help out with the testing process by running a testnet masternode - can anyone send 1000 DRK to XqUeH6armeKnXyoykvEyWGWaoLaxFwoM2o?
 