
v0.10.9.x Help test RC2 forking issues

Found the problem, uploading a fix. You should see far fewer orphans after this.

Either you temporarily disabled the checkpointer for debugging on your side (I noticed a forkfix-debug branch on GitHub) or my miner (~20% network hashpower) is giving the checkpointer a hard time: My miner found a 33% block, which successfully passed under the radar of the checkpointer and was confirmed by the network.

Block 16856
Code:
    {
        "account" : "",
        "address" : "mrMmWYT6oRdXNrKYaS5wBvvvfJ8iHJJF51",
        "category" : "generate",
        "amount" : 100.00000000,
        "confirmations" : 124,
        "generated" : true,
        "blockhash" : "000000001362a1ad208beade22692e44c93aa5c88e6bbebfdd33b633c1f4f8c8",
        "blockindex" : 0,
        "blocktime" : 1402354485,
        "txid" : "c8aa0cfe89b059436ff5c6f4b8882619a2e75c028f02dc5837ade2d47f5e91f3",
        "time" : 1402354485,
        "timereceived" : 1402354489
    },

Additionally I see some 50 DRK transactions to my miner that I don't understand - so either someone is sending me tips on testnet (thanks for that!) or someone tampered with block 16919.

Code:
    {
        "account" : "",
        "address" : "mprn34PKGRYgpRnQTTx9gxm8sHPxBtrwAh",
        "category" : "receive",
        "amount" : 50.00000000,
        "confirmations" : 73,
        "blockhash" : "000000029e93dff2f3f54b0e22491f063937244872c179d731e6aa9ed1791bd2",
        "blockindex" : 4,
        "blocktime" : 1402369776,
        "txid" : "3748cd423abc1c7b8fdfd08497510c7ab429ab027f77e7fe1c722c436c41cc8c",
        "time" : 1402364673,
        "timereceived" : 1402364673
    },
    {
        "account" : "",
        "address" : "mqriDtEzZN1kwzFKXNLdm2fabNs9PcB5jt",
        "category" : "receive",
        "amount" : 50.00000000,
        "confirmations" : 73,
        "blockhash" : "000000029e93dff2f3f54b0e22491f063937244872c179d731e6aa9ed1791bd2",
        "blockindex" : 2,
        "blocktime" : 1402369776,
        "txid" : "c7f1e1cddd70743c6b2eff93597b5114e9e9cde6a6a87716172d3c9c566cc705",
        "time" : 1402364734,
        "timereceived" : 1402364734
    },
    {
        "account" : "",
        "address" : "mpARGkch5FcgfT8tCk8YaJgzUvKoiXyAjF",
        "category" : "receive",
        "amount" : 50.00000000,
        "confirmations" : 73,
        "blockhash" : "000000029e93dff2f3f54b0e22491f063937244872c179d731e6aa9ed1791bd2",
        "blockindex" : 3,
        "blocktime" : 1402369776,
        "txid" : "7b1b50814752534d22fd9bc1f513d5806f4ce32cc3d004ff2acf8c4ae9836240",
        "time" : 1402364753,
        "timereceived" : 1402364753
    },

Also, the ratio of paid-out blocks is still affected significantly: my miner found 40 blocks since block 16778 - that's ~19% of all blocks found (209) - matching my hashpower quite nicely.

11 blocks out of these 40 contained no MN payment - that's ~27%, so my '20% attack' brought the paid blocks down from ~98% to roughly 73% - which is confirmed by http://tdrk.poolhash.org/blocks/masterpay.txt

Code:
last 90 blocks
     26 -
      5 moqpGCABQQefuujQ9nsAMJVEiwAXyAMw6o
      4 mxjko3miNiPCPqcHqnCDW6XUSgHfPs8TEs
[...]

The ROI of my attack is quite bad indeed: I lost 29 mined blocks :grin:

Just curious: does DGW3 take orphaned blocks into account? 29 blocks out of 209 got orphaned - do we still have an average block time of 2.5 min?
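
For reference, a quick back-of-the-envelope - under the assumption that DGW3, like other retargeting schemes, only looks at blocks on the active chain, so the chain itself should still average ~2.5 min and the orphans only represent wasted hashpower:

Code:
# rough calc (assumption, not measured): share of found blocks that were orphaned
awk 'BEGIN { printf "orphan share: %.1f%%\n", 29/209*100 }'
# -> orphan share: 13.9%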
 
Either you temporarily disabled the checkpointer for debugging on your side (I noticed a forkfix-debug branch on GitHub) or my miner (~20% network hashpower) is giving the checkpointer a hard time...

Explorer is at Block 17016, http://tdrk.poolhash.org/ is at 17010 - fork or is the daemon just stuck?

EDIT: OK, I saw that tdiff is awfully high, tdrk is just lagging behind :smile:
 
Well spotted - I think the log spam issues we have seen twice in the last few days have their root cause in clients misbehaving with regard to the protocol version. Taking into account that Evan resolved the issue by increasing the protocol version, I assume that manipulating the protocol version might bring the issue back.

Test case: Rent an AWS instance with a 10 GBit connection, modify the protocol version to be one above the actual version (70018 --> 70019) and see what happens to the MN network.

Expected behaviour: The network does not care at all.

Worst case: This setup can stall the MN network through massive CPU load on the MNs caused by log spam.
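
A rough sketch of how such a test client could be built - assuming the constant is named PROTOCOL_VERSION and lives in src/version.h as in upstream Bitcoin, and that the tree still builds via makefile.unix:

Code:
# hypothetical test client: bump the protocol version by one and rebuild
sed -i 's/PROTOCOL_VERSION = 70018/PROTOCOL_VERSION = 70019/' src/version.h
cd src && make -f makefile.unix
# point the modified client at testnet and watch the MNs' debug.log for spam
./darkcoind -testnet -daemon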
Someone will have to test this because I'm not engaged with testnet debugging.
 
Wooooooaahh !! What just happened? MN list just got humungous! Like... holy crap!

The masternode count looks bogus to me: from 18 to 71 in 3 minutes, whereas the overall connection count is more or less the same as before. We have seen this plateau behaviour several times now (unfortunately chaeplin reset the charts) - and I don't believe that this is the usual P2P ramp-up behaviour.

How likely is it to have 4 times more masternodes than client connections in the network?

My math lectures were some time ago, and I forgot how to calculate the average edge count per vertex in a P2P mesh where vertices have the default of 8 outgoing and 8 incoming connections. Can anyone figure out how realistic the 4:1 ratio is?
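
Rough back-of-the-envelope, treating the mesh as a random graph where every node opens the default 8 outbound connections: each connection is outbound for exactly one endpoint, so the average degree is about 2*8 = 16 regardless of network size - a node showing ~20 peers therefore says little about how many masternodes exist, since the list is relayed rather than built from direct connections.

Code:
# sanity check of the estimate above (71 nodes, 8 outbound each)
awk 'BEGIN { n=71; out=8; printf "edges ~ %d, average degree ~ %d\n", n*out, 2*out }'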

My three nodes have reduced ban times and allow 256 incoming connections, so I guess they will harvest any connection they can get over time. And I have never seen more than 50 connections on testnet (currently 22, 22 and 45).

Maybe it's possible to spoof the masternode discovery messages and poison the list with non-existent nodes? @Evan: What takes precedence for maintaining the local masternode list: the node-message broadcasts or the local lastSeen pinging?
 
The masternode count looks bogus to me: from 18 to 71 in 3 minutes, whereas the overall connection count is more or less the same as before. We have seen this plateau behaviour several times now (unfortunately chaeplin reset the charts) - and I don't believe that this is the usual P2P ramp-up behaviour.

How likely is it to have 4 times more masternodes than client connections in the network?
My math lectures were some time ago, and I forgot how to calculate the average edge count per vertex in a P2P mesh where vertices have the default of 8 outgoing and 8 incoming connections. Can anyone figure out how realistic the 4:1 ratio is?

My three nodes have reduced ban times and allow 256 incoming connections, so I guess they will harvest any connection they can get over time. And I have never seen more than 50 connections on testnet (currently 22, 22 and 45).

Maybe it's possible to spoof the masternode discovery messages and poison the list with non-existent nodes? @Evan: What takes precedence for maintaining the local masternode list: the node-message broadcasts or the local lastSeen pinging?
I have changed my daemon's outgoing connections to 64 (default is 8):
Code:
static const int MAX_OUTBOUND_CONNECTIONS = 64;
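// defined in src/net.cpp - the upstream default is 8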

No. of P2P connections < no. of masternodes is possible.

Anyway, I will check the existence of each masternode by adding its IP via addnode.

http://tdrk.poolhash.org/blocks/subver.txt
(maybe caused by old clients ..)
Code:
4 "subver" : "/Satoshi:0.10.9.10/",
3 "subver" : "/Satoshi:0.10.9.4/",
9 "subver" : "/Satoshi:0.10.9.9/",
1 "subver" : "/Satoshi:0.9.4.11/",
1 "subver" : "/Satoshi:0.9.5.8/",
1 "subver" : "/Satoshi:0.9.5.9/",
 
No. of P2P connections < no. of masternodes is possible.

I do not doubt that it's possible, but the 4:1 ratio is what I doubt :smile:

Anyway, I will check the existence of each masternode by adding its IP via addnode.
Good idea! Any valid masternode should be connectable on port 19999 - or at least actively refuse the connection. Could you add the client version and/or protocol version to your stats?
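
A minimal sketch for such a spot check (the IP is just an arbitrary entry from the list; "onetry" makes the daemon attempt a single outbound connection):

Code:
# hypothetical spot check of one masternode IP on testnet
./darkcoind -testnet addnode 23.20.78.184:19999 onetry
sleep 5
# getpeerinfo also exposes "version" and "subver" per connected peer
./darkcoind -testnet getpeerinfo | grep -A12 '"addr" : "23.20.78.184'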

http://tdrk.poolhash.org/blocks/subver.txt
(maybe caused by old clients ..)
Code:
4 "subver" : "/Satoshi:0.10.9.10/",
3 "subver" : "/Satoshi:0.10.9.4/",
9 "subver" : "/Satoshi:0.10.9.9/",
1 "subver" : "/Satoshi:0.9.4.11/",
1 "subver" : "/Satoshi:0.9.5.8/",
1 "subver" : "/Satoshi:0.9.5.9/",

My understanding was that old clients using an outdated protocol version are not able to connect to the new masternodes at all. Maybe there is a side channel we don't see (like a client capable of connecting to both network segments, acting as a hub) :smile:

The node with 45 connections does not see any RC2 masternodes at all:

Code:
      2         "subver" : "/Satoshi:0.9.4.11/",
      1         "subver" : "/Satoshi:0.9.5.8/",
      4         "subver" : "/Satoshi:0.9.5.9/",
      3         "subver" : "/Satoshi:0.10.9.4/",
     34         "subver" : "/Satoshi:0.10.9.9/",
      5         "subver" : "/Satoshi:0.10.9.10/",
 
This is the nomp log, 6/4 ~ 6/5.
There were 104 nodes.

Code:
2014-06-04 15:17:52 ProcessBlock: ACCEPTED
~~~
2014-06-05 00:08:41 Shutdown : done

grep 'version message' nomp_debug.log | grep send |awk '{print $11}' | sed -e 's/peer=//' -e 's/:.*//' | sort -n | uniq | wc -l
104

nomp subver is here: http://54.183.73.24:8080/subver.txt
(2-minute updates, 76 addnodes)
 
This is the nomp log, 6/4 ~ 6/5.
There were 104 nodes.

Code:
2014-06-04 15:17:52 ProcessBlock: ACCEPTED
~~~
2014-06-05 00:08:41 Shutdown : done

grep 'version message' nomp_debug.log | grep send |awk '{print $11}' | sed -e 's/peer=//' -e 's/:.*//' | sort -n | uniq | wc -l
104

nomp subver is here: http://54.183.73.24:8080/subver.txt
(2-minute updates, 76 addnodes)

This is the result of one of my masternodes, uptime 27h
Code:
~/.darkcoin$ grep 'version message' ~/.darkcoin/testnet3/debug.log | grep send | awk '{print $11}' | sed -e 's/peer=//' -e 's/:.*//' | sort -n | uniq  | wc -l
74

Code:
~/.darkcoin$ grep 'version message' ~/.darkcoin/testnet3/debug.log | grep receive | awk '{print$6}' | sort -V | uniq
/Satoshi:0.9.2.2/:
/Satoshi:0.9.3.3/:
/Satoshi:0.9.3.4/:
/Satoshi:0.9.4.2/:
/Satoshi:0.9.4.11/:
/Satoshi:0.9.5.8/:
/Satoshi:0.9.5.9/:
/Satoshi:0.9.5.11/:
/Satoshi:0.10.7.3/:
/Satoshi:0.10.8.2/:
/Satoshi:0.10.8.11/:
/Satoshi:0.10.9.4/:
/Satoshi:0.10.9.9/:
/Satoshi:0.10.9.10/:
/Satoshi:0.10.9/:

So my masternode has seen connection attempts from 74 distinct IPs with 15 different client version strings during the last 27 hours - I wonder who is running a rather ancient v0.9.2.2 :smile:
 
Code:
~/.darkcoin$  grep 'version message' testnet3/debug.log | grep send | awk '{print $11}' | sed -e 's/peer=//' -e 's/:.*//' | sort -n | uniq  | wc -l
35
Code:
~/.darkcoin$ grep 'version message' ~/.darkcoin/testnet3/debug.log | grep receive | awk '{print$6}' | sort -V | uniq
/Satoshi:0.9.2.2/:
/Satoshi:0.9.4.2/:
/Satoshi:0.9.4.11/:
/Satoshi:0.9.5.3/:
/Satoshi:0.9.5.8/:
/Satoshi:0.9.5.9/:
/Satoshi:0.10.7.3/:
/Satoshi:0.10.8.2/:
/Satoshi:0.10.9.4/:
/Satoshi:0.10.9.6/:
/Satoshi:0.10.9.7/:
/Satoshi:0.10.9.8/:
/Satoshi:0.10.9.9/:
/Satoshi:0.10.9.10/:
 
The masternode count looks bogus to me: from 18 to 71 in 3 minutes, whereas the overall connection count is more or less the same as before. We have seen this plateau behaviour several times now (unfortunately chaeplin reset the charts) - and I don't believe that this is the usual P2P ramp-up behaviour.

How likely is it to have 4 times more masternodes than client connections in the network?
My math lectures were some time ago, and I forgot how to calculate the average edge count per vertex in a P2P mesh where vertices have the default of 8 outgoing and 8 incoming connections. Can anyone figure out how realistic the 4:1 ratio is?

My three nodes have reduced ban times and allow 256 incoming connections, so I guess they will harvest any connection they can get over time. And I have never seen more than 50 connections on testnet (currently 22, 22 and 45).

Maybe it's possible to spoof the masternode discovery messages and poison the list with non-existent nodes? @Evan: What takes precedence for maintaining the local masternode list: the node-message broadcasts or the local lastSeen pinging?

I did some further investigation, and my claim stands: the masternode list is bogus.

I ran a script using tcpping/tcptraceroute against the list of active IPs to check whether port 19999 is open on those machines. The results are as follows:

Code:
pinging 23.20.78.184 -->seq 0: no response (timeout)
pinging 23.22.13.31 -->seq 0: no response (timeout)
pinging 23.23.55.5 -->seq 0: no response (timeout)
pinging 23.242.106.27 -->seq 0: tcp response from cpe-23-242-106-27.socal.res.rr.com (23.242.106.27) [open]  102.207 ms
pinging 37.187.47.129 -->seq 0: tcp response from 129.ip-37-187-47.eu (37.187.47.129) [open]  80.862 ms
pinging 37.187.216.11 -->seq 0: tcp response from 11.ip-37-187-216.eu (37.187.216.11) [closed]  86.208 ms
pinging 50.16.10.185 -->seq 0: no response (timeout)
pinging 50.16.13.19 -->seq 0: no response (timeout)
pinging 50.16.159.173 -->seq 0: no response (timeout)
pinging 50.17.9.225 -->seq 0: no response (timeout)
pinging 50.17.56.71 -->seq 0: no response (timeout)
pinging 50.17.76.216 -->seq 0: no response (timeout)
pinging 54.76.47.232 -->seq 0: no response (timeout)
pinging 54.80.118.25 -->seq 0: no response (timeout)
pinging 54.80.186.130 -->seq 0: no response (timeout)
pinging 54.80.217.163 -->seq 0: no response (timeout)
pinging 54.80.221.149 -->seq 0: no response (timeout)
pinging 54.80.248.174 -->seq 0: no response (timeout)
pinging 54.81.150.16 -->seq 0: no response (timeout)
pinging 54.81.244.239 -->seq 0: no response (timeout)
pinging 54.82.237.183 -->seq 0: no response (timeout)
pinging 54.83.151.118 -->seq 0: no response (timeout)
pinging 54.86.103.191 -->seq 0: tcp response from ec2-54-86-103-191.compute-1.amazonaws.com (54.86.103.191) [open]  16.433 ms
pinging 54.87.121.203 -->seq 0: no response (timeout)
pinging 54.197.213.80 -->seq 0: no response (timeout)
pinging 54.198.109.108 -->seq 0: no response (timeout)
pinging 54.198.145.174 -->seq 0: no response (timeout)
pinging 54.198.191.99 -->seq 0: no response (timeout)
pinging 54.198.252.150 -->seq 0: no response (timeout)
pinging 54.203.217.224 -->seq 0: no response (timeout)
pinging 54.211.202.218 -->seq 0: no response (timeout)
pinging 54.224.28.113 -->seq 0: no response (timeout)
pinging 54.224.75.195 -->seq 0: no response (timeout)
pinging 54.224.75.199 -->seq 0: no response (timeout)
pinging 54.225.6.1 -->seq 0: no response (timeout)
pinging 54.226.159.60 -->seq 0: no response (timeout)
pinging 54.227.104.145 -->seq 0: no response (timeout)
pinging 54.227.119.61 -->seq 0: no response (timeout)
pinging 54.234.251.170 -->seq 0: no response (timeout)
pinging 54.237.191.208 -->seq 0: no response (timeout)
pinging 54.242.110.149 -->seq 0: no response (timeout)
pinging 54.242.139.236 -->seq 0: no response (timeout)
pinging 54.242.152.165 -->seq 0: no response (timeout)
pinging 54.255.159.230 -->seq 0: no response (timeout)
pinging 84.25.161.117 -->seq 0: tcp response from 5419A175.cm-5-2c.dynamic.ziggo.nl (84.25.161.117) [open]  97.837 ms
pinging 93.188.161.89 -->seq 0: tcp response from 93.188.161.89 [open]  18.385 ms
pinging 104.33.210.231 -->seq 0: tcp response from cpe-104-33-210-231.socal.res.rr.com (104.33.210.231) [open]  97.893 ms
pinging 107.20.4.185 -->seq 0: no response (timeout)
pinging 107.20.28.12 -->seq 0: no response (timeout)
pinging 107.20.38.255 -->seq 0: no response (timeout)
pinging 107.20.122.204 -->seq 0: no response (timeout)
pinging 107.21.138.152 -->seq 0: no response (timeout)
pinging 107.22.42.148 -->seq 0: no response (timeout)
pinging 108.61.199.47 -->seq 0: tcp response from 108.61.199.47.vultr.com (108.61.199.47) [open]  83.854 ms
pinging 184.73.34.180 -->seq 0: no response (timeout)
pinging 184.73.48.127 -->seq 0: no response (timeout)
pinging 184.73.179.148 -->seq 0: tcp response from ec2-184-73-179-148.compute-1.amazonaws.com (184.73.179.148) [open]  1.544 ms
pinging 184.73.179.187 -->seq 0: tcp response from ec2-184-73-179-187.compute-1.amazonaws.com (184.73.179.187) [open]  61.965 ms
pinging 184.73.179.196 -->seq 0: tcp response from ec2-184-73-179-196.compute-1.amazonaws.com (184.73.179.196) [open]  29.792 ms
pinging 188.226.133.22 -->seq 0: tcp response from 188.226.133.22 [closed]  93.537 ms
pinging 204.236.211.102 -->seq 0: no response (timeout)

You will notice that most IPs are not reachable on port 19999; only a meagre remainder of 12(!) IPs even responds on that port.

I don't know how the masses of dead IPs got onto the list or why they have stayed there for almost 20 hours, since they do not provide any service to the network, but my fear is that they take part in the voting too. The code that determines whether a node is alive and providing Darksend service needs to be reworked - proof of service needs to be PROOF of service!

If somebody wants to verify my findings:

Code:
$ sudo apt-get install tcptraceroute
$ wget http://www.vdberg.org/~richard/tcpping
$ chmod 755 tcpping
$ ./tcpping -w1 -x 1 184.73.179.148 19999

http://xmodulo.com/2013/01/how-to-install-tcpping-on-linux.html
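
And if you want to run the check against the whole list in one go, a wrapper along these lines should do - a sketch only, assuming "masternode list" prints ip:port entries (the output format may differ between RC builds):

Code:
# hypothetical wrapper: tcpping every IP from the local masternode list
./darkcoind -testnet masternode list \
  | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' \
  | sort -u \
  | while read ip; do
      echo -n "pinging $ip --> "
      ./tcpping -w1 -x 1 "$ip" 19999 | tail -n1
    done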

EDIT: I ran my script against mainnet as well - only 173 out of 355 (49%) masternodes have port 9999 open, the rest are dead :sad: please prove me wrong...

Code:
pinging 5.254.116.232 --> seq 0: tcp response from lh25767.voxility.net (5.254.116.232) [closed]  131.646 ms
pinging 23.20.8.141 --> seq 0: no response (timeout)
pinging 23.20.40.89 --> seq 0: no response (timeout)
pinging 23.20.86.159 --> seq 0: no response (timeout)
pinging 23.20.120.177 --> seq 0: no response (timeout)
pinging 23.20.183.122 --> seq 0: no response (timeout)
pinging 23.20.251.111 --> seq 0: no response (timeout)
pinging 23.98.71.17 --> seq 0: no response (timeout)
pinging 37.59.168.129 --> seq 0: tcp response from masternode.darkcoin.fr (37.59.168.129) [open]  84.389 ms
pinging 37.187.43.227 --> seq 0: tcp response from 227.ip-37-187-43.eu (37.187.43.227) [open]  80.885 ms
pinging 37.187.47.129 --> seq 0: tcp response from 129.ip-37-187-47.eu (37.187.47.129) [open]  80.621 ms
pinging 37.187.244.244 --> seq 0: tcp response from 244.ip-37-187-244.eu (37.187.244.244) [open]  93.088 ms
pinging 37.187.244.247 --> seq 0: tcp response from 247.ip-37-187-244.eu (37.187.244.247) [open]  90.011 ms
pinging 46.22.128.38 --> seq 0: tcp response from 128-38.colo.sta.blacknight.ie (46.22.128.38) [open]  92.735 ms
pinging 46.22.128.38 --> seq 0: tcp response from 128-38.colo.sta.blacknight.ie (46.22.128.38) [open]  95.496 ms
pinging 46.28.206.33 --> seq 0: tcp response from 46.28.206.33 [closed]  100.029 ms
pinging 46.240.170.28 --> seq 0: tcp response from 46.240.170.28 [open]  126.888 ms
pinging 50.16.136.92 --> seq 0: no response (timeout)
pinging 50.16.161.222 --> seq 0: no response (timeout)
pinging 50.17.41.33 --> seq 0: no response (timeout)
pinging 50.17.52.166 --> seq 0: no response (timeout)
pinging 50.17.176.215 --> seq 0: no response (timeout)
pinging 50.19.68.175 --> seq 0: no response (timeout)
[...]
 
***** PLEASE UPDATE TO 9.5.11 OR 10.9.11 ******

Yesterday I evaluated the checkpoint enforcing mechanism for detecting abuse of the voting system and decided it didn't really work very well. I didn't like the way the network was acting and we were getting lots of orphans. It worked but it would have created a bunch of problems on mainnet. So I stayed up most of the night hunting down the underlying problems with the masternode forking issues and found them! After that I went back and removed the whole checkpointing system.

Binaries (stable)
http://www.darkcoin.io/downloads/forkfix/darkcoin-qt
http://www.darkcoin.io/downloads/forkfix/darkcoind

RC3 Binaries ( masternodes )
http://www.darkcoin.io/downloads/rc3/darkcoin-qt
http://www.darkcoin.io/downloads/rc3/darkcoind
 
I did some further investigation, and my claim stands: the masternode list is bogus.

I ran a script using tcpping/tcptraceroute against the list of active IPs to check whether port 19999 is open on those machines. Most IPs are not reachable on port 19999; only a meagre remainder of 12(!) IPs even responds on that port. [...]

EDIT: I ran my script against mainnet as well - only 173 out of 355 (49%) masternodes have port 9999 open, the rest are dead :sad: please prove me wrong...

The list isn't really bogus, those were EC2 instances with port 9999 blocked. I haven't actually gotten around to implementing the true proof-of-service part of the masternode system yet. Maybe I can fit that into RC4? Currently, if a client can't connect it just skips them and moves on to the next one, so it's not a huge deal.

The mainnet version of the software has the ghost masternode problem, so 70+% of that list is not really there.
 
So I stayed up most of the night hunting down the underlying problems with the masternode forking issues and found them! After that I went back and removed the whole checkpointing system.

Nice work Evan! Hunting the forking bug down instead of introducing a system that "masks" the problem is the better approach :smile:
 
The list isn't really bogus, those were EC2 instances with port 9999 blocked. I haven't actually gotten around to implementing the true proof-of-service part of the masternode system yet.

But how is a masternode able to be placed on the list with port 19999/9999 blocked? No client can connect to it, neither for blockchain download nor for Darksend service. Were the ports closed after startup?

Maybe I can fit that into RC4? Currently, if a client can't connect it just skips them and moves on to the next one, so it's not a huge deal.

Let me first check whether the dead/blocked IPs received payments/votes - if we ensure that payments only go to clients that are connectable on port 9999/19999, real proof of service can safely be pushed to RC4.

The mainnet version of the software has the ghost masternode problem, so 70+% of that list is not really there.
Yeah, I remember that.
 
But how is a masternode able to be placed on the list with port 19999/9999 blocked? No client can connect to it, neither for blockchain download nor for Darksend service. Were the ports closed after startup?
An outbound-connection-only client ><.
It's like a node behind NAT without port forwarding, or with listen=0.
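
For illustration - a daemon started like this never accepts inbound connections, yet it can still talk to the network through its outbound peers:

Code:
# hypothetical outbound-only testnet node: nobody can connect back to it
./darkcoind -testnet -listen=0 -daemon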
 
An outbound-connection-only client ><.
It's like a node behind NAT without port forwarding, or with listen=0.
Yeah, but why do the outbound-only clients all spike in at the same time (remember your graph showing the rise from 17 --> 68)? Did someone restart his dialup 40 times to get new IPs? And why does the TTL code not remove those nodes after 30 minutes?
 