
RC3: Hard Fork on June 20th!

Quick question. What about the multiple 1k config? I forget the name. Having 1 node with 2k doubling the vote count? Possible?
+1, wasn't that planned for the next release? That's more important than cold storage tbh :D
 
+1, wasn't that planned for the next release? That's more important than cold storage tbh :D

Isn't this debatable... wouldn't that help concentrate/centralize the MNs... wouldn't it be better to have more, smaller ones?
 
should be another one now... will add the rest in a few days.

I am getting a flood of messages:

2014-06-12 19:38:14 ProcessMessage(dsee, 209 bytes) FAILED
2014-06-12 19:38:14 ProcessMessage(dsee, 241 bytes) FAILED
2014-06-12 19:38:14 ProcessMessage(dsee, 241 bytes) FAILED
2014-06-12 19:38:14 ProcessMessage(dsee, 241 bytes) FAILED
2014-06-12 19:38:14 ProcessMessage(dsee, 241 bytes) FAILED
2014-06-12 19:38:14 ProcessMessage(dsee, 241 bytes) FAILED
2014-06-12 19:38:14 ProcessMessage(dsee, 241 bytes) FAILED
2014-06-12 19:38:14 ProcessMessage(dsee, 241 bytes) FAILED
2014-06-12 19:38:14 ProcessMessage(dsee, 241 bytes) FAILED
2014-06-12 19:38:14 ProcessMessage(dsee, 241 bytes) FAILED
2014-06-12 19:38:14 ProcessMessage(dsee, 241 bytes) FAILED
2014-06-12 19:38:14 ProcessMessage(dsee, 273 bytes) FAILED
2014-06-12 19:38:15 ProcessMessage(dsee, 241 bytes) FAILED
2014-06-12 19:38:15 ProcessMessage(dsee, 241 bytes) FAILED
2014-06-12 19:38:15 ProcessMessage(dsee, 241 bytes) FAILED
2014-06-12 19:38:15 ProcessMessage(dsee, 209 bytes) FAILED
 
Isn't this debatable... wouldn't that help concentrate/centralize the MNs... wouldn't it be better to have more, smaller ones?

Well, no, because they're not mixing coins. It just saves running multiple VPSes and multiple local wallets.
 
Damn you're quick! You're on top of my list for tips! As a Mac user, I would be totally lost without your efforts.
Not Found
The requested URL /downloads/DarkCoin-Qt-MacOSX-v0.9.10.0.zip was not found on this server.
-- Mirror works!

Hi yidakee, thanks a lot! Sorry, I did indeed post a link with a typo at first. Fixed now.
 
My question seems to have been missed, so I'm reposting it below... If you need further clarification on what I'm asking, just let me know. Thanks!

So with the hot/cold wallet support, will this allow me to run two local masternode wallets on the same PC, where each local masternode points to a different EC2 remote server? The PC only has one external IP address.
 
Well, no, because they're not mixing coins. It just saves running multiple VPSes and multiple local wallets.
See, I don't understand why they should be eligible to receive payments when they're not available to work. People with only enough coins for a single masternode have to pay for a server and have to do proof of work, while those who have many nodes' worth of coins only have to run a single server but collect the rewards of multiple servers without doing any work. It really doesn't make any sense to me.
 
My question seems to have been missed, so I'm reposting it below... If you need further clarification on what I'm asking, just let me know. Thanks!

So with the hot/cold wallet support, will this allow me to run two local masternode wallets on the same PC, where each local masternode points to a different EC2 remote server? The PC only has one external IP address.
Yes, if you run them cold. That means that once you successfully start the local masternode, it successfully hooks up with the remote one, and you get the message that you may now turn off your local wallet, you can take it down and store it on a jump drive or CD or something. But I haven't been able to do that successfully yet and am trying to figure out what I'm doing wrong. When I see the error of my ways, I will add it to my tutorial in the guides section :)

But you will still need to run each remote on its own IP address.
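
For reference, here's a minimal hot/cold sketch for one local/remote pair, assuming the RC3-era options from the setup guides (the key and IP are placeholders, and a second local wallet on the same PC would need its own datadir and RPC port):

Code:
# remote VPS darkcoin.conf (one VPS/IP per masternode)
server=1
daemon=1
masternode=1
masternodeprivkey=<output of "darkcoind masternode genkey">

# local wallet darkcoin.conf (holds the 1000 DRK vin)
masternode=1
masternodeaddr=<remote-ip>:9999
masternodeprivkey=<same key as the remote>

Start the remote daemon first, then run "darkcoind masternode start <passphrase>" against the local wallet; once the remote reports itself as started, the local wallet can go offline as described above.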
 
See, I don't understand why they should be eligible to receive payments when they're not available to work. People with only enough coins for a single masternode have to pay for a server and have to do proof of work, while those who have many nodes' worth of coins only have to run a single server but collect the rewards of multiple servers without doing any work. It really doesn't make any sense to me.

Well spotted. Accumulating A x 1000 DRK in one masternode is not proof of service but proof of stake. So my take on this: don't change the fundamentals; keep the 1000 DRK vin per service forever.
 
I do have to say that the reason this was thought up was that Evan thought we'd have more than 10,000 masternodes and that so many would start to clog up the system. But that has not been the case. I believe the highest numbers we had were in the 800 MN range. I honestly don't think we're going to have too many masternodes, but over time we may have too few. Not that I want to make it easier to have a masternode, but I think in the future people will be pooling their resources to open masternodes, as they already are. I do worry about this as well, and hope those providing this service are reputable and will create a system where people can be sure their coins are safe.

Anyway, I just don't see the need to pander to the wealthiest amongst us. It doesn't seem right. And I'm not really talking about being jealous, but about how people will feel about the system: will it alienate our supporters? Well, that's how I feel about it at this time, from what I've seen happen since last month. I honestly think that masternodes will find their balance, and that balance will be far lower than the number of MNs required to clog up the traffic.
 
But 100% of these 10 are operational :grin:

Masternode network is gaining traction: 49 nodes, 100% operational

Code:
pinging 46.22.128.38 -->seq 0: tcp response from 128-38.colo.sta.blacknight.ie (46.22.128.38) [open]  95.576 ms
pinging 54.76.111.171 -->seq 0: tcp response from ec2-54-76-111-171.eu-west-1.compute.amazonaws.com (54.76.111.171) [open]  80.042 ms
pinging 37.187.47.129 -->seq 0: tcp response from 129.ip-37-187-47.eu (37.187.47.129) [open]  83.692 ms
pinging 107.170.139.43 -->seq 0: tcp response from 107.170.139.43 [open]  9.114 ms
pinging 128.199.213.156 -->seq 0: tcp response from 128.199.213.156 [open]  249.336 ms
pinging 188.226.195.27 -->seq 0: tcp response from 188.226.195.27 [open]  95.545 ms
pinging 188.226.247.114 -->seq 0: tcp response from 188.226.247.114 [open]  91.560 ms
pinging 54.84.135.47 -->seq 0: tcp response from ec2-54-84-135-47.compute-1.amazonaws.com (54.84.135.47) [open]  1.278 ms
pinging 54.72.17.216 -->seq 0: tcp response from ec2-54-72-17-216.eu-west-1.compute.amazonaws.com (54.72.17.216) [open]  84.841 ms
pinging 162.248.5.147 -->seq 0: tcp response from 162.248.5.147 [open]  66.866 ms
pinging 91.214.169.126 -->seq 0: tcp response from nxtpeer.vserver.softronics.ch (91.214.169.126) [open]  116.461 ms
pinging 192.99.184.42 -->seq 0: tcp response from drk01.flipsidehobbies.com (192.99.184.42) [open]  15.601 ms
pinging 192.99.184.43 -->seq 0: tcp response from 192.99.184.43 [open]  15.550 ms
pinging 192.99.184.44 -->seq 0: tcp response from 192.99.184.44 [open]  15.667 ms
pinging 192.99.184.45 -->seq 0: tcp response from 192.99.184.45 [open]  15.613 ms
pinging 192.99.184.46 -->seq 0: tcp response from 192.99.184.46 [open]  15.601 ms
pinging 192.99.184.47 -->seq 0: tcp response from 192.99.184.47 [open]  15.816 ms
pinging 192.99.184.48 -->seq 0: tcp response from 192.99.184.48 [open]  15.832 ms
pinging 192.99.184.49 -->seq 0: tcp response from 192.99.184.49 [open]  15.468 ms
pinging 192.99.184.50 -->seq 0: tcp response from 192.99.184.50 [open]  15.550 ms
pinging 192.99.184.51 -->seq 0: tcp response from 192.99.184.51 [open]  15.701 ms
pinging 192.99.184.52 -->seq 0: tcp response from 192.99.184.52 [open]  15.762 ms
pinging 192.99.184.53 -->seq 0: tcp response from 192.99.184.53 [open]  15.561 ms
pinging 192.99.184.54 -->seq 0: tcp response from 192.99.184.54 [open]  15.630 ms
pinging 192.99.184.55 -->seq 0: tcp response from 192.99.184.55 [open]  16.339 ms
pinging 192.99.184.56 -->seq 0: tcp response from 192.99.184.56 [open]  15.681 ms
pinging 192.99.184.57 -->seq 0: tcp response from 192.99.184.57 [open]  15.804 ms
pinging 192.99.184.58 -->seq 0: tcp response from 192.99.184.58 [open]  15.813 ms
pinging 192.99.184.59 -->seq 0: tcp response from 192.99.184.59 [open]  15.984 ms
pinging 54.184.62.75 -->seq 0: tcp response from ec2-54-184-62-75.us-west-2.compute.amazonaws.com (54.184.62.75) [open]  80.728 ms
pinging 192.99.184.60 -->seq 0: tcp response from 192.99.184.60 [open]  15.754 ms
pinging 192.99.184.61 -->seq 0: tcp response from 192.99.184.61 [open]  15.815 ms
pinging 192.99.184.62 -->seq 0: tcp response from 192.99.184.62 [open]  15.786 ms
pinging 192.99.184.63 -->seq 0: tcp response from 192.99.184.63 [open]  15.859 ms
pinging 162.243.66.24 -->seq 0: tcp response from drk02.cryptomix.net (162.243.66.24) [open]  9.189 ms
pinging 54.86.15.235 -->seq 0: tcp response from ec2-54-86-15-235.compute-1.amazonaws.com (54.86.15.235) [open]  1.427 ms
pinging 54.200.3.190 -->seq 0: tcp response from ec2-54-200-3-190.us-west-2.compute.amazonaws.com (54.200.3.190) [open]  78.052 ms
pinging 54.178.168.241 -->seq 0: tcp response from ec2-54-178-168-241.ap-northeast-1.compute.amazonaws.com (54.178.168.241) [open]  161.675 ms
pinging 188.226.252.28 -->seq 0: tcp response from 188.226.252.28 [open]  87.999 ms
pinging 98.101.247.254 -->seq 0: tcp response from rrcs-98-101-247-254.midsouth.biz.rr.com (98.101.247.254) [open]  23.684 ms
pinging 54.186.36.157 -->seq 0: tcp response from ec2-54-186-36-157.us-west-2.compute.amazonaws.com (54.186.36.157) [open]  76.652 ms
pinging 162.243.76.23 -->seq 0: tcp response from 162.243.76.23 [open]  8.690 ms
pinging 54.244.160.108 -->seq 0: tcp response from ec2-54-244-160-108.us-west-2.compute.amazonaws.com (54.244.160.108) [open]  80.881 ms
pinging 54.244.144.14 -->seq 0: tcp response from ec2-54-244-144-14.us-west-2.compute.amazonaws.com (54.244.144.14) [open]  77.085 ms
pinging 54.203.5.20 -->seq 0: tcp response from ec2-54-203-5-20.us-west-2.compute.amazonaws.com (54.203.5.20) [open]  75.634 ms
pinging 54.202.66.227 -->seq 0: tcp response from ec2-54-202-66-227.us-west-2.compute.amazonaws.com (54.202.66.227) [open]  79.537 ms
pinging 54.188.8.228 -->seq 0: tcp response from ec2-54-188-8-228.us-west-2.compute.amazonaws.com (54.188.8.228) [open]  84.053 ms
pinging 162.243.219.25 -->seq 0: tcp response from 162.243.219.25 [open]  9.139 ms
pinging 54.184.167.145 -->seq 0: tcp response from ec2-54-184-167-145.us-west-2.compute.amazonaws.com (54.184.167.145) [open]  87.638 ms

Good!

I do have to say that the reason this was thought up was that Evan thought we'd have more than 10,000 masternodes and that so many would start to clog up the system. But that has not been the case. I believe the highest numbers we had were in the 800 MN range. I honestly don't think we're going to have too many masternodes, but over time we may have too few.

My last measurements of mainnet RC2 revealed 170 listening nodes. Taking into account that some nodes may have had blocked ports, I expect the node count to be around 200 by June 20th.

The 800 MN figure was not a correct count, due to flaws in the RC2 masternode protocol.

See, I don't understand why they should be eligible to receive payments when they're not available to work. People with only enough coins for a single masternode have to pay for a server and have to do proof of work, while those who have many nodes' worth of coins only have to run a single server but collect the rewards of multiple servers without doing any work. It really doesn't make any sense to me.

You may have noticed from the above ping log that there is someone with a rig of 20 masternodes (192.99.184.*), so he is providing real service to the network. Now imagine how it would harm the network if he accumulated 20,000 DRK into just one masternode...
 
****** ATTENTION: POOL OPERATORS ****************************

You MUST update your pool software to pay out the correct amount or
your blocks will be rejected by the network.

Stratum Users:

https://github.com/darkcoinproject/darkcoin-stratum/commit/1aa9317eb1612e290d9dad232744a1cda844471a

NOMP Users:

https://github.com/darkcoinproject/node-stratum-pool/commit/c37103907007d517650cb61360826b0112895cc5
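
For operators on custom stacks, the gist of both linked commits is the same; a conceptual sketch only (the exact field names come from the commits, and the 20% figure is the masternode share as announced for RC3):

Code:
# build the coinbase with two outputs instead of one:
#   masternode payee (advertised by the updated daemon) : 20% of the block reward
#   pool address                                        : the remaining 80%
# blocks that omit or underpay the masternode output will be rejected after the fork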


About pool operators: we use NOMP with MPOS support, https://github.com/zone117x/node-open-mining-portal
These changes do not apply to our pool.
 
See, I don't understand why they should be eligible to receive payments when they're not available to work. People with only enough coins for a single masternode have to pay for a server and have to do proof of work, while those who have many nodes' worth of coins only have to run a single server but collect the rewards of multiple servers without doing any work. It really doesn't make any sense to me.

Well... yeah, it does. In ecological terms it's even better for the environment. If you have one node, great. If you have 50, why run 50 independent servers? The only answer is to prevent a DDoS attack, and that would be really bad for someone doing, say, 50k x 1: the server operator would lose all 50 earnings at once. But why not have 20k + 20k + 5k + 5k and just manage 4 servers?

And let's not forget, Tante, we're on the free EC2 tier. In one year's time they'll start costing us, multiplied by the number of nodes.
Imagine paying for, and doing maintenance on, all 50, one at a time. Sure, more work, more rewards... Here it's less work, much higher risk, same reward.

If setting up a masternode were super complicated, or required actual daily network monitoring... then yes, I'd totally agree; I do get where you're coming from. Other than that, with a few dozen nodes the network is secure, and we already have hundreds.

I don't see anything unfair about it; maybe I'm missing something?
 
Hi,
I am updating my masternodes and I noticed that my first one is now listed as "0". I used the cold storage local/remote method (I started the masternode this morning, and it was listed as "1" for a while).
Is this normal? Will it show as "1" (active) again given some time?
I read in the testing thread that it can sometimes take several hours. Just wondering if that's my "problem".
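
A quick way to check, assuming RC3's "masternode list" output of "ip:port" : 0/1 entries (the IP is a placeholder):

Code:
darkcoind masternode list | grep "<your-remote-ip>"
# "x.x.x.x:9999" : 1  -> currently seen as active
# "x.x.x.x:9999" : 0  -> not (or not yet) seen as active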
 
1. The list is up to 82 nodes now :)

2. I have asked this before, gonna try again. I know I am not as good as most of you here... so:
How do I determine my public key that is used in the voting? I know in the code it is:
mv.GetPubKey().ToString().c_str()

My question is: how can I figure out what mine is? Is there an RPC command to list it?
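
One avenue that may work, since darkcoind inherits bitcoind's RPC set: "validateaddress" returns a "pubkey" field for addresses in your own wallet, which should correspond to what GetPubKey() serializes (the address is a placeholder):

Code:
darkcoind validateaddress <your-masternode-address>
# the JSON reply includes "ismine" and, for your own addresses, a "pubkey" field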
 