RC3: Hard Fork on June 20th!

TanteStefana

Grizzled Member
Foundation Member
Mar 9, 2014
2,876
1,866
1,283
Well, no, because they're not mixing coins. It just saves multiple VPS and multiple local wallets.
See, I don't understand why they should be eligible to receive payments when they're not available to work. People with only enough coins for a single masternode have to pay for a server and do proof of work, while those with many nodes' worth of coins only have to run a single server yet collect the rewards of multiple servers without doing the work. It really doesn't make sense to me.
 

TanteStefana

Grizzled Member
Foundation Member
Mar 9, 2014
2,876
1,866
1,283
My question seems to have been missed, so I'm reposting it below. If you need further clarification on what I'm asking, just let me know. Thanks!

So with the hot/cold wallet support, will this allow me to run two local-wallet masternodes on the same PC, with each local masternode pointing to a different EC2 remote server? The PC only has one external IP address.
Yes, if you run them cold. That means that once you successfully start the local masternode, it successfully hooks up with the remote one, and you get the message that you may now turn off your local wallet, you can take it down and store it on a jump drive or CD or something. But I haven't been able to do that successfully yet and am trying to figure out what I'm doing wrong. When I see the error of my ways, I will add it to my tutorial in the guides section :)

But you will still need to run the remote on its own IP address.
 

flare

Grizzled Member
May 18, 2014
2,286
2,404
1,183
Germany
See, I don't understand why they should be eligible to receive payments when they're not available to work. People with only enough coins for a single masternode have to pay for a server and do proof of work, while those with many nodes' worth of coins only have to run a single server yet collect the rewards of multiple servers without doing the work. It really doesn't make sense to me.
Well spotted. Accumulating A x 1000 DRK in one masternode is not proof of service but proof of stake. So my take on this: don't change the fundamentals; keep the 1000 DRK vin per service forever.
 
  • Like
Reactions: jimbit

TanteStefana

Grizzled Member
Foundation Member
Mar 9, 2014
2,876
1,866
1,283
I do have to say that the reason this was thought up was because Evan thought we'd have more than 10,000 masternodes and that so many would start to clog up the system. But that has not been the case. I believe the highest number we had was in the 800 MN range. I honestly don't think we're going to have too many masternodes, but over time we may have too few. Not that I want to make it easier to have a masternode; I think in the future people will be pooling their resources to open masternodes, as they already are. I do worry about this as well, and hope those providing this service are reputable and will create a system where people can be sure their coins are safe.

Anyway, I just don't see the need to pander to the wealthiest amongst us. It doesn't seem right. And I'm not really talking about jealousy, but about how people will feel about the system: will it alienate our supporters? Well, that's how I feel about it at this time, from what I've seen happen since last month. I honestly think that masternodes will find their balance, and that balance will be far lower than the number of MNs required to clog up the traffic.
 

flare

Grizzled Member
May 18, 2014
2,286
2,404
1,183
Germany
But 100% of these 10 are operational :D
Masternode network is gaining traction: 49 nodes, 100% operational

Code:
pinging 46.22.128.38 -->seq 0: tcp response from 128-38.colo.sta.blacknight.ie (46.22.128.38) [open]  95.576 ms
pinging 54.76.111.171 -->seq 0: tcp response from ec2-54-76-111-171.eu-west-1.compute.amazonaws.com (54.76.111.171) [open]  80.042 ms
pinging 37.187.47.129 -->seq 0: tcp response from 129.ip-37-187-47.eu (37.187.47.129) [open]  83.692 ms
pinging 107.170.139.43 -->seq 0: tcp response from 107.170.139.43 [open]  9.114 ms
pinging 128.199.213.156 -->seq 0: tcp response from 128.199.213.156 [open]  249.336 ms
pinging 188.226.195.27 -->seq 0: tcp response from 188.226.195.27 [open]  95.545 ms
pinging 188.226.247.114 -->seq 0: tcp response from 188.226.247.114 [open]  91.560 ms
pinging 54.84.135.47 -->seq 0: tcp response from ec2-54-84-135-47.compute-1.amazonaws.com (54.84.135.47) [open]  1.278 ms
pinging 54.72.17.216 -->seq 0: tcp response from ec2-54-72-17-216.eu-west-1.compute.amazonaws.com (54.72.17.216) [open]  84.841 ms
pinging 162.248.5.147 -->seq 0: tcp response from 162.248.5.147 [open]  66.866 ms
pinging 91.214.169.126 -->seq 0: tcp response from nxtpeer.vserver.softronics.ch (91.214.169.126) [open]  116.461 ms
pinging 192.99.184.42 -->seq 0: tcp response from drk01.flipsidehobbies.com (192.99.184.42) [open]  15.601 ms
pinging 192.99.184.43 -->seq 0: tcp response from 192.99.184.43 [open]  15.550 ms
pinging 192.99.184.44 -->seq 0: tcp response from 192.99.184.44 [open]  15.667 ms
pinging 192.99.184.45 -->seq 0: tcp response from 192.99.184.45 [open]  15.613 ms
pinging 192.99.184.46 -->seq 0: tcp response from 192.99.184.46 [open]  15.601 ms
pinging 192.99.184.47 -->seq 0: tcp response from 192.99.184.47 [open]  15.816 ms
pinging 192.99.184.48 -->seq 0: tcp response from 192.99.184.48 [open]  15.832 ms
pinging 192.99.184.49 -->seq 0: tcp response from 192.99.184.49 [open]  15.468 ms
pinging 192.99.184.50 -->seq 0: tcp response from 192.99.184.50 [open]  15.550 ms
pinging 192.99.184.51 -->seq 0: tcp response from 192.99.184.51 [open]  15.701 ms
pinging 192.99.184.52 -->seq 0: tcp response from 192.99.184.52 [open]  15.762 ms
pinging 192.99.184.53 -->seq 0: tcp response from 192.99.184.53 [open]  15.561 ms
pinging 192.99.184.54 -->seq 0: tcp response from 192.99.184.54 [open]  15.630 ms
pinging 192.99.184.55 -->seq 0: tcp response from 192.99.184.55 [open]  16.339 ms
pinging 192.99.184.56 -->seq 0: tcp response from 192.99.184.56 [open]  15.681 ms
pinging 192.99.184.57 -->seq 0: tcp response from 192.99.184.57 [open]  15.804 ms
pinging 192.99.184.58 -->seq 0: tcp response from 192.99.184.58 [open]  15.813 ms
pinging 192.99.184.59 -->seq 0: tcp response from 192.99.184.59 [open]  15.984 ms
pinging 54.184.62.75 -->seq 0: tcp response from ec2-54-184-62-75.us-west-2.compute.amazonaws.com (54.184.62.75) [open]  80.728 ms
pinging 192.99.184.60 -->seq 0: tcp response from 192.99.184.60 [open]  15.754 ms
pinging 192.99.184.61 -->seq 0: tcp response from 192.99.184.61 [open]  15.815 ms
pinging 192.99.184.62 -->seq 0: tcp response from 192.99.184.62 [open]  15.786 ms
pinging 192.99.184.63 -->seq 0: tcp response from 192.99.184.63 [open]  15.859 ms
pinging 162.243.66.24 -->seq 0: tcp response from drk02.cryptomix.net (162.243.66.24) [open]  9.189 ms
pinging 54.86.15.235 -->seq 0: tcp response from ec2-54-86-15-235.compute-1.amazonaws.com (54.86.15.235) [open]  1.427 ms
pinging 54.200.3.190 -->seq 0: tcp response from ec2-54-200-3-190.us-west-2.compute.amazonaws.com (54.200.3.190) [open]  78.052 ms
pinging 54.178.168.241 -->seq 0: tcp response from ec2-54-178-168-241.ap-northeast-1.compute.amazonaws.com (54.178.168.241) [open]  161.675 ms
pinging 188.226.252.28 -->seq 0: tcp response from 188.226.252.28 [open]  87.999 ms
pinging 98.101.247.254 -->seq 0: tcp response from rrcs-98-101-247-254.midsouth.biz.rr.com (98.101.247.254) [open]  23.684 ms
pinging 54.186.36.157 -->seq 0: tcp response from ec2-54-186-36-157.us-west-2.compute.amazonaws.com (54.186.36.157) [open]  76.652 ms
pinging 162.243.76.23 -->seq 0: tcp response from 162.243.76.23 [open]  8.690 ms
pinging 54.244.160.108 -->seq 0: tcp response from ec2-54-244-160-108.us-west-2.compute.amazonaws.com (54.244.160.108) [open]  80.881 ms
pinging 54.244.144.14 -->seq 0: tcp response from ec2-54-244-144-14.us-west-2.compute.amazonaws.com (54.244.144.14) [open]  77.085 ms
pinging 54.203.5.20 -->seq 0: tcp response from ec2-54-203-5-20.us-west-2.compute.amazonaws.com (54.203.5.20) [open]  75.634 ms
pinging 54.202.66.227 -->seq 0: tcp response from ec2-54-202-66-227.us-west-2.compute.amazonaws.com (54.202.66.227) [open]  79.537 ms
pinging 54.188.8.228 -->seq 0: tcp response from ec2-54-188-8-228.us-west-2.compute.amazonaws.com (54.188.8.228) [open]  84.053 ms
pinging 162.243.219.25 -->seq 0: tcp response from 162.243.219.25 [open]  9.139 ms
pinging 54.184.167.145 -->seq 0: tcp response from ec2-54-184-167-145.us-west-2.compute.amazonaws.com (54.184.167.145) [open]  87.638 ms
Good!

I do have to say that the reason this was thought up was because Evan thought we'd have more than 10,000 masternodes and that so many would start to clog up the system. But that has not been the case. I believe the highest number we had was in the 800 MN range. I honestly don't think we're going to have too many masternodes, but over time we may have too few.
My last measurements of mainnet RC2 revealed that there were 170 listening nodes. Taking into account that some nodes may have had blocked ports, I expect the node count to be around 200 by June 20th.

The 800 MN figure was never a correct count, due to flaws in the RC2 masternode protocol.

See, I don't understand why they should be eligible to receive payments when they're not available to work. People with only enough coins for a single masternode have to pay for a server and do proof of work, while those with many nodes' worth of coins only have to run a single server yet collect the rewards of multiple servers without doing the work. It really doesn't make sense to me.
You may have noticed from the above ping log that there is someone with a rig of 20 masternodes (192.99.184.*), so he is providing real service to the network. Now imagine how it would harm the network if he accumulated 20,000 DRK into just one masternode....
 
Last edited by a moderator:

poolforall

New Member
Jun 13, 2014
1
0
1
****** ATTENTION: POOL OPERATORS ****************************

You MUST update your pool software to pay out the correct amount or
your blocks will be rejected by the network.

Stratum Users:

https://github.com/darkcoinproject/darkcoin-stratum/commit/1aa9317eb1612e290d9dad232744a1cda844471a

NOMP Users:

https://github.com/darkcoinproject/node-stratum-pool/commit/c37103907007d517650cb61360826b0112895cc5


About the pool operators notice: we use NOMP with MPOS support (https://github.com/zone117x/node-open-mining-portal), and these changes do not work for our pool.
 

yidakee

Well-known Member
Foundation Member
Apr 16, 2014
1,812
1,168
283
See, I don't understand why they should be eligible to receive payments when they're not available to work. People with only enough coins for a single masternode have to pay for a server and do proof of work, while those with many nodes' worth of coins only have to run a single server yet collect the rewards of multiple servers without doing the work. It really doesn't make sense to me.
Well... yeah, it does. In ecological terms it's even better for the environment. If you have one node, great. If you have 50, why run 50 independent servers? The only answer is to prevent a DDoS attack, and that would be really bad for someone doing, say, 50k x 1: the server operator would lose all 50 nodes' earnings at once. But why not have 20k + 20k + 5k + 5k and just manage 4 servers?

And let's not forget, Tante, we're on free-tier EC2. In one year's time, they'll start costing us! Multiplied by the number of nodes.
Imagine paying for, and doing maintenance on, all 50, one at a time. Sure, more work, more rewards... Here it's less work, much higher risk, same reward.

If setting up a masternode were super complicated, or required actual daily network monitoring, then yes, I'd totally agree; I do get where you're coming from. Other than that, with a few dozen nodes the network is secure, and we have hundreds already.

I don't see anything unfair about it; maybe I'm missing something?
 
Last edited by a moderator:

elbereth

Active Member
Dash Support Group
Mar 25, 2014
466
490
133
Costa Rica
dashninja.pl
Dash Address
XkfkHqMnhvQovo7kXQjvnNiFnQhRNZYCsz
Hi,
I am updating my masternodes, and I noticed that my first one is now listed as "0". I used the cold-storage local/remote method (I started the masternode this morning, and it was listed as "1" for a while).
Is this normal? Will it show as "1" (active) again given some time?
I read in the testing thread that it can sometimes take several hours. Just wondering if that's my "problem".
 

jimbit

Well-known Member
Foundation Member
May 23, 2014
229
103
203
1. The list is up to 82 nodes now :)

2. I have asked this before; gonna try again. I know I am not as good as most of you here, so...
How do I determine the public key that is used in the voting? I know in the code it is:
mv.GetPubKey().ToString().c_str()

My question is: how can I figure out what mine is? Is there an RPC command to list it?
 
Last edited by a moderator:

Wh1teKn1ght

New Member
May 11, 2014
32
5
8
Yes, if you run them cold. That means that once you successfully start the local masternode, it successfully hooks up with the remote one, and you get the message that you may now turn off your local wallet, you can take it down and store it on a jump drive or CD or something. But I haven't been able to do that successfully yet and am trying to figure out what I'm doing wrong. When I see the error of my ways, I will add it to my tutorial in the guides section :)

But you will still need to run the remote on its own IP address.
Thanks for the reply, Tante... I'll try setting them up over the weekend or early next week and see if I can get them going from the one external IP on my PC.
 

elbereth

Active Member
Dash Support Group
Mar 25, 2014
466
490
133
Costa Rica
dashninja.pl
Dash Address
XkfkHqMnhvQovo7kXQjvnNiFnQhRNZYCsz
OK, I now have 7 masternodes running; only 2 are "masternode start"ed, but the other daemons are running nonetheless, and all 7 daemons have a different masternode list... And they have all been running for at least 4 hours.
The 2 started masternodes are using the local/remote setup with cold storage for the 1000 DRK.
Is there something fishy going on? I don't want an all-hell-breaks-loose situation like with RC2 during the hard fork on 20/06...

Edit: Small example with a few IPs (when X the IP is not even listed in masternode list):
Code:
Masternode IP:Port   1    2    3    4    5    6    7
----------------------------------------------------
37.59.168.129:9999   1    0    0    0    0    0    0
37.187.47.129:9999   0    1    1    1    1    1    X
46.22.128.38:9999    0    1    1    1    1    1    X
46.240.170.28:9999   0    1    1    1    1    1    X
54.72.17.216:9999    1    1    1    1    1    1    1
54.72.196.78:9999    0    1    1    1    1    1    X
Edit 2: https://elbzo.net/masternodes.html
 
Last edited by a moderator:

mattmct

Member
Mar 13, 2014
259
92
88
Excited to see this update. Well done to all for their hard work!

I'm now updating my few masternodes. Has anyone confirmed that the cold wallets are working right?
 

mattmct

Member
Mar 13, 2014
259
92
88
OK, I now have 7 masternodes running; only 2 are "masternode start"ed, but the other daemons are running nonetheless, and all 7 daemons have a different masternode list... And they have all been running for at least 4 hours.
The 2 started masternodes are using the local/remote setup with cold storage for the 1000 DRK.
Is there something fishy going on? I don't want an all-hell-breaks-loose situation like with RC2 during the hard fork on 20/06...
I hope there is a logical answer to this. I remember seeing something similar happen the last time I ran my masternodes, a month or so ago. I'm updating mine now; I'll see if there is a similar situation on my few.

Update: I just updated two of my masternodes; both seem to be on the same list, and the count matches what is shown on the masternode webpage.
 
Last edited by a moderator:

ScioMind

Member
May 28, 2014
183
73
88

HammerHedd

Member
Mar 10, 2014
182
34
88
I do have to say that the reason this was thought up was because Evan thought we'd have more than 10,000 masternodes and that so many would start to clog up the system. But that has not been the case. I believe the highest number we had was in the 800 MN range. I honestly don't think we're going to have too many masternodes, but over time we may have too few. Not that I want to make it easier to have a masternode; I think in the future people will be pooling their resources to open masternodes, as they already are. I do worry about this as well, and hope those providing this service are reputable and will create a system where people can be sure their coins are safe.

Anyway, I just don't see the need to pander to the wealthiest amongst us. It doesn't seem right. And I'm not really talking about jealousy, but about how people will feel about the system: will it alienate our supporters? Well, that's how I feel about it at this time, from what I've seen happen since last month. I honestly think that masternodes will find their balance, and that balance will be far lower than the number of MNs required to clog up the traffic.
I'm a big fan of intelligent networks, so what if the code itself only allowed multiple nodes after a certain threshold was reached, like 5000 if that is a reasonable limit? That way the network itself would adapt to its own needs.
 

TanteStefana

Grizzled Member
Foundation Member
Mar 9, 2014
2,876
1,866
1,283
My last measurements of mainnet RC2 revealed that there were 170 listening nodes. Taking into account that some nodes may have had blocked ports, I expect the node count to be around 200 by June 20th.

The 800 MN figure was never a correct count, due to flaws in the RC2 masternode protocol.

You may have noticed from the above ping log that there is someone with a rig of 20 masternodes (192.99.184.*), so he is providing real service to the network. Now imagine how it would harm the network if he accumulated 20,000 DRK into just one masternode....
Exactly, and that's why I think it should stay that way. We don't have as many nodes as we thought we would, and I suspect that running a node will not be for everyone. When it all settles out, we probably won't have enough masternodes to remotely "clog up" communications. It would be far better for everyone to have their masternodes working, IMHO.
.....cut........
And let's not forget, Tante, we're on free-tier EC2. In one year's time, they'll start costing us! Multiplied by the number of nodes.
Imagine paying for, and doing maintenance on, all 50, one at a time. Sure, more work, more rewards... Here it's less work, much higher risk, same reward.
If setting up a masternode were super complicated, or required actual daily network monitoring, then yes, I'd totally agree; I do get where you're coming from. Other than that, with a few dozen nodes the network is secure, and we have hundreds already.
I don't see anything unfair about it; maybe I'm missing something?
Well, actually, I'm already paying for my servers, but that's beside the point. I think we should have at least a thousand nodes actually working to keep the network healthy. If we really only have a couple of hundred, I would find that alarming if it didn't grow into a healthy number of working nodes.

Anyway, I can see this being a solution if we had tens of thousands of nodes and they were slowing things down. But it doesn't look like that is a problem. And if it's true that we really only had about 200 nodes at last month's peak because of ghost nodes showing up on the system, then it's definitely not a problem.

I took the 3-year commitment deal on Amazon for about 100 bucks. They're costing me about 14 bucks a month with internet traffic. It's not that much, and there might be better deals out there? I committed to this before everyone started talking about spreading out to other providers, so it's a little unfortunate that I committed myself so early on, as I can only open servers that Amazon runs. But on the plus side, I could open them up in a matter of minutes in Argentina or Tokyo if I had to, if the government shut the nodes down in the USA. So I really don't think it's an issue, and I really don't think that would ever happen ;P
 
  • Like
Reactions: flare

darkness

Member
Mar 9, 2014
46
10
48
The 2 started masternodes are using the local/remote setup with cold storage for the 1000 DRK.
Sorry, I don't know why some of your nodes aren't working, but can you confirm you have 2 working nodes that were started with the same wallet?
 

TanteStefana

Grizzled Member
Foundation Member
Mar 9, 2014
2,876
1,866
1,283
I'm a big fan of intelligent networks, so what if the code itself only allowed multiple nodes after a certain threshold was reached, like 5000 if that is a reasonable limit? That way the network itself would adapt to its own needs.
That'd be really cool, but seeing how much work Evan is going through right now to get masternodes to function properly, I fear the complexity of creating something like that might be too much! Or a 2-year project, LOL. Another idea a lot of people were discussing is lowering the payments by reducing the chance of being chosen by a small percentage for each "stack" of DRK. But that still leaves us with too few masternodes actually doing work. Remember, we want to protect the mixing from being logged by a bad player. I know the payments could be made to give each "stack" an entry into the lottery, but are the mixing assignments (the actual work) also given to these multi-stacked masternodes more often? If not, then someone with a lot of masternodes would be able to "see" a lot of mixes and possibly expose an account, even after multiple mixing rounds.

Anyway, it's something to think about right now. Or I should say it should be re-thought, given that it doesn't look like we have enough masternodes on the system to warrant doing this, IMO.
 
  • Like
Reactions: yidakee

flare

Grizzled Member
May 18, 2014
2,286
2,404
1,183
Germany
I think we should have at least a thousand nodes actually working to keep the network healthy. If we really only have a couple of hundred, I would find that alarming if it didn't grow into a healthy number of working nodes.
Things will start to get interesting here as soon as the market realises that there is a yearly "interest rate" of ~100% at only 200 masternodes; this will attract a lot of new potential masternode owners.

So let's say market forces level out at a yearly interest rate of 10% on masternode "shares": 1000 * 0.1 / 365 = 0.274 DRK per node per day.

With a daily node payment supply of roughly 0.2 * 2800 DRK/d = 560 DRK, this gives us a masternode count of ~2000 nodes. If the market levels out at 5%, we will see 4000 nodes, and so on.

So I am quite confident that we will see a healthy network with between 2000 and 4000 nodes soon :cool:
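The back-of-the-envelope numbers above can be checked with a short script. This is only a sketch of flare's reasoning; the ~560 DRK/day payment figure is his own 0.2 * 2800 DRK/d assumption from this post:

```python
# Sketch of the equilibrium estimate above. Figures come from the post:
# roughly 2800 DRK minted per day, with 20% of rewards paid to masternodes.
STAKE = 1000                 # DRK collateral locked per masternode
DAILY_SUPPLY = 0.2 * 2800    # ~560 DRK paid to masternodes per day

def equilibrium_nodes(annual_rate):
    """Node count at which each node earns `annual_rate` per year on its stake."""
    per_node_per_day = STAKE * annual_rate / 365
    return DAILY_SUPPLY / per_node_per_day

print(round(equilibrium_nodes(0.10)))  # ~2000 nodes at a 10% yearly rate
print(round(equilibrium_nodes(0.05)))  # ~4000 nodes at 5%
```

The exact figures come out at roughly 2044 and 4088 nodes, which flare rounds to the 2000-4000 corridor.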
 
Last edited by a moderator:
  • Like
Reactions: yidakee

ScioMind

Member
May 28, 2014
183
73
88
Earlier I asked what the difference was between the compiled versions and 10.10.0 (Masternodes/Darksend). I surmise that the difference is that the compiled version is 9.10.0, which does not have Masternode/Darksend capabilities. I understand the need to keep the Masternode/Darksend features "closed source" until these unique properties of Darkcoin have been perfected and proven, and I agree with the wisdom behind this. If the source code were released prematurely, there would be countless "clones" flooding the crypto-currency sphere, and Darkcoin (and its early adopters) would lose much of the benefit of its uniqueness and primacy in this area.
My question, however, is whether the binaries at http://www.darkcoin.io/downloads/rc/darkcoin can be used to run a pool (specifically a p2pool). I had initially thought that they would, until I read in the original post of this thread:
(Please compile if you're running a pool, exchange, etc)
https://github.com/darkcoinproject/darkcoin
Part of the purpose of my question is for my own benefit, just to make sure I'm doing things correctly. I am also writing a very complete step-by-step tutorial on various aspects of setting up a server with Darkcoin, Masternode, P2Pool, and even performing updates when needed. I am an experienced Linux user and web programmer, so these things are fairly simple to me, but can obviously be quite confusing to those with less experience. I am attempting to deliver very clear step-by-step instructions to make such procedures as "mechanical" as possible and eliminate any guesswork, while also emphasizing the need for security and creating a hardened server for such purposes. I believe that this will benefit us all by making the Darkcoin network both larger AND more secure, and will lead to the wider adoption of Darkcoin in general.
Anyway...if anyone could clarify the issue for me, I would appreciate it.
 

Lariondos

Well-known Member
Foundation Member
Apr 8, 2014
89
62
158
The current coin supply limits the theoretically possible number of masternodes to 4377. With good coin distribution, I expect the real number to be significantly lower. But this does not take into account how popular masternode pools may become.
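That ceiling is just the circulating supply divided by the 1000 DRK collateral. As a sanity check (note: the ~4,377,000 DRK supply figure is an assumption inferred from the 4377 number in the post, not a quoted value):

```python
# Hypothetical check of the 4377 ceiling. TOTAL_SUPPLY_DRK is inferred
# from the post (4377 possible nodes * 1000 DRK collateral each).
TOTAL_SUPPLY_DRK = 4_377_000
COLLATERAL_DRK = 1000

max_masternodes = TOTAL_SUPPLY_DRK // COLLATERAL_DRK
print(max_masternodes)  # 4377
```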
 

flare

Grizzled Member
May 18, 2014
2,286
2,404
1,183
Germany
If the market levels out at 5%, we will see 4000 nodes, and so on.

So I am quite confident that we will see a healthy network with between 2000 and 4000 nodes soon :cool:
Current coin supply limits the theoretically possible number of masternodes to 4377. With good coin distribution, I expect the real number to be significantly lower. But this does not take in account how popular masternode pools may get.
So both fundamentals (overall coin supply and yearly masternode share interest rate) indicate a corridor between 2000 and 4000 nodes, depending on coin distribution and masternode popularity.

If we only see 1000 nodes, I can live with 20% ROI a year :D
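The 20% figure follows from running the earlier payment-supply estimate in reverse. Again a sketch only, reusing the ~560 DRK/day (0.2 * 2800 DRK/d) assumption from earlier in the thread:

```python
# Yearly ROI implied by a given node count, using the ~560 DRK/day
# masternode payment supply assumed earlier in the thread.
DAILY_SUPPLY = 0.2 * 2800   # ~560 DRK/day paid to masternodes
STAKE = 1000                # DRK collateral per node

def annual_roi(node_count):
    """Fraction of the 1000 DRK stake earned per year with `node_count` nodes."""
    return (DAILY_SUPPLY / node_count) * 365 / STAKE

print(f"{annual_roi(1000):.0%}")  # ~20% with only 1000 nodes
```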
 
  • Like
Reactions: Lariondos

fernando

Powered by Dash
Foundation Member
May 9, 2014
1,527
2,059
283
So both fundamentals (overall coin supply and yearly masternode share interest rate) indicate a corridor between 2000 and 4000 nodes, depending on coin distribution and masternode popularity.

If we only see 1000 nodes, I can live with 20% ROI a year :D
I'd lower the upper limit of that corridor. A currency with a big percentage of the supply tied up in the currency's own structure would not be healthy.

Right now Darkcoin is all about Darkcoin, but at some point in the future it has to be possible to use it for everyday things. Of course, you can just use very small decimals of a low number of available coins, but that would not be a stable situation because of the risk that all those locked-in-masternode coins could flood the market at any time.

ROI is important, but at the moment all investors are probably individuals (it's too risky/new for institutional investors), so the entry price is also important. Maybe when shared masternodes become more popular this will no longer be the case. Short term, I would not expect huge increases in the number of masternodes, because then the price of the coin would change a lot as the distribution changes a lot. And then people would be wary, because nobody likes to buy after a huge spike in price...

Anyway, all this rambling is to say that I think you're gonna be very happy with your ROI, because my bet for one month in (July 20th) is below 500.
 
  • Like
Reactions: Lariondos and flare

Bizmonger

New Member
Jun 2, 2014
28
4
3
I have updated my 5 nodes.

We have 10 masternodes on mainnet.

Shocking... I expected at least 50-60 nodes up.
chaeplin
As a Windows developer, I am unfamiliar with Linux and struggled to set up my masternode last week.
However, your setup guides did help me get through the pain of configuring a masternode.

What exactly do I need to do now to keep my masternode current with RC3?
For example, which steps do I need to repeat from the guides you provided?

Thanks.
 

chaeplin

Active Member
Mar 29, 2014
749
356
133
chaeplin
As a Windows developer, I am unfamiliar with Linux and struggled to set up my masternode last week.
However, your setup guides did help me get through the pain of configuring a masternode.

What exactly do I need to do now to keep my masternode current with RC3?
For example, which steps do I need to repeat from the guides you provided?

Thanks.
Guide updated. ;D
 

elbereth

Active Member
Dash Support Group
Mar 25, 2014
466
490
133
Costa Rica
dashninja.pl
Dash Address
XkfkHqMnhvQovo7kXQjvnNiFnQhRNZYCsz
I updated darkcoind to 0.10.10.1-beta (I noticed there was a .1 in the source code) and now they are all in sync. :)
 

flare

Grizzled Member
May 18, 2014
2,286
2,404
1,183
Germany
Still not in sync: https://elbzo.net/masternodes.html
And I started all the daemons fresh (I deleted everything in the .darkcoin folder).
I updated darkcoind to 0.10.10.1-beta (I noticed there was a .1 in the source code) and now they are all in sync. :)
Good that you got this one sorted; I was starting to get worried :confused:

Your stats page is very impressive and useful for checking the health of the MN network; perhaps chaeplin can link it from drk.poolhash.org :)