
11.2 - Dash Release

I couldn't sleep and kept messing around with things.

On six low-end servers running the daemon with the addnode lines removed from the conf file, one out of the six hung a couple of minutes after it synced the blockchain; the other five are still OK. These are not MNs, just plain daemons running.

Below is where I think things started going wrong with the one that hung.

Code:
2015-04-02 03:14:55 ProcessBlock: ACCEPTED
2015-04-02 03:14:55 keypool reserve 2
2015-04-02 03:14:55 keypool return 2
2015-04-02 03:14:55 ProcessMessages(dsee, 253 bytes): Exception 'CDataStream::r$
2015-04-02 03:14:55 ProcessMessage(dsee, 253 bytes) FAILED
<snip>
2015-04-02 03:15:04 dseep - Asking source node for missing entry CTxIn(COutPoin$
2015-04-02 03:15:05 dseep - Asking source node for missing entry CTxIn(COutPoin$
2015-04-02 03:15:05 dseep - Asking source node for missing entry CTxIn(COutPoin$
2015-04-02 03:15:05 ProcessMessages(dsee, 253 bytes): Exception 'CDataStream::r$
2015-04-02 03:15:05 ProcessMessage(dsee, 253 bytes) FAILED
2015-04-02 03:15:05 ProcessMessages(dsee, 253 bytes): Exception 'CDataStream::r$
2015-04-02 03:15:05 ProcessMessage(dsee, 253 bytes) FAILED
2015-04-02 03:15:07 dseep - Asking source node for missing entry CTxIn(COutPoin$
2015-04-02 03:15:13 CDarksendPool::UpdateState() - Can't set state to ERROR or $
2015-04-02 03:15:13 dseep - Asking source node for missing entry CTxIn(COutPoin$
2015-04-02 03:15:13 ProcessMessages(dsee, 253 bytes): Exception 'CDataStream::r$
<snip>
2015-04-02 03:15:16 ProcessMessage(dsee, 253 bytes) FAILED
2015-04-02 03:15:19 dseep - Asking source node for missing entry CTxIn(COutPoin$
2015-04-02 03:15:25 dseep - Asking source node for missing entry CTxIn(COutPoin$
2015-04-02 03:15:25 ProcessMessages(dsee, 253 bytes): Exception 'CDataStream::r$
2015-04-02 03:15:25 ProcessMessage(dsee, 253 bytes) FAILED

The beefier server is still humming along 11.5 hours later, but it will hit its RAM limit in the next couple of hours.
 
I see drk.mn says that 11.2.17 is the latest version, but it only makes up 0.1% of the network. I'm guessing it's a test node and the site just pulls the highest version it sees and reports it as the latest?
 
Hey Everyone

I upgraded my single MN shortly after the release and haven't had a single hiccup or issue. In fact, I was around 3.5 days into a payment cycle and just assumed I'd be back at the start of the cycle as a result of the migration to the dashd daemon, but the dashninja page still reported my MN had been running for the same period (i.e. it 'remembered') and I received not one but two payments within an hour of each other shortly after the migration was completed! I was pleased with that given I'd missed out on payments a few times previously during other periods where enforcement had been turned off.

I just thought I'd put the server monitoring graphs from my Vultr VPS instance on here out of interest. You can see quite clearly when the MN was converted to using dashd and the reduction in CPU usage as a result, so something's working properly.

My report then is that it's all good for me and my MN hasn't dropped off once.

20150402 Vultr VPS monitoring graphs for MN after migration to dashd.jpg
 
I see drk.mn says that 11.2.17 is the latest version, but it only makes up 0.1% of the network. I'm guessing it's a test node and the site just pulls the highest version it sees and reports it as the latest?
drk.mn pulls the version from the latest git tag to determine the latest version, as far as I know.
 
Oh crap!

Can you start them up again and run "top" to see how much CPU/RAM it is using? Does either keep climbing before it dies? If it takes a long time to die, you could run "top -b -n 10000 >> /tmp/top.txt".

I am curious if you're running out of memory or something. Is there a core file? We might need to run "ulimit -c unlimited" to have it drop a core file that could be reviewed.
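
If you want to capture that without watching it live, here is a rough sketch of what I mean; dashd started from its own directory, the default ~/.dash data directory, and the /tmp/top.txt path are just my assumptions, so adjust to your setup:

Code:
# enable core dumps in the shell that will launch dashd, so a crash leaves something to review
ulimit -c unlimited
./dashd -daemon

# log top output in batch mode every 30 seconds so memory/CPU growth can be reviewed later
top -b -d 30 -n 10000 >> /tmp/top.txt &

# or snapshot just the dashd process
ps -o pid,rss,vsz,%cpu,etime -C dashd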
 

My MNs with addnode removed have stayed up for almost 10 hours now. Memory consumption increased by only ~10 MB. So I believe there is something wrong when MN peers stay connected to each other for an extended period of time. Notice that when two MNs are connected to each other via addnode, they usually stay connected for a very long time (i.e. they don't disconnect from each other). I'm not saying that addnode is the problem; I'm saying that addnode can create the conditions for the problem. That may help the developers recreate the problem for debugging purposes.
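
To be concrete, this is the kind of conf change I mean; everything below is a generic example (the IP addresses are placeholders), and the only relevant part is dropping the addnode lines:

Code:
# ~/.dash/dash.conf (example only; IP addresses are placeholders)
daemon=1
server=1
rpcuser=dashrpc
rpcpassword=change_this_password
# removing (or commenting out) lines like these is the change being tested
#addnode=203.0.113.10:9999
#addnode=198.51.100.22:9999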
 
Hoping for a fix; I couldn't get my MNs working with the new release. They were rock stable with the previous version (1.25).
 
I am running the version from before the version bump, and my MNs still dropped off until I removed the addnode lines in dash.conf. So I think the bug hasn't been completely resolved yet. But let's see how 0.11.2.17 works for others.
 
Still running 0.11.2.16 on mine.

Update: With a little TLC, it seems I am able to coax the non-working daemons into a working state by killing the process and removing .dash/.lock (thanks UdjinM6), plus babysitting with tail -f .dash/debug.log to make sure it is alive and, if it stops, repeating the process until it finally stays alive and synced.
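
Roughly, the routine is the following; it assumes the default ~/.dash data directory and that the binaries are run from the directory they live in, so adjust paths to your setup:

Code:
# try a clean shutdown first; if the daemon is hung, kill it
./dashd stop || pkill dashd
# remove the stale lock file left behind by the hung process
rm -f ~/.dash/.lock
# start the daemon again
./dashd -daemon
# babysit the log; if it goes quiet again, repeat the steps above
tail -f ~/.dash/debug.log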
 

I've been testing a self-built v0.11.2.17 for a while now and it seems the Masternodes don't drop off any more, so just wait until Evan announces this new version.
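
In case anyone else wants to try a self-build before the announcement, it is the usual autotools build for a Bitcoin-derived wallet, roughly as below; the repository URL and the exact tag name are my assumptions here, so double-check them, and install the usual build dependencies (Boost, BerkeleyDB, OpenSSL) first:

Code:
# repo URL and tag name are assumptions - verify before building
git clone https://github.com/dashpay/dash.git
cd dash
git checkout v0.11.2.17
# standard autotools build
./autogen.sh
./configure
make
# the daemon is produced at src/dashd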
 
How long have they been running without being kicked off? Do you have addnode lines pointing to other MNs in dash.conf?
 
Some for about 12 hours now, the rest for about 5 hours. I didn't change dash.conf, just swapped the dashd binary and restarted.
Then the only other change on my side is a script that ran dashd getblocktemplate; I stopped running getblocktemplate from it. Or maybe the patches in 0.11.2.17 completely fixed the problem. I did get kicked off before removing addnode from dash.conf and getblocktemplate from the script. Perhaps I will addnode from the command prompt if more people report no issues with 0.11.2.17 (see the sketch below).

The memory consumption stays constant at 366 MB now, if anyone is interested.
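
What I mean by adding nodes from the command prompt is something like the following; the IP address is a made-up placeholder, and I'm assuming the standard Bitcoin-style addnode/getaddednodeinfo RPCs are unchanged in dashd:

Code:
# add a peer at runtime without putting it in dash.conf (IP is a placeholder)
./dashd addnode "203.0.113.10:9999" "add"
# check that the daemon picked it up
./dashd getaddednodeinfo true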
 
I start-many'd 6 MNs yesterday, and they have all been solid ever since...no payments though. Just wanted to share a success story for the lurkers! :wink:
I'll second that - I have 1 MN running on AWS West Coast (in case anyone is trying to verify which providers aren't having issues).

Updated yesterday evening, received my first DASH MN payment a few hours later. Haven't hit a single snag.
 