
V12 Release

Hi,

I think the new version needs more resources; what I'm seeing is that my MNs crash more often (not much, but 2 out of 15 in the last 24h).
Maybe we'll need better VPSes? But it would be interesting to know if it's RAM that is needed (most probably), or CPU, or bandwidth.

Also, someone on BCT is complaining about his MN crashing (and I saw some messages here from Tao as well). I think we have to check this carefully, as some MN operators might sell their MNs if they don't get their payments served to them on a silver spoon ;) Apart from that, a stable daemon is important for MN operators and for a healthy network.
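To narrow down whether it's RAM, CPU, or bandwidth, a quick snapshot on each VPS would help; a minimal sketch with standard Linux tools (the eth0 interface name is an assumption, adjust for your server):

```shell
#!/bin/sh
# Quick resource snapshot for a masternode VPS.
echo "CPU cores: $(nproc)"
echo "Load avg:  $(cut -d' ' -f1-3 /proc/loadavg)"
# Row 2 of free -m is the Mem: line; fields 3 and 2 are used/total MB.
free -m | awk 'NR==2 {printf "RAM:       %d MB used of %d MB\n", $3, $2}'
# Bytes received/sent since boot (interface name eth0 is an assumption).
awk '$1 ~ /eth0/ {printf "Net:       RX %.1f GB, TX %.1f GB\n", $2/1e9, $10/1e9}' /proc/net/dev
```

Running this before and after a crash window would at least show whether memory is creeping up.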
What are the last few lines from debug.log at the time it crashed?
 
Well, the one that crashed has a crontab that restarts it automatically (and the log has already been overwritten). The other one was not responding (I had to kill it) and didn't save a debug.log.

Since then, no crashes for the moment ;) so I promise that on the next crash I'll save the debug.log (just for you ;).
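If a crontab is going to auto-restart the daemon anyway, it can archive the old debug.log first so the crash evidence survives the restart; a minimal watchdog sketch, assuming a default ~/.dash datadir (path and restart flags are assumptions, adjust for your setup):

```shell
#!/bin/sh
# Watchdog sketch: archive debug.log before restarting a dead daemon,
# so the next run does not overwrite the crash evidence.
# DATADIR and the restart flags are assumptions; adjust for your setup.
DATADIR="${DATADIR:-$HOME/.dash}"

archive_debug_log() {
    # Copy debug.log aside with a timestamp before it gets overwritten.
    if [ -f "$DATADIR/debug.log" ]; then
        cp "$DATADIR/debug.log" "$DATADIR/debug.$(date +%Y%m%d-%H%M%S).log"
    fi
}

if ! pidof dashd >/dev/null 2>&1; then
    archive_debug_log
    if command -v dashd >/dev/null 2>&1; then
        dashd -daemon   # restart only if the binary is actually available
    fi
fi
```

Run it from cron every minute or two and the timestamped copies pile up in the datadir for later inspection.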
 
I'm actually not sure about the settings, but I am sure the configuration is fine since Flare is running them for me. But I have had to issue "start-many" about 5 times in the last 5 hours.

Hope that helps :).

Pablo.
 
I keep dropping off Elbereth's list; I restart, it reappears, but it says the connection is over 1 day long, so it was never terminated :confused:
 
Your nodes look fine to me so far; only one daemon crashed this morning, I guess that's the one you noted as dropping.
 
Finally got the Pi up and running - ugh - took a minute - learning new settings....
Adjusting the guide....

THX for everybody's assistance :-D
 
moli, you have suggested to me a couple of times that when using dash-cli in the Windows cmd console, it's helpful to have the console opened in the dash-cli directory - yes, it saves having to type the whole path, e.g. "C:\Program Files (x86)\dash-cli.exe".

Anyway, a quick way to open cmd in a particular folder is Shift + Right-click on the desired folder (or a folder shortcut):

rclk_cmd.png
 
There's really something happening after two days of running masternodes. I don't have the problem because I host the nodes myself, they have enough resources, and nobody's throttling them.
CPU after two days on the .45 release:
29pzu5l.jpg


Memory after two days on the .45 release:
2mgteg1.png


From the masternode, output of the "top -H" command:
3470 XXXX 22 2 1160620 461376 52776 R 10.9 22.5 329:24.41 dash-msghand
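For anyone who wants to grab the same per-thread view non-interactively, a one-shot batch snapshot works; a sketch (it falls back to the current shell's PID purely for illustration when dashd isn't running):

```shell
#!/bin/sh
# One batch snapshot of per-thread CPU usage: -H lists threads, -b -n 1
# gives a single non-interactive iteration suitable for logging.
# Falls back to the current shell's PID so the command still demonstrates
# the output format when dashd is not running.
pid="$(pidof dashd 2>/dev/null || echo $$)"
top -H -b -n 1 -p "$pid"
```

Redirecting that to a file from cron makes it easy to see which thread (like dash-msghand above) is eating CPU over time.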

strace -p 3470 shows what the process is doing: it's spamming this:
futex(0x7f71e89c5b18, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7f71e89c5b44, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 345724245, {1439916948, 991107072}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7f71e89c5b18, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7f71e89c5b44, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 345724247, {1439916948, 992479377}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7f71e89c5b18, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7f71e89c5b44, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 345724249, {1439916948, 993825973}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7f71e89c5b18, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7f71e89c5b44, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 345724251, {1439916948, 995181868}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
[... the same futex wake / timed-wait pair repeats, each wait timing out after ~1.4 ms ...]
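Those futex/ETIMEDOUT pairs are what a pthread condition-variable timed wait looks like under strace; they aren't errors in themselves. The worrying part is the rate: the timestamps above are roughly 1.4 ms apart, i.e. hundreds of wakeups per second, which points at a busy polling loop. A rough way to quantify it from a short capture (the PID is from the post; the capture file name is an assumption):

```shell
#!/bin/sh
# Capture a few seconds of syscalls first, e.g.:
#   timeout 5 strace -p 3470 -o futex.log
# then count the timed-out waits; dividing by the capture length gives a
# wakeups-per-second estimate.
timeouts_in_capture() {
    grep -c 'ETIMEDOUT' "$1"
}
# usage: timeouts_in_capture futex.log
```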
 
One shut down as well. This one is actually on Vultr, so the specs are OK. MNs on worse VPSes are OK atm.

SU5ouqu.jpg

Code:
2015-08-18 13:34:11 dseg - Sent 1 Masternode entries to 150.254.118.211:49647
2015-08-18 13:34:12 dseg - Sent 1 Masternode entries to 150.254.118.211:49647
2015-08-18 13:34:13 dseg - Sent 1 Masternode entries to 150.254.118.211:49647
2015-08-18 13:34:13 dseg - Sent 1 Masternode entries to 150.254.118.211:49647
2015-08-18 13:34:13 dseg - Sent 1 Masternode entries to 150.254.118.211:49647
2015-08-18 13:34:13 dseg - Sent 1 Masternode entries to 150.254.118.211:49647
2015-08-18 13:34:14 dseg - Sent 1 Masternode entries to 150.254.118.211:49647
2015-08-18 13:34:14 dseg - Sent 1 Masternode entries to 150.254.118.211:49647
2015-08-18 13:34:15 dseg - Sent 1 Masternode entries to 150.254.118.211:49647
2015-08-18 13:34:15 dseg - Sent 1 Masternode entries to 150.254.118.211:49647
2015-08-18 13:34:16 Verifying mncache.dat format...
2015-08-18 13:34:17 CMasternodeMan::AskForMN - Asking node for missing entry, vin: CTxIn(COutPoint(d69f1bab6a2393a081dad7f39348995a6175f9abe480e71a1129729cb909e670, 0), scriptSig=)
2015-08-18 13:34:17 Loaded info from mncache.dat  437ms
2015-08-18 13:34:17   Masternodes: 2574, peers who asked us for Masternode list: 1, peers we asked for Masternode list: 0, entries in Masternode list we asked for: 120, nDsqCount: 6286
2015-08-18 13:34:17 Writting info to mncache.dat...
 
Posted this already on BCT, but pages there have a tendency to move by very fast, so I thought I'd post it here as well:

My masternodes on VULTR just died too after running for two days: all 768 MB RAM, single CPU, 15 GB SSD, 1000 GB bandwidth (they rarely exceed 60 GB of bandwidth).
Unfortunately no debug.logs... the picture below is pretty much the same for all masternodes.

This specific server had 56.4 GB inbound and 57.3 GB outbound.
8lvHi7s.jpg


To try to fix it, I'm trying the following:

In dash.conf on the server side I changed / added:

maxconnections=100 (was 256)
txindex=1

In dash.conf in the cold wallet I just added:

txindex=1

I deleted peers.dat and mncache.dat, did a -reindex, issued masternode start (with the passphrase) from the cold wallet, and they are all visible and active on dashninja.pl again.

./dash-cli masternodelist full | grep -e SERVER_IP gave "ENABLED" for all masternodes again (it gave that 2 days ago too).
"masternode debug" gave the "successfully started masternode" message for all remote masternode wallets.

I'm keeping the debug.logs on the servers this time, in case something goes wrong (again).
 
Explain this one please: txindex=1
It's from UdjinM6's checklist on the Bitcointalk forum (tbh I thought it was already activated with v0.12.0.45) -->
https://bitcointalk.org/index.php?topic=421615.msg12174625#msg12174625
Checklist:
- "txindex=1" in dash.conf
- 1 run with -reindex after that ^^^
- start-many/alias from local after reindex is finished
- "masternode debug" on remote should give you "Masternode successfully started"; if it doesn't, try this https://dashtalk.org/threads/v12-release.5888/page-15#post-63716
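For the first two checklist items, a small idempotent helper avoids adding txindex=1 twice if you run it again; a sketch (the datadir path in the comments is an assumption):

```shell
#!/bin/sh
# Append txindex=1 to a dash.conf only if it is not already there.
enable_txindex() {
    conf="$1"
    grep -q '^txindex=1' "$conf" 2>/dev/null || echo 'txindex=1' >> "$conf"
}
# usage (default datadir assumed):
#   enable_txindex ~/.dash/dash.conf
# then one run with -reindex, e.g.:
#   dash-cli stop && dashd -reindex -daemon
```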
 
The thing that really matters is showing those people you are grateful for what they do. If there's a mistake in a video, point it out, but don't make me feel bad about it.

I'm really sorry that my reply offended you. Let me assure you that it was meant as a hint, not to speak badly of your work. It came out the wrong way, which may also be due to my bad English. Please accept my apologies.
 
For memory usage, maybe I'll check how to install nmon on 1 or 2 of my MNs.
I've never done it on Ubuntu, but I'll check.

EDIT:
OK, found it: install the binary directly. It's collecting data right now.
Code:
./nmon_x86_64_ubuntu1410 -f -s 120 -c 1000
Will post the results later.
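For reference on those nmon flags (as I understand them): -f writes snapshots to a .nmon file, -s 120 is the interval in seconds between samples, and -c 1000 is the number of samples, so that command covers roughly:

```shell
# 1000 snapshots, one every 120 seconds, in whole hours:
echo "$((120 * 1000 / 3600)) hours of data"   # prints "33 hours of data"
```

Shortening -s or -c gives a smaller capture window if you only want to bracket the crash period.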
 