> If you have received a PoSe penalty on your MN, and you have updated dashd to 14.0.1 and Sentinel to 1.4.0 but your PoSe score still rises, make sure you have not put the blsprivkey (operator private key) on both sides, the remote server and the local one.
> ONLY the blsprivkey (operator private key) belongs in dash.conf on the remote server; on the local side it MUST be the operator public key, which is always created as part of the pair when you run the "bls generate" command in the console.
> Please follow the steps precisely when updating/registering the MN, especially since generating the BLS key pair is the FIRST step: https://docs.dash.org/en/latest/masternodes/dip3-upgrade.html#generate-a-bls-key-pair

Thanks for this, splawik21.
> Did you properly register your MN (deterministic and all)?

Yes. I rebuilt all my servers with v14 back in April and registered them as deterministic. I check their status hourly through cron (via RPC to dashd with the masternode list command) to verify their status is indeed ENABLED. This is how I know that half of them are down. Weirdly, the other half are still humming along fine, which is why it doesn't seem like a configuration issue (they all have identical config).
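A check along those lines can be sketched in shell. Everything here (file names, the cron schedule) is illustrative, and the grep relies only on the fact that `dash-cli masternodelist status` prints all-caps status strings such as ENABLED and POSE_BANNED:

```shell
#!/bin/sh
# Hedged sketch of the hourly status check described above.  Reads the
# JSON printed by `dash-cli masternodelist status` on stdin and prints
# how many masternodes are NOT in the ENABLED state.

count_not_enabled() {
    # Status values ("ENABLED", "POSE_BANNED", ...) are the only
    # all-caps quoted tokens in the output, so a rough grep suffices.
    grep -oE '"[A-Z_]+"' | grep -cv '"ENABLED"'
}

# Illustrative cron entry (script path is hypothetical):
#   0 * * * * dash-cli masternodelist status | /home/dash/check_mns.sh
# where check_mns.sh pipes into count_not_enabled and alerts when > 0.
```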
BLS operator key ?
> Half my nodes on Vultr are banned even though they've never experienced any downtime. All of them have a PoSe penalty > 0. I parsed the debug.log to produce this chart: definitely linear, possibly exponential. I don't think it's worth the effort to re-establish my nodes until this is resolved. There's clearly a major problem here. [chart attachment]

See: https://www.dash.org/forum/threads/dash-core-v0-14-on-mainnet.45821/page-2#post-212492
> If you have received a PoSe penalty on your MN, and you have updated dashd to 14.0.1 and Sentinel to 1.4.0 but your PoSe score still rises, make sure you have not put the blsprivkey (operator private key) on both sides, the remote server and the local one.
> ONLY the blsprivkey (operator private key) belongs in dash.conf on the remote server; on the local side it MUST be the operator public key, which is always created as part of the pair when you run the "bls generate" command in the console.
> Please follow the steps precisely when updating/registering the MN, especially since generating the BLS key pair is the FIRST step: https://docs.dash.org/en/latest/masternodes/dip3-upgrade.html#generate-a-bls-key-pair

This should really be added to the referenced docs, as it seems to be missing completely.
> The docs just mention adding the BLS privkey to dash.conf on the remote server; they do not mention anything about putting a BLS public key in a dash.conf of a local client / server.

Operator BLS pubkeys are not stored in a local wallet and should not be entered in any configuration files. That serves no purpose; as far as I know, no code exists that does anything with this information. The BLS pubkey should exist in one place only: in the registration ProTx written to the blockchain. The BLS privkey should appear only in the dash.conf file of a single masternode. The BLS privkey must be unique and must not appear in any other dash.conf for a registered masternode, or both masternodes will be banned.
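As a concrete (and hedged) illustration of the placement described above: the remote server's dash.conf carries only the secret half of the operator key pair. The value shown is a placeholder, not a real key:

```ini
# dash.conf fragment on the remote masternode server (illustrative only).
# Use the "secret" value printed by the "bls generate" console command;
# the matching public key goes into the ProTx registration on-chain,
# not into any configuration file.
masternodeblsprivkey=REPLACE_WITH_YOUR_BLS_SECRET_KEY
```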
> Operator BLS pubkeys are not stored in a local wallet and should not be entered in any configuration files. That serves no purpose; as far as I know, no code exists that does anything with this information. The BLS pubkey should exist in one place only: in the registration ProTx written to the blockchain. The BLS privkey should appear only in the dash.conf file of a single masternode. The BLS privkey must be unique and must not appear in any other dash.conf for a registered masternode, or both masternodes will be banned.

See the post from pin0de (above yours). Should he not have used the BLS private key? He seems to indicate he is using the BLS public key?
If you are registering masternodes from a hardware wallet, it is possible to import the owner privkey into an instance of the Dash Core Qt wallet in order to monitor your masternodes from the "My masternodes" tab.
Also, to avoid confusion, I have removed the /latest/ branch of the documentation. The documentation for the currently released version of Dash Core is available under the /stable/ URL, and legacy documentation, including the DIP3 upgrade process, is available (and maintained/updated if necessary) under versioned branches, e.g.: https://docs.dash.org/en/0.13.0/masternodes/dip3-upgrade.html You can change the documentation version you are looking at from the menu in the bottom right corner.
> I also have one out of multiple identical nodes that shows a PoSePenalty, and I've yet to figure out why. DMT has the correct operator public key in the config.
(pin0de, post #69)

If he is actually using the BLS public key there, it could explain his PoSe penalty?
> @camosoul: I remember you using a restart script / command that lets your masternodes restart automatically every "x" amount of time. Is that still the case? I'm wondering if that could either have an impact on your PoSe score or obscure a possible misconfiguration.

1) I no longer run masternodes.
> What's the point in restarting masternodes regularly? Using the stop command, or by killing the process with a SIGTERM?
> I would generally recommend using scripts which only start the daemon when it is not running (based on the PID file, or by process name in a single-node setup).

That was based on old info, back when MNs were a fairly new thing and liked to crash a lot. It turned out to be the result of an upstream issue. I forget the exact details, but it was a block irregularity, the fix for which actually got (thanklessly) upstreamed...
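A "start only when not already running" check of the kind recommended above can be sketched as follows; the pidfile path and the dashd invocation are illustrative assumptions, not taken from the thread:

```shell
#!/bin/sh
# Hedged sketch of a PID-file-based watchdog for a single-node setup.

is_running() {
    # Succeeds if the pidfile exists and the recorded PID is alive.
    pidfile="$1"
    [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null
}

# Cron-friendly usage (illustrative path; adjust to your datadir):
#   if ! is_running "$HOME/.dashcore/dashd.pid"; then dashd -daemon; fi
```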
> As DMT generates the key pair, it has both the public and the private key. You can display either one, so yes, that's a bit confusing when asking people whether they have correctly entered the key locally.

I seem to say this about every software project since the early 90s: really bad documentation.
> I seem to say this about every software project since the early 90s: really bad documentation.

It's open-source software; feel free to contribute to the documentation to improve it.
> It's open-source software; feel free to contribute to the documentation to improve it.

I can't possibly know unless I wrote the code... It's the same irrational expectation across the board with coders these days.
> So the affected node got hit with a penalty again and is banned now...
> It would have been paid tomorrow; that is bad.

See: https://www.dash.org/forum/threads/dash-core-v0-14-on-mainnet.45821/page-2#post-212492
> [...]

So you are saying everything is broken beyond repair and someone else should fix it ASAP?
> For DIP8 to activate we need 80% of the blocks between 1080577 and 1084608 to be signalling support. That's 3226 blocks or more. Block 1084340 was the 3226th block since 1080577 to signal, so DIP8 is now guaranteed to be locked in and will activate at 1088640. This is expected to happen (with 90% probability) on 2019-07-17 between 7:05 and 16:32 UTC.

Yup, we have locked in.
> For DIP8 to activate we need 80% of the blocks between 1080577 and 1084608 to be signalling support. That's 3226 blocks or more. Block 1084340 was the 3226th block since 1080577 to signal, so DIP8 is now guaranteed to be locked in and will activate at 1088640. This is expected to happen (with 90% probability) on 2019-07-17 between 7:05 and 16:32 UTC.

First window / first 4032 blocks: failure to lock in DIP8, due to lack of support.
> Edit: strange, I thought we still had 10 or 12 hours to go for a lock-in, but according to flare we have indeed already locked in.

Technically you are right: the period has 260 blocks left. But as we are already beyond the 80% threshold, we are past the point of no return, and I consider it locked in.
> First window / first 4032 blocks: failure to lock in DIP8, due to lack of support.
> Started with block 1,076,544
> Ended with block 1,080,576

That's 4033 blocks.
> Second window / second 4032 blocks: will most likely lock in DIP8, as miners now seem to show enough support.
> Starting with block: 1,080,577
> Ending with block: 1,084,609

If the first window started at 1,076,544, then the second window must also start at an even number; otherwise you have a window of odd size.
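The window arithmetic being debated above can be sanity-checked with a few lines of shell, assuming BIP9-style signalling with 4032-block windows and an 80% threshold as described earlier in the thread:

```shell
#!/bin/sh
# Sanity checks for the DIP8 signalling numbers discussed above.

WINDOW=4032
# 80% of 4032 = 3225.6, so 3226 signalling blocks are required:
THRESHOLD=$(( (WINDOW * 80 + 99) / 100 ))
echo "threshold: $THRESHOLD"    # prints: threshold: 3226

# An inclusive window starting at height S covers S .. S+4031, i.e.
# exactly 4032 blocks; 1,076,544 .. 1,080,576 spans 4033 heights
# inclusive, which is the off-by-one pointed out above.
START=1080577
echo "window: $START .. $(( START + WINDOW - 1 ))"   # prints: window: 1080577 .. 1084608
```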
> Technically you are right: the period has 260 blocks left. But as we are already beyond the 80% threshold, we are past the point of no return, and I consider it locked in.

Any information on when spork 19 (ChainLock) will be activated? I expected this spork to be activated already, as the Dash Core v0.14 Planned Upgrade Phases mentions "The DKG Spork (17) will be activated,