Dash Core v0.14 on Mainnet

Status
Not open for further replies.

camosoul

Grizzled Member
Sep 19, 2014
2,261
1,130
1,183
DIP is not the same as SPORK.

DIP0008 – LLMQ-based ChainLocks.
DIP8 = SPORK 19

https://www.dash.org/releases/
Thanks for the clarification. I already knew that, but pretty much nobody outside of testnet does...

DASH is great, but it's definitely too complicated for cryptotards... You have to target non-crypto people for use. Between the polarized fanboys and plain stupidity, nobody within the cryptosphere will ever care, because they're not smart enough to comprehend it in the first place. You can't care about something so far outside of your ability to comprehend that you don't even know it's there... This is DASH's only real downfall...
 

qwizzie

Grizzled Member
Aug 6, 2014
2,113
1,291
1,183
Now that PoSe Penalty is in effect (impacting 151 212 masternodes so far) maybe someone could refresh our memory on how
that PoSe works and what to do when you get too high of a PoSe Penalty.

I remember having read something about the PoSe Penalty score diminishing per hour after the masternode starts behaving again?
I also remember something about needing to issue a ProTx update command if the PoSe Penalty gets too high, because the masternode will then be in a banned status?
What is too high of a PoSe Penalty score, and how does it show on Dashninja? PoSe banned? Or Delisted?

Current highest PoSe Penalty: "3259", with those masternodes shown as "Valid" and colored green on Dashninja.

Edit 1: I think we should add a more specific chapter about the masternode PoSe Penalty here: https://docs.dash.org/en/stable/masternodes/maintenance.html#
and have Dashninja refer to it in its "Masternodes List explanations" info section (currently Dashninja does not refer to the PoSe Penalty at all, even though it was added to that list).
Once we have a specific chapter for the PoSe Penalty, we can start referring people to it when they are experiencing problems with it.
We can then also make a pinned topic with a referral to that newly created chapter here: https://www.dash.org/forum/topic/masternode-questions-and-help.67/

Edit 2: the latest Dash News article provided some clarity:

* The PoSe Penalty can diminish by 1 per block
* In case of a PoSe ban, a ProUpServTx transaction is required (a rough sketch of that command follows below)
* The documentation regarding PoSe and its penalty can be found here: https://docs.dash.org/en/stable/masternodes/understanding.html?highlight=PoSe
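
For reference, a ProUpServTx is broadcast with the protx update_service RPC. A minimal illustrative sketch of what that could look like is below; the proTx hash, IP address and BLS key are placeholders, and I'm assuming the basic update_service form from the deterministic masternode documentation, run from a wallet that can pay the transaction fee:

Code:
dash-cli protx update_service "YOUR_PROTX_HASH" "203.0.113.5:9999" "YOUR_OPERATOR_BLS_PRIVATE_KEY"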

But it's still unclear to me how high the PoSe Penalty can get before it becomes a PoSe ban, even after reading this:
Each failure to provide service results in an increase in the PoSe score relative to the maximum score,
which is equal to the number of masternodes in the valid set. If the score reaches the number of masternodes in the valid set, a PoSe ban is enacted
Does the valid set refer to the number of active masternodes? That would mean the PoSe Penalty score can go up to +4900 before it becomes a PoSe ban?

Also, I still think we need more visibility and documentation with regard to the PoSe Penalty for masternode owners.
 
Last edited:

JGCMiner

Active Member
Jun 8, 2014
364
217
113
This may not answer all your questions, but this is from Core Dev @thephez on Discord:

“Each failure to participate in DKG results in a PoSe score increase equal to 66% of the max allowable score (max allowable score = # of registered MNs at the time of the infraction). So a single failure in a payment cycle won't hurt you. You can sustain 2 failures provided they are not too close together since your score drops by 1 point each block.

And by cycle I mean the number of blocks required to go through the whole MN list (i.e. the # of masternodes).

So currently it looks like there are 4917 MNs.
- A failure will increase your score by 3245 (4917*0.66).
- So as long as your current score is less than 1672 (4917 - 3245), you can sustain a 2nd failure without being banned (i.e. without your score going over 4917).

However, failing more than twice in the cycle will always result in a ban.”

Core reference for anyone that's curious: https://github.com/dashpay/dash/blob/master/src/evo/deterministicmns.cpp#L811-L817
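
To put those numbers in one place, here is a quick illustrative shell sketch of the scoring rules as described above (the masternode count, the 66% penalty and the 1-point-per-block decay are all taken from thephez's explanation; the variable names are mine):

Code:
#!/usr/bin/env bash
# Back-of-the-envelope PoSe maths, purely illustrative.
MN_COUNT=4917                                  # registered masternodes at the time of the infraction
PENALTY=$(( MN_COUNT * 66 / 100 ))             # one missed DKG adds 66% of the max score -> 3245
BAN_THRESHOLD=$MN_COUNT                        # a ban is enacted when the score reaches the MN count
MAX_SAFE_SCORE=$(( BAN_THRESHOLD - PENALTY ))  # 1672: highest score that still survives a second failure

echo "Penalty per missed DKG         : $PENALTY"
echo "PoSe ban threshold             : $BAN_THRESHOLD"
echo "Max safe score before 2nd miss : $MAX_SAFE_SCORE"
echo "Blocks to fully decay one penalty (1 point per block): $PENALTY"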
 
  • Like
Reactions: qwizzie

camosoul

Grizzled Member
Sep 19, 2014
2,261
1,130
1,183
Unfortunately, it seems like nodes are being penalized even when they are not failing. Even when they're not involved...

It's only a matter of time before every node gets banned.

Something is seriously wrong and it needs to get fixed while masternodes are still a thing.
 

qwizzie

Grizzled Member
Aug 6, 2014
2,113
1,291
1,183
Unfortunately, it seems like nodes are being penalized even when they are no failing. Even when they're not involved...

It's only a matter of time before every node gets banned.

Something is seriously wrong and it needs to get fixed while masternodes are still a thing.
This got me curious :
A number of metrics are involved in the calculation, so it is not possible to game the system by causing masternodes to be PoSe banned for failing to respond to ping requests by e.g. a DDoS attack just prior to payment
Are these metrics just the failure to participate in DKG and the PoSe Penalty score reduction of 1 per block, or are more metrics involved?

Brainstorming here :

* maybe they get punished for being on an older version ? (v13 for example)
* maybe they have connection / propagation problems ?
* closed ports ?
* running inadequate hardware (RAM / CPU wise) ?

Or they were indeed only punished for failing to participate in DKG, but have since recovered to a much lower
value thanks to that "1 per block" recovery mechanism.

Edit: Also, I think it would be helpful to know why some of the v0.14.0.1 masternodes are failing to participate in DKG and getting penalized for it. What could be the reason behind their failure to participate?
 
Last edited:

camosoul

Grizzled Member
Sep 19, 2014
2,261
1,130
1,183
I've been observing more...

It seems the network is fractured. Nodes on the same host, same connectivity, same network, vastly more resources than necessary, all latest version... Some get false flags, some don't.

The network as a whole is failing to propagate messaging. It seems once a node falls into the false flag hole, there's nothing you can do to save it.

Exactly why it's happening escapes me. It seems to be totally random. I can only report observable symptoms. What I know is that nodes aren't failing to participate, they're simply never getting the message, even though they're on the same network, same physical machine, etc, as nodes not being false flagged. It's not the nodes... Re-registering does not help. The nodes get false flagged off the network very quickly. The network is simply not contacting the nodes.

I'm advising people to stop running masternodes. Fighting this problem is a complete waste of time from the owner/operator end because it's not their fault. It's a very small minority, but it doesn't stop. In a sample size of 150 nodes, about 7% get false flagged every 24 hours... it's creeping slowly. In 8 days I estimate every node I'm monitoring will be gone.

EDIT: My sample is biased; I'm only observing non-Amazon nodes. It seems extreme centralization is the only way to avoid false PoSe flagging, due to network blobs preferring their own blobs... and damn near everything is centralized onto Amazon. Run your node on Amazon, or get kicked off eventually.

It's not merely way too easy to get kicked off the network through no fault of your own; it seems to be guaranteed, just slow...

It is observable that they do get the 1-point reduction in PoSe score per block. But, they'll get hit with a penalty again very quickly, over what appears to be a message that never existed. It's almost like nodes are being selected for false flagging. Resources, connectivity and performance clearly have nothing to do with it.
 
Last edited:

masternube

Member
Nov 9, 2017
81
14
48
This sounds concerning. Is anyone at DCG aware of this yet?
FWIW, my node doesn't run on Amazon and is still at 0 PoSe score.
 

qwizzie

Grizzled Member
Aug 6, 2014
2,113
1,291
1,183
I wonder if this pull request from 2 days ago is related to your described "failing to propagate messaging" or if it is unrelated: https://github.com/dashpay/dash/pull/2967

This sounds concerning. Is anyone at DCG aware of this yet?
FWIW, my node doesn't run on Amazon and is still at 0 PoSe score.
Same here.
 

camosoul

Grizzled Member
Sep 19, 2014
2,261
1,130
1,183
This sounds concerning. Is anyone at DCG aware of this yet?
FWIW, my node doesn't run on Amazon and is still at 0 PoSe score.
I'm seeing nodes with no score for several days, then suddenly hit twice very quickly. It's rare that they get one hit and are then left alone to recover their score.

It seems like the network arbitrarily decides "I hate this node" and there's nothing you can do about it. Take the node down. It will never work again. You're screwed.

At least one such occurrence is confirmed to be hosted in a major datacenter/hub. Network is as solid as you could ever hope for. Bare metal installation with 8 xeon cores, 12GB RAM and an SSD. Nothing else running on it. Performance, resources and connectivity are absolutely not at issue.
 

camosoul

Grizzled Member
Sep 19, 2014
2,261
1,130
1,183
I wonder if this pull request from 2 days ago is related your described "failing to propagate messaging" or if it is unrelated : https://github.com/dashpay/dash/pull/2967
Maybe two different issues... What knocks a node off may or may not be the same thing that makes it permabanned with no hope even after re-registering... Scrap the node, start over, and hope it doesn't get singled out... But from what I'm seeing, everything will eventually be banned until there's only one masternode left (maybe two, fighting over which one is to be banned); it just takes a while...
 

GrandMasterDash

Grizzled Member
Masternode Owner/Operator
Jul 12, 2015
3,423
1,459
1,183
Nothing wrong here, and mine are not on Amazon.

Really want to hear what DCG has to say about this.

And why is it just two hits? Seems a bit harsh.
 

Rick Seeger

New Member
Jun 8, 2019
12
5
3
52
Half my nodes on Vultr are banned even though they've never experienced any down time. All of them have a PoSe penalty > 0. I parsed the debug.log to produce this chart - definitely linear, possibly exponential. I don't think it's worth the effort to re-establish my nodes until this is resolved. There's clearly a major problem here.
mn-bans.png
 

tungfa

Grizzled Member
Foundation Member
Masternode Owner/Operator
Apr 9, 2014
8,898
6,747
1,283
Half my nodes on Vultr are banned even though they've never experienced any down time. All of them have a PoSe penalty > 0. I parsed the debug.log to produce this chart - definitely linear, possibly exponential. I don't think it's worth the effort to re-establish my nodes until this is resolved. There's clearly a major problem here.
Did you properly register your MN (deterministic and all)?
BLS operator key?
 

AjM

Well-known Member
Foundation Member
Jun 23, 2014
1,341
575
283
Finland
Half my nodes on Vultr are banned even though they've never experienced any down time. All of them have a PoSe penalty > 0. I parsed the debug.log to produce this chart - definitely linear, possibly exponential. I don't think it's worth the effort to re-establish my nodes until this is resolved. There's clearly a major problem here.
Pinging @UdjinM6 @codablock
 

flare

Grizzled Member
May 18, 2014
2,286
2,404
1,183
Germany
Half my nodes on Vultr are banned even though they've never experienced any down time. All of them have a PoSe penalty > 0. I parsed the debug.log to produce this chart - definitely linear, possibly exponential. I don't think it's worth the effort to re-establish my nodes until this is resolved. There's clearly a major problem here.
What is the output of one of your non-banned, PoSe > 0 nodes for

Code:
$ dash-cli masternode status
?
 
  • Like
Reactions: tungfa

Phil7

New Member
Nov 29, 2018
3
0
1
45
From reading this thread, by now it should be clear that it was gross negligence by Core to enact & enforce this PoSe ban system on live net.

The PoSe scoring should have been introduced first without any immediate banning effects, purely so it could be closely
monitored & observed over the course of a grace period, to see whether the score really behaves the way it is expected to, without any flaws or severe bugs.
Only after a close observation period of at least 1-2 months, and only if the score behaved just as expected, should it have been enacted & enforced with actual bans.

Seems like for some MNOs this is turning out to be a real nightmare.
Especially because it seems hard or even outright impossible to fix if an MN (or the PoSe system itself) produces no specific log files to exactly pinpoint
the failure (with exact incident, timestamp etc.) and the recommended measures to repair it.
 

codablock

Active Member
Mar 29, 2017
100
154
93
38
Can everyone who is affected please add these lines to dash.conf of the masternodes:

Code:
debug=llmq
debug=llmq-sigs
debug=llmq-dkg

Can you also give me protx hashes of failed MNs that you believe should not have failed? You can send these in private to me.
So far, all cases that have been investigated by core members turned out to be a misconfiguration. If there really is an issue, we'll try to figure out what it is ASAP.
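
If it helps anyone gather that data, a small sketch for pulling the relevant lines out of debug.log once the categories above are active (this assumes the default ~/.dashcore data directory and that dashd has been restarted so the new debug settings take effect; adjust the path for a custom datadir):

Code:
# LLMQ / DKG related log lines from the last part of the log
grep -iE 'llmq|dkg' ~/.dashcore/debug.log | tail -n 200

# the protx hash asked for above should be visible in the status output
dash-cli masternode status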
 

qwizzie

Grizzled Member
Aug 6, 2014
2,113
1,291
1,183
I guess we have found at least one other metric that's influencing the PoSe Penalty score: misconfiguration of the deterministic masternode.
It will be interesting to see if this is a major cause of masternodes getting PoSe banned, or if there are v14 code problems that need fixing.

In the meantime I suggest that owners of banned masternodes double-check their deterministic masternode setup, particularly with regard to
the BLS operator key and whether their Sentinel has been updated. It can also be handy to run a "./dash-cli masternode list full | grep -e IPADDRESS" command (see the example below).
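
For anyone unsure how to run that check, a small illustrative example (MN_IP is a placeholder for your masternode's IP address; drop the "./" if dash-cli is on your PATH):

Code:
MN_IP="203.0.113.5"
./dash-cli masternode list full | grep -e "$MN_IP"   # the deterministic list entry for your node, including its status
./dash-cli masternode status                         # run on the masternode itself to see what it reports about its own state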
 
Last edited:

flare

Grizzled Member
May 18, 2014
2,286
2,404
1,183
Germany
I guess we have found at least one other metric thats influencing the PoSe Penalty score : misconfiguration of the deterministic masternode.
It will be interesting to see if this is a major cause of masternodes getting PoSe banned or if there are v14 code problems that need fixing.
I can only speak for myself, but for me the code is working flawlessly. The code was tested for weeks on testnet and was deployed to mainnet when we were sure of its production quality.

On mainnet the PoSe system works as designed - and I am maintaining hundreds of masternodes for my customers. I actually had 3 cases of PoSe penalties in the last few days, and in all cases the collateral owner had accidentally changed the operator pubkey, leading to a mismatching configuration.

So before calling for pitchforks: check first whether the masternodeblsprivkey in your dash.conf matches the registered operator pubkey.
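
A minimal illustrative way to do that comparison, assuming you still have the output of the "dash-cli bls generate" call you used when setting up the node (the public half of that key pair is what should be registered on-chain, the secret half is what belongs in dash.conf); the proTx hash below is a placeholder:

Code:
# operator public key currently registered on-chain for this masternode
dash-cli protx info "YOUR_PROTX_HASH" | grep -i pubKeyOperator

# BLS secret the node is actually running with (must be the private key matching that pubkey)
grep masternodeblsprivkey ~/.dashcore/dash.conf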
 

qwizzie

Grizzled Member
Aug 6, 2014
2,113
1,291
1,183
I can only speak for myself, but for me the code is working flawless. The code was tested for weeks on testnet and was deployed to mainnet when we were sure of its production quality.

On mainnet the PoSe system works as designed - and i am maintaining 100s of masternodes for my customers. Actually I had 3 cases of PoSe penalties the last days and in all cases the collateral owner accidentally changed the operator pubkey, leading to mismatching configuration.

So before calling for pitchforks: Check if the masternodeblsprivkey in your dash.conf matches the operator pubkey first.
Personally I have not experienced any PoSe penalty on my masternodes, so I'm inclined to believe that unintended misconfiguration of masternodes by their owner / operator could have a large impact
on these masternodes getting banned. Makes sense to me. The PoSe Penalty score just brings visibility to it.
 

qwizzie

Grizzled Member
Aug 6, 2014
2,113
1,291
1,183
@camosoul : I remember you using a restart script / command that automatically restarts your masternodes every "x" amount of time.
Is that still the case? I'm wondering if that could either have an impact on your PoSe score or obscure a possible misconfiguration.
 

masternube

Member
Nov 9, 2017
81
14
48
@flare I also had a MN banned once when I changed the operator key (thread). Could you explain the point of being able to change the operator key (using protx update_registrar) if it will get you banned anyway?
 
  • Like
Reactions: slamdunk

Figlmüller

Member
Sep 2, 2014
89
48
58
Vienna, Austria
@camosoul : i remember you using a restart script / command that lets your masternodes restart every "x" time automatically.
Is that still the case ? I'm wondering if that could have either an impact on your PoSe score or obscure a possible misconfiguration.
What's the point in restarting masternodes regularly? Using the stop command or by killing the process with a SIGTERM?
I would generally recommend using scripts that only start the daemon when it is not running (based on the PID file, or by process name in a single-node setup), roughly like the sketch below.
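
For example, a rough sketch of such a start-only-when-missing wrapper, assuming dashd is on the PATH and matching by process name (so it only suits a single-node setup):

Code:
#!/usr/bin/env bash
# Start dashd only if no dashd process is currently running.
if ! pgrep -x dashd > /dev/null; then
    echo "$(date): dashd not running, starting it" >> "$HOME/dashd-watchdog.log"
    dashd -daemon
fi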
 

JGCMiner

Active Member
Jun 8, 2014
364
217
113
No PoSe errors for me either. Looking at Dashninja, this seems to be the case for the vast majority of other masternodes as well. So I agree that, rather than grabbing pitchforks, those having issues should take advantage of @codablock's offer if they cannot troubleshoot the issue on their own.


@flare I also had a MN banned once when I changed the operator key (thread). Could you explain the point of being able to change the operator key (using protx update_registrar) if it will get you banned anyway?
PoSe banning was enabled a few days ago but your post was from April. I don’t see the relation.
 

splawik21

Yeah, it's me....
Dash Core Group
Foundation Member
Dash Support Group
Apr 8, 2014
1,971
1,339
1,283
All good here, no PoSe_penalty taken (fingers crossed) :cool:
 

qwizzie

Grizzled Member
Aug 6, 2014
2,113
1,291
1,183
What's the point in restarting masternodes regularly? Using the stop command or by killing the process with a SIGTERM?
I generally would recommend to use scripts which only start the daemon when it is not running (based on the PID file or by process name in a single node setup).
I used to stop my masternode and restart it every two weeks (before v13), as dashd would for some mysterious reason crash on me after two weeks, get restarted by my restart script, and then end up
in a false "masternode start required" state. That then required a manual deletion of the mncache.dat file and a dashd restart to get the masternode active again.
At the time I thought the crashing of my dashd was related to my server's RAM, and as a precaution I restarted the server every two weeks to prevent these crashes and the associated extra work.

Since v13 I no longer have any problems (mncache is no longer local and my dashd runs stable for long periods of time), so I just use monit to monitor and restart my dashd if necessary.
So I also don't really see a need (anymore) to restart masternodes on a regular basis.
 
Last edited:

masternube

Member
Nov 9, 2017
81
14
48
@JGCMiner I guess an additional cause for PoSe banning was introduced recently, but my node was PoSe banned in April after I changed the operator key, so PoSe banning definitely existed. If you think it's off topic here, I'd appreciate an answer on the other thread.
 

JGCMiner

Active Member
Jun 8, 2014
364
217
113
@JGCMiner I guess an additional cause for PoSe banning was introduced recently, but my node was PoSe banned in April after I changed the operator key, so PoSe banning definitely existed. If you think it's off topic here, I'd appreciate an answer on the other thread.
I see. However, as this is a v14 release thread I would rather keep this discussion to banning types just introduced to minimize confusion.

Hopefully a dev can get back to you in the other thread.
 
Status
Not open for further replies.