
Dash Core v0.14 on Mainnet

"Dash Core v0.14 Rollout Plan" image in that blog is not loading for me, i hope you dont need to be signed in to see it.

It only seems to work in a non-private browser.
(It does not work for me either ;) pinged liz already.)
Check this:
[image: Egu4aBk.png]
 
How can I see if a specific block is voting for activation?
If I look at Blockchair, all blocks only seem to be voting for BIP9.
 
Looks like the first window of 4032 blocks did not reach the 80% miner signalling threshold, which means we are now entering the second window of 4032 blocks.
Hopefully DIP8 will lock in during this second window, which will end with block 1,084,608.
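Roughly, that lock-in check can be sketched like this (just an illustration, not Core's code; I'm assuming DIP8 signalling is what sets bit 4 of the block version, i.e. the 0x10 in 0x20000010, and the exact threshold rounding Dash Core uses may differ):

Code:
# Sketch of an 80% signalling check over one 4032-block window (not Core's code).
# Assumption: a block signals DIP8 by setting bit 4 of its version (0x10).
WINDOW_SIZE = 4032
THRESHOLD = 0.80
DIP8_BIT = 1 << 4  # 0x10

def signals_dip8(block_version):
    return bool(block_version & DIP8_BIT)

def window_locks_in(block_versions):
    # block_versions: the versions of all 4032 blocks in one window
    signalling = sum(1 for v in block_versions if signals_dip8(v))
    return signalling >= WINDOW_SIZE * THRESHOLD

# Example: a window where only ~70% of blocks signal does not lock in.
example = [0x20000010] * 2822 + [0x20000000] * 1210
print(window_locks_in(example))  # False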
 
Showing "Unlisted (0%)" in red on Dashninja for all my masternodes almost gave me a heart attack. I'm not a fan of this specific status indication / active score.
I like the "Valid" or "Active" (100%) indication in green much more.

Link : https://www.dashninja.pl/deterministic-masternodes.html

Sample of some random active masternodes on the first page:

[image: eYz7cs7.jpg]
 

Dashninja just updated to 14.0.1.
Something must have gone wrong; it's not loading properly for me. Pinged elberth already.

Edit: Fixed now.
 
Looks like the conditions for activating spork 17 (QUORUM_DKG_ENABLED) are met :

DKG Spork Activation Criteria

Once at least 50% of masternode owners have updated to Dash Core v0.14.0.1 and 80% of masternode owners have updated to at least the Dash Core v0.14.0.0 version,
we plan to activate the DKG spork. At that time, LLMQs will begin forming and PoSe scoring will occur.


[image: 65H6qcp.jpg]


v0.14.0.1 > 50%
v0.14.0.1 + v0.14.0 > 80%
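As a quick back-of-the-envelope version of those two criteria (the version counts below are made up, not real network numbers):

Code:
# Rough sketch of the stated DKG spork criteria, using made-up version counts.
# Criterion 1: >= 50% of masternodes on v0.14.0.1
# Criterion 2: >= 80% of masternodes on v0.14.0.0 or later
def dkg_criteria_met(version_counts):
    total = sum(version_counts.values())
    v14_0_1 = version_counts.get("0.14.0.1", 0)
    v14_or_later = v14_0_1 + version_counts.get("0.14.0.0", 0)
    return v14_0_1 / total >= 0.50 and v14_or_later / total >= 0.80

# Hypothetical snapshot of the masternode list (not actual data):
print(dkg_criteria_met({"0.14.0.1": 2600, "0.14.0.0": 1500, "0.13.x": 800}))  # True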
 
Isn't 80% miner signaling also a condition for the spork?

And to answer my own question from earlier, I'm guessing that the signaling can be seen as version 0x20000010 instead of 0x20000000.
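If that guess is right, the only difference between the two values is bit 4. A small illustrative check (mapping bit 4 to DIP8 is my assumption here, not something taken from the release notes):

Code:
# Which BIP9-style signalling bits does a block version set? (illustrative only)
TOP_MASK = 0xE0000000
TOP_BITS = 0x20000000  # marks the version as BIP9-style signalling

def signalling_bits(version):
    if version & TOP_MASK != TOP_BITS:
        return []              # not a BIP9-style version
    return [bit for bit in range(29) if version & (1 << bit)]

print(signalling_bits(0x20000000))  # [] -> signals nothing
print(signalling_bits(0x20000010))  # [4] -> signals deployment bit 4 (DIP8?)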
 

I suspect this specific spork (QUORUM_DKG_ENABLED) depends more on broad masternode support and less on broad miner support,
while a spork like ChainLocks not only needs broad masternode support for activation, but also needs broad miner support before its enforcement is switched on.

Please correct me if my assumption is incorrect or incomplete and we do need to wait for DIP8 activation before this DKG spork can activate.

The way I see it:

* Spork 17 (DKG) can now be manually activated
* After a short time, Spork 19 (ChainLocks) can be manually activated
* Once DIP8 activates in a few weeks, ChainLocks gets automatically enforced
* After ChainLocks enforcement, Spork 20 (LLMQ-based InstantSend) can be manually activated and monitored for impact

Edit: Spork 17 has been activated: http://178.254.23.111/~pub/Dash/Dash_Info.html (SPORKS)

[image: 0pfqnBF.jpg]


Which means LLMQs will begin forming and PoSe scoring will occur.
 
DIP is not the same as SPORK.

DIP0008 – LLMQ-based ChainLocks.
DIP8 = SPORK 19

https://www.dash.org/releases/
Thanks for the clarification. I already knew that, but pretty much nobody outside of testnet does...

DASH is great, but it's definitely too complicated for cryptotards... You have to target non-crypto people for use. Between the polarized fanboys, and plain stupidity, nobody within the cryptosphere will ever care because they're not smart enough to comprehend in the first place. You can't care about something so far outside of your ability to comprehend that you don't even know it's there... This is DASH's only real downfall...
 
Now that the PoSe Penalty is in effect (impacting 151 212 masternodes so far), maybe someone could refresh our memory on how
PoSe works and what to do when your PoSe Penalty gets too high.

I remember having read something about the PoSe Penalty score diminishing per hour after the masternode starts behaving again?
I also remember something about a ProTx update command being needed if the PoSe Penalty gets too high, as the masternode will then be in a banned status?
What counts as too high a PoSe Penalty score, and how does it show on Dashninja? PoSe banned? Or Delisted?

Current highest PoSe Penalty: 3259, with those masternodes still showing "Valid" and colored green on Dashninja.

Edit 1: I think we should add a more specific chapter about the masternode PoSe Penalty here: https://docs.dash.org/en/stable/masternodes/maintenance.html#
and have Dashninja refer to it in its "Masternodes List explanations" info part (currently Dashninja does not refer to the PoSe Penalty at all, even though it was added to that list).
Once we have a specific chapter for the PoSe Penalty, we can start referring people to it when they are experiencing problems with it.
We can then also make a pinned topic with a referral to that newly created chapter here: https://www.dash.org/forum/topic/masternode-questions-and-help.67/

Edit 2: the latest Dash News article provided some clarity:

* The PoSe Penalty diminishes by 1 per block
* In case of a PoSe ban, a ProUpServTx transaction is required
* The documentation regarding PoSe and its penalty can be found here: https://docs.dash.org/en/stable/masternodes/understanding.html?highlight=PoSe

But it's still unclear to me how high a PoSe Penalty can get before it becomes a PoSe ban, even after reading this:
"Each failure to provide service results in an increase in the PoSe score relative to the maximum score,
which is equal to the number of masternodes in the valid set. If the score reaches the number of masternodes in the valid set, a PoSe ban is enacted."

Does the valid set refer to the number of active masternodes? Would that mean the PoSe Penalty score can go up to roughly 4,900 before it becomes a PoSe ban?

Also, I still think we need more visibility and references regarding the PoSe Penalty for masternode owners.
 

This may not answer all your questions, but this is from Core Dev @thephez on Discord:

“Each failure to participate in DKG results in a PoSe score increase equal to 66% of the max allowable score (max allowable score = # of registered MNs at the time of the infraction). So a single failure in a payment cycle won't hurt you. You can sustain 2 failures provided they are not too close together since your score drops by 1 point each block.

And by cycle I mean the number of blocks required to go through the whole MN list (i.e. the # of masternodes).

So currently it looks like there are 4917 MNs.
- A failure will increase your score by 3245 (4917*0.66).
- So as long as your current score is less than 1672 (4917 - 3245), you can sustain a 2nd failure without being banned (i.e. without your score going over 4917).

However, failing more than twice in the cycle will always result in a ban.”

Core reference for anyone that's curious: https://github.com/dashpay/dash/blob/master/src/evo/deterministicmns.cpp#L811-L817
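To make that arithmetic concrete, here is a simplified model of the scoring as described above; it is an illustration of the quoted explanation, not the actual logic from deterministicmns.cpp:

Code:
# Simplified model of the PoSe scoring described above (not Dash Core's code).
# - A DKG failure adds 66% of the max score; max score = number of registered MNs.
# - The score decays by 1 per block.
# - If the score reaches the max, the masternode is PoSe-banned.
class PoSeModel:
    def __init__(self, registered_mns):
        self.max_score = registered_mns
        self.penalty = int(registered_mns * 0.66)
        self.score = 0
        self.banned = False

    def dkg_failure(self):
        self.score += self.penalty
        if self.score >= self.max_score:
            self.banned = True

    def advance_blocks(self, n):
        # score decays by 1 per block while the node behaves
        self.score = max(0, self.score - n)

mn = PoSeModel(registered_mns=4917)
mn.dkg_failure()            # score jumps to 3245
mn.advance_blocks(2000)     # decays to 1245 over ~2000 blocks
mn.dkg_failure()            # 1245 + 3245 = 4490 < 4917 -> still not banned
print(mn.score, mn.banned)  # 4490 False

With these numbers a second failure is only survivable while the current score is below 4917 - 3245 = 1672, which matches the figures quoted above.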
 
Unfortunately, it seems like nodes are being penalized even when they are not failing. Even when they're not involved...

It's only a matter of time before every node gets banned.

Something is seriously wrong and it needs to get fixed while masternodes are still a thing.
 

This got me curious:
A number of metrics are involved in the calculation, so it is not possible to game the system by causing masternodes to be PoSe banned for failing to respond to ping requests by e.g. a DDoS attack just prior to payment
Are these metrics just the failure to participate in DKG and the PoSe Penalty score reduction of 1 per block? Or are more metrics involved?

Brainstorming here:

* maybe they get punished for being on an older version? (v13 for example)
* maybe they have connection / propagation problems?
* closed ports?
* running inadequate hardware (RAM / CPU wise)?

Or they were indeed only punished for failing to participate in DKG, but have since recovered to a much lower
value thanks to that "1 per block" recovery mechanism.

Edit: Also, I think it would be helpful to know why some of the v0.14.0.1 masternodes are failing to participate in DKG and getting penalized for it. What could be the reason behind their failure to participate?
 
I've been observing more...

It seems the network is fractured. Nodes on the same host, same connectivity, same network, vastly more resources than necessary, all latest version... Some get false flags, some don't.

The network as a whole is failing to propagate messaging. It seems once a node falls into the false flag hole, there's nothing you can do to save it.

Exactly why it's happening escapes me. It seems to be totally random. I can only report observable symptoms. What I know is that nodes aren't failing to participate, they're simply never getting the message, even though they're on the same network, same physical machine, etc, as nodes not being false flagged. It's not the nodes... Re-registering does not help. The nodes get false flagged off the network very quickly. The network is simply not contacting the nodes.

I'm advising people to stop running masternodes. Fighting this problem is a complete waste of time from the owner/operator end because it's not their fault. It's a very small minority, but it doesn't stop. In a sample size of 150 nodes, about 7% get false flagged every 24 hours... it's creeping slowly. In 8 days I estimate every node I'm monitoring will be gone.

EDIT: My sample is biased, I'm only observing non-Amazon. It seems extreme centralization is the only way to avoid false PoSe flagging due to network blobs preferring their own blobs... and damn near everything is centralized onto Amazon. Run your node on Amazon, or get kicked off eventually.

It's not merely far too easy to get kicked off the network through no fault of your own; it seems to be guaranteed, if slow...

It is observable that they do get the 1-point reduction in PoSe score per block. But, they'll get hit with a penalty again very quickly, over what appears to be a message that never existed. It's almost like nodes are being selected for false flagging. Resources, connectivity and performance clearly have nothing to do with it.
 
This sounds concerning. Is anyone at DCG aware of this yet?
FWIW, my node doesn't run on Amazon and is still at 0 PoSe score.
 

I wonder if this pull request from 2 days ago is related to your described "failing to propagate messaging", or if it is unrelated: https://github.com/dashpay/dash/pull/2967

This sounds concerning. Is anyone at DCG aware of this yet?
FWIW, my node doesn't run on Amazon and is still at 0 PoSe score.

Same here.
 
I'm seeing nodes with no score for several days, then suddenly hit twice very quickly. It's rare that they get one hit and are then left alone to recover their score.

It seems like the network arbitrarily decides "I hate this node" and there's nothing you can do about it. Take the node down. It will never work again. You're screwed.

At least one such occurrence is confirmed to be hosted in a major datacenter/hub. Network is as solid as you could ever hope for. Bare metal installation with 8 xeon cores, 12GB RAM and an SSD. Nothing else running on it. Performance, resources and connectivity are absolutely not at issue.
 