
Should Platform run on all nodes, or only on High Performance nodes?

The way I see it:

If these decision proposals do not pass the threshold of 10% but DCG still uses the yes votes of these failed decision proposals to declare a winner, it will most likely become a case for the Dash Trust Protectors. It would be a situation where DCG for the first time explicitly goes against the wishes of the network, with an existing precedent where DCG previously (two years ago) did adhere to the 10% threshold (and saw its decision proposal fail spectacularly in the end, during phase 2). I suspect it is the fear of failing again with their decision proposal(s) that is driving their decision not to adhere to the 10% threshold this time.

The Dash Trust Protectors have the power (if certain conditions are met) to reassign the DCG Board in case masternode owners request it and also have a duty to hold DCG accountable to the Dash network and ensure that DCG is working in the best interests of Dash.

See: https://zaimirai.com/dash-trust-protectors/

Hopefully it never comes to that. But seeing the currently limited (and in my view rather low-quality) DCG options to start Dash Platform, and DCG's firm decision not to adhere to the 10% threshold, I am not sure DCG is currently working in the best interests of Dash.
 
After an initial reactionary response, in which I was concerned about the perceived complexity of the proposed solutions, my concern is now reserved almost exclusively for the censorship resistance of Platform. If we can get any reasonable guarantee that Platform can't be censored by a small handful of HPMNs, then I think we have reason to be optimistic about this network upgrade.

My secondary concerns are more downstream, related to future optimizations and changes as new technology emerges and the tokenomics of the network change. Will we be able to make changes that are a) technically feasible (is the solution extensible?) and b) socially appropriate (will there be damage, perceived or otherwise, from making changes to block rewards, etc. going forward)?
 
IMHO you're completely wrong. You can modify the software, promote it, ask other MNOs to run your version, and so on. And every single MNO is free to do that.

Nobody who lacks the technical expertise can do what you propose, and that's probably everybody except a few Core/Platform developers.
You know that something like that could even cause one (or more) forks of the project, which could seriously damage the project's market cap.
We have the possibility of voting (and sporks) in order to prevent unintended forks.
Why would you advocate something that could potentially result in a fork?
Perhaps you don't really advocate a fork, because you know that only very few people would be technically versed enough to pull it off.

So basically all you say is "You can do nothing against DCG not respecting Governance Proposal outcomes" or "Defend yourself against DCG if you can".
Neat discussion we are having here.
 
My secondary concerns are more downstream, related to future optimizations and changes as new technology emerges and the tokenomics of the network change. Will we be able to make changes that are a) technically feasible (is the solution extensible?) and b) socially appropriate (will there be damage, perceived or otherwise, from making changes to block rewards, etc. going forward)?

HPMNs create tokenomic complexity down the line. For example, say the network wants to further expand the number of masternodes; this would no doubt impinge on HPMNOs.

Also, who's to say DCG won't want to later introduce a third node class, perhaps archive nodes, smart contracts, or something else? No one is going to provide guarantees, for even if they did, a few years down the line they would say, "I'm not responsible for other people's promises".

I don't know why they can't just build it like the Lightning Network, which has no upper limit to the number of nodes.
 
HPMNs create tokenomic complexity down the line. For example, say the network wants to further expand the number of masternodes; this would no doubt impinge on HPMNOs.

Also, who's to say DCG won't want to later introduce a third node class, perhaps archive nodes, smart contracts, or something else? No one is going to provide guarantees, for even if they did, a few years down the line they would say, "I'm not responsible for other people's promises".

I don't know why they can't just build it like the Lightning Network, which has no upper limit to the number of nodes.

What makes no sense is to increase collateral while aiming at relatively few Platform nodes, when the criteria should rather be based on hardware resources,
like bandwidth score, ping latency score, available space, etc.
They want Platform to be fast as hell, but have obviously totally forgotten how to enforce just that.
It would require masternodes to internally maintain at least a 24-hour benchmark score (average) for bandwidth, ping latency, available space, etc.,
so that masternodes could compete on a hardware level.
Coding something like that (hopefully without it requiring another 8 GB of RAM) could be tricky, if only because bandwidth and latency are usually measured between two destinations (IP addresses), while such fair scores would have to be assigned to single masternodes in order to distinguish the fastest from the lamest nodes.
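The pairing problem described above can be worked around by averaging every pairwise sample a node appears in. A minimal sketch of that idea (node names and latency figures are hypothetical, and this is not an actual Dash mechanism):

```python
from collections import defaultdict
from statistics import mean

def per_node_scores(measurements):
    """Collapse pairwise (src, dst, latency_ms) samples into one score per node.

    Bandwidth and latency are properties of a *pair* of endpoints, so a
    simple (and admittedly imperfect) single-node score is the average of
    every sample a node appears in over the 24-hour window.
    """
    samples = defaultdict(list)
    for src, dst, latency_ms in measurements:
        samples[src].append(latency_ms)
        samples[dst].append(latency_ms)
    # Lower average latency means a better-scoring node.
    return {node: mean(vals) for node, vals in samples.items()}

# Hypothetical 24-hour samples between masternodes A, B and C:
data = [("A", "B", 40.0), ("A", "C", 120.0), ("B", "C", 80.0)]
print(per_node_scores(data))  # {'A': 80.0, 'B': 60.0, 'C': 100.0}
```

A consistently slow node drags down the average of every pair it appears in, so it ends up with the worst score even though no single measurement targets it alone.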
 
It's worth having a read over GNUnet: it already has a lot of those aspects covered, along with a whole host of other security, storage, and networking features. I'm not saying it's the ideal solution; I'd imagine Platform would need something far more tailored to its needs, and there are probably far better starting points out there, but it may offer solutions to some of the problems.
 
What makes no sense is to increase collateral while aiming at relatively few Platform nodes, when the criteria should rather be based on hardware resources,
like bandwidth score, ping latency score, available space, etc.
They want Platform to be fast as hell, but have obviously totally forgotten how to enforce just that.
It would require masternodes to internally maintain at least a 24-hour benchmark score (average) for bandwidth, ping latency, available space, etc.,
so that masternodes could compete on a hardware level.
Coding something like that (hopefully without it requiring another 8 GB of RAM) could be tricky, if only because bandwidth and latency are usually measured between two destinations (IP addresses), while such fair scores would have to be assigned to single masternodes in order to distinguish the fastest from the lamest nodes.
It is not that it makes no sense; it is that it is suboptimal. So the network has to decide: does it wait for the capability to do this in a more optimal, elegant way, or does it take the risk of trying to manipulate the block reward to counter the centralization risk from whales? That is supposing the HPMN plan doesn't allow for censorship; if it does, then it makes no sense.

Basically neither decision is great. I don't know if I'd rather wait any longer or have a suboptimal initial rollout.
 
What makes no sense is to increase collateral while aiming at relatively few Platform nodes, when the criteria should rather be based on hardware resources,
like bandwidth score, ping latency score, available space, etc.
They want Platform to be fast as hell, but have obviously totally forgotten how to enforce just that.
It would require masternodes to internally maintain at least a 24-hour benchmark score (average) for bandwidth, ping latency, available space, etc.,
so that masternodes could compete on a hardware level.
Coding something like that (hopefully without it requiring another 8 GB of RAM) could be tricky, if only because bandwidth and latency are usually measured between two destinations (IP addresses), while such fair scores would have to be assigned to single masternodes in order to distinguish the fastest from the lamest nodes.


Nope, this benchmark race does not make sense. It is a waste of CPU and bandwidth resources.
What makes sense is a "proof of service" from the point of view of the client.
The client requests a paid service, the masternodes compete with each other under the rules of the protocol, and the one able to offer the service to the client the fastest provides the service and gets the reward.
This can be implemented, but of course someone has to propose it to the budget and define how much the developers should be paid for implementing it. The bigger the bounty, the better the developers you will attract.
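A toy model of the client-side proof of service described above, where nodes race to answer a paid request and the fastest responder takes the reward (node names and response times are placeholders, not protocol details):

```python
import random

def proof_of_service_round(nodes, request):
    """Toy model: every masternode races to answer the client's paid
    request; the fastest responder provides the service and wins the
    reward. Random response times stand in for real network latency."""
    response_ms = {node: random.uniform(5.0, 200.0) for node in nodes}
    winner = min(response_ms, key=response_ms.get)
    return winner, response_ms[winner]

winner, ms = proof_of_service_round(["mn1", "mn2", "mn3"], "fetch-document")
print(f"{winner} served the request in {ms:.1f} ms and earns the reward")
```

In a real protocol the "race" would of course need to be verifiable by the client or the quorum, not just self-reported by the nodes.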
 
Since per-node PoSe has proven so difficult to implement on Platform, why not have the Core masternode quorums perform a simple check on Platform's reachability? If Platform is up, the HPMNs get paid. When it is down, the Dash that would normally get burned/locked to create Credits and pay the HPMNs would simply be generated as regular Dash to the Core nodes as usual.

That way, the network wouldn't continually reward these proof-of-stake nodes purely based on their stake even when Platform goes down.
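The suggested payout rule could be sketched as follows (the function and field names are hypothetical, and the reward figure is illustrative):

```python
def route_platform_reward(platform_reachable: bool, reward: float) -> dict:
    """Sketch of the suggested rule: Core quorums probe Platform's
    reachability; if it is up, the reward is locked to create Credits for
    the HPMNs, otherwise it is generated as ordinary Dash to Core nodes."""
    if platform_reachable:
        return {"credits_locked_for_hpmn": reward, "dash_to_core_nodes": 0.0}
    return {"credits_locked_for_hpmn": 0.0, "dash_to_core_nodes": reward}

print(route_platform_reward(True, 1.5))   # reward locked as Credits
print(route_platform_reward(False, 1.5))  # reward paid to Core nodes
```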
 
Here is a modification of DCG's proposal:

First, we reduce the masternode collateral to 500 dash.

Then, instead of using collateral for HPMNs, we use Liquidity Pools. If a masternode wants to participate in Platform, they register their node and create something similar to a Lightning channel. At this point, anyone can add to the channel until a minimum of 5000 dash is raised. You will note, this is the same setup as a 10K HPMN. You will also note, this is not the same as shared masternodes as there are no penalties or consequences for adding and removing shares. The goal is simply to reach the 5K threshold by any means possible.

From the grand total of all channels, 50% is allocated to the donors and the remaining 50% goes to participating masternodes.

The masternode network can enforce a minimum donation of 10 dash; this is to limit spam payments clogging up payouts. To ensure enough masternodes exist, the network can also enforce a maximum of 5.1K dash per channel.
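The channel rules above (the 10 dash anti-spam floor, the 5K activation threshold, the 5.1K cap, and the 50/50 reward split) can be sketched like this; the donor names and amounts are illustrative only:

```python
MIN_DONATION = 10     # dash; anti-spam floor from the proposal
MIN_CHANNEL = 5_000   # dash; channel activates at this threshold
MAX_CHANNEL = 5_100   # dash; cap so enough masternodes can participate

def add_donation(channel: dict, donor: str, amount: float) -> bool:
    """Add a share to a Platform channel, enforcing the proposal's limits.
    Returns True once the channel has reached the 5K activation threshold."""
    if amount < MIN_DONATION:
        raise ValueError("donation below the 10 dash anti-spam minimum")
    if sum(channel.values()) + amount > MAX_CHANNEL:
        raise ValueError("channel would exceed the 5.1K cap")
    channel[donor] = channel.get(donor, 0.0) + amount
    return sum(channel.values()) >= MIN_CHANNEL

def split_reward(channel: dict, reward: float) -> dict:
    """50% of the reward to donors (pro rata by share), 50% to the masternode."""
    total = sum(channel.values())
    donor_pot = reward * 0.5
    payouts = {donor: donor_pot * amt / total for donor, amt in channel.items()}
    payouts["masternode"] = reward * 0.5
    return payouts

channel = {}
add_donation(channel, "alice", 3_000)   # channel not yet active
active = add_donation(channel, "bob", 2_000)
print(active)                           # True: the 5K threshold is reached
print(split_reward(channel, 10.0))      # alice 3.0, bob 2.0, masternode 5.0
```

Since shares can be added and removed without penalty, the only hard rule the network enforces is the threshold itself; everything else is donor coordination.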

--------

I'm not sold on HPMNs, I just wanted to improve on the original proposal.

In an ideal world, HPMNs would not be more than 1K dash, but if they must be, then I prefer this method.
 
Funny how all of this is only coming out now.
Never during the seven-year journey of Platform development has the MNO network been asked or polled on directions by the devs, after explaining the various tradeoffs.

If it's true that Platform will not waste resources (I really doubt it), it could just as well run on all 1K nodes, in line with the original Platform/Evo vision.
But now they claim that all 1K nodes running Platform would somehow slow it down or make Platform lame.
Then that is a serious coding/development shortcoming you should try to overcome and fix, rather than trying to adjust the whole network to your coding deficiencies.

More Platform nodes means more decentralization, and that must never (!) be a bad thing. If decentralization ever becomes bad, something is very wrong.
The devs could just as well work on a solution that is not burdened by a greater number of Platform nodes (all 1K nodes); on the contrary, a greater number of Platform nodes should be something useful, and should ideally decrease the hardware needs of every single Platform node.

I understand that for you devs, 4K or 10K is the overall easy fix, even if it complicates a couple of other aspects.
But easy is not always best.
 
An alternative to consider:

1. Make running platform services optional
2. Require collateral lockup to run a platform masternode.
3. Maintain 1,000 DASH collateral requirement for all masternodes:
- 1,000 DASH for core masternodes (as current/normal)
- 1,000 DASH locked for platform masternodes (see below for lock time considerations)
4. Launch conservatively and limit participation (do a beta launch)
- high initial lock time period (e.g. 1 year)
- low initial platform node allocation (e.g. 10% of masternode allocation)
5. Refine over time
- observe data (costs, performance, stability, which nodes have upgraded, etc)
- fix bugs and adjust consensus variables until dialed in:
- decrease lock time period (e.g. to 6 months) to increase node count (e.g. to 400 nodes)
- increase platform allocation (e.g. to 20%) to increase node count (e.g. to 600 nodes)

Example:
(numbers not set in stone, just for illustration)

[Attachment 11467: example table with illustrative numbers]
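Since the attached table isn't reproduced here, the kind of arithmetic it presumably illustrated can be sketched with hypothetical numbers (a 10% platform allocation split among 200 platform nodes, with the remainder going to roughly 3,800 core nodes; none of these are real protocol values):

```python
def per_node_reward(mn_reward_pool: float, platform_alloc: float,
                    platform_nodes: int, core_nodes: int):
    """Split a masternode reward pool between platform and core nodes.
    All inputs are hypothetical consensus variables, not real Dash values."""
    platform_pot = mn_reward_pool * platform_alloc
    core_pot = mn_reward_pool * (1.0 - platform_alloc)
    return platform_pot / platform_nodes, core_pot / core_nodes

# e.g. a pool of 1,000 units, 10% platform allocation,
# 200 platform nodes, 3,800 core nodes:
platform_share, core_share = per_node_reward(1000.0, 0.10, 200, 3_800)
print(round(platform_share, 3), round(core_share, 3))  # 0.5 0.237
```

Raising `platform_alloc` or lowering the lock time period shifts this balance, which is the lever the refinement step above proposes to tune.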

Why?

2. This solution satisfies the main stated goals of the "high performance masternode" proposal:

- It can mitigate the risk of multiple platform nodes dropping off with staggered time locks.

- It can mitigate the risk of multiple platform nodes dropping off with staggered time locks.

Is this to be considered a temporary fix for the risk of multiple Platform nodes dropping off, until DCG comes up with a definitive solution?
Or do staggered time locks provide a long-term fix for the risk of multiple Platform nodes dropping off, even after the time locks unlock? (Let's say, for example, the time locks unlock after one year.)

What I am wondering is whether we still have this problem (the risk of multiple Platform nodes dropping off) after that time lock expires in a year, or after six months.
Is it a case of buying the devs some time to implement something more permanent?
 
Is this to be considered a temporary fix for the risk of multiple Platform nodes dropping off, until DCG comes up with a definitive solution?
Or do staggered time locks provide a long-term fix for the risk of multiple Platform nodes dropping off, even after the time locks unlock? (Let's say, for example, the time locks unlock after one year.)

It's not a fix in the sense that it guarantees no risk (it mitigates the risk rather than eliminating it). The degree of mitigation would depend on the implementation.

My original thought with timelocks is simply that whales (especially third-party custodians) would not want to lock up all their nodes at the same time (regardless of timelock duration). Imagine you have 100 nodes: would you want to lock every single one up at the same time? Unlikely, because a) you would then have zero liquidity, and b) you don't want to get into a situation where your Platform masternodes are less profitable, for whatever reason, than your normal masternodes (or other non-Dash alternatives, for that matter). You might lock only 25% or 50% of your nodes, and/or stagger them over a time period. For example, you might spin up 10 now, 10 more in a week, another 10 in another week, and so on, until you're satisfied that you have a stable, relatively liquid investment that is more profitable than your alternatives.

The point is, with timelocks, it's very unlikely that you'll ape in with all your nodes at the same time. If that's the case, then there's much less risk to Platform that huge numbers of nodes owned by one entity will suddenly drop off, or be upgraded at the same time. That's the main risk as I understand it, so the problem is significantly mitigated.

I imagine under timelocks that we'd see a relatively slow, smooth influx of small amounts of nodes from dedicated operators at first. They would be handsomely rewarded for being the first in (and you don't have to be a quadra- or deca-whale to have this opportunity - just need to be dedicated to holding Dash). Because there's a higher barrier to entry you won't see a mad rush of operators switching over their whole fleets on day one, which is exactly what you want.

So I think it both buys the devs more time (without a drastic and hasty collateral increase) and, depending on what we see in terms of which nodes decide to run Platform, it may even offer a permanent solution with just a one-time timelock. The timelock requirement could potentially even be removed in the future if we see a well-distributed mix of operators (no 130+ node operators, for example).
 
Funny how all of this is only coming out now.
Never during the seven-year journey of Platform development has the MNO network been asked or polled on directions by the devs, after explaining the various tradeoffs.

If it's true that Platform will not waste resources (I really doubt it), it could just as well run on all 1K nodes, in line with the original Platform/Evo vision.
But now they claim that all 1K nodes running Platform would somehow slow it down or make Platform lame.
Then that is a serious coding/development shortcoming you should try to overcome and fix, rather than trying to adjust the whole network to your coding deficiencies.

More Platform nodes means more decentralization, and that must never (!) be a bad thing. If decentralization ever becomes bad, something is very wrong.
The devs could just as well work on a solution that is not burdened by a greater number of Platform nodes (all 1K nodes); on the contrary, a greater number of Platform nodes should be something useful, and should ideally decrease the hardware needs of every single Platform node.

I understand that for you devs, 4K or 10K is the overall easy fix, even if it complicates a couple of other aspects.
But easy is not always best.
Agree on this!

I would say the 1,000 Dash collateral should not be changed, as this is the base principle of the MN network.
Then for HPMNs, maybe make this optional with more incentive, so people who like to run HPMNs can do it, and the ones who don't want to can secure the network as they do today. This incentive also comes partially and indirectly from data storage fees, as I understand it...
 
for HPMNs, maybe make this optional with more incentive, so people who like to run HPMNs can do it, and the ones who don't want to can secure the network as they do today.
Are you using “HPMN” (high performance masternode) as synonymous with high collateral masternode?

“High performance masternode” is a misnomer. I’ve heard no evidence or even theory why high collateral nodes would be more “performant” than a 1,000 DASH collateralized node. In practice nodes will generally only be as performant as needed to get paid, and this has nothing to do with the collateral backing them.

I suppose Sam likes the “HPMN” term because it helps sell his idea. Until @QuantumExplorer can show that there is an identifiable link between performance and collateral I consider this naming practice no better than politicians who name bills for marketing purposes rather than descriptive purposes, just to gain popularity - it’s a shameful practice.

Please, everyone, stop calling them high performance masternodes. Call them high collateral masternodes (HCMNs) if you want. I prefer the more descriptive and neutral “platform (master)nodes”.

With that out of the way, I like the idea of making high collateral optional. We discussed this very thing on a call a while ago. The idea would be to let operators choose a higher collateral to back a node, so, for example, instead of running two 1,000 DASH-backed nodes, you could run one 2,000 DASH-backed node. The latter would get paid the same as the former, but would have lower operating costs. That would incentivize multi-node operators to run fewer nodes, which could be better for the network. It's definitely something to consider and research more. I'm guessing DCG has thought of it, but I haven't heard anything about it.
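The consolidation incentive described above comes down to simple arithmetic: rewards scale with collateral backing while hosting costs scale with node count. A sketch with illustrative numbers (none of these are protocol values):

```python
def operator_profit(total_collateral: int, collateral_per_node: int,
                    reward_per_1k_backing: float, cost_per_node: float) -> float:
    """Profit under the assumption that rewards scale with collateral
    backing while hosting costs scale with the number of nodes run."""
    nodes = total_collateral // collateral_per_node
    reward = reward_per_1k_backing * (total_collateral / 1_000)
    return reward - cost_per_node * nodes

# Same 2,000 DASH backing, same total reward, half the servers:
two_small = operator_profit(2_000, 1_000, 6.0, 1.0)  # two 1K-backed nodes
one_big = operator_profit(2_000, 2_000, 6.0, 1.0)    # one 2K-backed node
print(two_small, one_big)  # 10.0 11.0
```

Under these assumptions the consolidated node is strictly more profitable, which is the whole incentive: fewer machines to pay for, identical payout.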
 