
Should Platform run on all nodes, or only on High Performance nodes?

What ARE the costs then?

When we consider the fees, the most expensive part is data storage. DCG has some whacked-out fee model, but it can be simplified to: the cost of the SSD space to store the data for 10 years, times the number of nodes. So, a back-of-the-napkin calculation to, say, store your profile data together with a small grainy avatar would be:
  • Assume 100GB SSD is $24/month, profile data is 14KB, 100 evo nodes.
Cost to store -> 14KB / (100 * 1024 * 1024 KB) * $24/month * 12 months * 10 years * 100 nodes ≈ $0.0384.
And this is the number QE thinks end users would be willing to pay for such a transaction: 3.8 cents, compared to $1.52 for the entire network.
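For anyone who wants to check the arithmetic, here is the same back-of-the-napkin model as a small script. The SSD rental price, document size, and node counts are this thread's assumptions, not official DCG fee parameters:

```python
# Simplified storage-fee model used in this thread: the cost of the SSD
# space needed to hold the data for some period, multiplied across every
# node that stores a replica. All figures are thread assumptions.

SSD_PRICE_PER_100GB_MONTH = 24.0   # USD, assumed rental price
KB_PER_100GB = 100 * 1024 * 1024   # 100 GB expressed in KB

def storage_cost(size_kb, years, nodes):
    """USD to store `size_kb` of data for `years` on `nodes` replicas."""
    monthly = size_kb / KB_PER_100GB * SSD_PRICE_PER_100GB_MONTH
    return monthly * 12 * years * nodes

# 14KB profile, 10 years, 100 evo nodes -> about 3.8 cents.
print(f"${storage_cost(14, 10, 100):.4f}")
```

Note that the alternative proposal later in the thread (2 years on 500 nodes) lands on exactly the same figure, since 2 * 500 = 10 * 100 node-years.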

Ok. This is my alternative proposal, in order to preserve both a low transaction fee and strong decentralization (aka many DashPlatform nodes).
  • Assume 100GB SSD is $24/month, profile data is 14KB, 500 evo nodes.
Store the data for 2 years.

Cost to store -> 14KB / (100 * 1024 * 1024 KB) * $24/month * 12 months * 2 years * 500 nodes ≈ $0.0384.

But NOT just ANY 500 nodes among the 3500 masternodes. Only masternodes that participated in recent governance votes should be allowed to belong to this 500-node group and host DashPlatform (because voting participation is a proof of individuality, and proof of individuality strengthens decentralization).
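A minimal sketch of that eligibility filter, assuming (hypothetically) that each masternode record carries a count of governance votes cast in recent cycles. The field names, the threshold, and the ranking rule are illustrative assumptions, not part of any Dash implementation:

```python
# Hypothetical sketch: select up to 500 Platform hosts from the set of
# masternodes that participated in recent governance votes. Field names
# and the participation threshold are illustrative, not Dash code.

def select_platform_hosts(masternodes, group_size=500, min_recent_votes=1):
    """Return masternodes eligible to host DashPlatform, preferring
    those with the strongest recent voting participation."""
    eligible = [mn for mn in masternodes if mn["recent_votes"] >= min_recent_votes]
    # Rank by participation so the most active voters fill the group first.
    eligible.sort(key=lambda mn: mn["recent_votes"], reverse=True)
    return eligible[:group_size]

# Example: four masternodes, only the voters make the cut.
mns = [
    {"id": "mn-a", "recent_votes": 5},
    {"id": "mn-b", "recent_votes": 0},   # never votes -> excluded
    {"id": "mn-c", "recent_votes": 2},
    {"id": "mn-d", "recent_votes": 1},
]
hosts = select_platform_hosts(mns, group_size=500)
print([mn["id"] for mn in hosts])  # ['mn-a', 'mn-c', 'mn-d']
```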

Let's launch Evo/DashPlatform that way, with these numbers and prerequisites, and watch how it goes...

This comment is a poll. Rate with a LIKE if you like it, ANGRY if you don't.
 
Sam, this is a ridiculous way of thinking in the software industry. You can probably imagine 10 more scenarios where something could go wrong. This is normal and shouldn't stop any company from building and releasing products. Risks are always present, and they should simply be assessed and mitigated.
There is no perfect software and there never will be. Platform doesn't have to be perfect, but it has to exist in the first place. This is critical! We don't need a perfect, extremely secure platform. We need the platform released!

Don’t waste your time on this or any other revelation, just finish and release the damned platform quickly.
I don't think you have the luxury of more delays - I am aware of at least 2 MNOs who won't support DCG's proposal anymore if the platform isn't released by the end of the year. I am also considering the same decision.

While I echo this sentiment, and I think we all do, I would caution against rushing Evo to the main chain. It is not stable: even right now it is down on testnet, and it has spent most of the past month offline. It's not ready for testnet, let alone mainnet. I would like to see Platform running steadily on testnet for several months without crashing before we think about pushing to mainnet.
 
Here is another opposing voice: if this crypto project indeed moves toward much more centralization and less security (as mentioned in the presentation with regards to Platform security) in favor of very low Platform fees, then I don't see myself supporting DCG budget proposals anymore, because in my view the project is then being actively steered in a very wrong direction.

How much security do we actually need for our usernames and contact lists? That is going to depend on what we do with Platform; if it is nothing interesting, then why bring out the Rolls Royce of security? Heck, a few PostgreSQL databases and we're fine.
 

I'm not sure about the technical details, but the idea itself of incentivizing voting in some economic way is very good.
 
@QuantumExplorer

I guess I will not be getting an answer to my question then, about the 1K node solution.

Link : https://www.dash.org/forum/threads/...gh-performance-nodes.53374/page-4#post-232275

Would have been nice to know whether that could work or not. It avoids messing with the collateral. Also, I am wondering about the DDoS attack possibility for the 1K, 4K and 10K node solutions (as those are all smaller groups of nodes than what we currently have in our masternode system).

Sorry, really very busy today. I will do my best to answer either tomorrow or Wednesday.
 
I believe introducing new, higher collaterals is unlikely to attract new people to Dash, and I think introducing a new lower collateral would have the opposite effect. If the goal is to reduce the number of masternodes running Platform in order to reduce storage fees, wouldn't running Platform on 1000 Dash masternodes and introducing a new 500 Dash L1 Core-only masternode class have the same effect? It lowers the barrier of entry to Dash, and it reduces the number of 1000 Dash masternodes, as people participating in shares are now more likely to be able to run a full masternode. It also increases the voter count (an issue of frequent discussion above). I guess I don't understand the argument above that an equilibrium would form at the 1000/4000 collateral price point - if this is true, then wouldn't an equilibrium also form at 500/1000?

I've heard Quantum rebut this with the argument that lowering the collateral would result in higher fees, but fees for the L1 chain are fixed, regardless of the number of masternodes in operation. Can you show me the numbers on this?
 
A manager shouldn't be the one coding. A manager could help with programming from time to time, but his role is to work on problems, make improvements, and make his team effective and productive by making decisions and applying changes (most of the time painful changes, if things don't work).

STRONGLY agree with this. We urgently need to strengthen our management and internal communication at DCG, devs are burning out doing management work.
 
Exactly what xkcd said. He's finding bugs/issues because he cares.

Agree on this; it makes more sense to have the concept of how Platform works audited than to have someone cold-read the code and try to reason about it. The chance of finding bugs that way is pretty low, although I can see how it might make sense in more limited code bases like smart contracts.
 
Looks like the devs are not wasting any time:

Introduction of HighPerformanceMasternode #5039

Is this to test how much code change is required? Or are the devs already developing the feature for testnet, without waiting for the outcome of this discussion / the upcoming decision proposals?

As I remember, the same happened with the 5 Dash proposal fee reduction, so I assume it is most likely the latter.
If it is just to test how much code change is required, then that could be kept on a private GitHub repo for testing purposes... correct?
Also, there is no mention anywhere that this is just to test how much code change is required.
 
It appears that the real answer is that it is much easier to centralize than to decentralize. QED. But we are talking about Platform/Evo, not L1, so allowing more centralized entities onto L2/3 may be an appealing option. It would push issues like sharding down the road, and we could direct DCG to decentralize Platform once mainnet was proven stable. But they can't even provide a stable testnet yet, so I think we are putting the cart before the horse. We R the DASH DAO!
 
I think once we go in the centralized direction, it will be very hard to get back to decentralization. Which we will most likely find out once we try to direct DCG to decentralize Platform (once mainnet has indeed been proven stable).

I would rather have a decentralized Dash that perhaps needs just a little more time to implement a stable Dash Platform as intended (on all nodes) than a centralized Dash that compromised on its Dash Platform implementation in such a drastic way. The former, the decentralized Dash, requires everyone (including the devs) to be firmly aligned with that direction and working towards it. There are not a lot of signals that this is the case right now.

With regards to testnet: hopefully v0.23 will finally bring some stability. But the instability of testnet by itself should not be a reason to go for centralization, just in order to push something out to mainnet.
 
I would vote for:
  • HPMNs, because they mitigate the risk that a problem with Platform brings down all MNs.
  • Keeping 1k as the collateral for a simple MN, because it makes the transition smooth.
  • 2k as the collateral for an HPMN, because:
    • The barrier of entry to Platform shouldn't be too high.
    • Centralisation must be avoided (a big matter for most of us).
    • Hardware costs (and therefore the fees) get lower and lower, so there is no real need for a high collateral.
 
You could also mitigate the risk of Platform bringing down all MNs with this solution: https://www.dash.org/forum/threads/...gh-performance-nodes.53374/page-4#post-232275

This leaves the collateral at 1K with no centralization (though you would still divide the masternodes over two groups).

However, what all these solutions (1K / 2K / 4K / 10K) could do is increase the vulnerability to DDoS attacks, as the groups of masternodes get smaller compared to our current single large group of masternodes (3,718). Nobody has really addressed that so far. The smaller a group of masternodes gets, the easier it would be to DDoS them... especially if they get centralized in a few datacenters with specialized hardware (thinking mostly about the 10K whales that could be interested in such a setup).
 
I don't think so, because almost any MNO would choose "platform=1", since hardware is so cheap and gets cheaper every day.

I am not so sure; I have seen plenty of masternode operators over the years set up masternodes on the very minimum hardware requirements instead of the recommended ones. Cost still seems to play a role, as does SSD space. Platform will require a lot more SSD space (and RAM) than the current requirements, which means much higher VPS costs. And people's incomes are stretched these days anyway, with inflation rising everywhere.

[Image: Knipsel.JPG]

Left: estimated requirements for Platform running on all nodes.
Right: current minimum and recommended requirements for masternodes.

People looking to support Dash Platform would need higher CPU speed (from 2GHz to 2.8GHz), a lot more SSD space (from 60 GB to 200+ GB) and more RAM (from 4 GB + 2 GB swap to 8 GB + maybe 4 GB swap?).

And there is an incentive to stay with platform=0 because of the shorter MN payment interval there (as that group of masternodes gets smaller), which means more MN payments.
 
I am not so sure; I have seen plenty of masternode operators over the years set up masternodes on the very minimum hardware requirements
This is just normal optimisation that will always happen. The rule is: keep the hardware costs as low as possible, but high enough to avoid being banned.

Let's imagine an additional income of 5% for the Platform part:
so 50 Dash per year. At the current Dash price, that's more than 2000€/year.
For this amount, you can rent a lot of RAM, SSD and bandwidth.
And what if the Dash price is 100€ or 200€? A lot more!
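As a rough sanity check on those numbers, assuming the 5% is measured against the 1000 Dash collateral (and the euro prices below are purely illustrative, not quotes):

```python
# Hypothetical reward arithmetic: 5% extra yield on a 1000 Dash collateral.
# The collateral basis and the prices are assumptions for illustration.

COLLATERAL_DASH = 1000
PLATFORM_YIELD = 0.05

extra_dash_per_year = COLLATERAL_DASH * PLATFORM_YIELD  # 50 Dash

for price_eur in (41, 100, 200):  # illustrative Dash prices in EUR
    print(f"at {price_eur} EUR/Dash: {extra_dash_per_year * price_eur:.0f} EUR/year")
```

At any price above 40€, the extra income clears the 2000€/year figure mentioned above.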

Again: I think almost all MNOs would choose "platform=1"
 
Why would you choose platform=1 when you can get more masternode payments on platform=0, and pay less for hardware?
 
And there is an incentive to stay with platform=0 because of the shorter MN payment interval there (as that group of masternodes gets smaller). Which means more MN payments.
No, "platform=1" MNs still run Core. So they are in the same pool.
"platform=1" MNs will get additional payments.
 