
Introducing Dash Improvement Proposals, starting with 2MB Block implementation

AndyDark

Well-known member
Hi everyone,

I'm pleased to announce the publication of our first Dash Improvement Proposal, aka DIP, related to the initial scaling of the network to 2MB blocks that was agreed by a network vote last year:

https://github.com/dashpay/dips/blob/master/dip-0001.md

Modelled on the Bitcoin BIPs, we will follow a similar process and start to publish key future changes to Dash as DIPs to enable public peer review prior to implementation.

This is part of our transition to a more open, academic research & development process. We currently have a further nine DIPs in draft form, covering much of the Evolution architecture, which we will publish over the coming months.

DIP 001 was authored by Darren Tap, who joined the Core Team recently. Darren earned his Ph.D. in 2007 studying Algebraic Geometry under Uli Walther at Purdue University. Since graduation he has taught mathematics at several colleges and universities, provided database guidance to a mid-sized company, and has been studying Bitcoin since 2011.

Other core developers who contributed directly to DIP 001 are UdjinM6, Timothy Flynn, Ilya Savinov, Will Wray and Nathan Marley.

DIP 001 development is currently in progress, with release planned in an update in early September.

@nmarley who led Sentinel development and is now an Evolution team lead will manage the DIP repository going forward.

Best,
Andy Freer
 
Great news! I'm glad to see a formal process for publicly tracking things like this. Thanks for the update.
 
I don't like the way you describe your DIP.

Currently Dash has a size limit of 1MB per block. With this limit a miner could include a transaction that takes around 3 minutes to verify on a moderate CPU. However, with a 2MB block size the attacking miner could include a transaction that takes around 12 minutes to verify. If an attacker successfully gets an attacking block in the chain then all nodes in the future will need to verify this block when syncing. This attack is even more dramatic as block size increases. Paradoxically, this attack could get so bad that it becomes impossible because propagation would force an attacking block to be orphaned. Even if an attacking block is not included in the main chain, trying to verify attacking blocks can have negative transient effects.

Your description is not accurate. What is a moderate CPU? Is a moderate CPU of today also a moderate CPU of tomorrow? Could you please use computational-theory (or parameterized-complexity) terms? You should describe the problem using that theory, in order to get rid of the inaccurate "nowadays moderate CPU" concept.

Scaling the block size is a timeless problem, and it should be described as such. Don't use initial values (3 minutes to verify, 12 minutes to verify) as arguments. Try to describe the problem by treating these initial values as variables, so that if in the future the "3 minutes to verify" becomes "1 second to verify" or "3 hours to verify", you can reconsider your decision.

This attack is quadratic with respect to transaction size. As such, limiting transaction size makes this attack impossible. Currently, transactions over a size of 100kB are nonstandard and dropped by the network.

OK, now you are using parameterized-complexity terms.
But could you please give us a URL that points to and analyzes your argument?
How come this attack is quadratic? Who said that, where, and how has this quadratic term been proved?
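
For what it is worth, the usual explanation of the quadratic term is that legacy signature hashing re-serializes roughly the whole transaction once per input, so the total hashing work grows as inputs × size, i.e. roughly with the square of the transaction size. Here is a minimal back-of-the-envelope sketch of that argument; the per-input and per-output byte counts are illustrative assumptions, not measurements from the Dash codebase:

```python
# Rough sketch of why legacy signature hashing is quadratic in transaction
# size: for each input, SIGHASH_ALL hashes (approximately) the whole
# serialized transaction, so bytes hashed ~ n_inputs * tx_size.
# The byte counts below are assumptions for illustration only, not
# measurements from the Dash codebase.

INPUT_SIZE = 41    # assumed bytes per input (outpoint + empty script + sequence)
OUTPUT_SIZE = 34   # assumed bytes per output

def sighash_work(n_inputs, n_outputs=1):
    """Return (total bytes hashed, transaction size) under the model above."""
    tx_size = 10 + n_inputs * INPUT_SIZE + n_outputs * OUTPUT_SIZE
    return n_inputs * tx_size, tx_size

for n in (500, 1000, 2000, 4000):
    work, size = sighash_work(n)
    print(f"{n:>5} inputs | tx ~{size / 1000:6.1f} kB | ~{work / 1e6:7.1f} MB hashed")
```

Doubling the number of inputs doubles both the transaction size and the number of sighash passes, so the bytes hashed grow roughly fourfold; that is the quadratic behaviour the DIP refers to, and it is why capping individual transaction size caps the attack.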

A quadratic hashing attacking transaction of size 100kB would take around 2 seconds to verify. Since transactions are processed in parallel, a 2MB block full of 100kB attacking transactions would take from 4 to 8 seconds to process. For this reason this DIP changes consensus rules so that blocks with a transaction over 100kB are ruled invalid and orphaned.
You are falling into the "nowadays moderate CPU" pit again. 100 kB and 2 seconds on what kind of CPU? Is there an analysis for 200 kB? For 200 kB, how many seconds, and on what kind of CPU?
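
To make the requested extrapolation explicit: if verification time really scales with the square of transaction size, the DIP's single data point (~2 s for 100 kB on its unspecified reference CPU) implies ~8 s for 200 kB on the same machine. A hedged sketch, treating both the calibration point and the pure-quadratic model as assumptions:

```python
# Hedged extrapolation under a pure t = k * s^2 model, calibrated only from
# the DIP's single data point (100 kB ~ 2 s on an unspecified reference CPU).
# Both the model and the calibration point are assumptions, not benchmarks.

def verify_time_s(size_kb, ref_size_kb=100.0, ref_time_s=2.0):
    return ref_time_s * (size_kb / ref_size_kb) ** 2

for size in (100, 200, 400, 1000):
    print(f"{size:>5} kB -> ~{verify_time_s(size):7.1f} s on the same reference CPU")

# Under the same assumptions, a 2 MB block packed with twenty 100 kB attacking
# transactions is ~20 * 2 s = 40 s of serial work; spread across 5-10 cores
# that is roughly the 4-8 second range quoted in the DIP.
```

Written this way, the CPU only enters through the single constant k, so the relative conclusion (double the size, quadruple the time) holds whatever "a moderate CPU" means in a given year.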
 

Hi Andy,

Sorry, but I did not understand the explanation of the "Quadratic Hashing Challenge" very well.
What do you mean when you say:

"Paradoxically, this attack could get so bad that it becomes impossible because propagation would force an attacking block to be orphaned. Even if an attacking block is not included in the main chain, trying to verify attacking blocks can have negative transient effects."

About this:

As such it can’t treat the sample of masternodes as a simple random sample. At current time 4032 masternodes would consist of a sample of around 87% of masternodes. This number is small enough that the sample of masternodes are unique. If this number were larger then there would be a chance that some masternodes would be sampled twice while others aren’t sampled at all. This property is maintained as long as the number of blocks in a round is under 90% of all masternodes. The number 4032 also has the property of being equal to about a week of blocks.

I think this is the key to the whole proof! The question is: given that nothing is permanent and, in the not-too-distant future, many things will change (or rather, are already changing), how is it possible to achieve accuracy if the reality of (small) miners is not the same as the reality of the (big) masternodes? We are talking about new machines, new GH/s operators. And, based on the BIP 9 process, how can I determine, as a miner, when the "expected activation conditions" are met? Or how can I discover, by myself, that I am in fact not a representative miner?

Sorry, could you explain this in more detail?
I really appreciate your work! Thanks in advance.
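
On the BIP 9 question specifically: a miner (or any node operator) does not need to judge whether they are "representative"; activation is decided by counting signalling blocks over a fixed window. A minimal sketch of that style of check follows; the window length comes from the quoted text, while the threshold and the signalling bit are placeholder assumptions, not the exact DIP 001 parameters:

```python
# Minimal sketch of a BIP 9 style lock-in check. The 4032-block window comes
# from the quoted text (~1 week of Dash blocks); THRESHOLD and BIT are
# placeholder assumptions for illustration, not the actual DIP 001 parameters.
WINDOW = 4032        # blocks per signalling window
THRESHOLD = 0.80     # assumed fraction of signalling blocks required
BIT = 1              # assumed version bit used for signalling

def window_locks_in(block_versions):
    """Return True if enough blocks in one full window signal the bit."""
    assert len(block_versions) == WINDOW
    signalling = sum(1 for version in block_versions if (version >> BIT) & 1)
    return signalling >= THRESHOLD * WINDOW
```

So the "expected activation conditions" are met when a full window crosses the threshold, and any node can verify this from block headers alone, however small a miner it is.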

I don't like the way you describe your DIP.

"What is a moderate CPU? Is a moderate CPU of today also a moderate CPU of tomorrow? (...)" ==== >> Very and very important!

"Scaling the block size is a timeless problem, and as such it should be described..." ==== >> Thanks! My question too!!!

"(...)100kb and 2 seconds for what kind of CPU? Is there an analysis about 200kb?
For 200kb, how many seconds, and in what kind of CPU? " ==== >> Definitely, this is the most problem of MINERS, WORKERS, POOLS, MASTERNODE, whatever... this is the most problematic situation, when we think about UPGRADE and EVOLUTION, evolution in all forms... code, semantic, work, distribution, governance and etc...

Thanks @demo !!!
 