
Next-gen on-chain scalability - Masterblocks


Hello developers!

Foreword: I sold all of my Bitcoin for Dash in 2017, after scalability issues left my BTC transactions stuck in limbo for anywhere from 3 to 26 hours! I think scalability is a real blocker if Dash, or any other cryptocurrency for that matter, is to replace the legacy banking system. For me, losing faith in Bitcoin was a very painful emotional defeat. But perhaps also a new beginning.

Dash has solved part of the scalability problem in a genius way: Incentivized Masternodes.

The second part of the problem: the blockchain always grows in size.
My question is: do we *really* need to keep ancient history dating back to the Caesars of Rome?

My answer is: no! We can generate multiple genesis blocks (Masterblocks).

I think we can reduce the blockchain's size dramatically by automatically generating a new genesis block every year (or every X blocks), carrying over the balances from the last block of the previous blockchain.
Theoretically this solves the hard-disk-space issue and reduces the network bandwidth required to set up a new Full Node. It will not solve new-block propagation delays.

Objective: My primary concern is that with on-chain scaling, our blockchain can grow to multi-petabyte size over a few decades, and both network capacity and hard-disk-space demands will outgrow our ability to sponsor them (even with incentivized Masternodes).

2016 [block 0]...[block n] --- the last block's balances are transferred into a new 2017 genesis block.
2017 [block 0]

The balance (all UTXOs) of each address is copied into the new Masterblock as one transaction. Masterblocks are created in a deterministic fashion, so every node can reproduce a Masterblock and verify its correctness.
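A minimal sketch of that deterministic construction, in Python. The data layout and field names here are my own invention for illustration, not real Dash structures:

```python
import hashlib
import json

def build_masterblock(utxo_balances, height, prev_hash):
    """Aggregate per-address balances into a single deterministic block.

    utxo_balances: {address: balance} (illustrative layout).
    """
    # Sort addresses so every node derives a byte-identical payload,
    # regardless of the order it learned about the UTXOs.
    balances = sorted(utxo_balances.items())
    payload = json.dumps(
        {"height": height, "prev": prev_hash, "balances": balances},
        separators=(",", ":"),
    ).encode()
    return {
        "height": height,
        "prev": prev_hash,
        "balances": dict(balances),
        "hash": hashlib.sha256(payload).hexdigest(),
    }
```

Because the payload is canonically ordered and serialized, two honest nodes starting from the same UTXO set always derive the same Masterblock hash.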

Afterwards, mining starts building on the new chain. Only the history of transactions is lost.

There could be more types of nodes: an Archival Node, keeping the full history of all previous blockchains, and a Full Node, keeping only the current blockchain.

In this scenario, only block explorers and researchers need to keep an Archival Node. It is not needed for payments or verification at all.

This idea, if it works, means Dash could easily beat 10x Visa's transaction volume, with a goal of breaking 1 billion transactions per day, in a decentralized way.

NOTE1: Blockchain pruning, as done in Bitcoin, has very bad side effects: namely, pruned nodes cannot bootstrap new nodes or perform a full rescan. My approach doesn't have such downsides. https://news.bitcoin.com/pros-and-cons-on-bitcoin-block-pruning/

Clarification 01: New term: let's rename our so-called new genesis block to a Masterblock (for clarity).

Clarification 02: A new Masterblock must be generated in a deterministic way, so it is reproducible by all miners and Full Nodes.

Clarification 03: Tiny dust amounts can be distributed to the last 1000 miners.
Let's define 'dust' as transaction outputs below the median fee for the last 100 blocks.
(This removes a lot of bloat from the blockchain.)
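A rough sketch of that dust rule. The block/transaction layout is a simplification of my own, not Dash's actual format:

```python
from statistics import median

def dust_threshold(recent_blocks):
    """'Dust' = outputs below the median fee over the last 100 blocks."""
    fees = [tx["fee"] for blk in recent_blocks[-100:] for tx in blk["txs"]]
    return median(fees)

def drop_dust(balances, threshold):
    """Partition balances: the kept set goes into the Masterblock,
    the dust total would be redistributed to the last 1000 miners."""
    kept = {addr: v for addr, v in balances.items() if v >= threshold}
    dust_total = sum(v for v in balances.values() if v < threshold)
    return kept, dust_total
```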

Clarification 04: The old chain doesn't get removed immediately, but only after Y blocks (say, after 1 month).
This means that new nodes connecting to the network during this period can download both the new chain and the old chain, and can verify the new Masterblock by recalculating it from the last year's chain.
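That recalculation check could look roughly like this. I'm using a toy account-balance model (each transaction lists (address, value) spends and creates) rather than real UTXOs:

```python
def balances_from_chain(old_chain):
    """Replay the retiring chain to derive the final per-address balances
    (illustrative tx format, not Dash's real one)."""
    balances = {}
    for block in old_chain:
        for tx in block["txs"]:
            for addr, value in tx.get("spends", []):
                balances[addr] = balances.get(addr, 0) - value
            for addr, value in tx.get("creates", []):
                balances[addr] = balances.get(addr, 0) + value
    # Zero balances carry no value forward and are dropped.
    return {a: v for a, v in balances.items() if v > 0}

def verify_masterblock(masterblock, old_chain):
    """A new node accepts the Masterblock only if the balances it commits
    to match what replaying the old chain yields."""
    return masterblock["balances"] == balances_from_chain(old_chain)
```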

Clarification 05: Q: What do new nodes do when they come online after this one month has passed? How do they verify the correct chain from scratch?
A: Consensus must work as follows:

  1. Same as nowadays (longest chain) AND
  2. a current timestamp (i.e. of year 2017) -- the proof of work must include a timestamp.
    (If a node finds a very long chain with an old timestamp, like 2015 or 2016, it gets rejected; future timestamps also get rejected.)
TODO: Research a new attack vector: the NTP protocol / wrong time. (For now my idea requires the administrator to manually set clocks on critical production systems, such as payment processors. Maybe we can solve this problem later.) -- Perhaps we can solve it via an idea of cryptographic time: a separate blockchain for time-keeping, with the same X11 proof-of-work and the same difficulty as the main chain, that doesn't reset on each Masterblock -- it's continuous. It should be merge-mined together with the primary blockchain. Basically Segregated Time: 'SegTime'.
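The two consensus rules above could be sketched like this. The time windows are illustrative guesses of mine, not proposed constants:

```python
import time

# Hypothetical tolerances (not proposed constants):
MAX_PAST_AGE = 90 * 24 * 3600   # reject tips older than ~3 months
MAX_FUTURE_DRIFT = 2 * 3600     # reject tips more than 2 h in the future

def best_chain(candidates, now=None):
    """Rule 1: longest chain wins. Rule 2: only among chains whose tip
    timestamp is current -- a long but stale 2015/2016 chain is rejected,
    and so are chains from the future."""
    now = time.time() if now is None else now
    valid = [
        c for c in candidates
        if now - MAX_PAST_AGE <= c["tip_timestamp"] <= now + MAX_FUTURE_DRIFT
    ]
    return max(valid, key=lambda c: c["length"], default=None)
```

Note this is exactly where the NTP/wrong-time attack surface in the TODO comes in: the rule is only as good as each node's notion of `now`.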

NOTE: New inconvenience: my idea, as written now, will invalidate Masternode tx outputs, forcing people to restart Masternodes. In the meanwhile, MNs can function as Full Nodes. (But perhaps there is an elegant solution to this problem too -- something like an extended block, written into the Masterblock, that lists all previous transactions with 1000.0 DASH in unspent tx outputs as still valid.)
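That extended-block whitelist could be as simple as this sketch (the `(txid, vout)` outpoint layout and duff arithmetic are illustrative):

```python
DUFFS = 100_000_000               # 1 DASH in duffs
MN_COLLATERAL = 1000 * DUFFS      # Masternode collateral: exactly 1000 DASH

def collateral_whitelist(utxos):
    """Collect all outputs of exactly 1000 DASH so the Masterblock's
    extended section can mark them as still valid, sparing operators
    a Masternode restart.  utxos: {(txid, vout): value}."""
    return sorted(op for op, value in utxos.items()
                  if value == MN_COLLATERAL)
```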

Okay, let's do some maths:
A 1 GB block every 2.5 min = 4 GB per 10 min (the equivalent of a 4 GB block in Bitcoin). This allows for about 1 billion tx/day. In Dash it equals 576 blocks/day x 1 GB/block = 576 GB/day. (In Bitcoin it would equal 144 blocks/day x 4 GB/block = 576 GB/day.)

That's 576 GB/day x 365 days = 210 TB per year.

Without my idea we would grow into multi-petabyte territory within 5 years. That would be hard, even with incentivized Masternodes.
With my idea, it would take only 18 hard disks (okay, 20 HDDs with RAID 6) to keep the entire (one-year) blockchain -- assuming big 12 TB HDDs, which both Seagate and Western Digital started producing this year.

Block propagation? A 1 GB block takes only 8 seconds on Google Fiber (1 Gbit/s Internet) -- so I believe on-chain transaction scalability is very much possible and feasible.
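For reference, the arithmetic above, recomputed:

```python
# Back-of-the-envelope figures from the post (decimal units: 1 TB = 1000 GB).
block_gb = 1.0
blocks_per_day = 24 * 60 / 2.5        # one block every 2.5 minutes -> 576
gb_per_day = blocks_per_day * block_gb  # 576 GB/day
tb_per_year = gb_per_day * 365 / 1000   # ~210 TB/year
disks_12tb = tb_per_year / 12           # ~17.5 -> 18 drives (20 with RAID 6)
propagation_s = block_gb * 8            # 8 Gbit at 1 Gbit/s -> 8 seconds
```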

What do you think of it?

-Technologov (Idea since 04.Mar.2017)
Pretty cool concept. How can we test the formation of a Masterblock? Are you a developer? Have you been trying this out, or is this all theory right now?
I had a similar idea, so I like the way you're thinking. Dash won't really need to do this anytime soon, but some sort of blockchain truncation will inevitably be needed in the long run.
Good idea.

The first benefit is space.
The second benefit is speed: the fewer blocks/transactions there are to process, the faster an action completes.
And the third benefit is cleanup: for example, gathering a set of tx outputs into a single output for any wallet, and so on.

But the problem (as it seems to me) is that I can't imagine how to make the process technically smooth enough.

- In the ideal case it should look like the following (n = last/current block number):
current blockchain: ...<-|block n-101|<-|block n-100| <- |block n-99| <- .. <- |block n|
Masterblocked chain: |Masterblock|<-|block n-100| <- |block n-99| <- .. <- |block n|
In this case one would need to construct a Masterblock header whose hash is identical to that of block n-101. That's impossible.
Maybe some protocol changes could be invented to make it possible. For example, the Masterblock's hash could appear in a special Masterblock-verification transaction in the 110th (for example) block after it in the chain, and after the Masterblock step its hash would not be checked for a full match, but for a number of leading digits extending the 'difficulty zeroes' (which in turn provides a proof of work of sorts).
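That partial-match check could be sketched like this (purely illustrative; hashes as hex strings):

```python
def masterblock_link_ok(mb_hash, committed_prefix, difficulty_zeros):
    """Accept the Masterblock if its hash carries the usual leading
    'difficulty zeroes' AND matches the prefix committed in the
    verification transaction ~110 blocks later -- a prefix match
    instead of the impossible full-hash match."""
    return (mb_hash.startswith("0" * difficulty_zeros)
            and mb_hash.startswith(committed_prefix))
```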

- Supporting 2 chains at once seems hard (and vulnerable to all kinds of DoS).

- The 'clean start' from a Masterblock weakens the data protection that rests on the proof of work within the blocks of the old blockchain...