
Backporting bitcoin 0.13+ performance improvements

codablock

Active member
Hello,

I had already been thinking about working on Dash for some time, and now have some time between two projects (I'm a freelancer) that I could use for this. In the end, I'd like to deliver something valuable and later create a proposal to fund this work.

I've talked with two core devs on Slack about where I could help out, and one suggestion was to look into performance improvements of dashd and backport perf-related changes from bitcoin 0.13+.

I started to read through the performance-related changes in Bitcoin, and one thing that I immediately assumed to be worth backporting is the "assumed valid blocks" feature from the 0.14 release, with an optional addition (which I describe later). Other changes, especially network-code related ones, might be hard to backport, as I read that Dash had some major refactoring in the network code.

Is anyone already working on this or a comparable task? Or are there maybe any documents describing the state of backporting tasks?

Regarding "assumed valid blocks": The basic idea is to specify a block hash that is known/trusted to be valid and part of the longest chain. This knowledge relieves the node of full transaction (script/signature) validation during the initial block download (IBD) for every block that is an ancestor of the specified block, decreasing CPU load and thus reducing IBD time. Bitcoin has also added a default block hash which was confirmed by multiple developers to be valid and part of the longest chain.
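To make the idea concrete, here is a toy sketch (not Dash's or Bitcoin's actual implementation; the class and function names are made up for illustration) of the core rule: script validation is skipped for any block that is an ancestor of the trusted hash, while cheaper structural checks still run for every block.

```python
# Toy model of "assumed valid blocks": skip expensive script checks for
# ancestors of a trusted block, keep cheap checks for everything.
class Block:
    def __init__(self, hash_, parent=None):
        self.hash = hash_
        self.parent = parent  # link to the previous block

def is_ancestor_of(block, descendant):
    """Walk back from `descendant`; True if `block` lies on its ancestor path."""
    cur = descendant
    while cur is not None:
        if cur.hash == block.hash:
            return True
        cur = cur.parent
    return False

def connect_block(block, assumed_valid):
    # Cheap checks (proof-of-work, merkle root, ...) always run -- stubbed here.
    checks = ["proof-of-work", "merkle-root"]
    # Expensive script validation is only skipped for ancestors of the
    # assumed-valid block (including that block itself).
    if assumed_valid is None or not is_ancestor_of(block, assumed_valid):
        checks.append("script-validation")
    return checks

# Chain: genesis <- a <- b <- c, where b is the trusted "assumed valid" block.
genesis = Block("g")
a = Block("a", genesis)
b = Block("b", a)
c = Block("c", b)

print(connect_block(a, assumed_valid=b))  # script validation skipped
print(connect_block(c, assumed_valid=b))  # script validation runs
```

Note that blocks after the trusted one (like `c` above) are always fully validated, which is why an outdated default hash makes syncing slow again.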

As per https://bitcoincore.org/en/2017/03/08/release-0.14.0/#ibd, this change combined with another improvement (memory sharing between the mempool and the UTXO DB cache) reduced IBD time to around 1/6 of what previous versions needed. I'm not sure how much impact the two individual improvements have on their own, but I may put some research into the second one later if needed.

To change the default hash, users may specify a hash on startup, but this raises the question: how do you find a block that is known to be valid? Users can check other nodes that they are in possession of (and thus fully trust), or use third-party services like block explorers to find a valid block. Anyone not willing to clear this hurdle will probably just stick with the default hash, making them dependent on frequent releases for recent hashes, and thus accepting slow syncing for every block that comes after the default hash.
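Assuming the backported option keeps Bitcoin 0.14's name, overriding the default could look like this in dash.conf (the value is a placeholder, not a real block hash):

```
# dash.conf -- hypothetical example; option name assumed from Bitcoin 0.14
assumevalid=<hex hash of a block you have verified yourself>

# Or disable the optimization and fully validate everything:
# assumevalid=0
```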

And that's where I think the MN network could come into play. Instead of manually specifying the hash (and providing a default), clients/nodes could ask a random MN for a valid block that is part of the longest chain. The MN would check the latest block, go back in time by a few blocks (to make sure the block is really hardened and no reorg/orphaning can happen) and return the hash. After this hash is received, the node/client would ask multiple other random MNs whether this block is REALLY a good one. Only if all MNs give a positive response would the client/node start to use the block as an "assumed valid block".
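The proposed flow can be sketched as a small simulation. All names, message flow, and parameters below (the safety margin, the number of confirming MNs) are assumptions for illustration, not an existing Dash protocol:

```python
import random

SAFETY_MARGIN = 12   # hypothetical: go back this many blocks to dodge reorgs
CONFIRMERS = 5       # hypothetical: how many additional MNs must agree

class Masternode:
    def __init__(self, chain):
        self.chain = chain  # list of block hashes, oldest first

    def candidate_hash(self):
        """Return a hash SAFETY_MARGIN blocks below this MN's tip."""
        return self.chain[-1 - SAFETY_MARGIN]

    def confirms(self, h):
        """True if `h` is in this MN's main chain and buried deep enough."""
        return h in self.chain[:-SAFETY_MARGIN]

def pick_assumed_valid(masternodes, rng=random):
    # Step 1: ask one random MN for a candidate hash below its tip.
    first = rng.choice(masternodes)
    candidate = first.candidate_hash()
    # Step 2: poll several other random MNs about the candidate.
    others = [mn for mn in masternodes if mn is not first]
    sample = rng.sample(others, CONFIRMERS)
    # Step 3: only a unanimous "yes" makes the candidate usable.
    if all(mn.confirms(candidate) for mn in sample):
        return candidate
    return None  # fall back to full validation

# All MNs agree on the same 100-block chain -> the candidate is accepted.
chain = [f"block{i}" for i in range(100)]
mns = [Masternode(chain) for _ in range(20)]
print(pick_assumed_valid(mns))
```

The unanimity requirement is the conservative choice: a single dissenting MN (e.g. one seeing a different chain) aborts the shortcut, and the node simply validates everything as before.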

If I'm not missing anything, this should be as safe as the manual selection of the default hash by the Bitcoin Core developers, but completely automatic, resulting in clients/nodes always using recent "assumed valid blocks".

I'm not sure about the upgrade process needed for such a change, as I haven't dug through versioning and upgrade processes in Bitcoin/Dash until now. I'd assume that some kind of protocol upgrade would be required for the MNs before clients could start asking for valid blocks, as all MNs must be able to verify the chosen block when they (the MNs) are picked at random. The client-side upgrade should be backwards compatible, however. I'm happy for suggestions and pointers here :)

Disclaimer: I'm in the middle of digging through code and documentation, as I'm quite new to blockchain/Bitcoin/Dash. I believe I already have a good understanding of how the overall tech works, but of course there may still be details missing in my head. So please tell me if I'm wrong about anything I may have misunderstood or missed :)
 
The "-assumevalid" option would be nice to have, but I'm not sure it will improve things significantly right now since we've had a very low number of txs to date. Would be interesting to see some benchmarks :) Re using MNs to set the assumevalid hash: neat idea, I like it. You probably won't even need new p2p messages or anything; the existing getheaders/getblocks should already do the work.
 
The code is backported now (it was much easier than expected). I did some simple benchmarking where I started dashd on a fresh data dir and waited until a predefined (quite recent) block appeared in the log. I repeated this 8 times with "-assumevalid=0" (which disables the feature) and 8 times with the default assumevalid setting. With assumevalid disabled, it takes 48 minutes on average; with assumevalid enabled, it gets down to 22 minutes. I tested this on an m4.xlarge AWS instance.
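For reference, the averages above work out as follows:

```python
# Speedup implied by the benchmark above (averages over 8 runs each).
ibd_disabled_min = 48   # -assumevalid=0
ibd_enabled_min = 22    # default assumevalid

speedup = ibd_disabled_min / ibd_enabled_min
time_saved_pct = 100 * (1 - ibd_enabled_min / ibd_disabled_min)

print(f"speedup: {speedup:.2f}x, time saved: {time_saved_pct:.0f}%")
# speedup: 2.18x, time saved: 54%
```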

I noticed that it takes quite some time until the assumevalid check actually fires. This is because the block headers up to the given hash must be present before the check can be performed, and header fetching runs in parallel with block fetching. I'm wondering whether it would be better not to start fetching blocks before the header fetch has finished. That way header fetching would have the full network bandwidth and finish a lot faster, which in turn would make block fetching faster because assumevalid fires earlier. I'd assume that currently header fetching always finishes earlier anyway, and the blockchain is useless until it's fully up to date, so such a change should not have any bad/unexpected side effects?
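A back-of-envelope model of that argument (all numbers below are assumptions, not measurements): blocks fetched before the headers reach the assumed-valid hash pay full validation cost, so the interesting quantity is what fraction of blocks arrives before the header fetch completes.

```python
# Toy model: fraction of blocks fully validated before assumevalid fires.
H = 5.0    # minutes to fetch all headers at full bandwidth (assumed)
D = 40.0   # minutes to fetch all blocks at full bandwidth (assumed)

# Parallel fetching: headers get roughly half the bandwidth, finishing
# at ~2*H; blocks downloaded before then (at the other half of the
# bandwidth) are fully validated.
fully_validated_parallel = min(2 * H * 0.5, D) / D

# Headers-first: no block is fetched before all headers are in, so every
# block below the assumed-valid one can skip script validation.
fully_validated_headers_first = 0.0

print(fully_validated_parallel, fully_validated_headers_first)
```

Under these toy numbers the parallel schedule fully validates about 12.5% of the chain for no benefit, which is the waste a headers-first schedule would eliminate.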

I had a short email exchange with @ol, and he told me he has a big pile of PRs in his queue related to Bitcoin backporting. I'd prefer to wait for him before submitting my PRs, as otherwise they would make his life a bit harder due to some refactoring that happened in Bitcoin.
 
I think your assumption about headers is correct, but even without this additional fix the benchmark results look pretty impressive - I wasn't expecting a 2x+ improvement on the Dash blockchain tbh. Good job!

Yes, Oleg is somewhere in the middle of submitting his large portion of patches, so it's probably better to wait. You can still submit the PR right now if you like, but most likely you'll have to rebase it later to make it mergeable again.
 