
Tech issue with Wallet doing 1500+ transactions per day

"To be (partially) more specific, modern Bitcoin Core has features like SegWit and transaction bundling that reduce confirmation times and thus mitigate the effects of 'the same limits.'"

To be more accurate and to prevent a distraction from getting the issue resolved: Dash's confirmation times are 4 times faster than Bitcoin's (~2.5 minutes vs 10), and its blocks are nowhere near full. Additionally, if I understand what is being referenced by "transaction bundling" (batching payments), it is merely spending to multiple outputs in a single transaction, something the sendmany RPC is specifically intended for and not a new feature (in Bitcoin or Dash).
 
Can this be solved by compiling the code yourself and raising the spam-protection limits?

You could, or you could just set -limitancestorcount to a higher value, which would have the same effect. I'm not sure it would help, however, as other nodes would still reject the TXs. It's possible that your wallet would then queue up more transactions locally and rebroadcast them when new blocks are mined (which would eventually reduce the ancestor count in your wallet and allow rebroadcasting). But I'm really not sure about the wallet's behavior here.
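For reference, a minimal sketch of what that looks like, assuming Dash Core accepts the same option names as Bitcoin Core:

    # on the command line:
    dashd -limitancestorcount=100 -limitdescendantcount=100

    # or equivalently in dash.conf:
    limitancestorcount=100
    limitdescendantcount=100

Keep in mind this only raises the limits on your own node; peers running the defaults will still refuse to relay chains deeper than 25 unconfirmed ancestors.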

"Very likely" in your prejudiced opinion isn't remotely the same thing as an objectively demonstrated/proven fact. Stop conflating your suppositions and assumptions with actual reality. Prove it or STFU.

It's at least as "very likely" that Bitcoin Core can handle vastly more tps than Dash, because the current Bitcoin Core software is several iterations of optimizations and enhancements more advanced than Dash (which is based on a much older Bitcoin release).

To be (partially) more specific, modern Bitcoin Core has features like SegWit and transaction bundling that reduce confirmation times and thus mitigate the effects of "the same limits."

You didn't really read/understand what I wrote, right? You are comparing two different things: global throughput and per-node outgoing throughput. In regard to the second (which is what we are dealing with here!), there is a limit on the so-called ancestor count of transactions. This limit defaults to 25 in Dash. Btw, it's the same default in Bitcoin: https://github.com/bitcoin/bitcoin/blob/master/src/validation.h#L61, so it's exactly the same limitation when it comes to local nodes and spending behavior.
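For reference, the defaults behind that link (names and values as in Bitcoin's src/validation.h at the time of writing; exact line numbers drift between revisions):

    # the relevant hardcoded defaults in the source:
    grep -nE "DEFAULT_(ANCESTOR|DESCENDANT)_LIMIT" src/validation.h
    #   static const unsigned int DEFAULT_ANCESTOR_LIMIT = 25;
    #   static const unsigned int DEFAULT_DESCENDANT_LIMIT = 25;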

If you want to accuse me of not using facts because of my use of "Very likely", please apply the same rules to your own posts. OK, you don't use "Very likely", but you present your misunderstandings as facts. Throwing SegWit into the conversation while ignoring that it has nothing to do with the topic or the underlying limitation is not very helpful.

If Bitcoin Core had to be restarted every 5 minutes just to handle a measly 1000 tx/day, don't you think someone would be complaining about that on the dev-list and/or subreddit? Of course they would; you need to stop and think before spewing reflexively pro-Dash apologia and waving away valid concerns with appeals to nonsense speculation about how "maybe Bitcoin is just as bad." Do you really think Coinbase and Kraken restart their BTC nodes every 5 minutes? No, of course not. So GTFO with that self-serving false-equivalence excuse for Dash's abysmally poor real-world performance.

Again, different spending behavior is the keyword here. Did you notice that exchanges don't transact on a per-withdrawal basis but only in batches? Have you ever asked yourself why they do that? If your answer is "because of the fees!", good one, as that might be the most important reason for them. But are there maybe positive side effects of such spending behavior? Maybe a reduction of unconfirmed ancestors? Think about that, and then think about the comparisons you are making here.
 
"To be more accurate and to prevent a distraction from getting the issue resolved: Dash's confirmation times are 4 times faster than Bitcoin's (~2.5 minutes vs 10), and its blocks are nowhere near full. Additionally, if I understand what is being referenced by 'transaction bundling' (batching payments), it is merely spending to multiple outputs in a single transaction, something the sendmany RPC is specifically intended for and not a new feature (in Bitcoin or Dash)."

Thanks. This sounds like what would actually solve the problem (without needing so many split outputs). If sendmany is used to send payments to multiple recipients, the number of unconfirmed ancestors should be reduced dramatically.
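Something like this, for illustration (the addresses below are placeholders; amounts are in DASH):

    # one batched transaction paying several recipients at once,
    # instead of one chained transaction per payout:
    dash-cli sendmany "" '{"XplaceholderAddr1": 0.5, "XplaceholderAddr2": 1.0, "XplaceholderAddr3": 0.25}'

Each batch spends inputs once and creates a single transaction, so the unconfirmed-ancestor chain grows per batch instead of per payout.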
 
Today I am posting an update to the original ticket. I added 4 new addresses to the wallet for a total of 5; previously, there was only 1. The Dash.red website is currently at 1300 transactions per day. In the past month, I have only seen the error appear twice, which is a massive improvement. Furthermore, when the error occurs, it only takes one of the 5 addresses down to zero, so unless the wallet is really low on inputs across all of them, it is no longer such a big problem.
I am certainly happy with the results!!! Thank you!
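For anyone replicating this setup, generating the extra receiving addresses is a one-liner (a sketch; how the site rotates payouts between them is up to its own code):

    # create four additional receiving addresses in the running wallet:
    for i in 1 2 3 4; do dash-cli getnewaddress; done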
 
A new update: I increased the number of receiving addresses to 10 and I manually restart the dashd process once per day. We are now completing 1800+ transactions per day without any problems!!!
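A sketch of how that daily restart could be automated (the schedule is an assumption, and dash-cli/dashd must be on cron's PATH):

    # crontab entry: cleanly stop dashd at 04:00 and start it again
    0 4 * * * dash-cli stop && sleep 60 && dashd -daemon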
 
I found some time to re-investigate this issue. The underlying issue is known in Bitcoin and described in https://github.com/bitcoin/bitcoin/issues/10004
To sum it up: the limit on the maximum number of chained transactions is just one part of the problem. What makes this so bad (requiring a wallet restart) is that the last transaction you try to create causes the chained-TX limit to be reached, so the TX is not added to the mempool. The wallet, however, still adds it to the local wallet, without retrying to re-add it to the mempool when new blocks are mined (which would reduce the number of chained TXs in your wallet). The wallet only does this when the whole application is restarted. This results in your wallet's UTXOs becoming unusable (which reduces your balance) until you restart your wallet.

I'm trying to figure out why this bug is still unfixed in Bitcoin and how we can solve it. Until then, you could periodically (e.g. once per minute) call the RPC command "resendwallettransactions", which will try to re-add the transactions that previously failed to be added.
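A minimal sketch of that workaround (assuming dash-cli can reach the running node with its default credentials):

    # crontab entry: every minute, retry broadcasting wallet transactions
    # that previously failed to enter the mempool
    * * * * * dash-cli resendwallettransactions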
 
"I found some time to re-investigate this issue. The underlying issue is known in Bitcoin and described in https://github.com/bitcoin/bitcoin/issues/10004
To sum it up: the limit on the maximum number of chained transactions is just one part of the problem. What makes this so bad (requiring a wallet restart) is that the last transaction you try to create causes the chained-TX limit to be reached, so the TX is not added to the mempool. The wallet, however, still adds it to the local wallet, without retrying to re-add it to the mempool when new blocks are mined (which would reduce the number of chained TXs in your wallet). The wallet only does this when the whole application is restarted. This results in your wallet's UTXOs becoming unusable (which reduces your balance) until you restart your wallet."

"The node is probably running with the default limitancestorcount / limitdescendant count, which prevents chains of >25 transactions entering your mempool. He can override those with command line arguments. "

What if he compiles the code with an increased chained-TX limit value? Provided, of course, that he has enough memory. Or, even better, overrides this limit with the above-mentioned command line arguments? Will this temporarily solve the problem and increase the number of successful transactions?
"I'm trying to figure out why this bug is still unfixed in Bitcoin and how we can solve it. Until then, you could periodically (e.g. once per minute) call the RPC command 'resendwallettransactions', which will try to re-add the transactions that previously failed to be added."
How can this solve the problem if the memory is full? I think resendwallettransactions may cause more problems if the system is already running at its memory limits.

But if he compiles the code on his system (or overrides the chained-TX limit with command line arguments, as jnewbery recommended) and runs tests to discover the chained-TX limit value that best fits the system, he may then discover the real limits of his computer, and thus know how many transactions it is capable of supporting.
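A sketch of what that recompile route would look like (illustrative only: in current Bitcoin these constants live in src/validation.h, and the file may differ in older Dash releases; the command line arguments achieve the same thing without a rebuild):

    # raise the hardcoded defaults from 25 to e.g. 100, then rebuild:
    sed -i 's/ANCESTOR_LIMIT = 25/ANCESTOR_LIMIT = 100/' src/validation.h
    sed -i 's/DESCENDANT_LIMIT = 25/DESCENDANT_LIMIT = 100/' src/validation.h
    ./autogen.sh && ./configure && make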

This is another case where the numbers should be put to a vote. Hardcoded numbers are always a bad practice.
 
A similar problem with a fork: in the faucet, after several transactions, the balance shows as empty. Repairing the wallet solves it.
 