Tech issue with Wallet doing 1500+ transactions per day

ec1warc1

Active Member
Jul 26, 2016
Hello everyone! As you may know, I run a website, dash.red, that is a fun way for people to win duffs. We are currently doing about 1600 transactions every 24 hours.

We have a technical issue.

Sometimes the wallet will be doing its thing - fairly constant transactions paying 30,000 duffs each - when suddenly there are very few or no funds available. We go from 30 million duffs to 16 thousand duffs in an instant. Now there are not enough funds to pay anyone.

Scary!

The workaround is to stop dashd and start it again. Then all the funds are back in place. It takes about 30 seconds.
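(For anyone wondering, the restart itself is nothing fancy - roughly this on the wallet server, assuming dash-cli and dashd are on the PATH:)
dash-cli stop
# wait a few seconds for dashd to shut down cleanly, then
dashd -daemon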

The problem with this workaround is that I cannot be checking the wallet every 5 minutes, and I cannot be restarting it every 5 minutes either.

The more transactions per day we do, the more frequently this will occur. I assume it has something to do with one transaction jumping on top of another, but that may not be the case.

Two questions:
  1. Is there a dash-cli command to get the wallet to quickly review its funds again without restarting dashd?
  2. Is there anyone from Core team that wants to look at our logs to see if there is a problem with a potential solution or perhaps even a bug?
Thanks,

ec1warc1
 

UdjinM6

Official Dash Dev
Dash Core Group
May 20, 2014
...
Two questions:
  1. Is there a dash-cli command to get the wallet to quickly review its funds again without restarting dashd?
  2. Is there anyone from Core team that wants to look at our logs to see if there is a problem with a potential solution or perhaps even a bug?
...
1. I don't think so, but it's hard to tell because it's not clear what could cause the issue.
2. Absolutely. Please send me debug.log via email.
 

ec1warc1

Active Member
Jul 26, 2016
I just had this happen again - twice in about 30 minutes. Could this be an example of the problem:
2018-01-11 17:47:23 keypool keep 9173
2018-01-11 17:47:23 AddToWallet 9b07d9b096580861e58ae70504df8e42a3da3f021079cda5369a30f8bc65b9ec new
2018-01-11 17:47:23 AddToWallet 9b07d9b096580861e58ae70504df8e42a3da3f021079cda5369a30f8bc65b9ec
2018-01-11 17:47:23 Relaying wtx 9b07d9b096580861e58ae70504df8e42a3da3f021079cda5369a30f8bc65b9ec
2018-01-11 17:47:32 keypool added key 10173, size=1000, internal=0
2018-01-11 17:47:32 init message: Loading wallet... (1016.28 %)
2018-01-11 17:47:32 keypool reserve 9174
2018-01-11 17:47:32 CommitTransaction:
CTransaction(hash=f1b72c4f5e, ver=1, vin.size=1, vout.size=2, nLockTime=802501)
CTxIn(COutPoint(9b07d9b096580861e58ae70504df8e42a3da3f021079cda5369a30f8bc65b9ec, 1), scriptSig=47304402206b3bbff31f14de, nSequence=4294967294)
CTxOut(nValue=0.00030000, scriptPubKey=76a9148961b2aa63739d6161f981e6)
CTxOut(nValue=0.10338955, scriptPubKey=76a914cb918afe6e7d991eed7f1af2)
2018-01-11 17:47:32 keypool keep 9174
2018-01-11 17:47:32 AddToWallet f1b72c4f5e53d36b46cdbfd58001520bced9bc36584db5e285b5a6ab477c7d0f new
2018-01-11 17:47:32 CommitTransaction(): Error: Transaction not valid
2018-01-11 17:47:54 CMasternodeMan::CheckAndRemove
 

demo

Well-known Member
Apr 23, 2016
Do you use dash-cli/dash-tx to connect to dashd?
 

ec1warc1

Active Member
Jul 26, 2016
It just happened again, third time today. It seems that the error is in the logs with this text:
"2018-01-11 18:12:08 CommitTransaction(): Error: Transaction not valid"
I was running version dashcore-0.12.2.1 and just updated it to dashcore-0.12.2.2.

It would be great if the update of the software version fixed the problem. I will let everyone know here in this thread.

Hello demo! Nice to hear from you. I use an RPC connection written in PHP to connect from the application server across the network to dashd.
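(For reference, the PHP side just speaks JSON-RPC over HTTP, so the equivalent call from a shell looks roughly like this - rpcuser/rpcpassword and the host name are placeholders, and 9998 should be the default mainnet RPC port:)
curl --user rpcuser:rpcpassword --data-binary '{"jsonrpc":"1.0","id":"dashred","method":"getbalance","params":[]}' -H 'content-type: text/plain;' http://wallet-server:9998/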
 

demo

Well-known Member
Apr 23, 2016
It just happened again, third time today. It seems that the error is in the logs with this text:
"2018-01-11 18:12:08 CommitTransaction(): Error: Transaction not valid"
I was running version dashcore-0.12.2.1 and just updated it to dashcore-0.12.2.2.

It would be great if the update of the software version fixed the problem. I will let everyone know here in this thread.

Hello demo! Nice to hear from you. I use an RPC connection written in PHP to connect from the application server across the network to dashd.
What kind of application server? You may have to reduce the throughput of your application server towards dashd.

Read below:

https://bitcoin.stackexchange.com/q...transaction-throughput-of-the-bitcoin-network

If people are right in the above Stack Exchange thread, bitcoind allows 3-7 transactions per second. Dashd uses the same database as bitcoind (I think it is BerkeleyDB or LevelDB), but Dash's block size is doubled, so that's the reason the problem occurs.

@UdjinM6 may inform us what is the maximum throughput of dashd.
 

ec1warc1

Active Member
Jul 26, 2016
When the error occurs, I don't think we are getting contention from other transactions. Here is one way to view and compare the last two transactions, which happened 8 seconds apart, the second one ending with the error:
~/.dashcore $ grep "2018-01-11 18:12:00" debug.log
2018-01-11 18:12:00 keypool added key 10197, size=1000, internal=0
2018-01-11 18:12:00 init message: Loading wallet... (1018.68 %)
2018-01-11 18:12:00 keypool reserve 9198
2018-01-11 18:12:00 CommitTransaction:
2018-01-11 18:12:00 keypool keep 9198
2018-01-11 18:12:00 AddToWallet 5ac310924802a71e84b51717bef48af1a3eec2fccc86ddf31099756ab8a6f04d new
2018-01-11 18:12:00 AddToWallet 5ac310924802a71e84b51717bef48af1a3eec2fccc86ddf31099756ab8a6f04d
2018-01-11 18:12:00 Relaying wtx 5ac310924802a71e84b51717bef48af1a3eec2fccc86ddf31099756ab8a6f04d
~/.dashcore $ grep "2018-01-11 18:12:08" debug.log
2018-01-11 18:12:08 keypool added key 10198, size=1000, internal=0
2018-01-11 18:12:08 init message: Loading wallet... (1018.78 %)
2018-01-11 18:12:08 keypool reserve 9199
2018-01-11 18:12:08 CommitTransaction:
2018-01-11 18:12:08 keypool keep 9199
2018-01-11 18:12:08 AddToWallet e91caf5031419c45a662e66badb7d62989b45795649079ed3ff058ed70895369 new
2018-01-11 18:12:08 CommitTransaction(): Error: Transaction not valid
~/.dashcore $
 

ec1warc1

Active Member
Jul 26, 2016
Regarding tuning the app server and other servers: I have the three servers isolated, talking to each other over the network. The app server is LAMP for WordPress, without the DB. Then there are the DB server and the wallet server, each on its own OS.
 

ec1warc1

Active Member
Jul 26, 2016
In a private conversation, I got this information from UdjinM6, which after some study looks promising. I am going to post an image of his answer here for others to read:
[attached image: upload_2018-1-11_18-3-38.png]
 

ec1warc1

Active Member
Jul 26, 2016
It looks like a good start to solving this is to simply send duffs to different addresses in the wallet, and not all duffs to the same address as I have done in the past.
Perhaps also to include the parameter spendzeroconfchange=0 in the config file.

Do I have that right @UdjinM6? Thanks sooo much!!!!
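(If I understand the option correctly, that would just be one extra line in dash.conf, something like:)
# in ~/.dashcore/dash.conf (default data directory assumed)
spendzeroconfchange=0
# restart dashd afterwards so the setting takes effect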
 

codablock

Active Member
Mar 29, 2017
@ec1warc1 Most likely unrelated to your actual problem, but from the log snippets I see that your keypool ran out of pooled keys quite some time ago. That means if you don't do VERY regular backups at the moment, you'll lose some (or all?) of your Dash in case you lose your hot wallet. Old backups will be useless as they won't contain newly generated keys.

I'd suggest at least calling "keypoolrefill 20000" right now. Please do a backup before and after you do this. Only the newest backup, taken after that call, will help you in future restores.
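For example, something along these lines (the backup paths are just placeholders, adjust them to wherever you keep backups):
dash-cli backupwallet /path/to/backups/wallet-before-refill.dat
dash-cli keypoolrefill 20000
dash-cli backupwallet /path/to/backups/wallet-after-refill.dat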
 

demo

Well-known Member
Apr 23, 2016
It looks like a good start to solving this is to simply send duffs to different addresses in the wallet, and not all duffs to the same address as I have done in the past.
Perhaps also to include the parameter spendzeroconfchange=0 in the config file.

Do I have that right @UdjinM6? Thanks sooo much!!!!
Even better: run two or more dashd instances and make your application server round-robin between them.
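A rough sketch (the second data directory and the extra ports are just examples; the first instance keeps the defaults, and each instance has its own wallet):
# first instance, default datadir and ports
dashd -daemon
# second instance, its own datadir and its own P2P/RPC ports
dashd -daemon -datadir=/home/user/.dashcore2 -port=10999 -rpcport=10998
# the application server then alternates its RPC calls between port 9998 and 10998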
 

ec1warc1

Active Member
Jul 26, 2016
Just as an update... it has been a week. I added 4 addresses to the wallet and started distributing the duffs among the addresses. Today, we had our first error in 7 days, so a big improvement, just not perfect yet. Here is today's error:
2018-01-19 23:00:37 CommitTransaction(): Error: Transaction not valid
 

Dashmaximalist

Active Member
Mar 16, 2017
Hi guys, hi @ec1warc1.

We are trying to process Dash transactions as well; we get about 1000-2000 per day. Let me know how you have achieved this - I have tried very many things but failed.

If you have a detailed process or something, I would really appreciate that.
 

demo

Well-known Member
Apr 23, 2016
Hi guys, hi @ec1warc1.

We are trying to process Dash transactions as well; we get about 1000-2000 per day. Let me know how you have achieved this - I have tried very many things but failed.

If you have a detailed process or something, I would really appreciate that.
Step 1. Buy three Linux dedicated servers.
Regarding tuning the app server and other servers: I have the three servers isolated, talking to each other over the network. The app server is LAMP for WordPress, without the DB. Then there are the DB server and the wallet server, each on its own OS.
 

codablock

Active Member
Mar 29, 2017
Hi guys, hi @ec1warc1.

We are trying to process Dash transactions as well; we get about 1000-2000 per day. Let me know how you have achieved this - I have tried very many things but failed.

If you have a detailed process or something, I would really appreciate that.
Did you verify that you do not suffer from the same problem as reported by ec1warc1? As UdjinM6 explained, the problem is most likely that too many transactions are sitting unconfirmed in the mempool while you try to send new transactions which spend the unconfirmed transaction outputs. The mempool is able to store many transactions, but seems to have limits when it comes to chained unconfirmed transactions. I did not check that specific code, but it looks like this might be intended as spam protection.

To solve this, you have to split your balances into multiple outputs, so that your wallet has more confirmed outputs to choose from when building new transactions. To achieve this, you can use the "split outputs/UTXOs" checkbox (I don't have a running wallet at hand atm and don't know the exact wording) and enter the number of outputs in the field next to it. Try to send some Dash to yourself with this checkbox set and a reasonable number of split outputs. The amount of Dash and the number of outputs depend on how large the outgoing transactions are and how many of them you expect. You should verify the number of available UTXOs from time to time and repeat this process.
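The same thing can be done roughly like this from the CLI (the amounts and the number of addresses are placeholders, not a recommendation):
# generate a few fresh addresses in your own wallet
A1=$(dash-cli getnewaddress); A2=$(dash-cli getnewaddress); A3=$(dash-cli getnewaddress); A4=$(dash-cli getnewaddress)
# send some Dash back to yourself, split over those addresses, in one transaction
dash-cli sendmany "" "{\"$A1\":1.0,\"$A2\":1.0,\"$A3\":1.0,\"$A4\":1.0}"
# count how many confirmed UTXOs the wallet now has to choose from
dash-cli listunspent 1 | grep -c txid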

So Dash is marketed as being ready to scale, yet it utterly fails when asked to process a trivial 1000-2000 transactions per day?

That sounds like false advertising to me. Bitcoin Core has no trouble at all with doing that many transactions in an hour, much less in an entire day.

What is so wrong with Dash Core that they managed to break the code they copied from Bitcoin Core so badly that it can't even handle fewer than 100 transactions per hour?
You are making a bad comparison here. The numbers you always hear about Bitcoin being able to process are GLOBAL, not PER NODE. If you follow the same spending behavior as described by UdjinM6, you'll very likely hit the same limits, maybe even worse due to longer confirmation times.
 

demo

Well-known Member
Apr 23, 2016
Did you verify that you do not suffer from the same problem as reported by ec1warc1? As UdjinM6 explained, the problem is most likely that too many transactions are sitting unconfirmed in the mempool while you try to send new transactions which spend the unconfirmed transaction outputs. The mempool is able to store many transactions, but seems to have limits when it comes to chained unconfirmed transactions. I did not check that specific code, but it looks like this might be intended as spam protection.
Can this be solved by compiling the code and fixing the spam protection limits?
 

thephez

Member
Dash Core Group
Jan 23, 2016
To be more (partially) specific, modern Bitcoin Core has features like SegWit and transaction bundling that reduce confirmation times and thus mitigate the effects of "the same limits."
To be more accurate, and to prevent a distraction from getting the issue resolved: Dash's confirmation times are 4 times faster than Bitcoin's (~2.5 minutes vs 10) and its blocks are nowhere near full. Additionally, if I understand what is being referenced by "transaction bundling" (batching payments), it is merely spending to multiple outputs in a single transaction - something the sendmany RPC is specifically intended for and not a new feature (in Bitcoin or Dash).
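For anyone following the thread, batching with sendmany looks roughly like this from the CLI (the recipient addresses here are placeholders):
dash-cli sendmany "" '{"XrecipientAddr1":0.0003,"XrecipientAddr2":0.0003,"XrecipientAddr3":0.0003}'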
 

codablock

Active Member
Mar 29, 2017
Can this be solved by compiling the code and fixing the spam protection limits?
You could, or you could just set -limitancestorcount to a higher value, which would have the same effect. Not sure if it would help, however, as other nodes would still reject the TXs. It's possible that your wallet would then queue up more transactions locally and rebroadcast them when new blocks are mined (which would eventually reduce the ancestor count in your wallet and allow rebroadcasting). But I'm really not sure about the wallet's behavior here.
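If you want to try it, the override is just a startup option (the value 100 here is only an example, not a recommendation):
dashd -daemon -limitancestorcount=100 -limitdescendantcount=100
# or the equivalent lines in dash.conf:
# limitancestorcount=100
# limitdescendantcount=100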

"Very likely" in your prejudiced opinion isn't remotely the same thing as an objectively demonstrated/proven fact. Stop conflating your suppositions and assumptions with actual reality. Prove it or STFU.

It's at least equally "very likely" that Bitcoin Core can handle vastly more tps than Dash, because the current Bitcoin Core software is several iterations of optimizations and enhancements more advanced than Dash (which is based on a much older Bitcoin release).

To be more (partially) specific, modern Bitcoin Core has features like SegWit and transaction bundling that reduce confirmation times and thus mitigate the effects of "the same limits."
You didn't really read/understand what I wrote, right? You are comparing two different things: global throughput and per-node outgoing throughput. In regard to the second case (which is what we are dealing with here!), there is a limit in place when it comes to the so-called ancestor count for transactions. This limit defaults to 25 in Dash. Btw, it's the same default in Bitcoin: https://github.com/bitcoin/bitcoin/blob/master/src/validation.h#L61, so exactly the same limitation applies to local nodes and this spending behavior.

If you want to accuse me of not using facts based on my use of "very likely", please apply the same rules to your own posts. OK, you don't use "very likely", but you present your misunderstandings as facts. Throwing SegWit into the conversation while ignoring that it has nothing to do with the topic and the underlying limitation is not very helpful.

If Bitcoin Core had to be restarted every 5 minutes just to handle a measly 1000 tx/day, don't you think someone would be complaining about that on the dev-list and/or subreddit? Of course they would; you need to stop and think before reflexively spewing pro-Dash apologia and waving away valid concerns with appeals to nonsense speculation about how 'maybe Bitcoin is just as bad.' Do you really think Coinbase and Kraken restart their BTC nodes every 5 minutes? No, of course not. So GTFO with that self-serving false equivalence excuse for Dash's abysmally poor real-world performance.
Again, different spending behavior is the keyword here. Did you notice that exchanges don't transact on a per-withdrawal basis but only in batches? Have you ever asked yourself why they do that? If your answer is "because of the fees!", good one, as it might be the most important reason for them. But are there maybe positive side-effects of such a spending behavior? Maybe a reduction in unconfirmed ancestors? Think about that, and then think about the comparisons you are making here.
 

codablock

Active Member
Mar 29, 2017
To be more accurate, and to prevent a distraction from getting the issue resolved: Dash's confirmation times are 4 times faster than Bitcoin's (~2.5 minutes vs 10) and its blocks are nowhere near full. Additionally, if I understand what is being referenced by "transaction bundling" (batching payments), it is merely spending to multiple outputs in a single transaction - something the sendmany RPC is specifically intended for and not a new feature (in Bitcoin or Dash).
Thanks. This sounds like what would actually solve the problem (without needing so many split outputs). If sendmany is used to send payments to multiple recipients, the number of unconfirmed ancestors should be reduced dramatically.
 

ec1warc1

Active Member
Jul 26, 2016
Today I am posting an update to the original ticket. I added 4 new addresses to the wallet, for a total of 5. Previously there was only 1. The Dash.red website is currently at 1300 transactions per day. In the past month I have only seen the error appear twice, which is a massive improvement. Furthermore, when the error occurs, it only takes one of the 5 addresses down to zero, so - unless the wallet is really low on all of the inputs - it is no longer such a big problem.
I am certainly happy with the results!!! Thank you!
 

ec1warc1

Active Member
Jul 26, 2016
A new update: I increased the number of receiving addresses to 10 and I manually restart the dashd process once per day. We are now completing 1800+ transactions per day without any problems!!!
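(The daily restart is just a pair of cron entries on the wallet server - the paths and times here are only examples:)
# stop dashd at 04:00 and start it again a minute later
0 4 * * * /usr/bin/dash-cli stop
1 4 * * * /usr/bin/dashd -daemon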
 

codablock

Active Member
Mar 29, 2017
I found some time to re-investigate this issue. The underlying issue is known in Bitcoin and is described in https://github.com/bitcoin/bitcoin/issues/10004
To sum it up: the limitation on the maximum number of chained transactions is just one part of the problem. What makes it so bad (requiring a wallet restart) is that the last transaction you try to create causes the chained TX limit to be reached, so the TX is not added to the mempool. The wallet, however, still adds it to the local wallet, without retrying to re-add it to the mempool when blocks are mined (which would reduce the number of chained TXs in your wallet). The wallet only does this when the whole application is restarted. This causes your wallet's UTXOs to become unusable (which reduces your balance) until you restart your wallet.

I'm trying to figure out why this bug is still unfixed in Bitcoin and how we can solve it. Until then, you could periodically (e.g. once per minute) call the RPC command "resendwallettransactions", which will try to re-add the transactions that previously failed to be added.
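A simple way to do that is a cron entry on the wallet server (the path to dash-cli is an example):
* * * * * /usr/bin/dash-cli resendwallettransactions >/dev/null 2>&1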
 

demo

Well-known Member
Apr 23, 2016
I found some time to re-investigate this issue. The underlying issue is known in Bitcoin and is described in https://github.com/bitcoin/bitcoin/issues/10004
To sum it up: the limitation on the maximum number of chained transactions is just one part of the problem. What makes it so bad (requiring a wallet restart) is that the last transaction you try to create causes the chained TX limit to be reached, so the TX is not added to the mempool. The wallet, however, still adds it to the local wallet, without retrying to re-add it to the mempool when blocks are mined (which would reduce the number of chained TXs in your wallet). The wallet only does this when the whole application is restarted. This causes your wallet's UTXOs to become unusable (which reduces your balance) until you restart your wallet.
"The node is probably running with the default limitancestorcount / limitdescendant count, which prevents chains of >25 transactions entering your mempool. He can override those with command line arguments. "

What if he compiles the code with an increased chained TX limit value? Provided, of course, he has enough memory. Or, even better, overrides this limit with the above-mentioned command line arguments? Will this temporarily solve the problem and increase the number of successful transactions?
I'm trying to figure out why this bug is still unfixed in Bitcoin and how we can solve it. Until then, you could periodically (e.g. once per minute) call the RPC command "resendwallettransactions", which will try to re-add the transactions that previously failed to be added.
How can this solve the problem in case the memory is full? I think resendwallettransactions may cause more problems if the system is already running at its memory limits.

But if he compiles the code on his system (or overrides the chained TX limit with command line arguments, as jnewbery recommended) and does testing in order to discover the chained TX limit value that fits his system best, he may then discover the real limits of his computer, and thus know how many transactions his computer is capable of supporting.

This is another case of vote the numbers. Hardcoded numbers are always a bad practice.
 

homesoft

New Member
Jun 17, 2018
I have a similar problem with a fork. In the faucet, after several transactions, the balance is empty. Repairing the wallet solves it.