Version 12.1 Release


t0dd

Active Member
[NOTE: These are semi-official builds]
v0.12.1.1 tested and built for the Red Hat family of Linuxes

https://github.com/taw00/dashcore-rpm

A few have been testing early builds of 12.1.1 and I watched the release candidate on mainnet for a day+ before building the new production build.

If you are running my RPMs on 12.1.0...
If running as a systemd-managed service...

sudo systemctl stop dashd
sudo dnf upgrade -y --refresh
sudo systemctl start dashd


The exact order of these commands is not critical, but this sequence is the cleanest method.
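A quick way to confirm the daemon came back on the new version (a sketch; adjust dash-cli's -conf/-datadir options if your service runs under a dedicated user, and note that v0.12.1.1 encodes numerically as 120101):
Code:
sudo systemctl status dashd --no-pager  # should report active (running)
dash-cli getinfo | grep '"version"'     # should read 120101 for v0.12.1.1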

If running as a ~/.dashcore configured wallet or node...
  • Shut down the wallet
    ...or shut down the node: dash-cli stop
  • sudo dnf upgrade -y --refresh
  • Restart the wallet
    ...or restart the node: dashd
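You can also ask the package manager what actually landed; querying rpm directly sidesteps guessing the exact package names these RPMs use:
Code:
rpm -qa | grep -i dash  # installed dash packages and their versions
dashd --version         # the daemon reports its version directly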

So far, it is MUCH more stable, and it syncs in a timely manner. There are a host of other improvements. For the Red Hat versions, I tweaked the default systemd config settings a bit, and I made it easy to set up email notifications for reboots, restarts, etc., if you want them.

Also, Dash now builds on Fedora 26 (which is still in Rawhide testing).
This was possible because someone on the core team made Dash 0.12.1.1 able to build against OpenSSL 1.1 rather than the old OpenSSL 1.0. Fedora 26 defaults to v1.1, and I'm sure Ubuntu will eventually move to this new and much-improved OpenSSL as well (though they tend to move more slowly). Therefore, Dash should be ready for the next generation of Linux. No, I have no idea what this means for Windows or Mac.
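If you are curious which OpenSSL is in play on your own box, something like this works for dynamically linked builds (static builds bake the library in, so ldd shows nothing for them):
Code:
openssl version                        # the distro default, e.g. 1.1.x on Fedora 26
ldd $(command -v dashd) | grep -i ssl  # the libssl a dynamic dashd links against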

Happy dashing. -t
 

tungfa

Grizzled Member
Foundation Member
Masternode Owner/Operator
as always
ONLY download from dash.org releases !!
anything else could be "tinkered" with

@t0dd please stop posting these things and wait until the team has compiled, signed, and officially released!
tx
 

t0dd

Active Member
as always
ONLY download from dash.org releases !!
anything else could be "tinkered" with

@t0dd please stop posting these things and wait until the team has compiled, signed, and officially released!
tx
Added a notice that these are unofficial builds. Which they are.

As for stopping posting these things... I have been posting these builds for close to a year with encouragement from the community and, of late, patronage from the treasury. And now, today, is the day you take exception to it? Regardless, I will talk to Flare about how to proceed, but, um... this request kinda just came out of ... where?

And, oh... ha! @flare, I didn't realize I preempted you. Apologies. I saw the official tag on GitHub and marched forward. :)
 

tungfa

Grizzled Member
Foundation Member
Masternode Owner/Operator
Added a notice that these are unofficial builds. Which they are.

As for stopping posting these things... I have been posting these builds for close to a year with encouragement from the community and, of late, patronage from the treasury. And now, today, is the day you take exception to it? Regardless, I will talk to Flare about how to proceed, but, um... this request kinda just came out of ... where?

And, oh... ha! @flare, I didn't realize I preempted you. Apologies. I saw the official tag on GitHub and marched forward. :)
lol
all good brother - sorry - heard as well you are doing official builds for fedora
(would be good to coordinate those with the general webpage release - then we are all on the same page)
tx
 

holgum

New Member
Any particular reason the dates on the files (at least in the Linux version) are all 2017-01-01?
 

flare

Grizzled Member
Any particular reason the dates on the files (at least in the Linux version) are all 2017-01-01?
Yes, we are using fixed reference times to ensure deterministic builds.

--> https://github.com/dashpay/dash/blob/master/contrib/gitian-descriptors/gitian-linux.yml#L21

Every user can recreate the exact same binaries from the source code to verify that nothing has been spoofed. For this to work, the files need to have identical timestamps; otherwise the hashes of the tar.gz archives would differ.
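As a toy illustration of the idea (not the actual gitian recipe; needs GNU tar 1.28+ for --sort): pinning file order, ownership, and mtime makes the archive hash reproducible, while any drift in timestamps changes it.
Code:
mkdir demo && echo data > demo/file
tar --sort=name --owner=0 --group=0 --mtime='2017-01-01 00:00:00' -cf demo.tar demo
gzip -n demo.tar       # -n stops gzip from embedding its own timestamp
sha256sum demo.tar.gz  # identical hash on any machine repeating these steps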
 

holgum

New Member
Thanks for clarifying!
I wouldn't have a problem with different signatures for different binaries but ok - I can see that the current mechanism would make things easier to manage.
At first I thought I goofed somewhere when I saw the file dates didn't change, so it might be something to note on the download page.
 

UdjinM6

Official Dash Dev
Dash Core Group
On 0.12.1.1, still having the mixing crashes...
:(
I can't reproduce...
Can you please either start it with the -debug=privatesend command-line option or switch to debug mode from the console ("debug privatesend")?
That should spam debug.log a bit more and will probably help resolve the issue the next time it crashes.
Well, at least if it's really mixing-related and not just a coincidence where you were lucky enough to catch some rare floating bug...
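For reference, both approaches plus a way to watch the extra output as it arrives (a sketch assuming the default ~/.dashcore datadir; the grep matches the CDarksendPool prefix these log lines carry):
Code:
./dash-qt -debug=privatesend                      # option 1: enable at startup
# option 2: while running, enter in the GUI debug console: debug privatesend
tail -f ~/.dashcore/debug.log | grep -i darksend  # follow the mixing log lines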
 

camosoul

Grizzled Member
:(
I can't reproduce...
Can you please either start it with the -debug=privatesend command-line option or switch to debug mode from the console ("debug privatesend")?
That should spam debug.log a bit more and will probably help resolve the issue the next time it crashes.
Well, at least if it's really mixing-related and not just a coincidence where you were lucky enough to catch some rare floating bug...
Doing so now.

I also noticed that it likes to stop mixing on its own. The button switches to "Start Mixing," as if someone clicked it to stop, but I was nowhere near the machine for hours...
 

UdjinM6

Official Dash Dev
Dash Core Group
How long will 12.1.0 still be paid?
0.12.1.0 MNs are legit, 0.12.1.1 just has some bug fixes and performance improvements.

Doing so now.

I also noticed that it likes to stop mixing on its own. The button switches to "Start Mixing," as if someone clicked it to stop, but I was nowhere near the machine for hours...
It could stop mixing on its own in 2 cases:
1) not enough space (~50MB of free disk space is required)
2) it wasn't able to create an autobackup for whatever reason when the wallet ran out of keys - could be permissions or (1)

You should really check the logs next time it happens.
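Both cases can be ruled out from a terminal before digging through the logs (a sketch assuming the default ~/.dashcore datadir and backup folder):
Code:
df -h ~/.dashcore                         # case 1: is ~50MB actually free?
dash-cli getwalletinfo | grep -i keypool  # case 2: keys remaining in the pool
ls -lt ~/.dashcore/backups | head         # were recent autobackups written?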
 

camosoul

Grizzled Member
It could stop mixing on its own in 2 cases:
1) not enough space (~50MB of free disk space is required)
2) it wasn't able to create an autobackup for whatever reason when the wallet ran out of keys - could be permissions or (1)

You should really check the logs next time it happens.
I did, but since neither of those cases is true, I didn't mention it.

1) Plenty of disk space.
2) I run with keypool=100000 because I have an extraordinarily huge amount of DASH, and I like my backups to actually be good for something. The crash/stop-mixing events happen far more frequently than I could ever run out of keys; the keypool refills every time I have to restart the client.

I'm seeing nothing useful in debug.log, but I will report anyway (using -debug=privatesend this session, so maybe we'll see something).

I also run a variety of tail -f debug.log | grep XXX commands to alert me to things I'm specifically watching for.

Speaking of, it bombed again as I was typing this.

This is the end of debug.log for 0.12.1.1 running with the -debug=privatesend CLI option:
Code:
2017-02-22 16:04:16 CDarksendPool::DoAutomaticDenominating -- nLowestDenom: 0.010000, nBalanceNeedsAnonymized: 999.999023
2017-02-22 16:04:16 ThreadSocketHandler -- removing node: peer=369 addr=188.40.62.22:9999 nRefCount=3 fNetworkNode=1 fInbound=0 fMasternode=1
2017-02-22 16:04:16 CDarksendPool::DoAutomaticDenominating -- 'nBalanceNeedsDenominated > 0' (999.999023 - (0.000000 + 2029.310303 - 979.899780 = 1049.410522) ) = -49.411491
2017-02-22 16:04:16 CDarksendPool::IsCollateralValid -- CTransaction(hash=d7a1e33fd8, ver=1, vin.size=1, vout.size=1, nLockTime=0)
    CTxIn(COutPoint(70d5bbd9b43d4a501ff6ab48ef9c4a3b877245569b12f0fbfadf73691f6ef4c5, 0), scriptSig=47304402205926a771abe795)
    CTxOut(nValue=0.00300000, scriptPubKey=76a91447951f22405304e1e0bcb7b3)
2017-02-22 16:04:16 Checking vecMasternodesUsed: size: 58, threshold: 3220
2017-02-22 16:04:16 CMasternodeMan::FindRandomNotInVec -- 3578 enabled masternodes, 3520 masternodes to choose from
2017-02-22 16:04:16 CDarksendPool::DoAutomaticDenominating -- Too early to mix on this masternode! masternode=a9778ef7410843a4634887f6cffbaa4adac05be71fb8e5ba4571ecedde217eb9-1  addr=109.235.65.95:9999  nLastDsq=44771  CountEnabled/5=715  nDsqCount=45369
2017-02-22 16:04:16 CMasternodeMan::FindRandomNotInVec -- 3578 enabled masternodes, 3519 masternodes to choose from
2017-02-22 16:04:16 CDarksendPool::DoAutomaticDenominating -- attempt 1 connection to Masternode 185.69.52.122:9999
2017-02-22 16:04:16 SOCKS5 connecting 185.69.52.122
2017-02-22 16:04:17 SOCKS5 connecting 188.227.75.148
2017-02-22 16:04:17 SOCKS5 connected 185.69.52.122
2017-02-22 16:04:20 SOCKS5 connected 188.227.75.148
dash-qt: /home/ubuntu/build/dash/depends/x86_64-unknown-linux-gnu/share/../include/boost/thread/pthread/recursive_mutex.hpp:113: void boost::recursive_mutex::lock(): Assertion `!pthread_mutex_lock(&m)' failed.
2017-02-22 16:04:20 CDarksendPool::DoAutomaticDenominating -- connected, addr=185.69.52.122:9999
2017-02-22 16:04:20 ThreadSocketHandler -- removing node: peer=370 addr=185.69.52.122:9999 nRefCount=2 fNetworkNode=1 fInbound=0 fMasternode=1
er... I tried to bold what looked important, but apparently the code tag strips it...

You can see that the PS quantity is maxed above 2k (line 3). This seems relevant to me because sessions where the PS quantity is below 2k never crash. I have never seen PS pick exactly 2k, so I can only report my observations for > and <, but not =. I doubt many people encounter this condition, so they are not experiencing it...

I'm also using a SOCKS5 proxy, but it seems to be working perfectly fine and is probably not relevant.

Other users might not be doing those things.

The terminal failure appears to be reported in the 3rd line from the bottom.
 

UdjinM6

Official Dash Dev
Dash Core Group
I did, but since neither of those cases is true, I didn't mention it.

1) Plenty of disk space.
2) I run with keypool=100000 because I have an extraordinarily huge amount of DASH, and I like my backups to actually be good for something. The crash/stop-mixing events happen far more frequently than I could ever run out of keys; the keypool refills every time I have to restart the client.

I'm seeing nothing useful in debug.log, but I will report anyway (using -debug=privatesend this session, so maybe we'll see something).

I also run a variety of tail -f debug.log | grep XXX commands to alert me to things I'm specifically watching for.

Speaking of, it bombed again as I was typing this.

This is the end of debug.log for 0.12.1.1 running with the -debug=privatesend CLI option:
Code:
2017-02-22 16:04:16 CDarksendPool::DoAutomaticDenominating -- nLowestDenom: 0.010000, nBalanceNeedsAnonymized: 999.999023
2017-02-22 16:04:16 ThreadSocketHandler -- removing node: peer=369 addr=188.40.62.22:9999 nRefCount=3 fNetworkNode=1 fInbound=0 fMasternode=1
2017-02-22 16:04:16 CDarksendPool::DoAutomaticDenominating -- 'nBalanceNeedsDenominated > 0' (999.999023 - (0.000000 + 2029.310303 - 979.899780 = 1049.410522) ) = -49.411491
2017-02-22 16:04:16 CDarksendPool::IsCollateralValid -- CTransaction(hash=d7a1e33fd8, ver=1, vin.size=1, vout.size=1, nLockTime=0)
    CTxIn(COutPoint(70d5bbd9b43d4a501ff6ab48ef9c4a3b877245569b12f0fbfadf73691f6ef4c5, 0), scriptSig=47304402205926a771abe795)
    CTxOut(nValue=0.00300000, scriptPubKey=76a91447951f22405304e1e0bcb7b3)
2017-02-22 16:04:16 Checking vecMasternodesUsed: size: 58, threshold: 3220
2017-02-22 16:04:16 CMasternodeMan::FindRandomNotInVec -- 3578 enabled masternodes, 3520 masternodes to choose from
2017-02-22 16:04:16 CDarksendPool::DoAutomaticDenominating -- Too early to mix on this masternode! masternode=a9778ef7410843a4634887f6cffbaa4adac05be71fb8e5ba4571ecedde217eb9-1  addr=109.235.65.95:9999  nLastDsq=44771  CountEnabled/5=715  nDsqCount=45369
2017-02-22 16:04:16 CMasternodeMan::FindRandomNotInVec -- 3578 enabled masternodes, 3519 masternodes to choose from
2017-02-22 16:04:16 CDarksendPool::DoAutomaticDenominating -- attempt 1 connection to Masternode 185.69.52.122:9999
2017-02-22 16:04:16 SOCKS5 connecting 185.69.52.122
2017-02-22 16:04:17 SOCKS5 connecting 188.227.75.148
2017-02-22 16:04:17 SOCKS5 connected 185.69.52.122
2017-02-22 16:04:20 SOCKS5 connected 188.227.75.148
dash-qt: /home/ubuntu/build/dash/depends/x86_64-unknown-linux-gnu/share/../include/boost/thread/pthread/recursive_mutex.hpp:113: void boost::recursive_mutex::lock(): Assertion `!pthread_mutex_lock(&m)' failed.
2017-02-22 16:04:20 CDarksendPool::DoAutomaticDenominating -- connected, addr=185.69.52.122:9999
2017-02-22 16:04:20 ThreadSocketHandler -- removing node: peer=370 addr=185.69.52.122:9999 nRefCount=2 fNetworkNode=1 fInbound=0 fMasternode=1
er... I tried to bold what looked important, but apparently the code tag strips it...

You can see that the PS quantity is maxed above 2k (line 3). This seems relevant to me because sessions where the PS quantity is below 2k never crash. I have never seen PS pick exactly 2k, so I can only report my observations for > and <, but not =. I doubt many people encounter this condition, so they are not experiencing it...

I'm also using a SOCKS5 proxy, but it seems to be working perfectly fine and is probably not relevant.

Other users might not be doing those things.

The terminal failure appears to be reported in the 3rd line from the bottom.
The crash is actually this failed assert
Code:
dash-qt: /home/ubuntu/build/dash/depends/x86_64-unknown-linux-gnu/share/../include/boost/thread/pthread/recursive_mutex.hpp:113: void boost::recursive_mutex::lock(): Assertion `!pthread_mutex_lock(&m)' failed.
and I'm not quite sure how that's possible unless it's a lib bug...
 

qwizzie

Grizzled Member
Maybe it's somehow a Dash wallet compilation error? (Assuming camosoul compiled his Dash wallet from source.) It could be interesting to see if grabbing an already pre-compiled binary would help in his case.
It could also be interesting to see if this behaviour is limited to that specific Ubuntu system; maybe try mixing with that specific wallet in a clean Ubuntu environment (a new virtual Ubuntu instance, for example).
One last suggestion: maybe starting a new wallet and transferring your funds to it would help?

I'm just thinking out loud here..
 

camosoul

Grizzled Member
Fresh 16.04 using the official distributed binaries (my gentoo machines annoy the hell out of me already; I can't be bothered to build something unless I absolutely have to).

Now that I think of it, it looks like the exact same thing I reported on 0.12.1.0.

I've got it going on two machines... I set them both to 2000/8. The one that keeps crashing is the one that reports overshoot above 2000. The other one is just shy of 2000 and never crashes.

Maybe those two conditions are not actually related, but I figured I'd report it.

Under what conditions is "!pthread_mutex_lock(&m)" used by dash-qt? What's it trying to do when this is called? Maybe an unexpected/out-of-range/unhandled/wrong-type "&m" (whatever it is) is making it blow up?

Also, mixing absolutely SUCKS on 0.12.1.1... It was hauling ass on 0.12.1.0. I was unaware that anything relating to mixing was altered. Just reporting an observation.

Also, an interesting observation... Even with two clients on two different machines sitting right next to each other mixing, you'd expect you might be polluting the mix by being 2 of the 3 needed parties. I've been watching them sit next to me as I work for a week now, and I've never seen them join in on the same mix. So it might actually not hurt anything to split your stack and denom/mix in multiple clients to speed up the process.
 

demo

Well-known Member
The crash is actually this failed assert
Code:
dash-qt: /home/ubuntu/build/dash/depends/x86_64-unknown-linux-gnu/share/../include/boost/thread/pthread/recursive_mutex.hpp:113: void boost::recursive_mutex::lock(): Assertion `!pthread_mutex_lock(&m)' failed.
and I'm not quite sure how that's possible unless it's a lib bug...
https://en.wikipedia.org/wiki/Mutual_exclusion

http://stackoverflow.com/questions/...-mutual-exclusion-among-more-than-two-threads

http://stackoverflow.com/questions/187761/recursive-lock-mutex-vs-non-recursive-lock-mutex

http://antonym.org/2012/02/threading-with-boost-part-iii-mutexes.html

Recursive Mutexes
Normally a mutex is locked only once, then unlocked. Depending on the structure of your application, there may be times when it would be useful to be able to lock a mutex multiple times on the one thread (in very special circumstances, such as nested method calls). For example, you have two (or more) methods which may be called independently, and another method which itself calls one of the other two. If they all need the mutex held to function safely, this can cause complications when determining when the mutex needs to be locked and released. However, by using a recursive mutex, it can be locked as many times as necessary, provided it is unlocked the same number of times. Thus all these methods can be called individually, as they all lock the resources, and in a nested fashion, as the mutex can be locked multiple times. Provided the mutex is unlocked the same number of times (which is a matter of care and thought), the mutex will be correctly released by the end of the nested operation.
Somewhere in your code you forgot to unlock.
Start counting how many times you locked and unlocked the recursive mutex.

My sense is that the interoperability between Python and C++ may be causing the problem.
 

demo

Well-known Member
My sense is that the interoperability between Python and C++ may be causing the problem.
If this is the case, then maybe you have to rewrite the code to use condition variables in your threads, wherever your C++ code waits for Python (aka sentinel) data.
http://antonym.org/2012/02/threading-with-boost-part-v-condition-variables.html

The complexity that often comes with pthreads has made some blockchain developers switch to node.js. I don't know whether node.js is a better choice, but it seems to be the easy choice. On the other hand, mastering condition variables may give developers the ability to master conditional votes.
 
Last edited:

UdjinM6

Official Dash Dev
Dash Core Group
Fresh 16.04 using the official distributed binaries (my gentoo machines annoy the hell out of me already; I can't be bothered to build something unless I absolutely have to).

Now that I think of it, it looks like the exact same thing I reported on 0.12.1.0.

I've got it going on two machines... I set them both to 2000/8. The one that keeps crashing is the one that reports overshoot above 2000. The other one is just shy of 2000 and never crashes.

Maybe those two conditions are not actually related, but I figured I'd report it.

Under what conditions is "!pthread_mutex_lock(&m)" used by dash-qt? What's it trying to do when this is called? Maybe an unexpected/out-of-range/unhandled/wrong-type "&m" (whatever it is) is making it blow up?

Also, mixing absolutely SUCKS on 0.12.1.1... It was hauling ass on 0.12.1.0. I was unaware that anything relating to mixing was altered. Just reporting an observation.

Also, an interesting observation... Even with two clients on two different machines sitting right next to each other mixing, you'd expect you might be polluting the mix by being 2 of the 3 needed parties. I've been watching them sit next to me as I work for a week now, and I've never seen them join in on the same mix. So it might actually not hurt anything to split your stack and denom/mix in multiple clients to speed up the process.
Let's try it this way...
Open Terminal and execute these commands:
- "ulimit -c unlimited"
- "cat /proc/sys/kernel/core_pattern"; if it says "|/usr/share/apport/apport %p %s %c %P", execute "sudo service apport stop" and check again - it should say "core" now
- run dash-qt from the cmd-line (it's important to run it from the same Terminal where you executed "ulimit ...", unless you are the root user, iirc)

Next time dash-qt crashes it should create a "core" file in the folder you started dash-qt from. This file should give some deep info about the exact place of the crash, so zip it and pass it along via some file-sharing service (https://transfer.sh is a nice one).

EDIT:
You can verify that it will create a core file on the next crash by forcing a segfault: open another terminal and execute "killall -11 dash-qt".
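If you want to peek at the core file yourself before uploading it, gdb can print a backtrace of the crash (assuming gdb is installed and you run it from the folder holding the core file):
Code:
gdb --batch -ex 'bt' ./dash-qt core  # use -ex 'thread apply all bt' to dump every thread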
 

camosoul

Grizzled Member
Next time it dies I'll do that. It's probably something stupid about my machine... If something bizarre to the point of impossible is going to happen, I'm the guy it will happen to...

As a note on mixing...

Today I've used 28 keys. Previously, I was using 3000+ a day. It's also only mixing denominations of 10; 1, 0.1, and 0.01 don't happen anymore.
 