> Added a notice that these are unofficial builds. Which they are.

As always, ONLY download from dash.org releases!!
Anything else could be "tinkered" with.
@t0dd please stop posting these things and wait until the team has compiled, signed and officially released!
Thanks
lol
As for stopping posting these things... I have been posting these builds for close to a year with encouragement from the community and, of late, patronage from the treasury. And now, today, is the day you take exception to it? Regardless, I will talk to Flare about how to proceed, but, um... this request kinda just came out of ... where?
And, oh... ha! @flare, I didn't realize I preempted you. Apologies. I saw the official tag on GitHub and marched forward.
> Any particular reason the dates on the files (at least in the Linux version) are all 2017-01-01?

Yes, we are using fixed reference times to ensure deterministic builds.
On 0.12.1.1, still having the mixing crashes...
How long will 12.1.0 still be paid?
I can't reproduce...
Can you please either start it with the -debug=privatesend command-line option or switch to debug mode from the console ("debug privatesend")?
That should spam debug.log a bit more and would probably help to resolve the issue when it crashes next time.
Well, at least if it's really mixing-related and not just a coincidence, and you are lucky enough to catch some rare flowing bug...

Doing so now.
I also noticed that it likes to stop mixing on its own. The button switches to "Start Mixing", as if someone clicked it to stop, but I was nowhere near the machine for hours...

> How long will 12.1.0 still be paid?

Perhaps perpetually, since it looks like the bug fixes don't impact MN network service.

> How long will 12.1.0 still be paid?

0.12.1.0 MNs are legit; 0.12.1.1 just has some bug fixes and performance improvements.

> Doing so now. I also noticed that it likes to stop mixing on its own.

It could stop mixing on its own in 2 cases:
1) not enough space (~50MB of free disk space is required)
2) it wasn't able to create an autobackup for whatever reason when the wallet ran out of keys - could be permissions or (1)
You should really check the logs next time it happens.
> It could stop mixing on its own in 2 cases:

I did, but since neither of those cases are true, I didn't mention it:
1) Plenty of disk space.
2) I run with keypool=100000 because I have an extraordinarily huge amount of DASH, and I like my backups to actually be good for something. The crashes/mixing stops are happening far more frequently than I could ever run out of keys - in fact I never run out of keys, because the keypool refills every time I have to restart the client.
I'm seeing nothing useful in the debug.log, but will report anyway (using -debug=privatesend on this session, so maybe we'll see something).
I also run a variety of tail -f debug | grep XXX to alert me to things I'm specifically watching for.
Speaking of, it bombed again as I was typing this.
This is the end of debug.log for 0.12.1.1 running -debug=privatesend CLI option:
Code:
2017-02-22 16:04:16 CDarksendPool::DoAutomaticDenominating -- nLowestDenom: 0.010000, nBalanceNeedsAnonymized: 999.999023
2017-02-22 16:04:16 ThreadSocketHandler -- removing node: peer=369 addr=188.40.62.22:9999 nRefCount=3 fNetworkNode=1 fInbound=0 fMasternode=1
2017-02-22 16:04:16 CDarksendPool::DoAutomaticDenominating -- 'nBalanceNeedsDenominated > 0' (999.999023 - (0.000000 + 2029.310303 - 979.899780 = 1049.410522) ) = -49.411491
2017-02-22 16:04:16 CDarksendPool::IsCollateralValid -- CTransaction(hash=d7a1e33fd8, ver=1, vin.size=1, vout.size=1, nLockTime=0)
CTxIn(COutPoint(70d5bbd9b43d4a501ff6ab48ef9c4a3b877245569b12f0fbfadf73691f6ef4c5, 0), scriptSig=47304402205926a771abe795)
CTxOut(nValue=0.00300000, scriptPubKey=76a91447951f22405304e1e0bcb7b3)
2017-02-22 16:04:16 Checking vecMasternodesUsed: size: 58, threshold: 3220
2017-02-22 16:04:16 CMasternodeMan::FindRandomNotInVec -- 3578 enabled masternodes, 3520 masternodes to choose from
2017-02-22 16:04:16 CDarksendPool::DoAutomaticDenominating -- Too early to mix on this masternode! masternode=a9778ef7410843a4634887f6cffbaa4adac05be71fb8e5ba4571ecedde217eb9-1 addr=109.235.65.95:9999 nLastDsq=44771 CountEnabled/5=715 nDsqCount=45369
2017-02-22 16:04:16 CMasternodeMan::FindRandomNotInVec -- 3578 enabled masternodes, 3519 masternodes to choose from
2017-02-22 16:04:16 CDarksendPool::DoAutomaticDenominating -- attempt 1 connection to Masternode 185.69.52.122:9999
2017-02-22 16:04:16 SOCKS5 connecting 185.69.52.122
2017-02-22 16:04:17 SOCKS5 connecting 188.227.75.148
2017-02-22 16:04:17 SOCKS5 connected 185.69.52.122
2017-02-22 16:04:20 SOCKS5 connected 188.227.75.148
dash-qt: /home/ubuntu/build/dash/depends/x86_64-unknown-linux-gnu/share/../include/boost/thread/pthread/recursive_mutex.hpp:113: void boost::recursive_mutex::lock(): Assertion `!pthread_mutex_lock(&m)' failed.
2017-02-22 16:04:20 CDarksendPool::DoAutomaticDenominating -- connected, addr=185.69.52.122:9999
2017-02-22 16:04:20 ThreadSocketHandler -- removing node: peer=370 addr=185.69.52.122:9999 nRefCount=2 fNetworkNode=1 fInbound=0 fMasternode=1

er... I tried to bold what looked important, but apparently the code tag strips it...
You can see that the PS quantity is maxed above 2k (line 3). This seems relevant to me because sessions where the PS quantity is below 2k never crash. I have never seen PS pick exactly 2k, so I can only report my observations for > and <, but not =. I doubt many people encounter this condition, so they are not experiencing it...
I'm also using a SOCKS5 proxy, but it seems to be working perfectly fine and is probably not relevant.
Other users might not be doing those things.
The terminal failure appears to be reported in the 3rd line from the bottom. The crash is actually this failed assert:

dash-qt: /home/ubuntu/build/dash/depends/x86_64-unknown-linux-gnu/share/../include/boost/thread/pthread/recursive_mutex.hpp:113: void boost::recursive_mutex::lock(): Assertion `!pthread_mutex_lock(&m)' failed.
> The crash is actually this failed assert:
> dash-qt: /home/ubuntu/build/dash/depends/x86_64-unknown-linux-gnu/share/../include/boost/thread/pthread/recursive_mutex.hpp:113: void boost::recursive_mutex::lock(): Assertion `!pthread_mutex_lock(&m)' failed.

https://en.wikipedia.org/wiki/Mutual_exclusion

...and I'm not quite sure how that's possible unless it's a lib bug...
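To make the failure mode concrete, here is a small standalone sketch (not the dash-qt code path; the error is forced on purpose) showing how an assert written as assert(!pthread_mutex_lock(&m)) aborts as soon as pthread_mutex_lock() returns any nonzero error code. On a healthy recursive mutex that call shouldn't fail, which is consistent with the "lib bug" suspicion above; something trampling the mutex's memory would be another, purely speculative, possibility.

Code:
#include <pthread.h>
#include <cassert>
#include <cstdio>

// Demonstration only -- NOT the dash-qt code path. boost::recursive_mutex::lock()
// asserts that pthread_mutex_lock() returned 0, so any nonzero error code aborts
// the process with an "Assertion `!pthread_mutex_lock(&m)' failed." message like
// the one in the crash. Here the error (EDEADLK) is forced deliberately by
// re-locking an error-checking mutex on the thread that already owns it.
int main() {
    pthread_mutex_t m;
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&m, &attr);

    assert(!pthread_mutex_lock(&m));   // returns 0 -> assert passes
    std::puts("first lock ok");
    assert(!pthread_mutex_lock(&m));   // returns EDEADLK -> assert fires and aborts
    return 0;
}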
Somewhere in your code you forgot to unlock.

> Recursive Mutexes
> Normally a mutex is locked only once, then unlocked. Depending on the structure of your application, there may be times when it would be useful to be able to lock a mutex multiple times on the one thread (in very special circumstances, such as nested method calls). For example, you have two (or more) methods which may be called independently, and another method which itself calls one of the other two. If they all need the mutex held to function safely, this can cause complications when determining when the mutex needs to be locked and released. However, by using a recursive mutex, it can be locked as many times as necessary, provided it is unlocked the same number of times. Thus all these methods can be called individually, as they all lock the resources, and in a nested fashion, as the mutex can be locked multiple times. Provided the mutex is unlocked the same number of times (which is a matter of care and thought), the mutex will be correctly released by the end of the nested operation.
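For reference, a minimal sketch of the nested-locking pattern that quote describes, using std::recursive_mutex with RAII guards (illustrative only - the class and method names are made up, this is not Dash wallet code):

Code:
#include <iostream>
#include <mutex>

// Illustrative only -- not Dash code. A recursive mutex may be re-locked by the
// thread that already owns it, as long as every lock is matched by an unlock;
// the RAII lock_guards below take care of that matching automatically.
class Wallet {
    std::recursive_mutex m;
    int keys = 0;
public:
    void TopUpKeyPool() {
        std::lock_guard<std::recursive_mutex> lock(m);   // may be the 2nd lock held by this thread
        keys += 1000;
    }
    void GetNewKey() {
        std::lock_guard<std::recursive_mutex> lock(m);   // 1st lock
        if (keys == 0)
            TopUpKeyPool();                              // nested call re-locks m, which is fine
        --keys;
    }
};

int main() {
    Wallet w;
    w.GetNewKey();   // with a plain std::mutex the nested lock above would deadlock
    std::cout << "handed out a key without deadlocking\n";
}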
My sense is that the interoperability between Python and C++ may cause the problem. If this is the case, then maybe you have to rewrite the code to use condition variables in your threads, wherever your C++ code waits for Python (aka sentinel) data.
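For what it's worth, here is a minimal sketch of the condition-variable pattern being suggested. It is illustrative only: the producer/consumer split and the idea that "sentinel data" arrives this way are assumptions made for the example, not a description of how dashd and sentinel actually talk to each other.

Code:
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Illustrative only. The waiting thread does not sit on the mutex while it waits:
// cv.wait() releases the lock and re-acquires it only once data has arrived.
std::mutex mtx;
std::condition_variable cv;
std::queue<std::string> incoming;

void OnExternalData(const std::string& msg) {         // e.g. data pushed in from outside
    {
        std::lock_guard<std::mutex> lock(mtx);
        incoming.push(msg);
    }
    cv.notify_one();                                  // wake up the waiting consumer
}

std::string WaitForData() {                           // e.g. the C++ side waiting on that data
    std::unique_lock<std::mutex> lock(mtx);
    cv.wait(lock, [] { return !incoming.empty(); });  // mutex is released while waiting
    std::string msg = incoming.front();
    incoming.pop();
    return msg;
}

int main() {
    std::thread producer([] { OnExternalData("vote data"); });
    std::cout << "got: " << WaitForData() << "\n";
    producer.join();
}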
Let's try it that way... Fresh 16.04, using the officially distributed binaries (my gentoo machines annoy the hell out of me already; I can't be asked to build something unless I absolutely have to).
Now that I think of it, looks like the exact same thing I reported on 0.12.1.0
I've got it going on two machines... I set them both to 2000/8. The one that keeps crashing is the one that reports an overshoot above 2000. The other one is just shy of 2000 and never crashes.
Maybe those two conditions are not actually related, but I figured I'd report it.
Under what conditions is "!pthread_mutex_lock(&m)" used by dash-qt? What's it trying to do when this is called? Maybe an unexpected/out-of-range/unhandled/wrong-type "&m" (whatever it is) is making it blow up?
Also, mixing absolutely SUCKS on 0.12.1.1... It was hauling ass on 0.12.1.0. I was unaware that anything relating to mixing was altered; just reporting the observation.
Also, an interesting observation... Even with two clients on two different machines sitting right next to each other mixing... You'd expect you might be polluting the mix by being 2 out of the 3 needed parties. I've been watching them sitting next to me as I work for a week now. I've never seen them join in on the same mix. So, it might actually not hurt anything to split your stack and denom/mix in multiple clients to speed up the process.
> Dash 0.12.1.2

Syncing now works quite nicely and fast - nice job, devs!
> Is there a guide on how to update sentinel? Just redo the 'git clone https://github.com/dashpay/sentinel.git' inside of .dashcore?

cd .dashcore/sentinel && git pull
> cd .dashcore/sentinel && git pull

Does this 'pull' overwrite sentinel.conf?
Also make sure sentinel runs every minute, i.e. there are "5 stars" in cron - see https://github.com/dashpay/sentinel#3-set-up-cron
> cd .dashcore/sentinel && git pull
> Also make sure sentinel runs every minute, i.e. there are "5 stars" in cron - see https://github.com/dashpay/sentinel#3-set-up-cron

Every minute, every two minutes, or not so important?