
Version 12.1 Release

The crash is actually this failed assert
Code:
dash-qt: /home/ubuntu/build/dash/depends/x86_64-unknown-linux-gnu/share/../include/boost/thread/pthread/recursive_mutex.hpp:113: void boost::recursive_mutex::lock(): Assertion `!pthread_mutex_lock(&m)' failed.
and I'm not quite sure how that's possible unless it's a lib bug...

https://en.wikipedia.org/wiki/Mutual_exclusion

http://stackoverflow.com/questions/...-mutual-exclusion-among-more-than-two-threads

http://stackoverflow.com/questions/187761/recursive-lock-mutex-vs-non-recursive-lock-mutex

http://antonym.org/2012/02/threading-with-boost-part-iii-mutexes.html

Recursive Mutexes
Normally a mutex is locked only once, then unlocked. Depending on the structure of your application, there may be times when it would be useful to be able to lock a mutex multiple times on the one thread (in very special circumstances, such as nested method calls).

For example, you have two (or more) methods which may be called independently, and another method which itself calls one of the other two. If they all need the mutex held to function safely, this can cause complications when determining when the mutex needs to be locked and released. However, by using a recursive mutex, it can be locked as many times as necessary, provided it is unlocked the same number of times.

Thus all these methods can be called individually, as they all lock the resources, and in a nested fashion, as the mutex can be locked multiple times. Provided the mutex is unlocked the same number of times (which is a matter of care and thought), the mutex will be correctly released by the end of the nested operation.

Somewhere in your code you forgot to unlock.
Start counting how many times you locked and unlocked the recursive mutex.
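To illustrate the counting advice, here's a minimal sketch using std::recursive_mutex (boost::recursive_mutex behaves the same way). The counting wrapper is a hypothetical helper for illustration, not anything from the Dash codebase:

```cpp
#include <cassert>
#include <mutex>

// Illustrative wrapper that tracks how many times the owning thread
// currently holds the mutex, so a lock/unlock imbalance is easy to spot.
struct CountingRecursiveMutex {
    std::recursive_mutex m;
    int depth = 0;

    void lock()   { m.lock(); ++depth; }
    void unlock() { --depth; m.unlock(); }
};

CountingRecursiveMutex g_mtx;

// A method that needs the lock and may be called on its own...
int inner() {
    g_mtx.lock();              // second acquisition on the same thread is fine
    int d = g_mtx.depth;
    g_mtx.unlock();
    return d;
}

// ...and a method that also needs the lock and calls inner() while holding it.
int outer() {
    g_mtx.lock();              // first acquisition
    int d = inner();           // nested: depth reaches 2 inside inner()
    g_mtx.unlock();            // balanced: depth returns to 0
    return d;
}
```

If depth is nonzero once all the nested calls have returned, some path forgot an unlock.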

My sense is that the interoperability between Python and C++ may cause the problem.
 
My sense is that the interoperability between Python and C++ may cause the problem.

If this is the case, then maybe you have to rewrite the code using condition variables in your threads, wherever your C++ code waits for Python (aka Sentinel) data.
http://antonym.org/2012/02/threading-with-boost-part-v-condition-variables.html
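The pattern suggested above can be sketched with std::condition_variable (the Boost API shown in the linked article is almost identical). This is a minimal illustration, not Dash code; the producer standing in for the Sentinel side is an assumption:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex mtx;
std::condition_variable cv;
bool data_ready = false;
int payload = 0;

// Producer side: stands in for whatever supplies the data the C++ code
// is waiting on (e.g. Sentinel output) -- an assumption for illustration.
void produce(int value) {
    {
        std::lock_guard<std::mutex> lk(mtx);
        payload = value;
        data_ready = true;
    }
    cv.notify_one();  // wake the waiting consumer
}

// Consumer side: blocks until signalled instead of polling or re-locking.
int consume() {
    std::unique_lock<std::mutex> lk(mtx);
    cv.wait(lk, [] { return data_ready; });  // predicate guards spurious wakeups
    data_ready = false;
    return payload;
}

// Hand one value from a producer thread to the waiting consumer.
int demo() {
    std::thread producer([] { produce(42); });
    int v = consume();
    producer.join();
    return v;
}
```

The consumer holds the plain (non-recursive) mutex only while checking the flag, which sidesteps the nested-locking bookkeeping entirely.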

The complexity that often occurs when using pthreads made some blockchain developers switch to node.js. I don't know whether node.js is a better choice, but it seems to be the easier choice. On the other hand, mastering condition variables may give developers the ability to master conditional votes.
 
Fresh 16.04 install using the officially distributed binaries (my Gentoo machines annoy the hell out of me already, I can't be bothered to build something unless I absolutely have to).

Now that I think of it, looks like the exact same thing I reported on 0.12.1.0

I've got it going on two machines... I set them both to 2000/8. The one that keeps crashing is the one that reports overshoot above 2000. The other one is just shy of 2000 and never crashes.

Maybe those two conditions are not actually related, but I figured I'd report it.

Under what conditions is "!pthread_mutex_lock(&m)" used by dash-qt? What's it trying to do when this is called? Maybe an unexpected/out-of-range/unhandled/wrong-type "&m" (whatever it is) is making it blow up?
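For context on the assert itself: pthread_mutex_lock returns 0 on success and a nonzero error code on failure, so assert(!pthread_mutex_lock(&m)) is simply asserting that the lock call succeeded (m is the pthread mutex inside boost::recursive_mutex). Below is a sketch of one way such a call can fail; this is an illustration of the failure mode, not necessarily what dash-qt hit (a destroyed or corrupted mutex returning an error is another possibility):

```cpp
#include <cassert>
#include <cerrno>
#include <pthread.h>

// pthread_mutex_lock returns 0 on success and an error code on failure,
// so assert(!pthread_mutex_lock(&m)) means "the lock call must succeed".
// One documented failure: relocking a non-recursive, error-checking mutex
// on the same thread, which POSIX specifies returns EDEADLK.
int relock_error() {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);

    pthread_mutex_t m;
    pthread_mutex_init(&m, &attr);

    int first = pthread_mutex_lock(&m);   // 0: assert(!first) would pass
    assert(!first);
    int second = pthread_mutex_lock(&m);  // EDEADLK: assert(!second) would
                                          // abort, like the crash reported
    pthread_mutex_unlock(&m);
    pthread_mutex_destroy(&m);
    pthread_mutexattr_destroy(&attr);
    return second;
}
```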

Also, mixing absolutely SUCKS on 0.12.1.1... It was hauling ass on 0.12.1.0. I was unaware that anything relating to mixing had been altered. Just reporting an observation.

Also, an interesting observation... Even with two clients on two different machines sitting right next to each other mixing... You'd expect you might be polluting the mix by being 2 out of the 3 needed parties. I've been watching them sitting next to me as I work for a week now. I've never seen them join in on the same mix. So, it might actually not hurt anything to split your stack and denom/mix in multiple clients to speed up the process.
Let's try it that way...
Open Terminal and execute these commands:
- "ulimit -c unlimited"
- "cat /proc/sys/kernel/core_pattern" - if it says "|/usr/share/apport/apport %p %s %c %P", execute "sudo service apport stop" and check again; it should say "core" now
- run dash-qt from the command line (it's important to run it from the same Terminal where you executed "ulimit ...", unless you are the root user, iirc)

Next time dash-qt crashes it should create a "core" file in the folder you started dash-qt from. This file should give some deep info about the exact place of the crash, so zip it and pass it via some file-sharing service (https://transfer.sh is a nice one).

EDIT:
you can verify that it will create a core file on the next crash by forcing a segfault: open another terminal and execute "killall -11 dash-qt".
 
Next time it dies I'll do that. It's probably something stupid about my machine... If something bizarre to the point of impossible is going to happen, I'm the guy it will happen to...

As a note on mixing...

Today I've used 28 keys. Previously, I was using 3000+ a day. It's also only mixing denoms of 10; 1, 0.1 and 0.01 don't happen anymore.
 


Dash 0.12.1.2
(Not mandatory, but updating is encouraged)
https://www.dash.org/wallets/

Release Notes:
- Dash Core v0.12.1.2
https://github.com/dashpay/dash/releases/tag/v0.12.1.2
- Sentinel v1.0.1
(added internal Scheduler and bypass option)
https://github.com/dashpay/sentinel/tags
 
Does this 'pull' overwrite sentinel.conf?
'pull' does not work:

Code:
remote: Counting objects: 16, done.
remote: Compressing objects: 100% (11/11), done.
remote: Total 16 (delta 4), reused 4 (delta 3), pack-reused 2
Unpacking objects: 100% (16/16), done.
From https://github.com/dashpay/sentinel
 + 69383a6...d51e172 master     -> origin/master  (forced update)
 * [new tag]         v1.0.1     -> v1.0.1

*** Please tell me who you are.

Run

  git config --global user.email "[email protected]"
  git config --global user.name "Your Name"

to set your account's default identity.
Omit --global to set the identity only in this repository.

fatal: empty ident name (for <xxx@xxxxxxx>) not allowed
 
'pull' does not work:
This is because sentinel.conf was modified; to pull, you need to commit your changes, which is why git is asking you to configure who you are. Do it and you will be able to pull after the auto-commit.

For the Sentinel repo:
having sentinel.conf.example in the repo and sentinel.conf in .gitignore would be much better...
 
This is because sentinel.conf was modified; to pull, you need to commit your changes, which is why git is asking you to configure who you are. Do it and you will be able to pull after the auto-commit.

For the Sentinel repo:
having sentinel.conf.example in the repo and sentinel.conf in .gitignore would be much better...
Aha, ok, thanks.
 
cd .dashcore/sentinel && git pull
also make sure sentinel runs every minute, i.e. there are "5 stars" in cron; see https://github.com/dashpay/sentinel#3-set-up-cron

With regards to the cronjob referenced above, the following cronjob did not work on my side, as my masternodes promptly lost connection to sentinel after a few minutes:
* * * * * cd /home/YOURUSERNAME/sentinel && ./venv/bin/python bin/sentinel.py >/dev/null 2>&1

So I put my old cronjob back in (with a little edit from 2 minutes to 1 minute):

*/1 * * * * cd /home/YOURUSERNAME/.dashcore/sentinel && ./venv/bin/python bin/sentinel.py 2>&1 >> sentinel-cron.log

No problem anymore.
The only difference I noticed is the >/dev/null part in the code; should that even be in there?
 
that explains it... and as sentinel is installed there by default, this should be corrected here: https://github.com/dashpay/sentinel#3-set-up-cron
No, sentinel is installed to your home dir by default, if you follow that guide.
There is no 'cd .dashcore' command in it.
I made that error at first, but if you install sentinel into your .dashcore,
it won't work unless you modify sentinel.conf and the database path.
And if you do that, then updating (pull) won't work.

Edit: is this correct @UdjinM6
 