
Dash Core v20.1.0 Release Announcement

Posted by: Pasta (Dash Core Group)

Thank you for this update.

I am currently in the middle of testing / verifying a possible issue with the dashd process memory reserve on Dash Mainnet through Docker / Dashmate (Evonodes), where the reserve just keeps increasing over time (currently at 78% of total RAM).
I am curious at what point the dashd process memory reserve will stabilize and show no further increase (basically I am trying to find out the Docker upper limit for the dashd process memory reserve).

Link : https://github.com/dashpay/dash/issues/5872
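
For anyone who wants to check the same thing on their own node: as far as I know, Docker only enforces a memory limit if one has been explicitly configured on the container, which can be verified with standard Docker commands along these lines (the container name below is only an example; look up the real one with docker ps):

# List running containers to find the name of the core container (naming varies per Dashmate setup)
docker ps --format '{{.Names}}'

# Show the configured memory limit in bytes; 0 means no limit is set
docker inspect --format '{{.HostConfig.Memory}}' dashmate_mainnet_core

# Show current memory usage versus the limit as Docker sees it
docker stats --no-stream dashmate_mainnet_core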

Once that testing has concluded one way or another (maybe it will crash my servers once the dashd process memory reserve gets high enough, or maybe it will stabilize around a certain percentage), I will upgrade to this minor Dash Core v20.1 release.

Am I correct in assuming that the Platform bug fixes from stress testing at this point do not break Dash Core in such a way that the Platform features currently lying dormant in v20.0.4 could no longer be activated through a future minor Dash Core update (v20.2?), or does that activation require Dash Core v21 due to breaking changes?
 
Once that testing has concluded one way or another (maybe it will crash my servers once the dashd process memory reserve gets high enough, or maybe it will stabilize around a certain percentage), I will upgrade to this minor Dash Core v20.1 release.

I am curious about your outcomes from this testing. In my case I rebooted a server with 41 days of uptime; dashd was at around 12GB VM size, which I would call stable. I will of course monitor RAM usage on v20.1 as well.
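
For reference, the "VM size" I am quoting is the VSZ column from ps; a quick way to see it for dashd (values in kB), next to the resident set size, is something like:

# Show dashd's virtual size (VSZ) and resident set size (RSS) in kB, plus elapsed run time
ps -C dashd -o pid,vsz,rss,etime,cmd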

Am I correct in assuming that the Platform bug fixes from stress testing at this point do not break Dash Core in such a way that the Platform features currently lying dormant in v20.0.4 could no longer be activated through a future minor Dash Core update (v20.2?), or does that activation require Dash Core v21 due to breaking changes?

Yeah, you are; these changes do not impact the activation plan for Platform. However, I think we should push another fork in v21 before Platform goes live, because Platform is still a ways off and we don't want to be in a situation where Core is held back because of tortoise, but QE is not agreeing with this and wants everyone to wait for them. 😖
 
I am currently in the middle of testing / verifying a possible issue with the dashd process memory reserve on Dash Mainnet through Docker / Dashmate (Evonodes), where the reserve just keeps increasing over time (currently at 78% of total RAM).
Might this be something similar to, the same as, or a regression of my old problem:
 
No, there is no memory expansion issue in v20.1,
Maybe not for users of Dash Masternode Zeus (where the dashd process memory reserve increases to 12GB and then stops increasing), but I think it is premature to conclude the same for Dashmate / Docker users.

One of my Evonodes, set up through Dashmate / Docker on Mainnet, is currently already reserving 19.3GB for the dashd process (80% of total RAM!) and is showing no signs of stopping. I have seen nothing in Dash Core v20.1 or Dashmate v0.25.22 that addresses this.

RAM: 24GB
Dash Core: v20.0.4
Dashmate: v0.25.21

At this point I am starting to wonder whether there really is an upper limit set for the dashd process memory, or whether Dashmate / Docker will simply allow the dashd process memory reserve to keep increasing. I have set a trigger to alert me in case the reserve exceeds 85% of system RAM, as to me that would indicate a serious memory issue and would most likely force me to reboot the server every few months to stay on the safe side.
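
If it turns out that no limit is enforced, one option I am considering (untested on a live Evonode, and the container name is only an example) is to set a hard cap on the container myself, keeping in mind that Docker would OOM-kill dashd if that cap is ever actually reached:

# Cap the core container at 16GB of RAM (and the same for RAM + swap); adjust the name and size to your setup
docker update --memory 16g --memory-swap 16g dashmate_mainnet_core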
 
Maybe not for users of Dash Masternode Zeus (where the dashd process memory reserve increases to 12GB and then stops increasing), but I think it is premature to conclude the same for Dashmate / Docker users.

One of my Evonodes, set up through Dashmate / Docker on Mainnet, is currently already reserving 19.3GB for the dashd process (80% of total RAM!) and is showing no signs of stopping. I have seen nothing in Dash Core v20.1 or Dashmate v0.25.22 that addresses this.


RAM: 24GB
Dash Core: v20.0.4
Dashmate: v0.25.21

At this point I am starting to wonder whether there really is an upper limit set for the dashd process memory, or whether Dashmate / Docker will simply allow the dashd process memory reserve to keep increasing. I have set a trigger to alert me in case the reserve exceeds 85% of system RAM, as to me that would indicate a serious memory issue and would most likely force me to reboot the server every few months to stay on the safe side.


Hey qwizzie, I had not seriously considered that Docker/Dashmate could be a factor in the memory overrun. I hope you are collecting per-process statistics for dashd this time, so we can analyze it together.

Here I have a cluster of three evonodes after running for 5 days on v20.1, and the VM size is stable at around 12GB.
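
In case it helps with collecting those per-process statistics: a simple cron entry along these lines (the log path is just a placeholder) would record dashd's VSZ and RSS every hour, so we can compare curves later:

# Log a timestamp plus dashd's VSZ and RSS (in kB) once per hour
0 * * * * echo "$(date -Is) $(ps -C dashd -o vsz=,rss=)" >> /var/log/dashd-mem.log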

 
Hey qwizzie, I had not seriously considered that Docker/Dashmate could be a factor in the memory overrun. I hope you are collecting per-process statistics for dashd this time, so we can analyze it together.

Here I have a cluster of three evonodes after running for 5 days on v20.1, and the VM size is stable at around 12GB.


Source : https://github.com/dashpay/dash/issues/5872#issuecomment-1952300731

I made some changes a few days ago to the above, so that dashd Process Memory Reserve (a rename of process_memory_usage) is more clearly split from Memory Usage and is also shown as a percentage of total system RAM, together with a trigger that alerts when the reserve exceeds 85%...

# Function to get overall memory usage from 'free'
get_memory_usage() {
    memory_info=$(free -m || true)
    if [ -n "$memory_info" ]; then
        memory_usage=$(echo "$memory_info" | awk '/Mem/ {print $3}')
        memory_usage_gb=$(echo "scale=2; $memory_usage / 1024" | bc)
        echo "Memory Usage: ${memory_usage_gb}GB"
    else
        echo "Error: Unable to retrieve memory usage."
    fi
}

# Function to get the dashd process memory reserve (renamed from process_memory_usage),
# measured as the summed VSZ of all running dashd processes
get_process_memory_reserve() {
    process_memory_reserve=$(ps -o pid,vsz,cmd -C dashd | awk 'NR>1 {sum += $2} END {print sum}')
    if [ -z "$process_memory_reserve" ]; then
        echo "Error: no running dashd process found."
        return 1
    fi
    process_memory_reserve_gb=$(echo "scale=2; $process_memory_reserve / (1024 * 1024)" | bc)
    echo "Process Memory Reserve: ${process_memory_reserve_gb}GB"

    # Calculate the reserve as a percentage of total system RAM
    total_ram_gb=24 # Total RAM in GB
    process_memory_reserve_percentage=$(echo "scale=2; $process_memory_reserve_gb / $total_ram_gb * 100" | bc)
    echo "Percentage: ${process_memory_reserve_percentage}%"

    # Alert if the dashd Process Memory Reserve exceeds 85% of total RAM
    if (( $(echo "$process_memory_reserve_percentage > 85" | bc -l) )); then
        echo -e "\e[1;31mAlert: dashd Process Memory Reserve exceeds 85%!\e[0m"
        echo -e "\e[1;31mAlert: dashd Process Memory Reserve exceeds 85%!\e[0m" >> hardware.log
        # Additional actions could be added here, such as sending an email or triggering a notification.
    fi
}
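
One thing I might still change: instead of hardcoding total_ram_gb=24, the total RAM could be read from /proc/meminfo, roughly like this (untested sketch):

# Derive total system RAM in GB from /proc/meminfo (MemTotal is reported in kB)
total_ram_gb=$(awk '/MemTotal/ {printf "%.2f", $2 / 1024 / 1024}' /proc/meminfo)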

So yes, I have been collecting per-process statistics for dashd since the 19th of February 2024.

Evonode 1 (dashd.pid start date: 18th of January 2024)
24GB RAM

OS: Ubuntu 20.04.6 LTS (GNU/Linux 5.4.0-169-generic x86_64)
Dashmate: 0.25.21
Dash Core: 20.0.4

[Screenshot: Evonode 1 dashd memory statistics]

Evonode 2 (dashd.pid start date: 21st of January 2024)
24GB RAM

OS: Ubuntu 20.04.6 LTS (GNU/Linux 5.4.0-169-generic x86_64)
Dashmate: 0.25.21
Dash Core: 20.0.4

[Screenshot: Evonode 2 dashd memory statistics]
 