
Out of memory: Kill process ... (dashd)

Discussion in 'Daemon and QT Wallet Support' started by tamias, Mar 9, 2018.

  1. tamias

    tamias New Member

    Joined:
    Dec 9, 2017
    Messages:
    16
    Likes Received:
    0
    Trophy Points:
    1
    Hi everyone!
    I launched a Vultr server with 1 GB of memory and a 25 GB SSD on Ubuntu 17.0,
    and started a Dash masternode (Dash Core 0.12.2.3) a month ago.
    During this time, the masternode stopped twice because the dashd process ran out of memory.
    Is there a solution?
    Maybe switch to Ubuntu 14 or 16, or increase the memory to 2 GB?

    Look at my log file:

    Code:
    Mar  2 11:51:35 vultr kernel: [354600.088593] dash-ps invoked oom-killer: gfp_mask=0x14201ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD), nodemask=(null),  order=0, oom_score_adj=0
    Mar  2 11:51:35 vultr kernel: [354600.088639] dash-ps cpuset=/ mems_allowed=0
    Mar  2 11:51:35 vultr kernel: [354600.088680] CPU: 0 PID: 1756 Comm: dash-ps Not tainted 4.13.0-25-generic #29-Ubuntu
    Mar  2 11:51:35 vultr kernel: [354600.088682] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.0-0-g63451fca13-prebuilt.qemu-project.org 04/01/2014
    Mar  2 11:51:35 vultr kernel: [354600.088687] Call Trace:
    Mar  2 11:51:35 vultr kernel: [354600.088759]  dump_stack+0x63/0x8b
    Mar  2 11:51:35 vultr kernel: [354600.088781]  dump_header+0x97/0x225
    Mar  2 11:51:35 vultr kernel: [354600.088791]  ? do_try_to_free_pages+0x2c2/0x350
    Mar  2 11:51:35 vultr kernel: [354600.088809]  ? security_capable_noaudit+0x45/0x60
    Mar  2 11:51:35 vultr kernel: [354600.088812]  oom_kill_process+0x20b/0x410
    Mar  2 11:51:35 vultr kernel: [354600.088814]  out_of_memory+0x2a9/0x4d0
    Mar  2 11:51:35 vultr kernel: [354600.088817]  __alloc_pages_slowpath+0xa64/0xe30
    Mar  2 11:51:35 vultr kernel: [354600.088820]  ? get_page_from_freelist+0xa3/0xb20
    Mar  2 11:51:35 vultr kernel: [354600.088823]  __alloc_pages_nodemask+0x25d/0x280
    Mar  2 11:51:35 vultr kernel: [354600.088833]  alloc_pages_current+0x6a/0xe0
    Mar  2 11:51:35 vultr kernel: [354600.088841]  __page_cache_alloc+0x86/0x90
    Mar  2 11:51:35 vultr kernel: [354600.088843]  filemap_fault+0x214/0x5e0
    Mar  2 11:51:35 vultr kernel: [354600.088847]  ? page_add_file_rmap+0x134/0x180
    Mar  2 11:51:35 vultr kernel: [354600.088853]  ? alloc_set_pte+0x470/0x530
    Mar  2 11:51:35 vultr kernel: [354600.088855]  ? filemap_map_pages+0x220/0x320
    Mar  2 11:51:35 vultr kernel: [354600.088868]  ext4_filemap_fault+0x31/0x50
    Mar  2 11:51:35 vultr kernel: [354600.088870]  __do_fault+0x1e/0xb0
    Mar  2 11:51:35 vultr kernel: [354600.088872]  __handle_mm_fault+0xba7/0x1020
    Mar  2 11:51:35 vultr kernel: [354600.088875]  handle_mm_fault+0xb1/0x200
    Mar  2 11:51:35 vultr kernel: [354600.088888]  __do_page_fault+0x24d/0x4d0
    Mar  2 11:51:35 vultr kernel: [354600.088891]  do_page_fault+0x22/0x30
    Mar  2 11:51:35 vultr kernel: [354600.088897]  ? async_page_fault+0x36/0x60
    Mar  2 11:51:35 vultr kernel: [354600.088905]  do_async_page_fault+0x51/0x80
    Mar  2 11:51:35 vultr kernel: [354600.088908]  async_page_fault+0x4c/0x60
    Mar  2 11:51:35 vultr kernel: [354600.088916] RIP: 0033:0x56508e3d6080
    Mar  2 11:51:35 vultr kernel: [354600.088917] RSP: 002b:00007fdbab7fdc58 EFLAGS: 00010246
    Mar  2 11:51:35 vultr kernel: [354600.088922] RAX: 0000000000000000 RBX: 000056509b45b4b8 RCX: 00007fdb980018f0
    Mar  2 11:51:35 vultr kernel: [354600.088923] RDX: 0000000000000000 RSI: 00007fdbab7fddc0 RDI: 00007fdb81a5da30
    Mar  2 11:51:35 vultr kernel: [354600.088923] RBP: 000056509b45b5d8 R08: 00007fdb9ac12950 R09: 00007fdbab7fd707
    Mar  2 11:51:35 vultr kernel: [354600.088924] R10: cccccccccccccccd R11: 0000000000000001 R12: 00007fdbab7fddc0
    Mar  2 11:51:35 vultr kernel: [354600.088925] R13: 00007fdbab7fdc80 R14: 000056509b45b4c8 R15: 000056508e981031
    Mar  2 11:51:35 vultr kernel: [354600.088930] Mem-Info:
    Mar  2 11:51:35 vultr kernel: [354600.088937] active_anon:217191 inactive_anon:2598 isolated_anon:0
    Mar  2 11:51:35 vultr kernel: [354600.088937]  active_file:41 inactive_file:69 isolated_file:0
    Mar  2 11:51:35 vultr kernel: [354600.088937]  unevictable:1411 dirty:0 writeback:0 unstable:0
    Mar  2 11:51:35 vultr kernel: [354600.088937]  slab_reclaimable:3620 slab_unreclaimable:5950
    Mar  2 11:51:35 vultr kernel: [354600.088937]  mapped:1424 shmem:2687 pagetables:1311 bounce:0
    Mar  2 11:51:35 vultr kernel: [354600.088937]  free:12178 free_pcp:103 free_cma:0
    Mar  2 11:51:35 vultr kernel: [354600.088944] Node 0 active_anon:868764kB inactive_anon:10392kB active_file:164kB inactive_file:276kB unevictable:5644kB isolated(anon):0kB isolated(file):0kB mapped:5696kB dirty:0kB writeback:0kB shmem:10748kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
    Mar  2 11:51:35 vultr kernel: [354600.088955] Node 0 DMA free:4416kB min:748kB low:932kB high:1116kB active_anon:7472kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB kernel_stack:0kB pagetables:8kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
    Mar  2 11:51:35 vultr kernel: [354600.088965] lowmem_reserve[]: 0 917 917 917 917
    Mar  2 11:51:35 vultr kernel: [354600.088968] Node 0 DMA32 free:44296kB min:44304kB low:55380kB high:66456kB active_anon:861292kB inactive_anon:10392kB active_file:164kB inactive_file:276kB unevictable:5644kB writepending:0kB present:1032048kB managed:993400kB mlocked:5644kB kernel_stack:1936kB pagetables:5236kB bounce:0kB free_pcp:412kB local_pcp:412kB free_cma:0kB
    Mar  2 11:51:35 vultr kernel: [354600.088973] lowmem_reserve[]: 0 0 0 0 0
    Mar  2 11:51:35 vultr kernel: [354600.088976] Node 0 DMA: 12*4kB (UE) 10*8kB (UME) 26*16kB (UM) 41*32kB (UME) 12*64kB (UME) 8*128kB (ME) 3*256kB (UME) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 4416kB
    Mar  2 11:51:35 vultr kernel: [354600.088987] Node 0 DMA32: 368*4kB (UMEH) 459*8kB (UMEH) 297*16kB (UME) 157*32kB (MEH) 75*64kB (UMEH) 40*128kB (UME) 34*256kB (UMEH) 19*512kB (UME) 1*1024kB (M) 0*2048kB 0*4096kB = 44296kB
    Mar  2 11:51:35 vultr kernel: [354600.089014] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
    Mar  2 11:51:35 vultr kernel: [354600.089015] 3793 total pagecache pages
    Mar  2 11:51:35 vultr kernel: [354600.089021] 0 pages in swap cache
    Mar  2 11:51:35 vultr kernel: [354600.089022] Swap cache stats: add 0, delete 0, find 0/0
    Mar  2 11:51:35 vultr kernel: [354600.089023] Free swap  = 0kB
    Mar  2 11:51:35 vultr kernel: [354600.089023] Total swap = 0kB
    Mar  2 11:51:35 vultr kernel: [354600.089024] 262010 pages RAM
    Mar  2 11:51:35 vultr kernel: [354600.089025] 0 pages HighMem/MovableOnly
    Mar  2 11:51:35 vultr kernel: [354600.089025] 9683 pages reserved
    Mar  2 11:51:35 vultr kernel: [354600.089026] 0 pages cma reserved
    Mar  2 11:51:35 vultr kernel: [354600.089026] 0 pages hwpoisoned
    Mar  2 11:51:35 vultr kernel: [354600.089027] [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
    Mar  2 11:51:35 vultr kernel: [354600.089036] [  371]     0   371    18994      730      33       3        0             0 systemd-journal
    Mar  2 11:51:35 vultr kernel: [354600.089039] [  388]     0   388    24310      265      16       3        0             0 lvmetad
    Mar  2 11:51:35 vultr kernel: [354600.089048] [  391]     0   391    11264      815      22       3        0         -1000 systemd-udevd
    Mar  2 11:51:35 vultr kernel: [354600.089050] [  457]   100   457    35813      312      41       3        0             0 systemd-timesyn
    Mar  2 11:51:35 vultr kernel: [354600.089051] [  470]   101   470    18619      171      37       3        0             0 systemd-network
    Mar  2 11:51:35 vultr kernel: [354600.089053] [  550]     0   550     7064      451      19       3        0             0 atd
    Mar  2 11:51:35 vultr kernel: [354600.089055] [  553]     0   553   158987      562      32       4        0             0 lxcfs
    Mar  2 11:51:35 vultr kernel: [354600.089057] [  554]   104   554    64134      397      27       3        0             0 rsyslogd
    Mar  2 11:51:35 vultr kernel: [354600.089059] [  556]     0   556     7884      541      21       3        0             0 cron
    Mar  2 11:51:35 vultr kernel: [354600.089061] [  557]   102   557    16428      594      36       3        0             0 systemd-resolve
    Mar  2 11:51:35 vultr kernel: [354600.089063] [  559]     0   559    16406      225      36       3        0             0 systemd-logind
    Mar  2 11:51:35 vultr kernel: [354600.089064] [  561]     0   561    71905      210      43       3        0             0 accounts-daemon
    Mar  2 11:51:35 vultr kernel: [354600.089066] [  563]   105   563    11895      213      28       3        0          -900 dbus-daemon
    Mar  2 11:51:35 vultr kernel: [354600.089068] [  574]     0   574   117965     2036      33       5        0          -900 snapd
    Mar  2 11:51:35 vultr kernel: [354600.089070] [  575]     0   575    72324      358      44       4        0             0 polkitd
    Mar  2 11:51:35 vultr kernel: [354600.089072] [  780]     0   780     6321       59      17       3        0             0 iscsid
    Mar  2 11:51:35 vultr kernel: [354600.089074] [  781]     0   781     6447     1295      19       3        0           -17 iscsid
    Mar  2 11:51:35 vultr kernel: [354600.089083] [  789]     0   789    18034      567      39       3        0         -1000 sshd
    Mar  2 11:51:35 vultr kernel: [354600.089085] [  984]     0   984     4104      366      13       3        0             0 agetty
    Mar  2 11:51:35 vultr kernel: [354600.089087] [ 1599]     0  1599    17881      687      38       3        0             0 systemd
    Mar  2 11:51:35 vultr kernel: [354600.089089] [ 1600]     0  1600    47559      503      56       3        0             0 (sd-pam)
    Mar  2 11:51:35 vultr kernel: [354600.089091] [ 1727]     0  1727   426284   203653     566       5        0             0 dashd
    Mar  2 11:51:35 vultr kernel: [354600.089094] [29213]     0 29213     1152      206       8       3        0             0 apt.systemd.dai
    Mar  2 11:51:35 vultr kernel: [354600.089096] [29218]     0 29218     1152      396       8       3        0             0 apt.systemd.dai
    Mar  2 11:51:35 vultr kernel: [354600.089101] [29226]     0 29226    34156     7754      40       3        0             0 apt-get
    Mar  2 11:51:35 vultr kernel: [354600.089102] Out of memory: Kill process 1727 (dashd) score 785 or sacrifice child
    Mar  2 11:51:35 vultr kernel: [354600.089381] Killed process 1727 (dashd) total-vm:1705136kB, anon-rss:814612kB, file-rss:0kB, shmem-rss:0kB
    Mar  2 11:51:35 vultr kernel: [354600.239352] oom_reaper: reaped process 1727 (dashd), now anon-rss:332kB, file-rss:0kB, shmem-rss:0kB
    Mar  2 11:51:35 vultr systemd[1]: Stopping User Manager for UID 0...
    Mar  2 11:51:35 vultr systemd[1599]: Stopped target Default.
    Mar  2 11:51:35 vultr systemd[1599]: Stopped target Basic System.
    Mar  2 11:51:35 vultr systemd[1599]: Stopped target Paths.
    Mar  2 11:51:35 vultr systemd[1599]: Stopped target Timers.
    Mar  2 11:51:35 vultr systemd[1599]: Stopped target Sockets.
    Mar  2 11:51:35 vultr systemd[1599]: Reached target Shutdown.
    Mar  2 11:51:35 vultr systemd[1599]: Starting Exit the Session...
    Mar  2 11:51:35 vultr systemd[1599]: Received SIGRTMIN+24 from PID 29232 (kill).
    Mar  2 11:51:35 vultr systemd[1]: Stopped User Manager for UID 0.
    Mar  2 11:51:35 vultr systemd[1]: Removed slice User Slice of root.
    I'm browsing forums and they say:

    But in my case, total-vm: 1705136kB is 1.7 GB of virtual memory, and the real memory, anon-rss: 814612kB, is 0.8 GB!!
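    As a quick sanity check of those figures (the log reports kB; dividing by 1048576 gives GiB, so the binary-unit numbers come out a little lower than the decimal estimates):

```shell
# Sanity-check the OOM-killer figures quoted above (values in kB).
# 1 GiB = 1048576 kB, so the binary-unit sizes are a little lower
# than the decimal "1.7 GB / 0.8 GB" estimates.
awk 'BEGIN {
  total_vm_kb = 1705136   # total-vm from the kill message
  anon_rss_kb = 814612    # anon-rss from the kill message
  kb_per_gib  = 1048576
  printf "total-vm  = %.2f GiB\n", total_vm_kb / kb_per_gib
  printf "anon-rss  = %.2f GiB\n", anon_rss_kb / kb_per_gib
}'
```

    Either way, dashd's resident set alone was most of the 1 GB of RAM, so the OOM kill is expected.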

    I will be glad of any advice!
     
    #1 tamias, Mar 9, 2018
    Last edited: Mar 9, 2018
  2. tungfa

    tungfa Administrator
    Dash Core Team · Foundation Member · Masternode Owner/Operator · Moderator

    Joined:
    Apr 9, 2014
    Messages:
    8,964
    Likes Received:
    6,736
    Trophy Points:
    1,283
  3. tamias

    Thank you! I'll wait for the payment and then update the server. I have already lost two payments. After that, I connected monitoring from DashCentral, and its script restored operation yesterday after the third dashd crash.
     
  4. AjM

    AjM Well-known Member
    Foundation Member

    Joined:
    Jun 23, 2014
    Messages:
    1,334
    Likes Received:
    571
    Trophy Points:
    283
  5. tamias

    Oh! I had left this line at the default:
    maxconnections=256
    Now changed to maxconnections=64.
    I'll see!
    Do I need to restart dashd?
     
  6. splawik21

    splawik21 Grizzled Member
    Dash Core Team · Foundation Member · Dash Support Group · Moderator

    Joined:
    Apr 8, 2014
    Messages:
    1,916
    Likes Received:
    1,273
    Trophy Points:
    1,283
    Yes, restart the daemon so it loads the new config.
    In the next Dash update, maxconnections=128 will be the minimum and will be obligatory. Also be sure to get 2 GB of RAM plus some swap. Lastly, set up the autorestart script so you don't lose your payments anymore.
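    For reference, a minimal sketch of the change being discussed, assuming the default config location for a root install (`~/.dashcore/dash.conf`; adjust if yours differs):

```
# ~/.dashcore/dash.conf  (assumed default path)
# Fewer peer connections means less memory held by dashd:
maxconnections=64

# Then restart the daemon so it reloads the config:
#   dash-cli stop
#   dashd
```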
     
  7. tamias

    Many thanks for the detailed answer!
    On the Vultr server I can increase the memory and disk space without reinstalling the system; only a hard reset is needed.
    I have also prepared another server with Ubuntu 14.0, set up following the Dashman instructions with the "monit" script. The script relaunches dashd in both cases: after a hard reset and after a dashd crash.
    Or I could just increase the memory on the existing server with the DashCentral monitoring and script. I have not decided yet which is safer.
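    The autorestart idea mentioned above can be as simple as a cron entry; a sketch (the dashd path is an assumption — point it at your actual binary; Dashman's monit setup does the same thing more robustly):

```
# crontab -e
# Every 5 minutes: if no dashd process is running, start one again.
*/5 * * * * pidof dashd >/dev/null || /usr/local/bin/dashd -daemon
```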
     
  8. strophy

    strophy Administrator
    Dash Core Team · Dash Support Group · Moderator

    Joined:
    Feb 13, 2016
    Messages:
    700
    Likes Received:
    397
    Trophy Points:
    133
    Just increase the memory, you're going to need it very soon anyway. $5 more per month is worth it when you consider the opportunity cost of the reward you are missing out on.
     
    • Agree x 1
  9. tamias

    Yes! Of course!
     
  10. tamias

    Now my payment queue rank is 180/4751, and it has not changed for a few days (176-181, depending on the total number of masternodes). Active duration: 6d16h49m55s (info from dashninja).
    Why doesn't my queue position decrease?
     
  11. tamias

    I think this is incorrect information, based only on the Last payment field. For example, a node with its last payout on 2017-12-19 has a payment queue rank of 162/4752.
     
  12. solo7861

    solo7861 New Member

    Joined:
    Aug 3, 2014
    Messages:
    34
    Likes Received:
    15
    Trophy Points:
    8
    Hmm, do you have any swap space set up yet? Even if you only have 1 GB of RAM, you can still set up 2 GB of swap space and use that until you want to upgrade your server:

    https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-16-04

    You can follow the guide above, or I can assist in setting it up for you, provided you get access etc. sorted.
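    The linked guide boils down to a few commands; a sketch for a 2 GB swap file (run as root; this permanently reserves 2 GB of disk, and the /swapfile name follows the guide):

```
# Allocate, secure, format and enable a 2 GB swap file
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Make it persistent across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab

# Verify -- the Swap line in free -h should now show 2.0G
free -h
```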
     
  13. tamias

    #13 tamias, Mar 10, 2018
    Last edited: Mar 14, 2018
    • Informative x 1
  14. tamias

  15. strophy

  16. tamias

    On the 8th day the same thing happened! Out of memory: Kill process ... (dashd)

    So I migrated to a new server with 2 GB of memory. On the first day of operation, the memory allocation was as follows:
    Code:
    free -h
                 total       used       free     shared    buffers     cached
    Mem:          2.0G       1.2G       751M       380K        49M       564M
    -/+ buffers/cache:       636M       1.3G
    Swap:           0B         0B         0B
    Today (two days later):
    Code:
    @vultr:~$ free -h
                 total       used       free     shared    buffers     cached
    Mem:          2.0G       1.9G        75M       380K        63M       1.1G
    -/+ buffers/cache:       697M       1.3G
    Swap:           0B         0B         0B
    Will there be a new "Kill process ..."?

    Now I have added swap!
    Code:
    @vultr:~$ free -h
                 total       used       free     shared    buffers     cached
    Mem:          2.0G       1.9G        68M       388K        63M       1.1G
    -/+ buffers/cache:       716M       1.3G
    Swap:         2.0G         0B       2.0G
    But the question remains open!
     
    #16 tamias, Mar 14, 2018
    Last edited: Mar 14, 2018
    • Informative x 1
  17. strophy

    You should be fine now that you have swap, but you should expect to need to upgrade your server again for future versions. For reference, here is similar output from a masternode with ~2 months of uptime:

    Code:
                  total        used        free      shared  buff/cache   available
    Mem:           2.0G        729M        128M         20M        1.1G        1.0G
    Swap:          4.0G         62M        3.9G
    
     
    • Informative x 1
  18. tamias

    It's not a problem! I have a 40 GB SSD storage disk:

    Code:
    Filesystem      Size  Used Avail Use% Mounted on
    udev            990M  4.0K  990M   1% /dev
    tmpfs           201M  380K  200M   1% /run
    /dev/vda1        40G   14G   24G  38% /
    none            4.0K     0  4.0K   0% /sys/fs/cgroup
    none            5.0M     0  5.0M   0% /run/lock
    none           1001M     0 1001M   0% /run/shm
    none            100M     0  100M   0% /run/user
    Thank you!
     
    • Informative x 1