dashpay / dash

Memory leak? #5012

Closed. jerrybb closed this issue 2 years ago.

jerrybb commented 2 years ago

Hi, I am running Dash 18.0.1 (x64) in a Docker container. The dashd process consumes all available RAM and swap.

The VM running the Docker engine currently has 16 GB RAM and 8 GB of swap space (I have been increasing the resources from 4 GB upward).

Sync now stalls while processing blocks from 2022-02:

```text
2022-09-13T08:50:09Z UpdateTip: new best=00000000000000041420fc06922275fb994daad19f6db0cf99329e734e49f78c height=1623047 version=0x20000000 log2_work=78.754471 tx=40941988 date='2022-02-16T10:41:50Z' progress=0.925776 cache=282.5MiB(1354768txo) evodb_cache=447.3MiB
2022-09-13T08:50:41Z ThreadSocketHandler -- removing node: peer=75 nRefCount=1 fInbound=0 m_masternode_connection=0 m_masternode_iqr_connection=0
2022-09-13T08:52:23Z UpdateTip: new best=0000000000000012debb3c66b970651ed3f0f0d214ee038ee5476162c763d064 height=1623048 version=0x20000000 log2_work=78.754473 tx=40941995 date='2022-02-16T10:42:11Z' progress=0.925776 cache=282.5MiB(1354771txo) evodb_cache=447.3MiB
2022-09-13T08:53:07Z ThreadSocketHandler -- removing node: peer=76 nRefCount=1 fInbound=0 m_masternode_connection=0 m_masternode_iqr_connection=0
Killed
```

Sync started today at 5:30 and RAM consumption was normal at first; at some point it began to increase dramatically. Swap usage:

[screenshot: swap usage graph]

RAM usage:

[screenshot: RAM usage graph]

The config file is empty. Here are the CLI arguments I am using:

```sh
/usr/local/bin/dashd -addressindex=1 -zapwallettxes -minrelaytxfee=0.00001 \
    -server=1 -listen=1 -maxmempool=800 \
    -rpcallowip=127.0.0.1 -rpcallowip=128.0.0.1 -rpcport=9998 \
    -rpcuser=user -rpcpassword=password -rpcbind=0.0.0.0:9998 \
    -onlynet=IPV4 -dbcache=2048 -rpcthreads=4 -rpcworkqueue=32
```
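For reference, the same options can live in dash.conf instead of the command line. Below is a sketch, assuming the standard dashd convention that each -opt=val argument maps to an opt=val line in the config file, and a bare flag such as -zapwallettxes becomes opt=1:

```ini
# Sketch: dash.conf equivalent of the CLI flags above
# (assumes the usual dashd CLI-to-config mapping; not taken from this thread)
addressindex=1
zapwallettxes=1
minrelaytxfee=0.00001
server=1
listen=1
maxmempool=800
# rpcallowip may be repeated, one line per allowed address
rpcallowip=127.0.0.1
rpcallowip=128.0.0.1
rpcport=9998
rpcuser=user
rpcpassword=password
rpcbind=0.0.0.0:9998
onlynet=IPV4
dbcache=2048
rpcthreads=4
rpcworkqueue=32
```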

The same issue happens even if I omit -dbcache=2048 -rpcthreads=4 -rpcworkqueue=32 from the CLI.

Is this a known issue?

thephez commented 2 years ago

Not sure, but this may be related to https://github.com/dashpay/dash/pull/5007

UdjinM6 commented 2 years ago

Yes, we are trying to figure out why LevelDB behaves like this. So far the best solution (imo) is to limit the evodb cache, i.e. #5007. You can achieve similar results with 18.0.1 by specifying very low -dbcache and -maxmempool params, e.g. -dbcache=64 -maxmempool=64. This might slow things down a bit, but it should keep memory usage under control, and you should be able to sync without waiting for 18.0.2.
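Applied to the invocation posted above, the suggested caps would look something like this (a sketch assembled from the flags already in this thread, not a command posted verbatim):

```sh
# Sketch: the reporter's original flags with the two memory-related
# parameters lowered to the values suggested above
/usr/local/bin/dashd -addressindex=1 -zapwallettxes -minrelaytxfee=0.00001 \
    -server=1 -listen=1 -maxmempool=64 \
    -rpcallowip=127.0.0.1 -rpcallowip=128.0.0.1 -rpcport=9998 \
    -rpcuser=user -rpcpassword=password -rpcbind=0.0.0.0:9998 \
    -onlynet=IPV4 -dbcache=64 -rpcthreads=4 -rpcworkqueue=32
```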

jerrybb commented 2 years ago

Indeed, this workaround solved the problem. Thank you very much for the help.