Closed: Stonygan closed this issue 2 months ago.
@Stonygan: Thanks for opening an issue, it is currently awaiting triage.
The triage/accepted label can be added by foundation members by writing /triage accepted in a comment.
Thank you for this report. Could you please share more details if you have them, such as the config you use and your hardware specs? This is quite interesting, as it seems to indicate some kind of leak; however, this is the first such report.
Note: We are aware of reports of a regression since 2.7.0 that causes slowdowns under very high load, but we haven't established a correlation yet. I'm keen to understand whether these could be related.
Here is a sample configuration of a server that has had the problem over the last few days: full node with 2 cores, 8 GB memory, 160 GB SSD, KVM virtualization, Debian 11 (bullseye). UFW is enabled, and every 5 minutes we run some RPC commands for monitoring.
defi.conf:
daemon=1
testnet=0
[main]
rpcuser=xxx
rpcpassword=xxx
rpcbind=127.0.0.1
rpcport=xxx
gen=0
spv=1
txindex=1
RAM Usage last 30 days:
In the last 2 days memory usage grew from 80% to 97% and defid crashed; no error was reported in debug.log.
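To catch this kind of growth before the process is killed, memory usage can be sampled periodically and logged. Below is a minimal sketch that reads a process's resident set size from Linux /proc; the pid argument, sampling interval, and sample count are illustrative assumptions, not part of the original report:

```python
import os
import re
import sys
import time


def rss_kib(pid: int) -> int:
    """Return the resident set size (VmRSS) of `pid` in KiB, read from Linux /proc."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(re.search(r"(\d+)", line).group(1))
    raise RuntimeError(f"VmRSS not found for pid {pid}")


def sample(pid: int, interval_s: int = 300, count: int = 12) -> list[int]:
    """Take `count` RSS samples, `interval_s` seconds apart (300 s matches the
    5-minute monitoring cadence mentioned above), and return them in KiB."""
    samples = []
    for _ in range(count):
        samples.append(rss_kib(pid))
        time.sleep(interval_s)
    return samples


if __name__ == "__main__":
    # Pass defid's pid (e.g. from `pidof defid`); falls back to our own pid for a demo.
    pid = int(sys.argv[1]) if len(sys.argv) > 1 else os.getpid()
    print(f"pid {pid}: VmRSS = {rss_kib(pid)} KiB")
```

A steadily increasing series of samples with no plateau would support the leak hypothesis.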
I've been seeing similar behavior. defid did not crash (or at least I did not notice it), but memory consumption goes up over time.
My machine:
defi.conf:
rpcuser=****
rpcpassword=****
daemon=1
gen=1
spv=1
Maybe I can manage to get some anonymized data from the Master Node Health users, especially their memory usage over time.
My machine gets a RAM warning (>87% used) nearly every day.
daemon=1
gen=1
spv=1
masternode_operator=XXXXXX
addnode=seed.mydeficha.in:8555
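The RAM warning mentioned above can be reproduced with a small check against overall system memory. This is a sketch only; the 87% threshold comes from the warning described here, and reading /proc/meminfo is Linux-specific:

```python
def mem_used_percent() -> float:
    """Percentage of RAM in use, computed from Linux /proc/meminfo
    as (MemTotal - MemAvailable) / MemTotal."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # values are reported in KiB
    return 100.0 * (info["MemTotal"] - info["MemAvailable"]) / info["MemTotal"]


if __name__ == "__main__":
    pct = mem_used_percent()
    print(f"RAM used: {pct:.1f}%")
    if pct > 87.0:  # threshold from the warning described above
        print("WARNING: RAM usage above 87%")
```

Run from cron every few minutes, this gives the same kind of alert without any external monitoring stack.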
Closing outdated / stale issues.
What happened:
After a few days of running, defid drives our server's memory usage to almost 100%. Only a restart of defid fixes the problem. No errors appear in debug.log.
What you expected to happen:
Memory should be freed over time; usage should stay stable instead of growing until the process crashes.
How to reproduce it (as minimally and precisely as possible):
Run defid with nothing else on the machine; memory usage grows on its own over a few days.
What are your environment parameters:
Servers with and without masternode operators are affected. We use txindex=1 and spv=1, on Debian 11 (bullseye) and Ubuntu 20.04.4 (focal). It makes no difference whether the machine has 8 or 16 GB of memory.
Anything else we need to know?:
We heard from many other users (e.g. Masternode Health Monitor Servers) that they have the same problem.