Closed — shufps closed this issue 5 months ago
Same underlying deadlock in the DRR scheduler as in #936:
```
goroutine 8456281 [sync.RWMutex.RLock, 1150 minutes]:
sync.runtime_SemacquireRWMutexR(0xc00048bb08?, 0xa0?, 0xc0004ef560?)
	/usr/local/go/src/runtime/sema.go:82 +0x25
sync.(*RWMutex).RLock(...)
	/usr/local/go/src/sync/rwmutex.go:70
github.com/iotaledger/iota-core/pkg/protocol/engine/congestioncontrol/scheduler/drr.(*Scheduler).ReadyBlocksCount(0xc000346fa0)
```
We have two nodes that crashed due to out-of-memory.
It seems they started to log this error message:
About 50k times per hour.
Memory usage inflated at the same time:
We have a log file when it started: faucet.h.iota2-alphanet_2024-04-24-09.log
Unfortunately it happened at night, so we have no memory profile of this node.
But we have a profile of another node that started at the same time and "recovered" later on (while memory usage is still high):
pprof.validator-2_20240425-075134_all.zip
Maybe it shows something :see_no_evil: