celo-org / celo-blockchain

Official repository for the golang Celo Blockchain
https://celo.org
GNU Lesser General Public License v3.0

Memory Leak when node is acting as proxy #1197

Open lahabana opened 4 years ago

lahabana commented 4 years ago

Expected Behavior

Proxies shouldn't be leaking memory

Current Behavior

Over a long period of time, the proxy's memory utilization grows until it hits an OOM and Geth restarts.

[image: memory utilization of the proxy over time]

We've seen manifestations of this on both Baklava and Mainnet, multiple times; the historical pattern is very clear.

I've taken a memory dump of a node:

Showing nodes accounting for 1.25GB, 95.65% of 1.30GB total
Dropped 258 nodes (cum <= 0.01GB)
Showing top 10 nodes out of 138
      flat  flat%   sum%        cum   cum%
    0.41GB 31.23% 31.23%     0.41GB 31.23%  github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).Get
    0.36GB 27.40% 58.63%     0.45GB 34.89%  github.com/ethereum/go-ethereum/p2p/discv5.(*Network).lookup
    0.21GB 16.19% 74.82%     0.21GB 16.19%  github.com/syndtr/goleveldb/leveldb/memdb.New
    0.10GB  7.50% 82.32%     0.10GB  7.50%  github.com/ethereum/go-ethereum/p2p/discv5.(*Network).reqQueryFindnode
    0.07GB  5.51% 87.83%     0.07GB  5.51%  github.com/deckarep/golang-set.(*threadUnsafeSet).Add
    0.04GB  3.26% 91.09%     0.08GB  6.40%  github.com/ethereum/go-ethereum/eth.(*peer).AsyncSendTransactions
    0.02GB  1.27% 92.36%     0.02GB  1.27%  github.com/ethereum/go-ethereum/p2p/discv5.(*nodeNetGuts).deferQuery
    0.02GB  1.24% 93.60%     0.02GB  1.24%  github.com/syndtr/goleveldb/leveldb/memdb.(*DB).Put
    0.01GB  1.07% 94.67%     0.01GB  1.07%  github.com/VictoriaMetrics/fastcache.(*bucket).Set
    0.01GB  0.98% 95.65%     0.01GB  1.05%  github.com/ethereum/go-ethereum/trie.(*Database).insertPreimage

after restart:

Dropped 99 nodes (cum <= 1.31MB)
Showing top 10 nodes out of 113
      flat  flat%   sum%        cum   cum%
  216.01MB 82.55% 82.55%   216.01MB 82.55%  github.com/syndtr/goleveldb/leveldb/memdb.New
   37.97MB 14.51% 97.06%    37.97MB 14.51%  github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).Get
    0.53MB   0.2% 97.26%     1.53MB  0.59%  github.com/VictoriaMetrics/fastcache.New
         0     0% 97.26%   220.41MB 84.23%  github.com/ethereum/go-ethereum/cmd/utils.RegisterEthService.func2
         0     0% 97.26%   232.94MB 89.02%  github.com/ethereum/go-ethereum/cmd/utils.StartNode
         0     0% 97.26%     2.02MB  0.77%  github.com/ethereum/go-ethereum/consensus/istanbul/backend.(*Backend).RefreshValPeers
         0     0% 97.26%     3.52MB  1.34%  github.com/ethereum/go-ethereum/consensus/istanbul/backend.(*Backend).RegisterPeer
         0     0% 97.26%     3.52MB  1.34%  github.com/ethereum/go-ethereum/consensus/istanbul/backend.(*Backend).getAllVersionCertificates
         0     0% 97.26%     3.53MB  1.35%  github.com/ethereum/go-ethereum/consensus/istanbul/backend.(*Backend).handleValEnodesShareMsg

after ~12h:

File: geth
Type: inuse_space
Time: Oct 28, 2020 at 8:45am (CET)
Showing nodes accounting for 463.78MB, 95.20% of 487.17MB total
Dropped 191 nodes (cum <= 2.44MB)
      flat  flat%   sum%        cum   cum%
  216.01MB 44.34% 44.34%   216.01MB 44.34%  github.com/syndtr/goleveldb/leveldb/memdb.New
  166.95MB 34.27% 78.61%   166.95MB 34.27%  github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).Get
      18MB  3.70% 82.30%       23MB  4.72%  github.com/ethereum/go-ethereum/p2p/discv5.(*Network).lookup
   13.08MB  2.69% 84.99%    13.08MB  2.69%  github.com/syndtr/goleveldb/leveldb/memdb.(*DB).Put
   11.60MB  2.38% 87.37%    11.60MB  2.38%  github.com/deckarep/golang-set.(*threadUnsafeSet).Add
      10MB  2.05% 89.42%    15.40MB  3.16%  github.com/ethereum/go-ethereum/eth.(*peer).AsyncSendTransactions
       5MB  1.03% 90.45%        5MB  1.03%  github.com/ethereum/go-ethereum/trie.(*shortNode).copy
       5MB  1.03% 91.48%        5MB  1.03%  github.com/ethereum/go-ethereum/p2p/discv5.(*Network).reqQueryFindnode
    4.72MB  0.97% 92.44%     4.72MB  0.97%  bytes.makeSlice
    3.80MB  0.78% 93.22%     3.80MB  0.78%  github.com/ethereum/go-ethereum/trie.(*Database).insert
    3.60MB  0.74% 93.96%     5.10MB  1.05%  github.com/ethereum/go-ethereum/trie.(*Database).insertPreimage
       3MB  0.62% 94.58%        3MB  0.62%  github.com/ethereum/go-ethereum/trie.(*hasher).makeHashNode
       1MB  0.21% 94.79%     3.56MB  0.73%  github.com/ethereum/go-ethereum/eth.(*peer).AsyncSendNewBlockHash
    0.51MB   0.1% 94.89%   157.39MB 32.31%  github.com/syndtr/goleveldb/leveldb/table.(*Reader).find
    0.50MB   0.1% 94.99%      157MB 32.23%  github.com/syndtr/goleveldb/leveldb/table.(*Reader).readBlock
    0.50MB   0.1% 95.10%     3.10MB  0.64%  github.com/ethereum/go-ethereum/eth.(*peer).MarkBlock
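For anyone who wants to reproduce these snapshots: they are standard go tool pprof heap profiles. A minimal sketch of grabbing one programmatically, assuming the node exposes the pprof HTTP endpoint (the --pprof flag); 127.0.0.1:6060 below is only the default address/port and may need adjusting to your pprof flags:

```go
// heapdump.go: fetch a heap profile from a running node's pprof endpoint.
// Assumes the node was started with pprof enabled; adjust the address to
// match your own pprof address/port settings.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Get("http://127.0.0.1:6060/debug/pprof/heap")
	if err != nil {
		fmt.Fprintln(os.Stderr, "fetching heap profile:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	out, err := os.Create(fmt.Sprintf("heap-%d.pb.gz", time.Now().Unix()))
	if err != nil {
		fmt.Fprintln(os.Stderr, "creating output file:", err)
		os.Exit(1)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		fmt.Fprintln(os.Stderr, "writing profile:", err)
		os.Exit(1)
	}
	fmt.Println("wrote", out.Name(), "- inspect with: go tool pprof", out.Name())
}
```

Comparing two snapshots with `go tool pprof -base old.pb.gz new.pb.gz` makes the slow growth easier to spot than eyeballing the top-10 lists.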

We can see slow growth in allocations from p2p/discv5.

Looking at a graph of the first heap:

[image: allocation graph of the first heap profile]

It seems something might be keeping a lot of reqQueryFindnode allocations around. Digging into the code, here's my theory:

I believe the problem is that some nodes never reach a state where canQuery is true, which is necessary for start() to return true, as shown here: https://github.com/celo-org/celo-blockchain/blob/master/p2p/discv5/net.go#L798

States where canQuery is true are: known, unresponsive, contested.

We can clearly see from start() that unknown and verifyinit are reachable. It seems possible to stay locked in a loop of unknown, verifyinit, verifywait, and remoteverifywait if the node succeeds in doing a full ping/pong flow.

The result is that we are left with a deferQuery that we never get rid of. However, whenever a node enters the unknown state it aborts all deferredQueries, so it seems more likely that we get stuck in a loop of verifyinit, verifywait, remoteverifywait.
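To make this concrete, here is a heavily simplified, hypothetical Go model of the behaviour I suspect. It is not the real p2p/discv5 code (that lives in net.go) and the names are only loosely based on it; it just shows how deferred findnode queries pile up for a node that never reaches a canQuery state:

```go
// Hypothetical model of the suspected leak, not the actual discv5 code.
package main

import "fmt"

type nodeState int

const (
	unknown nodeState = iota
	verifyinit
	verifywait
	remoteverifywait
	known
	unresponsive
	contested
)

// Only a few states allow queries, mirroring canQuery in the real code.
func canQuery(s nodeState) bool {
	return s == known || s == unresponsive || s == contested
}

type query struct{ target string }

type node struct {
	state           nodeState
	deferredQueries []*query // keeps growing while canQuery stays false
}

// startQuery models the start() logic linked above: if the node cannot
// be queried yet, the query is parked in deferredQueries instead of sent.
func (n *node) startQuery(q *query) bool {
	if canQuery(n.state) {
		return true // query would be sent immediately
	}
	n.deferredQueries = append(n.deferredQueries, q)
	return false
}

func main() {
	n := &node{state: unknown}
	// If the node keeps cycling through unknown/verifyinit/verifywait/
	// remoteverifywait without ever reaching known, unresponsive, or
	// contested, every lookup parks another query that is never drained.
	for i := 0; i < 5; i++ {
		n.startQuery(&query{target: fmt.Sprintf("findnode-%d", i)})
	}
	fmt.Println("deferred queries retained:", len(n.deferredQueries))
}
```

If that is what happens at scale, those retained queries would be exactly the reqQueryFindnode and deferQuery allocations visible in the profiles above.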

As a disclaimer, this is the first time I've dug into the geth code. I am familiar with the Celo protocol, but I am no expert.

There are a few odd things in the state machine:

Neither of these explains what happens, but it really seems that the state machine gets stuck in a state that prevents deferredQueries from being removed.

System information

Seen on both 1.0.1 and 1.1.0

haferman commented 3 years ago

I've observed this for all full nodes, but the proxy leaks more rapidly than the others.

erNail commented 1 year ago

Since there are no new comments since 2020, I'm not sure if I am experiencing the exact same issue.

I am also experiencing memory leak issues with proxies, validators, and full nodes (so, the Celo Geth client). 12GB of available memory fills up within 5 to 6 days. Memory usage is at 6GB when first starting, then rises by about 1GB per day.

EDIT: Using Docker Image us.gcr.io/celo-org/geth:1.7.4

hbandura commented 1 year ago

Which release version are you using? @erNail

erNail commented 1 year ago

Which release version are you using? @erNail

Version 1.7.4 (Docker Image us.gcr.io/celo-org/geth:1.7.4)

erNail commented 1 year ago

I might have some new information on this, though I'm not sure if it's related to the problem described above.

In my case I assumed a memory leak, but the issue might be something different. I am running the Celo Geth clients in a Kubernetes environment with containerd as the container runtime, and a memory limit is set for the pod/container.

Here is the problem: the running application is not aware of the memory limits I have set for the pod/container. From my understanding, Go tries to use all of the available memory and processors, so in the end the Geth client thinks it has all the memory of the machine it's running on available, without considering the memory limit I have set.

This behavior is also supported by multiple comments in the Celo Discord, which describe that no matter how much memory the machine has, the geth client always tries to use almost all of it. One user described memory usage of around 42GB on a 48GB machine.

A possible solution, if Go 1.19 were used, would be to set a memory limit at the application level via the Go environment variable GOMEMLIMIT.
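For illustration, a rough sketch of what that could look like on Go 1.19+. Nothing below exists in the client today as far as I know; the 10 GiB value is just an example sized under my 12GB container limit. The same effect can be had without code changes by exporting GOMEMLIMIT (e.g. GOMEMLIMIT=10GiB) in the pod spec:

```go
// Sketch only: making a Go 1.19+ process respect a container memory limit
// via runtime/debug.SetMemoryLimit (the programmatic twin of GOMEMLIMIT).
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// A negative argument reports the current limit without changing it.
	// The default is effectively unlimited (math.MaxInt64), which is why
	// the runtime happily grows past a cgroup limit it knows nothing about.
	fmt.Println("current limit:", debug.SetMemoryLimit(-1))

	// Cap Go-managed memory at ~10 GiB, leaving headroom below the
	// container's 12GB limit (example value, tune for your setup).
	debug.SetMemoryLimit(10 << 30)
	fmt.Println("new limit:", debug.SetMemoryLimit(-1))
}
```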