peterargue opened this issue 2 months ago
Profiles point directly to the DHT as the source of the memory growth. In particular, the in-memory db used to store the list of providers for bitswap data.
What is the order of growth of your network and of the content (number of provider records in the DB)?
@guillaumemichel this network only had 4 nodes participating in the DHT. We don't expose any metrics on the provider records, so I'm not sure what the total count is, but we are producing 3-4 new records per second, and each node held ~10M blobs in their datastore.
> this network only had 4 nodes participating in the DHT
This means that all nodes are getting allocated all the data. For how long do you keep the provider records before flushing them? (ProvideValidity)
Do you keep republishing all the records over time?
What datastore are you using?
If you are using an in-memory data store, and the number of records keeps increasing, that may explain increased memory usage.
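Since the node doesn't currently expose metrics on provider records, one rough way to check is to count keys in the datastore backing the DHT. A minimal sketch, using go-datastore's query API and assuming provider records sit under a `/providers` key prefix (true for go-libp2p-kad-dht at the time of writing, but worth verifying against your version):

```go
package main

import (
	"context"

	ds "github.com/ipfs/go-datastore"
	"github.com/ipfs/go-datastore/query"
)

// countProviderRecords counts keys under the (assumed) /providers prefix
// in the datastore backing the DHT.
func countProviderRecords(ctx context.Context, dstore ds.Datastore) (int, error) {
	res, err := dstore.Query(ctx, query.Query{
		Prefix:   "/providers",
		KeysOnly: true, // only the count matters, skip fetching values
	})
	if err != nil {
		return 0, err
	}
	defer res.Close()

	count := 0
	for r := range res.Next() {
		if r.Error != nil {
			return count, r.Error
		}
		count++
	}
	return count, nil
}
```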
@guillaumemichel
> This means that all nodes are getting allocated all the data. For how long do you keep the provider records before flushing them? (ProvideValidity)

We use the default, which appears to be 48 hours.

> Do you keep republishing all the records over time?

Yes, we have bitswap configured to reprovide every 12 hours. I don't believe we set a specific value for the DHT, so it uses whatever the default is.

> What datastore are you using?

We're using the in-memory map datastore.

> If you are using an in-memory data store, and the number of records keeps increasing, that may explain increased memory usage.

It definitely explains some general growth, but we were seeing 20+ GB spikes in memory on nodes with only a few million entries. Interestingly, we're only seeing this on one of our networks (luckily not mainnet).
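For reference, a sketch of where these knobs live when constructing the DHT. `dssync.MutexWrap(ds.NewMapDatastore())` is exactly the in-memory map datastore discussed above (and the library default); the `ProvidersOptions` / `providers.ProvideValidity` option names are an assumption about recent go-libp2p-kad-dht versions and should be checked:

```go
package main

import (
	"context"
	"time"

	ds "github.com/ipfs/go-datastore"
	dssync "github.com/ipfs/go-datastore/sync"
	dht "github.com/libp2p/go-libp2p-kad-dht"
	"github.com/libp2p/go-libp2p-kad-dht/providers"
	"github.com/libp2p/go-libp2p/core/host"
)

func newDHT(ctx context.Context, h host.Host) (*dht.IpfsDHT, error) {
	// The library default: an in-memory map datastore. Every provider
	// record lives on the Go heap until it expires, so memory grows with
	// the record count. Swapping in an on-disk datastore (e.g. go-ds-badger
	// or go-ds-leveldb) moves that data off the heap.
	dstore := dssync.MutexWrap(ds.NewMapDatastore())

	return dht.New(ctx, h,
		dht.Datastore(dstore),
		// ProvideValidity defaults to 48h; shrinking it bounds how long
		// records accumulate. NOTE: these option names are assumed --
		// verify against the go-libp2p-kad-dht version in use.
		dht.ProvidersOptions([]providers.Option{
			providers.ProvideValidity(12 * time.Hour),
		}),
	)
}
```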
🐞 Bug Report
Over the last year or so, we've had incidents of high resource utilization (memory, cpu, goroutines) across all node roles and networks (mainnet, testnet, lower environments). Profiles have all pointed to the IPFS DHT as the main culprit.
Context
DHT stands for Distributed Hash Table; the implementation we use is a library provided by the team behind IPFS for decentralized peer discovery and content routing on the IPFS network.
It's used for 2 use cases on the Flow network: peer discovery and content routing for bitswap.
Originally, it was enabled for both the staked and public networks on all nodes, even though it was only needed for Access and Execution nodes on the staked network.
Past Issues
Originally, the issue manifested as a linear, unbounded goroutine leak observed on all node types. Nodes would eventually run out of memory and crash.
libp2p ResourceManager limits were tuned (https://github.com/onflow/flow-go/pull/4846) and eventually the DHT was disabled on all nodes that did not require it (https://github.com/onflow/flow-go/pull/5797). This resolved the issue for those nodes. However, the intermittent leaks persisted for Access and Execution nodes.
[Graph: the resource manager blocking new streams, which limited how high the goroutine leaks got]
Upgrading libp2p and the DHT library (libp2p/go-libp2p-kad-dht) (https://github.com/onflow/flow-go/pull/5417) resolved the goroutine leak, but the issue then manifested in different ways. We started to observe spikes in goroutines that were capped around 8500, but memory utilization remained high.
Current Issues
Recently, we've been seeing 2 more issues that seem to be related: large, sudden spikes in memory utilization (20+ GB on some nodes), and corresponding spikes in goroutines and CPU.
Disabling the DHT
The main intention of the DHT is to make it efficient to disseminate the mapping of which blocks of data are stored on which nodes. The basic design makes a few assumptions: the network is large and open, nodes come and go freely, and each node stores only a small fraction of the content, so a distributed lookup is needed to find providers.
On the Flow staked bitswap network, none of those assumptions are true: only a handful of nodes participate in the DHT, the membership is fixed, and each node holds essentially all of the data.
Additionally, bitswap already has a built-in mechanism for discovering peers that have the data a client wants: it broadcasts want-lists to its connected peers. This mechanism is tried first, before falling back to the DHT, so the DHT is rarely used in practice.
Given all of that, it seems there is limited value in running the DHT on the staked network, especially given the amount of overhead it adds.
See these comments on an analysis of disabling the DHT (https://github.com/onflow/flow-go/issues/5798#issuecomment-2081504760, https://github.com/onflow/flow-go/issues/5798#issuecomment-2081508055, https://github.com/onflow/flow-go/issues/5798#issuecomment-2081512147, https://github.com/onflow/flow-go/issues/5798#issuecomment-2081516931)
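A sketch of what disabling the DHT for bitswap could look like: hand the bitswap network a no-op content router instead of the DHT. This matches the older boxo/go-bitswap constructor shape where the network wraps a ContentRouting; newer boxo releases wire routing differently, so treat it as illustrative rather than a drop-in change:

```go
package main

import (
	"context"

	"github.com/ipfs/boxo/bitswap"
	bsnet "github.com/ipfs/boxo/bitswap/network"
	blockstore "github.com/ipfs/boxo/blockstore"
	routinghelpers "github.com/libp2p/go-libp2p-routing-helpers"
	"github.com/libp2p/go-libp2p/core/host"
)

// newBitswapWithoutDHT wires bitswap with a no-op content router, so
// discovery relies solely on bitswap's built-in want-list broadcasts to
// already-connected peers.
func newBitswapWithoutDHT(ctx context.Context, h host.Host, bstore blockstore.Blockstore) *bitswap.Bitswap {
	// routinghelpers.Null satisfies the routing interfaces but never finds
	// or advertises providers -- effectively "no DHT".
	net := bsnet.NewFromIpfsHost(h, routinghelpers.Null{})
	return bitswap.New(ctx, net, bstore)
}
```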
Next steps
Disabling the DHT seems to be a viable option for nodes on the staked network. It is still needed on the public network for peer discovery, though possibly not for bitswap. More investigation is needed to understand if these issues will also appear there.
We could also try these options for reducing the memory utilization: moving the provider records from the in-memory map datastore to an on-disk datastore, and lowering ProvideValidity and/or the reprovide interval so fewer records are live at any time.
We could also explore limiting which blobs are reprovided, e.g. only reproviding blobs from the last N blocks (see the sketch below).
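A sketch of what that filter could look like: a key generator that walks only the last N blocks, in the KeyChanFunc shape consumed by boxo's provider system. `latestHeight`, `cidsForBlock`, and `reprovideWindow` are hypothetical placeholders for flow-go's own storage lookups:

```go
package main

import (
	"context"

	"github.com/ipfs/go-cid"
)

const reprovideWindow = 1000 // N: how many recent blocks to keep announcing

// Hypothetical accessors into the node's storage (placeholders only).
func latestHeight(ctx context.Context) (uint64, error)              { panic("placeholder") }
func cidsForBlock(ctx context.Context, h uint64) ([]cid.Cid, error) { panic("placeholder") }

// recentBlobKeys yields only the CIDs belonging to the last reprovideWindow
// blocks, instead of every blob in the datastore.
func recentBlobKeys(ctx context.Context) (<-chan cid.Cid, error) {
	head, err := latestHeight(ctx)
	if err != nil {
		return nil, err
	}
	out := make(chan cid.Cid)
	go func() {
		defer close(out)
		start := uint64(0)
		if head > reprovideWindow {
			start = head - reprovideWindow
		}
		for h := start; h <= head; h++ {
			cids, err := cidsForBlock(ctx, h)
			if err != nil {
				return // placeholder: real code should surface the error
			}
			for _, c := range cids {
				select {
				case out <- c:
				case <-ctx.Done():
					return
				}
			}
		}
	}()
	return out, nil
}
```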