onflow / flow-go


[State Sync] DHT causing high resource utilization #5799

Open · peterargue opened this issue 2 months ago

peterargue commented 2 months ago

🐞 Bug Report

Over the last year or so, we've had incidents of high resource utilization (memory, CPU, goroutines) across all node roles and networks (mainnet, testnet, lower environments). Profiles have all pointed to the IPFS DHT as the main culprit.
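For reference, the profiles here are standard Go pprof heap/goroutine captures. If a node doesn't already expose a pprof endpoint, a minimal sketch of wiring one up (with an arbitrary port) looks like:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	// Serve pprof on a local port; 6060 is an arbitrary choice.
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```

Heap and goroutine profiles can then be pulled with `go tool pprof http://localhost:6060/debug/pprof/heap` and `go tool pprof http://localhost:6060/debug/pprof/goroutine`.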

Context

DHT stands for Distributed Hash Table. The implementation we use (go-libp2p-kad-dht) is a library maintained by the team behind IPFS for decentralized peer discovery and content routing on the IPFS network.

It's used for two use cases on the Flow network:

  1. Peer discovery on the public network: on the public network, node identities are not required to be static, so nodes need a peer-to-peer method to discover peers and maintain a map of the network.
  2. Content routing for bitswap on the staked and public networks: Bitswap is a protocol for sharing arbitrary data among decentralized nodes. On Flow, it's used to share execution data between Execution and Access nodes. The DHT allows peers to announce which blobs of data they have, which makes syncing more efficient (see the sketch after this list).
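For context, here's a minimal sketch of how a libp2p host typically constructs the kad-DHT used for these purposes. This is not flow-go's actual wiring; the protocol prefix and error handling are placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/ipfs/go-datastore"
	dssync "github.com/ipfs/go-datastore/sync"
	"github.com/libp2p/go-libp2p"
	dht "github.com/libp2p/go-libp2p-kad-dht"
)

func main() {
	ctx := context.Background()

	// Create a libp2p host with default options.
	h, err := libp2p.New()
	if err != nil {
		log.Fatal(err)
	}

	// The DHT keeps provider records in a datastore; this sketch uses the
	// same kind of in-memory map datastore discussed later in this issue.
	store := dssync.MutexWrap(datastore.NewMapDatastore())

	// Run the DHT in server mode so the node both answers queries and
	// announces (provides) content. "/flow" is a placeholder protocol
	// prefix, not necessarily what flow-go configures.
	kdht, err := dht.New(ctx, h,
		dht.Mode(dht.ModeServer),
		dht.ProtocolPrefix("/flow"),
		dht.Datastore(store),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Kick off routing table maintenance.
	if err := kdht.Bootstrap(ctx); err != nil {
		log.Fatal(err)
	}
}
```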

Originally, the DHT was enabled on both the staked and public networks for all nodes, even though it was only needed for Access and Execution nodes on the staked network.

Past Issues

Originally, the issue manifested as a linear, unbounded goroutine leak observed on all node types. Nodes would eventually run out of memory and crash.

libp2p ResourceManager limits were tuned (https://github.com/onflow/flow-go/pull/4846) and eventually the DHT was disabled on all nodes that did not require it (https://github.com/onflow/flow-go/pull/5797). This resolved the issue for those nodes. However, the intermittent leaks persisted for Access and Execution nodes.
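Conceptually, disabling the DHT where it isn't needed comes down to gating its construction on the node role. A rough sketch, with illustrative role names and logic rather than the actual change from the PRs above:

```go
package routingexample

import (
	"context"

	dht "github.com/libp2p/go-libp2p-kad-dht"
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/routing"
)

// newContentRouting creates a DHT only for roles that need bitswap
// content routing (Access and Execution in this sketch). Other roles get
// no DHT at all, so they pay none of its goroutine or memory overhead.
// The role names and gating logic here are illustrative, not flow-go's
// actual implementation.
func newContentRouting(ctx context.Context, h host.Host, role string) (routing.ContentRouting, error) {
	switch role {
	case "access", "execution":
		kdht, err := dht.New(ctx, h, dht.Mode(dht.ModeServer))
		if err != nil {
			return nil, err
		}
		return kdht, nil
	default:
		// No DHT for this role; callers must handle a nil router.
		return nil, nil
	}
}
```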

(image)

This chart shows the resource manager blocking new streams, which limited how high the goroutine count climbed.

Upgrading libp2p and the DHT library (libp2p/go-libp2p-kad-dht) (https://github.com/onflow/flow-go/pull/5417) resolved the goroutine leak, but the issue then manifested in different ways. We started to observe spikes in goroutines that were capped around 8500, but memory utilization remained high.

(images)

Current Issues

Recently, we've been seeing two more issues that seem to be related:

  1. High memory on previewnet ANs (https://github.com/onflow/flow-go/issues/5798). Profiles point directly to the DHT as the source of the memory growth. In particular, the in-memory db used to store the list of providers for bitswap data.
  2. Resource contention on public ANs (https://github.com/onflow/flow-go/issues/5747). There isn't a direct link between this issue and the DHT, other than that 20-30% of CPU on these nodes was spent on DHT-related activities. The high volume of heap allocations also results in more frequent garbage collection.

Disabling the DHT

The main purpose of the DHT is to efficiently disseminate the mapping of which blocks of data are stored on which nodes. Its basic design makes a few assumptions:

  1. Nodes are connected to only a subset of peers on the network.
  2. Nodes only host a subset of the data.
  3. Peers are joining and leaving the network regularly, so it's necessary to periodically remind the network which blocks of data you have.
  4. Data is equally relevant over time, so we should remind peers of all the data we have.

On the Flow staked bitswap network, none of those assumptions are true.

  1. Participating nodes are connected to all other participating peers.
  2. All nodes have all recent data.
  3. Staked nodes will generally remain available throughout an epoch.
  4. Only the most recent data is generally needed by peers.

Additionally, bitswap already has a built-in mechanism for discovering which connected peers have the data a client wants. That mechanism is tried first, before falling back to the DHT, so the DHT is rarely used in practice.

Given all of that, there seems to be limited value in running the DHT on the staked network, especially given the amount of overhead it adds.
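If the DHT is dropped on the staked network, components that currently expect a routing.Routing value still need something to hold onto. One option, assuming those call sites tolerate a router that never finds or announces anything, is the no-op router from go-libp2p-routing-helpers:

```go
package routingexample

import (
	routinghelpers "github.com/libp2p/go-libp2p-routing-helpers"
	"github.com/libp2p/go-libp2p/core/routing"
)

// noopContentRouting returns routinghelpers.Null{}, which satisfies the
// routing.Routing interface without ever touching the network: provider
// lookups come back empty, and announcements return immediately (in some
// versions with routing.ErrNotSupported, which the caller must tolerate).
// Bitswap is then left with only its built-in discovery among connected
// peers, which on the staked network is what happens in practice anyway.
func noopContentRouting() routing.Routing {
	return routinghelpers.Null{}
}
```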

See these comments for an analysis of disabling the DHT: https://github.com/onflow/flow-go/issues/5798#issuecomment-2081504760, https://github.com/onflow/flow-go/issues/5798#issuecomment-2081508055, https://github.com/onflow/flow-go/issues/5798#issuecomment-2081512147, https://github.com/onflow/flow-go/issues/5798#issuecomment-2081516931

Next steps

Disabling the DHT seems to be a viable option for nodes on the staked network. It is still needed on the public network for peer discovery, though possibly not for bitswap. More investigation is needed to understand if these issues will also appear there.

We could also explore options for reducing memory utilization, for example by limiting which blobs are reprovided (e.g. only reprovide blobs from the last N blocks, as in the sketch below).
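A rough sketch of what such a filter could look like; blobsForHeight is a hypothetical lookup, and the wiring into an actual reprovider is left out since it depends on which reprovider API we end up using:

```go
package reprovideexample

import (
	"context"

	"github.com/ipfs/go-cid"
)

// recentBlobKeys returns only the CIDs of execution data blobs from the
// last n blocks, so a reprovider re-announces just recent data instead of
// everything ever stored. blobsForHeight is hypothetical; in practice it
// would be backed by the node's execution data store.
func recentBlobKeys(
	ctx context.Context,
	latestHeight uint64,
	n uint64,
	blobsForHeight func(ctx context.Context, height uint64) ([]cid.Cid, error),
) ([]cid.Cid, error) {
	start := uint64(0)
	if latestHeight >= n {
		start = latestHeight - n + 1
	}

	var keys []cid.Cid
	for h := start; h <= latestHeight; h++ {
		cids, err := blobsForHeight(ctx, h)
		if err != nil {
			return nil, err
		}
		keys = append(keys, cids...)
	}
	return keys, nil
}
```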

guillaumemichel commented 1 month ago

> Profiles point directly to the DHT as the source of the memory growth. In particular, the in-memory db used to store the list of providers for bitswap data.

What is the order of growth of your network and of the content (number of provider records in the DB)?

peterargue commented 1 month ago

> Profiles point directly to the DHT as the source of the memory growth. In particular, the in-memory db used to store the list of providers for bitswap data.
>
> What is the order of growth of your network and of the content (number of provider records in the DB)?

@guillaumemichel this network only had 4 nodes participating in the DHT. We don't expose any metrics on the provider records, so I'm not sure what the total count is, but we are producing 3-4 new records per second, and each node held ~10M blobs in its datastore.
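For visibility, here's a small sketch of how a provider-record count could be exposed by querying the DHT's datastore directly. The "/providers/" key prefix is an assumption based on go-libp2p-kad-dht's default provider store and should be verified against the version in use:

```go
package providermetrics

import (
	"context"

	"github.com/ipfs/go-datastore"
	"github.com/ipfs/go-datastore/query"
)

// countProviderRecords walks the datastore keys under the provider record
// prefix and returns how many records are currently stored. "/providers/"
// is assumed to be the prefix used by the DHT's provider manager.
func countProviderRecords(ctx context.Context, dstore datastore.Datastore) (int, error) {
	res, err := dstore.Query(ctx, query.Query{
		Prefix:   "/providers/",
		KeysOnly: true,
	})
	if err != nil {
		return 0, err
	}
	defer res.Close()

	count := 0
	for r := range res.Next() {
		if r.Error != nil {
			return count, r.Error
		}
		count++
	}
	return count, nil
}
```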

guillaumemichel commented 1 month ago

> this network only had 4 nodes participating in the DHT

This means that all nodes are getting allocated all the data. For how long do you keep the provider records before flushing them? (ProvideValidity)

Do you keep republishing all the records over time?

What datastore are you using?

If you are using an in-memory data store, and the number of records keeps increasing, that may explain increased memory usage.

peterargue commented 1 month ago

@guillaumemichel

> This means that all nodes are getting allocated all the data. For how long do you keep the provider records before flushing them? (ProvideValidity)

we use the default, which appears to be 48 hours

> Do you keep republishing all the records over time?

yes, we had bitswap configured to reprovide every 12 hours. I don't believe we set a specific value in the DHT, so it uses whatever the default is.

> What datastore are you using?

we're using the in-memory map datastore

> If you are using an in-memory data store, and the number of records keeps increasing, that may explain increased memory usage.

it definitely explains some general increases, but we were seeing 20+ GB spikes in memory for nodes with only a few million entries. Interestingly, we're only seeing this on one of our networks (luckily not mainnet).
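For a rough sense of scale under those settings (4 providing nodes, ~3-4 new records per second, 12-hour reprovides, 48-hour provide validity, ~10M blobs per node), here's a back-of-envelope sketch. The inputs are the figures from this thread; the totals are estimates, not measurements:

```go
package main

import "fmt"

func main() {
	const (
		providers        = 4          // nodes announcing data on this network
		newRecordsPerSec = 3.5        // ~3-4 new blobs created per second
		provideValidity  = 48 * 3600  // seconds a provider record is retained
		blobsPerNode     = 10_000_000 // approximate blobs held per node
	)

	// Records from newly created blobs that are still inside the validity window.
	freshRecords := newRecordsPerSec * provideValidity * providers

	// If every stored blob is reprovided by every node, the provider store has
	// to hold one record per (blob, provider) pair.
	reprovidedRecords := float64(blobsPerNode) * providers

	fmt.Printf("fresh records in a 48h window: ~%.1fM\n", freshRecords/1e6)            // ~2.4M
	fmt.Printf("records if all blobs are reprovided: ~%.0fM\n", reprovidedRecords/1e6) // ~40M
}
```

Even the lower bound is millions of records held in an in-memory datastore, which lines up with the memory growth described above.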