ipfs / kubo

An IPFS implementation in Go
https://docs.ipfs.tech/how-to/command-line-quick-start/

Bitswap Improvement Plan #5723

Closed hannahhoward closed 5 years ago

hannahhoward commented 5 years ago

Goals

This is a meta issue to track and discuss improving the Bitswap implementation. Currently, what we have is:

Our goal is to measure the current behavior of the various implementations and to improve them. All of them suffer from duplicate blocks to differing degrees.
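
For context, something like the following minimal Go sketch captures the duplicate-block ratio we want to measure and drive down. (`sessionStats` and its counters are made up for this illustration; they are not the real go-bitswap types.)

```go
package main

import "fmt"

// sessionStats holds hypothetical per-session counters.
type sessionStats struct {
	blocksReceived    int // total blocks received, including duplicates
	dupBlocksReceived int // blocks we already had when they arrived
}

// dupRatio returns the share of received blocks that were duplicates.
func (s sessionStats) dupRatio() float64 {
	if s.blocksReceived == 0 {
		return 0
	}
	return float64(s.dupBlocksReceived) / float64(s.blocksReceived)
}

func main() {
	// Placeholder numbers, only to show the computation.
	s := sessionStats{blocksReceived: 1000, dupBlocksReceived: 150}
	fmt.Printf("duplicate-block ratio: %.2f\n", s.dupRatio())
}
```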

Out of scope:

Tasks

Things we want to do to improve performance:

hannahhoward commented 5 years ago

Reasons we think @whyrusleeping's PR is not working with Wikipedia:

  1. First, the dupl factor does not increase much overall; however, it climbs to 2 or 3 near the beginning of a session, and it's not clear whether that is a good idea.
  2. Wikipedia has up to around 2000 peers, which presents significant challenges to the current implementation, since it picks peers at random.
  3. Because peers are picked at random, even with a dupl factor of 3 there are common scenarios in which the picked peers are dead peers that don't respond, causing a broadcast.
  4. Multiple broadcasts of wants to 2000 peers produce a ton of duplicates.

Stats gathered on a run of LS (incomplete, but a long way in): 2279 peers, 44440 fetches, 5560 broadcasts. The fetch-to-broadcast ratio is almost 10:1, but 5560 broadcasts × 2000 peers ≈ 11.1 million want messages.
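
To make the failure mode in points 2-4 concrete, here is a simplified Go sketch (illustrative only, not the real splitter code) of what happens when a want is sent to `dupl` peers chosen uniformly at random out of ~2000 candidates, most of which are slow or unresponsive:

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickRandom chooses up to dupl distinct peers at random from the candidate set.
func pickRandom(peers []string, dupl int) []string {
	idx := rand.Perm(len(peers))
	if dupl > len(peers) {
		dupl = len(peers)
	}
	picked := make([]string, 0, dupl)
	for _, i := range idx[:dupl] {
		picked = append(picked, peers[i])
	}
	return picked
}

func main() {
	// 2000 candidate peers, of which only a small fraction respond quickly.
	// The 5% figure is purely illustrative.
	peers := make([]string, 2000)
	responsive := map[string]bool{}
	for i := range peers {
		peers[i] = fmt.Sprintf("peer-%d", i)
		if i < 100 {
			responsive[peers[i]] = true
		}
	}

	const dupl = 3
	misses := 0
	for trial := 0; trial < 10000; trial++ {
		anyFast := false
		for _, p := range pickRandom(peers, dupl) {
			if responsive[p] {
				anyFast = true
			}
		}
		if !anyFast {
			misses++ // all picked peers slow/dead: this want ends up broadcast
		}
	}
	fmt.Printf("wants that would fall back to broadcast: %d / 10000\n", misses)
}
```

With only a small fraction of responsive peers, most trials pick no fast peer at all, which is exactly the condition that triggers a broadcast.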

whyrusleeping commented 5 years ago

@hannahhoward one point: the peers we are selecting from here should never be dead peers.

whyrusleeping commented 5 years ago

It may be worth the extra effort to select peers based on how many blocks they have sent us so far, instead of just random selection. The random selection stuff works pretty well when there are a small number of peers in the potential set, but 2000 is absurd, and we need to be smarter (we should never broadcast to 2000 peers...)
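
A hedged sketch of that idea (the types and names below are illustrative, not the actual go-bitswap session code): rank the candidate peers by how many blocks they have sent us so far in the session and take the top few, instead of picking uniformly at random.

```go
package main

import (
	"fmt"
	"sort"
)

type peerStats struct {
	id             string
	blocksReceived int // blocks this peer has sent us so far in the session
}

// selectBest returns up to n peer IDs ordered by blocks received, descending.
func selectBest(peers []peerStats, n int) []string {
	sort.Slice(peers, func(i, j int) bool {
		return peers[i].blocksReceived > peers[j].blocksReceived
	})
	if n > len(peers) {
		n = len(peers)
	}
	out := make([]string, 0, n)
	for _, p := range peers[:n] {
		out = append(out, p.id)
	}
	return out
}

func main() {
	peers := []peerStats{
		{"peerA", 120}, {"peerB", 3}, {"peerC", 57}, {"peerD", 0},
	}
	fmt.Println(selectBest(peers, 2)) // [peerA peerC]
}
```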

whyrusleeping commented 5 years ago

Also, over 2000 peers feels sketchy. Do we have even 2000 connections? Maybe something is up there...

hannahhoward commented 5 years ago

@whyrusleeping

Yeah, I now think the problem is not dead peers but slow peers vs. fast peers.

I'm going to post a PR to bitswap with a test that replicates the issue.

2279 is the length of session.activePeersArray... I dunno, maybe we're not checking for uniqueness or something? Seems unlikely, though.
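
If uniqueness is the concern, a map-backed set is the usual fix. A tiny illustrative sketch (not the real session.activePeersArray code):

```go
package main

import "fmt"

// sessionPeers tracks active peers as a set, so re-adding a peer is a no-op.
type sessionPeers struct {
	active map[string]struct{}
}

func newSessionPeers() *sessionPeers {
	return &sessionPeers{active: make(map[string]struct{})}
}

// add records a peer as active; adding the same peer twice has no effect.
func (s *sessionPeers) add(p string) {
	s.active[p] = struct{}{}
}

func (s *sessionPeers) count() int { return len(s.active) }

func main() {
	s := newSessionPeers()
	s.add("peerA")
	s.add("peerA") // duplicate, not counted again
	s.add("peerB")
	fmt.Println(s.count()) // 2
}
```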

whyrusleeping commented 5 years ago

Yeah, 2279 feels very wrong. We shouldn't even have that many connections at all in general, let alone connections to peers that have the data we're interested in.

eingenito commented 5 years ago

Supersedes #2111

hannahhoward commented 5 years ago

Just want to say this issue is still open, and the things that are not checked off need to be done.

ghost commented 5 years ago

Would a possible workaround be to limit the Wants containing each block request to a single node per block? Initially, each node would receive a Want containing a unique block that is not sent to other nodes unless a given timeout period expires. Then, when a node fails to provide the requested block, the block is added to the Want list sent to a different node, with the previously failed node being blacklisted for the duration of the operation, until all the nodes in the DHT have been tried.

This would significantly reduce the overhead of asking every node that has a block to send it, where all but the first download to finish is wasted duplication, and would in effect stripe block requests across the available nodes.

The obvious downside is that if a large number of nodes advertise a block which they refuse, or fail, to provide, the client could end up waiting for the timeout period multiplied by the number of bad nodes before it reaches a node that can and will satisfy the request. This could be mitigated by a sliding window: after a configurable number of failed requests, the client includes the block in Wants to multiple nodes, increasing that number over time. For example, the client could send the block request to only one node for 3 attempts, then to 2 nodes for 3 attempts, then to 4 nodes, and so on, until it has either excluded every node as unable to provide the block or is including the block in a Want sent to every node.
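
A rough sketch of that escalation scheme (purely illustrative; none of these names exist in go-bitswap), showing the fan-out doubling every three failed attempts while skipping blacklisted peers:

```go
package main

import "fmt"

// fanout returns how many peers to include in the Want for a block,
// given how many attempts for that block have already failed.
func fanout(failedAttempts, maxPeers int) int {
	n := 1 << (failedAttempts / 3) // double every 3 failures: 1, 1, 1, 2, 2, 2, 4, ...
	if n > maxPeers {
		n = maxPeers
	}
	return n
}

// nextTargets picks the next fanout(...) peers that are not blacklisted.
func nextTargets(peers []string, blacklist map[string]bool, failedAttempts int) []string {
	want := fanout(failedAttempts, len(peers))
	targets := make([]string, 0, want)
	for _, p := range peers {
		if blacklist[p] {
			continue
		}
		targets = append(targets, p)
		if len(targets) == want {
			break
		}
	}
	return targets
}

func main() {
	peers := []string{"peerA", "peerB", "peerC", "peerD", "peerE"}
	blacklist := map[string]bool{"peerA": true} // peerA already timed out
	for failed := 0; failed <= 6; failed += 3 {
		fmt.Printf("after %d failures, ask: %v\n", failed, nextTargets(peers, blacklist, failed))
	}
}
```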

Stebalien commented 5 years ago

That's almost exactly what we now do.

Stebalien commented 5 years ago

Closing as this sprint has finished.