Description
It may make sense to request multiple chain heads in parallel instead of blocking on a single mutex in the syncer's `HandleNewTipSet` function. For example, if a block is only served by a peer that disconnects during fetching, that peer can cause the graphsync system to retry for the entire `requestTimeout`, preventing the fetching and processing of other blocks.
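As a rough illustration of the idea (a sketch only; all names below are hypothetical and not the actual go-filecoin API), each distinct head could be dispatched to its own goroutine, so a fetch stalled on a retrying peer cannot block unrelated heads. Note the in-flight check, which prevents dispatching the same head twice:

```go
package main

import (
	"fmt"
	"sync"
)

// headSyncer is a hypothetical sketch of a syncer that fetches distinct
// chain heads in parallel instead of serializing them behind one mutex.
type headSyncer struct {
	mu       sync.Mutex
	inFlight map[string]bool // heads currently being fetched
	wg       sync.WaitGroup
}

func newHeadSyncer() *headSyncer {
	return &headSyncer{inFlight: make(map[string]bool)}
}

// HandleNewTipSet dispatches a fetch for head unless one is already
// running for that head. It returns true if a new fetch was started.
func (s *headSyncer) HandleNewTipSet(head string, fetch func(string)) bool {
	s.mu.Lock()
	if s.inFlight[head] {
		s.mu.Unlock()
		return false // never start a second fetch for the same head
	}
	s.inFlight[head] = true
	s.mu.Unlock()

	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		// May block for up to requestTimeout without stalling other heads.
		fetch(head)
		s.mu.Lock()
		delete(s.inFlight, head)
		s.mu.Unlock()
	}()
	return true
}

func main() {
	s := newHeadSyncer()
	s.HandleNewTipSet("headA", func(h string) { fmt.Println("fetched", h) })
	s.HandleNewTipSet("headB", func(h string) { fmt.Println("fetched", h) })
	s.wg.Wait()
}
```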
Acceptance criteria
- We have good reason to believe this will improve performance or DoS resistance.
- We can safely fetch multiple chains in parallel.
Risks + pitfalls
- This will use more resources, so we should make sure it is worth the cost.
- If done incorrectly, this will be wasteful. For example, we should never issue N fetcher requests for the same head at once (we may want redundant fetches -- see #3372 -- but those should be managed carefully on a per-request basis, not allowed to blow up based on repeated network messages for the same head).
- Correctly determining that two chains do not intersect before fetching is not something we can do right now. We would need more information (i.e. a (cid, height) pair for every X blocks) from the hello protocol or peer tracker before this makes sense.
- This is only about parallelizing fetches, not block processing, which presents other concerns.
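To make the last point concrete, here is a minimal sketch (hypothetical names, not the actual go-filecoin code) of fetching several heads concurrently while funneling results into a single consumer, so block processing itself stays serial:

```go
package main

import (
	"fmt"
	"sync"
)

// fetchAndProcess is an illustrative sketch: fetches for each head run
// concurrently, but the fetched results are drained by a single loop,
// keeping processing strictly sequential.
func fetchAndProcess(heads []string, fetch func(string) string, process func(string)) {
	results := make(chan string)
	var wg sync.WaitGroup
	for _, h := range heads {
		wg.Add(1)
		go func(h string) { // fetches run in parallel
			defer wg.Done()
			results <- fetch(h)
		}(h)
	}
	go func() {
		wg.Wait()
		close(results)
	}()
	for r := range results { // processing stays serial
		process(r)
	}
}

func main() {
	var processed []string
	fetchAndProcess(
		[]string{"headA", "headB"},
		func(h string) string { return "blocks-for-" + h },
		func(r string) { processed = append(processed, r) },
	)
	fmt.Println(len(processed)) // → 2
}
```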
Where to begin
- Discussion with @frrist @anorth @hannahhoward