Closed hannahhoward closed 2 years ago
LGTM! One quick q: how would rate limiting based on size (for example, bytes/second to transfer) work with this? Just wanna make sure we are designing something that can be extended for this need (which is another highly requested feature).
@jennijuju this is a fixed "number of requests in progress" limit. However, it doesn't inhibit doing something more complex like size/seconds to transfer. I am not sure that kind of limiting belongs in graphsync though -- it may need to go at a higher level like go-fil-markets. @raulk may have more perspective.
makes sense!
An absolute limit on the number of simultaneous requests is the simplest way to solve this problem, and I think it's worth implementing 👍
In practice the connection to each peer will have varying bandwidth and latency. A future, more sophisticated rate limiter could take those per-peer differences into account.
For example:
```
ONGOING  Peer A  *
ONGOING  Peer B  * *
QUEUED   Peer C
QUEUED   Peer D
```

Once the block for Peer A has been processed, we can open a request to Peer C:

```
ONGOING  Peer A
ONGOING  Peer B  * *
ONGOING  Peer C
QUEUED   Peer D
```

etc...
Going to move forward on this soon.
Sounds like a way forward.
Out of curiosity, @jennijuju, did we survey providers and find that they'd want a global limit instead of separate inbound and outbound limits?
@raulk I am almost done implementing this, and a transition to a unified combined limit, should it be needed, is possible.
It looked like initial miner sentiment was for separate configuration.
> Sounds like a way forward.
> Out of curiosity, @jennijuju, did we survey providers and find that they'd want a global limit instead of separate inbound and outbound limits?
We did, and they want separate settings.
## What
Add an option to limit the number of in progress outgoing requests.
## Why
Currently, we have a limit on the number of incoming requests processed at the same time in the Responder implementation in Graphsync.
However, there are several cases where we'd like to limit the number of outgoing requests as well. As a protocol implementation, it certainly makes sense to offer these controls.
## How
MaxInProgressRequests is really MaxInProgressIncomingRequests and will be renamed as such.
We will add MaxInProgressOutgoingRequests which will have the same effect for the requestor side.
We will also implement the requestor side with a PeerTaskQueue, meaning that requests will be balanced between peers, so that one slow peer does not cause congestion in the queue.
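The outgoing-request cap can be pictured as a counting semaphore: a request must claim a slot before it goes in progress, and it frees the slot on completion. The sketch below is a minimal illustration of that mechanism, not graphsync's actual implementation; the `requestLimiter` type and its method names are invented for this example.

```go
package main

import (
	"fmt"
	"sync"
)

// requestLimiter caps the number of in-progress requests, analogous to
// a MaxInProgressOutgoingRequests setting (names are illustrative).
type requestLimiter struct {
	slots chan struct{} // buffered channel used as a counting semaphore
}

func newRequestLimiter(max int) *requestLimiter {
	return &requestLimiter{slots: make(chan struct{}, max)}
}

// acquire blocks until a slot is free; release frees a held slot.
func (l *requestLimiter) acquire() { l.slots <- struct{}{} }
func (l *requestLimiter) release() { <-l.slots }

// inProgress reports how many requests currently hold a slot.
func (l *requestLimiter) inProgress() int { return len(l.slots) }

func main() {
	lim := newRequestLimiter(2)
	var (
		mu   sync.Mutex
		peak int
		wg   sync.WaitGroup
	)
	// Launch 6 "requests"; at most 2 are ever in progress at once.
	for i := 0; i < 6; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			lim.acquire()
			defer lim.release()
			mu.Lock()
			if n := lim.inProgress(); n > peak {
				peak = n
			}
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println("peak never exceeded limit:", peak <= 2)
}
```

With separate limiter instances for the responder and requestor sides, incoming and outgoing limits stay independently configurable, matching the separate settings providers asked for.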
## For discussion
This implementation will support solutions for https://github.com/filecoin-project/lotus/pull/7276. It will allow us to implement option 4 for Miners (N in-progress simultaneous Storage Deals and M in-progress simultaneous Retrieval Deals, limited separately). It is NOT a complete solution to the problem -- it sounds like we need significantly more robust solutions that control the overall flow between transfer and sealing. However, it will be a good start for now.

Note that we are NOT implementing a single global transfer limit for now that encompasses both incoming and outgoing requests.