Goals
This PR addresses a long-standing design issue with processing incoming requests, in support of https://github.com/filecoin-project/go-data-transfer/issues/310. Essentially, when requests come in, we queue them for processing, even if the request will ultimately be rejected.
This seems like poor behavior: under heavy load, a client might wait minutes simply to be told they won't get the response they wanted. It also leads to very unusual behavior up the stack -- data transfer requests can sit in a "requested" state for a very long time, for example.
On top of that, we're wasting resources holding data for requests we ultimately aren't going to process.
Implementation
remove request queued hooks
move the AugmentContext function from the request queued hooks to the request hooks
add a new listener (no hook) to handle the moment when requests begin processing
call request hooks immediately at the moment an incoming request is made (see the sketch after this list)
if the request is:
accepted: queue as normal
rejected for any reason: don't queue; only hold the request data until the failure response finishes sending
paused: don't queue (previously, we'd queue, dequeue, run request hooks, and then do nothing but put the request in a paused state -- eek)
remove the query preparer struct, which is now just a pure function
construct the traverser directly when we first start processing the request
add a couple of additional fields to the response struct so we can construct the traverser later
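To make the new flow concrete, here is a minimal sketch of how a caller might wire things up after this change. Treat the specific names as assumptions rather than this PR's exact API: ValidateRequest, TerminateWithError, and PauseResponse already exist on the incoming request hook actions, while AugmentContext on those actions and the RegisterIncomingRequestProcessingListener name/signature for the new "begin processing" listener are inferred from the list above.

```go
// Sketch only: illustrative names are flagged in comments below.
package example

import (
	"context"
	"errors"

	"github.com/ipfs/go-graphsync"
	"github.com/libp2p/go-libp2p-core/peer"
)

// requestingPeerKey is a hypothetical context key used only for illustration.
type requestingPeerKey struct{}

func setupIncomingRequestHandling(gs graphsync.GraphExchange, allowed func(peer.ID) bool) {
	// Request hooks now run the moment a request arrives, before anything is queued.
	gs.RegisterIncomingRequestHook(func(p peer.ID, request graphsync.RequestData, hookActions graphsync.IncomingRequestHookActions) {
		// AugmentContext moved here from the (removed) request queued hooks.
		hookActions.AugmentContext(func(ctx context.Context) context.Context {
			return context.WithValue(ctx, requestingPeerKey{}, p)
		})
		if !allowed(p) {
			// Rejected: never queued; graphsync only holds the request data until
			// the failure response finishes sending.
			hookActions.TerminateWithError(errors.New("request not allowed"))
			return
		}
		// Accepted: queued as normal. (A hook could also call PauseResponse() here,
		// which likewise no longer round-trips through the queue.)
		hookActions.ValidateRequest()
	})

	// The new listener (no hook actions) fires when a queued request actually begins
	// processing -- the point where the request queued hooks used to sit. The
	// registration name and signature here are assumptions.
	gs.RegisterIncomingRequestProcessingListener(func(p peer.ID, request graphsync.RequestData, inProgressRequestCount int) {
		// e.g. record that the transfer has left the "requested" state.
	})
}
```

The behavioral point the sketch is meant to show: the accept/reject/pause decision now happens in the hook at arrival time, only accepted requests ever enter the queue, and the new listener is purely informational.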
For discussion
While I am pretty sure this is the right change -- in fact I'm very sure -- it's nonetheless significant, because it means every request gets accepted or rejected immediately. Generally, I think this has better DDoS implications -- we no longer hold on to requests we won't process for a long time. On the other hand, it may be riskier in certain situations: if you get DDoS'd with requests and you have very slow hooks, handling each incoming request could take longer. Ultimately, we were processing requests immediately anyway, and now we're just running a slightly different set of hooks at that point, so I'm not super worried here. Nevertheless, it merits discussion.