martinthomson opened 1 week ago
> Some applications want zero buffering. These are the ones that are probably most likely to expose the system to problems like Slowloris or other attacks.
I think we can go for zero buffering on a best-effort basis (like CONNECT).
Slowloris is an attack against the concurrency limit. It does send data slowly, but that is to keep the in-flight requests from being closed due to timeouts.
As long as intermediaries can reduce the frequency of I/O operations when under load, I think the security risk imposed on the server would be manageable.
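To make that concrete, here is a minimal sketch (not from the thread) of what "reducing the frequency of I/O operations under load" could look like; the load signal (`activeStreams`) and the interval values are assumptions, not anything a particular intermediary does:

```go
package main

import (
	"fmt"
	"time"
)

// flushInterval is a hypothetical policy: coalesce forwarded bytes for longer
// when the intermediary is under load, so each downstream write carries more data.
func flushInterval(activeStreams int) time.Duration {
	switch {
	case activeStreams < 1_000:
		return 10 * time.Millisecond // lightly loaded: effectively immediate forwarding
	case activeStreams < 10_000:
		return 100 * time.Millisecond
	default:
		return time.Second // heavily loaded: fewer, larger writes
	}
}

func main() {
	for _, n := range []int{100, 5_000, 50_000} {
		fmt.Printf("%6d streams -> flush every %v\n", n, flushInterval(n))
	}
}
```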
Buffering can be about the time that the first byte is held waiting for the last byte or the time that the first byte is held waiting for $ENOUGH bytes to arrive.
Some applications want incremental delivery with any amount of delay, so long as $t_{last} - t_{first}$ isn't effectively infinite. This matters especially for intermediaries, where time is not the factor they use in buffering: as noted, some intermediaries likely allocate a finite buffer and only forward once that buffer hits a predetermined threshold, which might not happen in the sorts of use cases we're talking about.
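A hedged sketch of the two dimensions, with illustrative names and values I've made up: a byte threshold (the "$ENOUGH" above) plus a cap on how long the first held byte waits, forwarding on whichever trips first, so the delay is bounded even when the threshold is never reached:

```go
package main

import (
	"bytes"
	"fmt"
	"time"
)

// forwardBuffer is a hypothetical intermediary buffer that forwards when either
// enough bytes have accumulated or the first held byte has waited too long.
type forwardBuffer struct {
	buf       bytes.Buffer
	firstHeld time.Time     // when the oldest unforwarded byte arrived
	threshold int           // "$ENOUGH" bytes
	maxDelay  time.Duration // bound on how long held data can wait
}

func (f *forwardBuffer) add(p []byte, now time.Time) {
	if f.buf.Len() == 0 {
		f.firstHeld = now
	}
	f.buf.Write(p)
}

// shouldForward is the whole policy: a size trigger, plus a time bound so the
// first byte is never held indefinitely waiting for the threshold to be met.
func (f *forwardBuffer) shouldForward(now time.Time) bool {
	if f.buf.Len() == 0 {
		return false
	}
	return f.buf.Len() >= f.threshold || now.Sub(f.firstHeld) >= f.maxDelay
}

func main() {
	f := &forwardBuffer{threshold: 16 * 1024, maxDelay: 50 * time.Millisecond}
	start := time.Now()
	f.add([]byte("data: tick\n\n"), start)
	fmt.Println(f.shouldForward(start))                            // false: far below the byte threshold
	fmt.Println(f.shouldForward(start.Add(60 * time.Millisecond))) // true: time bound exceeded
}
```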
Consider a pub-sub arrangement that uses SSE, which probably won't care if there is some amount of buffering or delay.
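For reference, a minimal SSE producer in Go (an assumed example, not from the thread): each event is flushed as it is written, but nothing breaks if an intermediary adds some buffering; subscribers still get every event, just later.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// events emits one SSE event per second. An intermediary that delays or
// coalesces these writes only adds latency; the stream semantics survive.
func events(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	for i := 0; ; i++ {
		select {
		case <-r.Context().Done():
			return
		case <-time.After(time.Second):
			fmt.Fprintf(w, "data: message %d\n\n", i)
			flusher.Flush() // ask for prompt delivery; intermediaries may still buffer
		}
	}
}

func main() {
	http.HandleFunc("/events", events)
	log.Fatal(http.ListenAndServe("localhost:8080", nil))
}
```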
Some applications want zero buffering. These are the ones that are probably most likely to expose the system to problems like Slowloris or other attacks. These are probably not that common, though.
Most are probably looking for some finite, but reasonable, amount of buffering delay. In a way, asking for incremental delivery is about bounding the buffering delay.
Is there some amount of guidance we can give about this? Is there any point in trying to indicate what sort of forwarding delay is a) desired and b) applied?