Closed kazuho closed 4 years ago
Do you feel this operates substantially different to how such an intermediary might choose to pick from the queue with frame-based prioritization?
One reason why absolute priorities are useful is because such a proxy may break a single client connection into multiple origin connections. Breaking a client dependency tree and keeping it up to date is hard.
@LPardue
> Do you feel this operates substantially different to how such an intermediary might choose to pick from the queue with frame-based prioritization?
I do not think so. To clarify, what a terminator would do is use a priority queue (keyed on the urgency and the stream ID) for queuing the requests. That design stays the same regardless of how the absolute priorities are conveyed. This type of design has been difficult with H2 priorities.
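As a minimal sketch of the queuing scheme described above (the class and request strings are hypothetical, not part of any implementation discussed here), pending requests can be ordered by the tuple (urgency, stream ID): lower urgency values are served first, and ties are broken by stream ID, i.e., the order in which the client opened the streams.

```python
import heapq

class RequestQueue:
    """Priority queue keyed on (urgency, stream ID)."""

    def __init__(self):
        self._heap = []

    def push(self, urgency, stream_id, request):
        # Lower urgency = higher priority; stream ID breaks ties FIFO.
        heapq.heappush(self._heap, (urgency, stream_id, request))

    def pop(self):
        _urgency, _stream_id, request = heapq.heappop(self._heap)
        return request

q = RequestQueue()
q.push(3, 4, "GET /style.css")
q.push(1, 8, "GET /index.html")
q.push(3, 0, "GET /image.png")
assert q.pop() == "GET /index.html"  # lowest urgency first
assert q.pop() == "GET /image.png"   # urgency tie -> lower stream ID
```

Because the tuple comparison is total, this scheme needs no dependency state at all, which is what makes it easy for a terminator to maintain.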
> One reason why absolute priorities are useful is because such a proxy may break a single client connection into multiple origin connections. Breaking a client dependency tree and keeping it up to date is hard.
To me, how a server that is not directly connected to the end client uses the values of the priority header field sounds like a separate question, regardless of how the information is used (e.g., bandwidth distribution, queuing requests by priority).
Though I agree with the use case, and that it is easier with this proposal than with the dependency tree, I don't really see the need to expand on this point in depth in the text. It seems more like an implementation detail to me, though it could be mentioned in passing as a potential use case / motivation for the "stateless" setup.
I think I am in agreement with @rmarx. My thought is that we might be able to clarify the point by adding an item to the bullet point list in the Introduction, if that text fits well. I do not think we need to try hard on this or go further than that.
Yeah I agree.
Any thoughts on whether this is still relevant?
I do not think we need to pursue this. Closing.
An intermediary might have a queue that holds the requests received from the client, so that the number of requests it issues to the backend can be capped.
One example is the http2-max-concurrent-requests-per-connection configuration directive of H2O. It limits the number of requests that H2O issues to the handlers, rather than the number of requests that the client can issue.
When picking a pending request from such a queue, it would be a good idea to consult the urgency value of those requests.
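The cap described above can be sketched as follows (the class, the constant, and the callback names are hypothetical, chosen only for illustration; this is not how H2O implements the directive): at most a fixed number of requests are in flight to the backend, and when a slot frees up, the most urgent pending request is issued next.

```python
import heapq

MAX_CONCURRENT = 2  # analogous in spirit to a per-connection request cap

class Intermediary:
    """Admits at most max_concurrent requests to the backend; the rest
    wait in a priority queue ordered by (urgency, stream ID)."""

    def __init__(self, max_concurrent=MAX_CONCURRENT):
        self.max_concurrent = max_concurrent
        self.in_flight = set()   # stream IDs currently issued to backend
        self.pending = []        # heap of (urgency, stream_id)

    def on_request(self, urgency, stream_id):
        heapq.heappush(self.pending, (urgency, stream_id))
        self._maybe_issue()

    def on_response(self, stream_id):
        self.in_flight.discard(stream_id)
        self._maybe_issue()

    def _maybe_issue(self):
        # Fill free slots with the most urgent pending requests.
        while self.pending and len(self.in_flight) < self.max_concurrent:
            _, stream_id = heapq.heappop(self.pending)
            self.in_flight.add(stream_id)  # "issue" to the backend

proxy = Intermediary()
for urgency, sid in [(3, 0), (1, 4), (5, 8)]:
    proxy.on_request(urgency, sid)
assert proxy.in_flight == {0, 4}  # first two admitted on arrival
proxy.on_response(0)              # a backend slot frees up
assert proxy.in_flight == {4, 8}  # most urgent pending request is issued
```

The point of consulting urgency only at dequeue time is that the intermediary never has to reconcile a dependency tree; it just orders whatever happens to be pending.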