Closed: bkchr closed this issue 5 years ago
The HTTP/QUIC spec solves that a layer above QUIC. The HTTP/3 mapping specifies a priority order, to determine, for example, which stream should be written or read before which other stream.
In the absence of an explicit application preference, anything goes. You are suggesting round-robin, but that's not always optimal. Sending all of stream 1 before all of stream 2 guarantees that stream 1 is entirely received soon, which is probably better than guaranteeing that streams 1 and 2 are received at the same time. Following that reasoning, it would make sense to first send the stream that is "closest to finish".
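The "closest to finish" policy could be sketched roughly like this. The stream structure and function name here are purely illustrative, not picoquic's actual internals:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stream descriptor; picoquic's real structures differ. */
typedef struct st_demo_stream_t {
    uint64_t stream_id;
    size_t bytes_remaining;        /* data still queued for sending */
    struct st_demo_stream_t *next;
} demo_stream_t;

/* Pick the ready stream with the least data left to send,
 * so that one stream completes as early as possible. */
demo_stream_t *find_closest_to_finish(demo_stream_t *head)
{
    demo_stream_t *best = NULL;
    for (demo_stream_t *s = head; s != NULL; s = s->next) {
        if (s->bytes_remaining > 0 &&
            (best == NULL || s->bytes_remaining < best->bytes_remaining)) {
            best = s;
        }
    }
    return best;
}
```

Note the trade-off: this minimizes the completion time of the shortest stream, but a stream with a large backlog can still be delayed for a long time, which is exactly the starvation question discussed below.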
With the HTTP/3 mapping, streams are arranged in a priority tree. I really don't know whether we should manage that priority tree in the transport layer. We could, but I am concerned with the additional complexity.
You mention "starvation". But starvation is only a problem if the multiplexed streams are really independent, like, for example, messages in two parallel chat rooms. You would not want text in one to queue behind an image in the other. The HTTP mapping does not have a "parallel but equal" concept, but obviously that's just software.
So yes, we could add some logic in picoquic_find_ready_stream(cnx)
to choose something other than the first stream with available data. We just have to find a clean way to let applications express their policy.
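One simple policy is the round-robin idea from the original report: remember the last stream served and resume the scan after it. A minimal sketch, using an illustrative scheduler struct rather than picoquic's actual state:

```c
#include <stddef.h>

/* Hypothetical round-robin scheduler state; not part of picoquic's API. */
typedef struct {
    size_t *queued;      /* queued bytes per stream, indexed by position */
    size_t count;        /* number of streams */
    size_t last_served;  /* index of the last stream we sent from */
} rr_sched_t;

/* Return the index of the next stream with pending data, starting
 * just after the last one served, or (size_t)-1 if none is ready. */
size_t rr_next(rr_sched_t *s)
{
    for (size_t i = 1; i <= s->count; i++) {
        size_t idx = (s->last_served + i) % s->count;
        if (s->queued[idx] > 0) {
            s->last_served = idx;
            return idx;
        }
    }
    return (size_t)-1;
}
```

Because the scan always resumes after `last_served`, every stream with data is visited before any stream is served twice, which rules out starvation at the cost of interleaving all streams.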
Yeah, I will need to look closer into the problem.
And you are right, we should not bring that much complexity into the transport layer. However, the specification mentions that the implementation could offer some prioritization.
What would you think of creating something similar to the wake-up queue for connections?
We could provide picoquic_set_stream_prio(u32 prio),
where a higher prio
corresponds to a higher priority for this stream. Then in picoquic_add_data_to_stream
we would insert the stream into the queue based on its priority. Do you think that could work?
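The ranked ready-list being proposed could look roughly like the following. All names here are hypothetical sketches of the idea, not existing picoquic functions; the only design point is that insertion keeps the list sorted by descending priority, with FIFO order among streams of equal priority:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical entry in a priority-ordered ready list. */
typedef struct st_prio_stream_t {
    uint64_t stream_id;
    uint32_t prio;                  /* higher value = higher priority */
    struct st_prio_stream_t *next;
} prio_stream_t;

/* Insert keeping the list sorted by descending prio. Using ">=" in the
 * comparison makes a new entry land after existing entries of the same
 * prio, so equal-priority streams are served FIFO and cannot starve
 * each other. */
void prio_insert(prio_stream_t **head, prio_stream_t *node)
{
    prio_stream_t **p = head;
    while (*p != NULL && (*p)->prio >= node->prio) {
        p = &(*p)->next;
    }
    node->next = *p;
    *p = node;
}
```

The sender would then simply take the head of the list, which is much cheaper than walking an HTTP/3-style dependency tree, at the cost of expressiveness.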
Managing a simple ranked list is surely simpler than managing the tree of dependencies, so yes it could work. There are additional complexities, like the requirement to open streams in their numerical order.
Or we could ask the application to only start sending on a new stream if the higher priority tasks are done. Maybe we should start by drawing scenarios, so we can test them.
Yeah, maybe we need to think more about it. I think that I also found the problem in my Rust implementation, and now the streams do not seem to "starve" anymore. I will close this issue for now.
Hey, I think that I currently have the problem of one stream starving because another stream sends too much data. Do you have any idea how we could solve this? Could we maybe at least not start every time at the first stream, and instead memorize the last stream we already processed? That way we would at least make sure that all streams get a chance to send their data.