Closed: vasilvv closed this issue 1 year ago
So groups share the connection resource equally, i.e. round-robin between them? That's going to cause interesting latency effects if you insert a very urgent thing into the send queue and have little idea when it can actually get sent out.
I guess we can make groups non-equal in priority, which would make sense, since we already require a session to inherit its HTTP priority.
So if you could live with 8 priority groups (per session? Need to think more), then you basically have a design that mimics HTTP extensible priorities. The difference is the strict ordering that the W3C API has defined. But we can spec that as a new extension in HTTP land, per https://www.ietf.org/id/draft-pardue-httpbis-priority-order-00.html
Ah, no, what I meant is, we can assign relative priority values to the priority groups. So the full hierarchy would be:
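To make that concrete, here is one reading of the hierarchy as a toy scheduler. All names and fields are illustrative (nothing here is spec API): HTTP urgency orders sessions, a relative weight drives the round-robin between a session's groups, and strict `sendOrder` orders streams within a group.

```javascript
// Hypothetical sketch of the hierarchy under discussion (illustrative names,
// not spec API).
function pickNextStream(sessions) {
  // 1. The most urgent session wins (lower HTTP urgency value = more urgent).
  const session = [...sessions].sort((a, b) => a.urgency - b.urgency)[0];
  // 2. Weighted round-robin between that session's groups: the group that has
  //    consumed the least credit relative to its weight goes next.
  const group = session.groups.reduce((best, g) =>
    g.credit / g.weight < best.credit / best.weight ? g : best);
  group.credit += 1;
  // 3. Strict ordering within the group: highest sendOrder first.
  return [...group.streams].sort((a, b) => b.sendOrder - a.sendOrder)[0];
}
```

With two groups of weights 2 and 1 in the most urgent session, repeated calls pick the weight-2 group twice as often.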
It's straightforward enough to extend HTTP to add more groups if that's what you need.
What you're suggesting is possibly like Pat's scheme that Cloudflare implements for HTTP/2: see https://blog.cloudflare.com/better-http-2-prioritization-for-a-faster-web/
This was proposed as one possible way to do extensible priorities. We didn't go with it because it was more complex than we determined we needed for HTTP. WebTransport is different though and maybe it's time to add more knobs.
The key point though is, if you can keep W3C and IETF models roughly consistent, it improves the chances of writing apps that can work in a consistent manner. That's super nice for intermediaries that need to forward over a bottleneck.
I'm not sure what "consistent" would mean here. As a consequence of pooling, the priorities that WebTransport streams get are by necessity a product of HTTP priorities plus whatever extra priorities we decide to add to prioritize things within a single session. This is about changing the latter part (the extra priorities).
Agree.
W3C should focus on the API needs for determining local send decisions. So far that led to stream order. This issue proposes groups. Either way we are very close to the existing HTTP priorities model with extras. That's great.
Later on, if we think signalling that intent explicitly on the wire would help at all, IETF can do that piece of work. Design consistency, as we are doing, helps make that work much easier.
> That does lead to an unfortunate situation where a session with a lot of active groups would get more bandwidth compared to a session with just one, but since the connection is tied to an origin anyways, I feel like that is not that much of an issue.
In my opinion, if a session is transparently pooled with another session, there must be no distinguishable difference to the application. At the very least, the application has to be aware of the side-effects before it agrees to pool with another session.
The fact that sessions share an origin is relevant, but it doesn't prevent unfortunate behavior. For example, let's assume MoQ makes a priority group per track. Two tabs (different sites?) connect to live.twitch.tv and get pooled together. The broadcast with a single track is significantly starved compared to the broadcast with multiple tracks.
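A toy simulation of that starvation (my own illustration, not anything implementations actually do): if the connection round-robins over (session, group) pairs, the session that creates more groups gets proportionally more turns.

```javascript
// Toy model: round-robin over (session, group) pairs. sessionsGroups[i] is
// the number of active groups session i has; returns how many scheduling
// turns each session gets over `rounds` iterations.
function simulateTurns(sessionsGroups, rounds) {
  const turns = sessionsGroups.map(() => 0);
  const queue = [];
  sessionsGroups.forEach((nGroups, s) => {
    for (let g = 0; g < nGroups; g++) queue.push(s);
  });
  for (let i = 0; i < rounds; i++) turns[queue[i % queue.length]] += 1;
  return turns;
}

// The broadcast with three tracks gets three times the bandwidth:
// simulateTurns([3, 1], 40) → [30, 10]
```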
Meeting:
Bikeshedding:
```js
const group1 = wt.createSendPriorityGroup(); // OR
const group2 = new WebTransportSendPriorityGroup(wt);

const writable1 = await wt.createUnidirectionalStream({sendPriorityGroup: group1});
const writable2 = await wt.createBidirectionalStream({sendPriorityGroup: group2}); // applies sender-side

group1.weight = 2; // twice as heavy as the default of 1
wt.datagrams.sendPriorityGroup.weight = 3; // ?
```
`sendOrder` can now be 0 instead of undefined. Just like we did for sendOrder, I think we also need to handle setting sendPriorityGroup on incoming bidirectional streams. E.g.
```js
for await (const {writable, readable} of wt.incomingBidirectionalStreams) {
  writable.sendOrder = 1000;
  writable.sendPriorityGroup = group1;
}
```
This would match what we did for sendOrder in https://github.com/w3c/webtransport/pull/510 for the same reason.
Feedback from IETF #117 in San Francisco. Summary of comments detailed at https://notes.ietf.org/notes-ietf-117-webtrans
Mo Zanaty: this is overly simplistic. Applications will have a combination of transport types. Hard to bridge the gap between application-limited media types and flows that can immediately saturate the link. I'm convinced that strict ordering will never provide a useful semantic for applications ...
Alan Frindell: challenging to come up with an API for priorities that will meet every application's needs for applications that haven't been written yet. What about a call out to JS to let the application decide?
Marten Seemann: Looking at RFC 9218, there's an incremental flag. Wondering if there's a need to define a new thing if we can use that?
Victor Vasiliev: I don't think we need incremental when you have uint64 priorities, because you can just order things the way you need to. Regarding the upcall that Alan mentioned, that would be the ideal solution, but it's not feasible because it will make your network stack perform extremely slowly. The solution currently on the slide tries to accommodate both problems while being as simple as possible: restrict the order of streams and pass along carve-outs between streams. It seems to be the simplest option and generally agreeable to all of the implementers. Definitely understand that there are other use cases that might not be addressed.
Luke Curley: Don't want to have the callback mechanism ... you can do incremental yourself. Groups are nice because you can do round-robin. Prioritization is a never-ending topic, and this is a good starting point.
Cullen Jennings: As a thought experiment, how much of this can be done as a polyfill on top of sendOrder?
Victor: ... to answer the thought experiment, it's possible to emulate this over sendOrder if the segments are short.
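One way to read that answer: the emulation can assign strictly decreasing sendOrder values while visiting the groups round-robin, so a strict highest-sendOrder-first scheduler emits short segments interleaved. A sketch of my own (illustrative, not from the discussion):

```javascript
// Emulate round-robin between "groups" on top of strict sendOrder alone:
// visit the group queues in turn and hand each queued segment a strictly
// decreasing sendOrder, so a higher-sendOrder-first scheduler sends them
// interleaved. This only approximates round-robin when segments are short,
// because each segment's priority is fixed once assigned.
function assignSendOrders(groups) {
  const queues = groups.map((g) => [...g]);
  const plan = [];
  let order = 1000; // arbitrary starting point; only relative order matters
  while (queues.some((q) => q.length > 0)) {
    for (const q of queues) {
      if (q.length > 0) plan.push({ seg: q.shift(), sendOrder: order-- });
    }
  }
  return plan;
}

// assignSendOrders([['a1', 'a2'], ['b1']]) interleaves the groups: a1, b1, a2
```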
TPAC meeting discussion - https://www.w3.org/wiki/WebTransport/Meetings2023#WebTransport_TPAC_meeting_-_Seville_2023 and TPAC slide questions https://docs.google.com/presentation/d/1PzshEzs8GPeoYvVboi-9D0n8ZtXBZHuM7uZe1blahSk/edit#slide=id.g27b5021956a_1_111.
Instead of priority groups, you could accomplish the same behavior with session pooling. Each WebTransport session will get round-robined over the QUIC connection. The ability to specify weights would be equally useful for both session pooling and priority groups.
But I think it's a better idea to support priority groups. It's simpler and avoids the headaches of pooling sessions.
In fact, maybe an application should use a SharedWorker and priority groups instead of session pooling...
(continuation of #493)
I believe that the current priority scheme defined in the draft is in an unfortunate position where it does not actually address any specific practical use cases: it's too flexible for some use-cases (that only require strict priorities), but insufficiently flexible for others (that require more than "strict" and "non-strict" buckets).
I think we should go in the direction of making things more flexible, since people seem to be unwilling to go for less flexibility. The proposal here is roughly:
Pooling question: the reason this design is natural is that we already need priority groups of that nature for pooling. A natural implementation here is to make groups global for the connection, but key them by `(session ID, priority group ID)` instead of just session ID. That does lead to an unfortunate situation where a session with a lot of active groups would get more bandwidth compared to a session with just one, but since the connection is tied to an origin anyways, I feel like that is not that much of an issue.

API question: do we want those groups to be just an int or something? Or maybe you'd have to explicitly create one as an object, and then use that object as an opaque handle?