martinthomson opened 5 months ago
Currently the reaction to video rate guidance would be either to relay the information to the server or to request video with a certain maximum rate. I think the first option would be rather application-specific (in an app, not necessarily in a more generic browser interface), and the second already exists, no?
The second definitely does not exist. Video-consuming applications that run in a browser generally provide rate adaptation of their own. They are also unable to receive the sorts of signals that this would create, because the browser doesn't pass along inauthentic signals from what is indistinguishable from a random attacker (that's the piece missing from your first part).
Sorry, are you saying SCONE is building a protocol that receives inauthentic signals that could be sent by a random attacker?
The whole point of this work is to build within the envelope of what can be authenticated. That would be a requirement for a browser passing any signal along.
I was talking about protocols like DASH, where the client requests a segment and also a certain resolution or bit rate. Are you saying that in the browser interface the segment can be requested, but the resolution/bit rate is always set by the browser and there is no client interface?
(Sorry, I could probably look this up myself but I thought it's fast to ask you...)
In DASH or HLS, the rate adaptation is not done by the browser, but by a web page. But the browser is the one that consumes the signal from the network. The browser therefore needs a way to let the web page know what is going on.
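To make that last hop concrete: one way a browser could let the page know is an event-style hook that the page's rate-adaptation logic subscribes to. Everything below (the `RateGuidance` shape, the `GuidanceChannel` name) is invented for illustration; SCONE has not defined any Web API.

```typescript
// Hypothetical shape of a network rate-guidance signal as a browser
// might surface it to a page. Nothing here is a real API.
interface RateGuidance {
  maxBitrate: number; // bits per second the network advises staying under
}

type GuidanceHandler = (g: RateGuidance) => void;

// Simple dispatcher standing in for a browser-provided event target:
// the page's ABR logic registers a handler, and the browser side
// invokes it each time a signal arrives from the network.
class GuidanceChannel {
  private handlers: GuidanceHandler[] = [];

  onGuidance(h: GuidanceHandler): void {
    this.handlers.push(h);
  }

  // Called by the "browser" side when a network signal is received.
  deliver(g: RateGuidance): void {
    for (const h of this.handlers) h(g);
  }
}
```

A page would register a handler once and feed each delivered guidance value into whatever rate-adaptation logic it already runs.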
This is headed towards being a really interesting question for SCONEPRO.
If a client uses a SCONEPRO service, and self-adapts, do we expect it to

- adapt the rate at which it sends application-level acknowledgements, or
- adapt the rate at which it sends QUIC-level acknowledgements, or
- adapt the way it requests segments of the video resource and resolution or bit rate?

Any of these would provide the control of the server's sending rate that the client needs to stay within SCONEPRO adaptation guidance.
I know we've had a lot of conversations about wanting applications to stop using segment-by-segment requests in an HTTP/3 environment, because people have a sense that DASH and HLS were making up for the deficiencies of running over TCP (head-of-line blocking, bufferbloat, and the other problems that justified chartering QUIC), and that they shouldn't have to make up for the same deficiencies when they run over HTTP/3.
If a client continues to use DASH/HLS-style segment-by-segment requests over HTTP/3, even if the server is using rate adaptation algorithms that avoid building one-way delays, is SCONEPRO going to provide a benefit to the client?
I'm not following you here @SpencerDawkins. The method for adaptation that is employed by an application (which might be client-only actions, server-only actions, or joint actions) shouldn't matter so much as the fact that it is the application and not the network that adapts. If your argument is that certain forms of adaptation are poor, that shouldn't be relevant here unless the architecture we propose precludes doing better. Is that the case?
@martinthomson -
> I'm not following you here @SpencerDawkins.
I confuse others, too. :confused:
I might be talking about something that's not relevant here, but to make sure, let me split this into two parts.
> The method for adaptation that is employed by an application (which might be client-only actions, server-only actions, or joint actions) shouldn't matter so much as the fact that it is the application and not the network that adapts.
I agree with you here, and after discussions in PANRG and in SPUD/PLUS, I wouldn't argue if I disagreed!
> If your argument is that certain forms of adaptation are poor, that shouldn't be relevant here unless the architecture we propose precludes doing better. Is that the case?
I'm not ready to take a position, I'm trying to unconfuse myself.
In my mind, there are five parts of the architecture in play: the client application, the client-side QUIC implementation, the network, the server-side QUIC implementation, and the server application.
I agree that the network is out of play here.
When we say "the application" in a QUIC context, that could mean "an application that uses an API to interact with a black-box QUIC implementation", or it could mean "the application and QUIC implementation that are bundled together, so that the application developer has more control over the QUIC implementation than would be the case if it was a black box".
I think those are distinct options, and are orthogonal to whether actions are client-only, server-only, or joint actions.
Does that help? If so, I hope

> If a client uses a SCONEPRO service, and self-adapts, do we expect it to
>
> - adapt the rate at which it sends application-level acknowledgements, or
> - adapt the rate at which it sends QUIC-level acknowledgements, or
> - adapt the way it requests segments of the video resource and resolution or bit rate?

makes more sense.
There's a model where QUIC adapts its sending behavior to minimize one-way delays, without the application above QUIC taking action, and there's a model where the application above QUIC doesn't expect that to happen, and so continues to take responsibility for pacing requests the way DASH and HLS do today, even though they aren't making up for the deficiencies of TCP implementations that were identified a decade or two ago.
That's what I thought might be relevant to considerations about how this works in a WebAPI.
OK, that helps. At least for the stuff on each endpoint, I imagined two parts to any interface:

- The part where the QUIC piece tells the application piece about signals it receives from the network.
- The part where the application tells the QUIC piece to apply certain adaptations.
The first is what I had in mind when opening this issue. The second is something that I believe should be in scope for this work, but something you might see taken elsewhere. For the Web, this might be in the WebTransport or Fetch APIs. And of course native interfaces to QUIC stacks might do many things, which is a great place to learn what works and what doesn't.
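A rough sketch of those two parts as a single interface, with a mock implementation wiring them together. Every name here (`SconeAwareTransport`, `applyAdaptation`, and so on) is invented for illustration; no QUIC stack exposes anything like this today.

```typescript
// Direction 1: the QUIC piece tells the application piece about
// signals it receives from the network.
interface NetworkSignal {
  maxBitrate: number; // advised ceiling, bits per second
}

// Direction 2: the application tells the QUIC piece which adaptations
// to apply (here, just a send-rate cap).
interface AdaptationRequest {
  sendRateCap: number; // bits per second
}

// Hypothetical two-sided interface between a QUIC stack and the
// application above it.
interface SconeAwareTransport {
  onNetworkSignal(handler: (s: NetworkSignal) => void): void;
  applyAdaptation(req: AdaptationRequest): void;
}

// Minimal in-memory stand-in showing how the two halves fit together.
class MockTransport implements SconeAwareTransport {
  private signalHandler: ((s: NetworkSignal) => void) | null = null;
  sendRateCap = Number.POSITIVE_INFINITY;

  onNetworkSignal(handler: (s: NetworkSignal) => void): void {
    this.signalHandler = handler;
  }

  applyAdaptation(req: AdaptationRequest): void {
    this.sendRateCap = req.sendRateCap;
  }

  // "Network side": invoked when a signal arrives on the wire.
  receiveFromNetwork(s: NetworkSignal): void {
    if (this.signalHandler) this.signalHandler(s);
  }
}
```

An application that wanted the simplest possible loop could register a handler that turns each incoming signal straight into an adaptation request; richer applications would route the signal into their own ABR logic instead.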
@martinthomson - thank you for the quick response! That helped me a lot.
This could be addressed in current discussions on PR #14 and the related issue(s).
I'm following up on @martinthomson's comment here:
> OK, that helps. At least for the stuff on each endpoint, I imagined two parts to any interface:
>
> - The part where the QUIC piece tells the application piece about signals it receives from the network.
> - The part where the application tells the QUIC piece to apply certain adaptations.
>
> The first is what I had in mind when opening this issue. The second is something that I believe should be in scope for this work, but something you might see taken elsewhere. For the Web, this might be in the WebTransport or Fetch APIs. And of course native interfaces to QUIC stacks might do many things, which is a great place to learn what works and what doesn't.
For the first item on the list,
For the second item on the list,
What I'm trying to do is to firm up the deliverables that we're proposing in the updated BOF request. It would be good to have a clearer idea of how SCONEPRO will perform adaptation, so we can make sure there's a deliverable with a place to include it.
Focusing on the video case at the moment, there is an assumption that the client application can use the SCONEPRO indication to request an appropriate quality video bitrate from the server.
So, in this case, QUIC does not need to adjust; it will just run its normal congestion control loop, and since the application data will be rate-limited within the network-allowed capacity, this should switch it from being essentially congestion-limited to being application-limited.
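As a simplified illustration of that assumption: given a DASH/HLS-style bitrate ladder, the client would pick the highest representation at or below the SCONEPRO indication, which keeps the flow application-limited. The ladder values and function name are invented for this sketch.

```typescript
// Pick the highest available representation whose bitrate does not
// exceed the network's advised maximum. Falls back to the lowest rung
// if even that exceeds the guidance (the client has to send something).
function selectBitrate(ladder: number[], maxRate: number): number {
  const sorted = [...ladder].sort((a, b) => a - b);
  let choice = sorted[0];
  for (const rate of sorted) {
    if (rate <= maxRate) choice = rate;
  }
  return choice;
}

// With a ladder of 500 kbps / 1.2 / 2.5 / 5 Mbps and guidance of
// 3 Mbps, the client requests the 2.5 Mbps representation:
// selectBitrate([500_000, 1_200_000, 2_500_000, 5_000_000], 3_000_000)
//   → 2_500_000
```

Because the chosen representation's bitrate sits below the network-allowed capacity, the congestion controller rarely hits its limit and the sender stays application-limited, as described above.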
There is one thing that we plan to update in the draft on API considerations, regarding information passed downwards from the application. What is lacking right now is a way for the application to explicitly indicate that a flow is a video flow that will benefit from having SCONEPRO signaling performed on it. Other than that, I don't think we envision more complex interaction between the client application and the underlying QUIC stack.
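That indication might be as small as a per-connection hint at setup time. The option and function names below are purely hypothetical, sketching the shape such an indication could take:

```typescript
// Hypothetical connection option letting the application mark a flow
// as video so the stack opts it in to SCONEPRO signaling. The name
// "trafficProfile" is invented; nothing like it exists today.
interface ConnectionOptions {
  trafficProfile?: "video" | "default";
}

// Only flows the application has explicitly marked as video opt in to
// SCONEPRO signaling; everything else is left alone.
function wantsSconeSignaling(opts: ConnectionOptions): boolean {
  return opts.trafficProfile === "video";
}
```

The point of the sketch is just that the downward interface can be a single declarative hint, rather than any ongoing interaction between the application and the QUIC stack.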
FWIW there's a draft covering these issues now: https://datatracker.ietf.org/doc/draft-eddy-sconepro-api/
And there has been subsequent mailing list discussion: https://mailarchive.ietf.org/arch/msg/sadcdn/jU5_btUAgpVbFmaCjWusaY9XqpQ/
I care about video on the web. If applications need to react to requests from the network, a Web API is going to be a necessary part of that chain.
Ideally, that work would be done by the W3C or in close cooperation with them. I can help coordinate there, but we'd need something in the charter to recognize that.