httpwg / http-extensions

HTTP Extensions in progress
https://httpwg.org/http-extensions/

Need a way to tell if server supports priority #1274

Closed guoye-zhang closed 3 years ago

guoye-zhang commented 4 years ago

The Extensible Priorities scheme is an extension that a server might choose not to implement. However, Low-Latency HTTP Live Streaming depends on the current HTTP/2 dependencies and weights, and if it is to switch to the new scheme, the client needs a way to tell whether the server supports priorities. If priorities aren't supported, the client would disable low-latency features and fall back to regular HLS.

roger-on-github commented 3 years ago

Given the weak (or non-existent) guarantees provided by Extensible Priorities, there's a fallback approach that is worth considering. LL-HLS could mandate the use of Extensible Priorities, for H3 connections and H2 connections with H2 priorities disabled, with a static prioritization on resources to be imposed by the server and a defined prioritization process (or outcome) on the server and intermediaries.

The defined prioritization could be something along the lines of:

- All Playlist responses must have an urgency of 1
- All Segment responses must have incremental=0
- All Segment responses must have an urgency > 2, ordered by highest bitrate tier, with lower tiers having greater urgency

You've made it clear that EP cannot impose those server "musts." But the LL-HLS spec can. It would be difficult for clients to validate them, but the HLS stream validator tool could include a set of "must-pass" tests.

One difficulty is that the urgency namespace defined by EP (0-7) is too small to effectively support static prioritization. LL-HLS could address this by defining a supplemental-urgency (0 to 1000), capping urgency at 6, and adding a rule that for any simultaneous set of responses with urgency=6, supplemental-urgency is applied.
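The proposed two-level ordering can be sketched as a sort key. This is a hypothetical illustration of the scheme described above, not part of the Extensible Priorities draft; all names here (`sort_key`, `supplemental_urgency`) are made up for the sketch.

```python
# Sketch of the proposed LL-HLS ordering: urgency is capped at 6, and
# supplemental-urgency (0-1000) only breaks ties among urgency=6 responses.
# Lower key = served first.

def sort_key(urgency: int, supplemental_urgency: int = 0):
    u = min(urgency, 6)
    # supplemental-urgency is only consulted at the capped urgency level
    return (u, supplemental_urgency if u == 6 else 0)

responses = [
    ("playlist", 1, 0),
    ("segment-tier0", 6, 10),
    ("segment-tier3", 6, 500),
    ("other", 4, 0),
]
ordered = sorted(responses, key=lambda r: sort_key(r[1], r[2]))
# Playlist (u=1) is served first, then the u=4 response, then the u=6
# segments in supplemental-urgency order.
```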

Pros of this approach:

Cons:

It's not my first choice. But it might be good enough.

LPardue commented 3 years ago

@roger-on-github thanks for detailing this fallback thoroughly.

As you rightly point out, Extensible Priorities is generic, but you are free to profile it at the application layer, as you have done, and apply more restrictive scheduling targets. This gives you complete control over client application behaviour.

> - All Playlist responses must have an urgency of 1
> - All Segment responses must have incremental=0
> - All Segment responses must have an urgency > 2, ordered by highest bitrate tier, with lower tiers having greater urgency

One way to interpret this is that in LL-HLS the prioritization is an additional server-side signal, e.g. the server applies prioritization according to static configuration such as content type. I don't know if that's your intention, but by taking that approach you provide clear guidance to servers about how to do things, and empower them to do it regardless of client support for Extensible Priorities. Cloudflare has previously blogged that employing such a strategy for Microsoft Edge (pre-Chromium) improved important web page loading metrics. Alternatively, this could be guidance recommending that content origins present this information to a proxy/CDN in the form of an Extensible Priorities signal. That seems like an easier requirement to levy on LL-HLS deployments, but I am not an expert.
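A server-side static configuration like the one described could be as simple as mapping resource type to a Priority response header value. This is a minimal sketch, assuming the profile quoted above (playlists at u=1, segments at urgency > 2 keyed by bitrate tier); the helper name and tier numbering are hypothetical.

```python
# Sketch: derive an Extensible Priorities response header value from static
# server configuration (content type inferred from the path).
# bitrate_tier 0 = highest bitrate; lower tiers get greater urgency values,
# per the profile discussed in this thread.

def static_priority_header(path: str, bitrate_tier: int = 0) -> str:
    if path.endswith(".m3u8"):
        # Playlist responses: urgency 1
        return "u=1"
    # Segment responses: urgency > 2, non-incremental. incremental defaults
    # to 0 in Extensible Priorities, so no "i" parameter is emitted.
    urgency = min(3 + bitrate_tier, 7)
    return f"u={urgency}"

# Usage: the server would attach this as the Priority response header, e.g.
#   Priority: u=1        for live/index.m3u8
#   Priority: u=3        for a highest-tier segment
```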

As a further iteration you might want to add some words about handling client reprioritization signals.

> One difficulty is that the namespace of urgency defined by EP (0-7) is too small to effectively support static prioritization. LL-HLS could address this by defining a supplemental-urgency (0 to 1000) where urgency is capped at 6, and a rule that for any simultaneous set of responses with urgency=6, supplemental-urgency be applied.

This is a limitation, and there has been some past discussion on the merits of more vs. fewer urgency levels. I think the current count strikes a middle ground. In the past we've said that if people need more, additional parameters can be used. You're effectively extending the range here, which seems like a good approach; another way would be to add another dimension.

As mentioned before, I'm sympathetic to your use case. It sucks to say "your problem, you fix it", but I think, on balance, the complexity or additional requirements of doing anything else may make the topic of this issue too big for HTTP implementers to solve. I of course cannot speak for all implementers.

@roger-on-github In the interest of making forward progress, is the fallback something you're happy to run with, or would you like this discussion to carry on? We have an HTTP WG Interim meeting scheduled on October 20, 2020, with 15 minutes of agenda time for HTTP Priorities - is that a good target for coming to a resolution?

roger-on-github commented 3 years ago

@LPardue regarding client reprioritization, good point. Probably good to have something along the lines of "a server MAY override the static prioritization in response to explicit prioritization requests from the client."

As far as the next stage of discussion goes, I'll do a round of comment-seeking from the HLS community and the networking folks at Apple and see what turns up.

roger-on-github commented 3 years ago

I surveyed various groups and received no objections to the general idea of static server-generated prioritization for LL-HLS over H3.

So next, at some point, I'll need to:

Once that's done the LL-HLS spec can be updated to add H3 support, referencing the appropriate I-Ds.

LPardue commented 3 years ago

Hi @roger-on-github, thanks for the update!

It seems like we have a resolution to the original issue raised against the Extensible Priorities specification and that we can close with no action here. Does that sound ok?

The work you describe has some touchpoints with the HTTP community beyond this spec. I'd be happy to engage where I can to improve the general state of things.

We've so far benefitted from @rmarx's qlog and qvis combination to record and analyse how stream multiplexing actually works in real connections. That could be a good starting point for your validation testing plans. I think there would be community interest in enhancements to this aspect of QUIC and H3.

roger-on-github commented 3 years ago

Yes, I'm ok with closing this issue. Thanks for the pointers on qlog and qvis; I'll keep them in mind.

LPardue commented 3 years ago

Since we have an alternative that folks can live with, closing this issue with no action.