ietf-wg-mops / draft-ietf-mops-streaming-opcons

drafts for the mops IETF working group

Adaptive bitrate at low-latency #32

Closed: kixelated closed this issue 3 years ago

kixelated commented 3 years ago

One common problem is building an ABR algorithm for low-latency delivery, especially for client-driven protocols (e.g., LL-DASH).

Traditional segmented delivery (HLS) lets segments be downloaded at full line rate, since each segment is fully available before the client requests it. It's clear that if you can download 2s of media in 1s, then your network can handle double the throughput. Conversely, if it takes 3s to download 2s of media, then your network is struggling and you should switch down.
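
To make the contrast concrete, here is a minimal sketch of that classic segment-fetch heuristic (my own illustration, not from the draft; the threshold values are arbitrary): compare how long a segment took to download against how much media it contains.

```typescript
// Classic segment-fetch heuristic: the whole segment already exists on the
// server, so the download runs at line rate and its duration is meaningful.
function throughputEstimate(segmentBytes: number, downloadSeconds: number): number {
  return (segmentBytes * 8) / downloadSeconds; // bits per second
}

// Switch up or down by comparing media duration to download time.
function abrDecision(mediaSeconds: number, downloadSeconds: number): "up" | "down" | "hold" {
  const ratio = mediaSeconds / downloadSeconds;
  if (ratio > 1.5) return "up";   // e.g. 2s of media fetched in 1s
  if (ratio < 1.0) return "down"; // e.g. 2s of media took 3s to fetch
  return "hold";
}

console.log(abrDecision(2, 1)); // "up"
console.log(abrDecision(2, 3)); // "down"
```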

But when video is delivered as it is generated (LL-DASH), it is difficult to determine if your connection can handle a higher bitrate. 2s of media will be delivered in 2s even if your network can support a far higher throughput. The connection is application-limited and not congestion-limited.
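
A rough numeric illustration of why that heuristic breaks down here (my numbers, purely hypothetical): because the server can only send media as fast as the encoder produces it, the measured download time tracks the media duration, and the estimate collapses to the encoding bitrate no matter how much headroom the network actually has.

```typescript
// With LL-DASH style delivery, a 2s segment is produced over ~2s, so even a
// very fast network receives it in ~2s. The naive estimate then simply equals
// the encoding bitrate and can never justify an up-switch.
const encodingBitrate = 3_000_000;   // bits per second of the current rendition
const mediaSeconds = 2;
const networkCapacity = 20_000_000;  // actual capacity, never observed

const bytesReceived = (encodingBitrate * mediaSeconds) / 8;
const downloadSeconds = mediaSeconds; // paced by the encoder, not the network

const naiveEstimate = (bytesReceived * 8) / downloadSeconds;
console.log(naiveEstimate === encodingBitrate); // true: application-limited
console.log(naiveEstimate < networkCapacity);   // true: the headroom is invisible
```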

I haven't explored the public developments in this space, but it's difficult enough that Twitch has sponsored an academic challenge. In my opinion, it's why LLHLS opted to use smaller segments instead of streaming segments, at the cost of higher latency.

agouaillard commented 3 years ago

That's a good point.

Groups working on real-time media protocols have historically solved this by (1) removing the need for server-side ABR when dealing with adaptation, and (2) managing the media end-to-end instead of segment by segment (source -> encoding -> ingest -> transcoding -> chunking -> upload -> delivery -> ...).

"Simulcast" is exactly the same as ABR but on the source side, removing the need for (and the extra latency cost of) transcoding server side. SVC codecs are one step further, since they achieve the same goal but with a single bitstream and have extra resilience, faster resolution shift, and native capacity for End-to-end encryption (DRM on steroids to protect both content and users) built in.

I wrote a little something for a non-technical audience a few years back, and I'm happy to give pointers to peer-reviewed papers and/or specifications if need be: http://webrtcbydralex.com/index.php/2019/04/06/webrtc-1-0-simulcast-vs-abr/

acbegen commented 3 years ago

> I haven't explored the public developments in this space, but it's difficult enough that Twitch has sponsored an academic challenge. In my opinion, it's why LLHLS opted to use smaller segments instead of streaming segments, at the cost of higher latency.

LL-HLS uses parts (which are not smaller segments), much like the chunks DASH-LL uses. However, unlike DASH-LL, LL-HLS still has a request/response for each of these parts, which gives it the advantage of measuring the available bandwidth a bit more easily. That said, we have already developed ways to measure the available bandwidth accurately in DASH-LL anyway.
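
To illustrate why a request/response per part helps (a small sketch, the function name is my own): each LL-HLS part arrives as its own HTTP response, so the client can time every transfer individually, whereas timing one long chunked DASH segment mostly reflects the encoder's pacing.

```typescript
// Each LL-HLS part is fetched with its own request, so bytes / elapsed time
// gives a usable bandwidth sample per part.
async function measurePart(url: string): Promise<number> {
  const start = performance.now();
  const response = await fetch(url);
  const body = await response.arrayBuffer();
  const seconds = (performance.now() - start) / 1000;
  return (body.byteLength * 8) / seconds; // bits per second for this part
}
```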

Check out the LoL+ algorithm we implemented (also available in dash.js as of v3.2): https://github.com/Dash-Industry-Forum/dash.js/wiki/Low-Latency-streaming
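
For completeness, roughly how one might enable this in the dash.js reference player (a sketch based on my reading of that wiki page; the exact setting keys, the "abrLoLP" strategy identifier, and the manifest URL are assumptions, so check them against the linked documentation):

```typescript
// Sketch only: enable low-latency playback and (assumed) select the LoL+ ABR
// strategy in dash.js v3.2. Verify the setting names against the wiki above.
import dashjs from "dashjs";

const player = dashjs.MediaPlayer().create();
player.updateSettings({
  streaming: {
    lowLatencyEnabled: true,         // low-latency mode (v3.x setting)
    abr: { ABRStrategy: "abrLoLP" }, // assumed identifier for LoL+
  },
});
player.initialize(
  document.querySelector<HTMLVideoElement>("video")!,
  "https://example.com/live/stream.mpd", // placeholder manifest URL
  true
);
```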

You might also find Twitch's evaluation interesting: https://www.youtube.com/watch?v=rcXFVDotpy4

SpencerDawkins commented 3 years ago

Linked to Issue #3

SpencerDawkins commented 3 years ago

Fixed in #59.