wilaw opened this issue 4 years ago
Note that L2A just won the Twitch MMSys challenge.
Thanks, Will! We would like to support this and move forward with it. Theo would be happy to further develop the algorithm within dash.js. Please feel free to contact him: https://github.com/ThAlKa.
To aid testing we have put up a new reference stream. This is an LL-DASH MBR stream, 4 bitrates: 360p@730k, 432p@1100k, 540p@2000k and 720p@3000k. We intend to keep this stream up 24/7 for testing. This is based on the latest version of FFMPEG using the FFlabs implementation of LL-DASH. If it would be helpful for other bitrates or alternative spacing of the bitrates please let me know. Each stream has wall clock and resolution/bitrate burnt in for easy identification/timing.
https://cmafref.akamaized.net/cmaf/live-ull/2006350/akambr/out.mpd
This is a very useful resource, thank you for providing it.
While using it for testing (of a low-latency bitrate adaptation algorithm), I noticed that the fragment size used here can be optimized further with respect to low-latency bitrate adaptation. If I'm not mistaken, the stream is organized in 8 s segments with 2 s fragments, which of course is in line with https://dvb.org/wp-content/uploads/2020/03/Dash-LL.pdf.
It is worth mentioning that dash.js, to the best of my knowledge and observations, currently allows bitrate switches only at fragment boundaries. With the short buffers required for low latency (<2 s), and with throughput drops being registered (in dash.js at least) only after a fragment download completes, this stream is likely to stall at bitrate down-switch instances.
Example scenario A: A 2 s fragment is downloaded @3000K with 6000K of bandwidth. Assume the buffer sits at 1 s before the download; after the download it is at 2 s, having increased (rounding for chunks) by approximately 1 s (2 s of fragment inserted into the buffer, minus the 1 s required to download it, assuming no intermediate stall). The next fragment is then requested @3000K again, but immediately after the request the bandwidth drops to 1500K. Approximately 2*3000/1500 = 4 s will pass before a new fragment at a lower bitrate (@1100K) is requested, causing an intermediate stall and an increase in latency of approximately 2 s.
Example scenario B: Same as above, but the fragment length is now 0.5 s. After the download of the first fragment the buffer is at 1.25 s, having increased (rounding for chunks) by approximately 0.25 s (0.5 s of fragment inserted into the buffer, minus the 0.25 s required to download it, assuming no intermediate stall). The next fragment is then requested @3000K again, but as in scenario A the bandwidth drops to 1500K immediately after the request. Now only approximately 0.5*3000/1500 = 1 s passes before a new fragment at a lower bitrate (@1100K) is requested, thus keeping the buffer in the positive regime (1.25 + 0.5 - 1 = 0.75 s after the download) and in turn avoiding an intermediate stall or increase in latency.
Of course, depending on the amplitude of the bandwidth drop, a stall might be unavoidable. Yet in scenario B above a stall is avoided (or at worst a shorter stall would manifest) either by i) simply allowing bitrate changes at chunk boundaries (a player-related issue), or by ii) using shorter segments and shorter (<1 s, preferably 0.5 s) fragments as per scenario B (a stream-related issue).
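Under the stated assumptions (no intermediate stall on the first fragment, switch decisions only at fragment boundaries), the arithmetic of the two scenarios can be sketched as follows. The function name and model are illustrative, not dash.js code:

```javascript
// Buffer evolution when bandwidth halves right after a fragment at the
// top bitrate has been requested. Returns the resulting stall in seconds.
function stallAfterDrop(fragSec, bitrateKbps, bwBeforeKbps, bwAfterKbps, bufferSec) {
  // Fragment 1 downloads at the pre-drop bandwidth...
  const t1 = fragSec * bitrateKbps / bwBeforeKbps;
  bufferSec += fragSec - t1; // buffer grows while download outpaces playback
  // ...then bandwidth drops just after fragment 2 is requested at the same bitrate.
  const t2 = fragSec * bitrateKbps / bwAfterKbps;
  // A stall occurs if the second download outlasts the remaining buffer.
  return Math.max(0, t2 - bufferSec);
}

console.log(stallAfterDrop(2.0, 3000, 6000, 1500, 1.0)); // Scenario A: 2 s stall
console.log(stallAfterDrop(0.5, 3000, 6000, 1500, 1.0)); // Scenario B: no stall
```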
In the direction of ii) above, would it perhaps be possible to update the stream above with such a (shorter) fragment configuration instead?
Please correct me in the case that I have misinterpreted (or miscalculated) anything. Thank you in advance.
Thank you for the feedback.
The current configuration of the stream uses segments that are 2 s (2.002 s) in length. The chunked encoding is done per frame (29.97 fps). Therefore, under ideal circumstances a segment download should take 2 s, as in the limit the player receives data every 33 ms, as each frame is produced and sent. Your modeling aligns with the working thesis that, as a good starting point, segments should be shorter than the desired latency, so the player has some margin to switch; and that the smaller the segment, the larger the bandwidth reduction the player can cope with without stalling.
The target latency for this stream is 3s, so I agree that 2s segments gives some room to work with but not a lot of room.
Of course, the actual limit of switching is the random access points in the stream, determined by the GOP size. The GOP size for this stream is currently set to 30 frames (1 s), so there is an additional RAP in the middle of each segment. In the manifest you can see the Resync tags that point to this moment in time within the segments, allowing a player to know (or at least be hinted at) where those sub-segment switch points are. The player would need to be able to understand these tags to take advantage of those additional switch points.
If it's easier for testing at this stage, I'm happy to set up another stream with 1 s segments as a comparison?
That would be very much appreciated and useful for testing the two new low-latency bitrate adaptation algorithms (L2A-LL and LoL+) that are currently being incorporated into dash.js (v3.2.0).
Thanks!
Very interesting discussion.
@peterchave is right, dash.js does not support the resync functionality yet.
Not sure if you have considered this already, but it is possible to abort the download of a fragment. So in case we see a massive bandwidth drop it would be possible to cancel the current request and instead start requesting a lower bitrate. However, then we have the problem that we need to overwrite existing data in the buffer.
In the easiest scenario we cancel the request for an upcoming segment for which the presentation start time is larger than the current time (segment rendering hasn't started yet). In this case we can overwrite (or delete and add) without an issue.
If the segment is already being rendered, I am not sure what happens if we simply replace it in the buffer; it probably causes issues. In that case it would likely be easier to implement the resync functionality and replace only the parts that have not been rendered yet.
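The abort idea could be sketched as below. This is an illustrative decision helper, not dash.js internals; the function name, the `safetyFactor` parameter, and the model are assumptions:

```javascript
// Illustrative sketch: decide whether an in-flight fragment download
// should be aborted after a throughput drop. If finishing the download
// at the measured throughput would outlast the remaining buffer, abort
// and re-request the fragment at a lower bitrate instead.
function shouldAbort(remainingBits, throughputBps, bufferSec, safetyFactor = 1.0) {
  const remainingDownloadSec = remainingBits / throughputBps;
  return remainingDownloadSec * safetyFactor > bufferSec;
}

// In a real player the cancellation itself could use fetch() with an
// AbortController, e.g.:
//   const controller = new AbortController();
//   fetch(fragmentUrl, { signal: controller.signal });
//   ...
//   if (shouldAbort(remainingBits, throughputBps, bufferSec)) controller.abort();

console.log(shouldAbort(6_000_000, 1_500_000, 2.0)); // 4 s left vs 2 s buffer -> true
console.log(shouldAbort(750_000, 1_500_000, 2.0));   // 0.5 s left vs 2 s buffer -> false
```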
@Peter - we could also try 1.6 s segments as a quality/switch-interval trade-off if the e2e target is 3 s. Segment duration does not have to be an integer number of seconds: for 30 fps video and 48 kHz audio, 1.6 s is exactly 48 video frames and also an integer number of audio samples.
-Will
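The frame/sample alignment above can be checked with a small helper (the 30 fps / 48 kHz defaults are the assumptions from the message above):

```javascript
// Check whether a candidate segment duration corresponds to an integer
// number of video frames and audio samples (assuming 30 fps and 48 kHz).
function framesAndSamples(durationSec, fps = 30, sampleRate = 48000) {
  return { frames: durationSec * fps, samples: durationSec * sampleRate };
}

console.log(framesAndSamples(1.6)); // 48 frames, 76800 samples
console.log(framesAndSamples(2.0)); // 60 frames, 96000 samples
```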
From: peterchave, Wednesday, November 25, 2020 - Re: [Dash-Industry-Forum/dash.js] Investigate Learn2Adapt-LowLatency algorithm for near-second latency adaptation (#3231)
Hello @wilaw, @peterchave. I am currently working on designing an ABR algorithm suited for low-latency DASH streaming at university, and I have run into trouble setting up a testing environment in the lab.
Specifically, it would make the job easier if I could: (1) encode a video compliant with low-latency DASH and set up a video server locally (i.e., generate the .mpd and chunks); (2) run our solution against the reference low-latency algorithms in dash.js (L2A and LoL+).
Therefore, could you please give me some advice/instructions on the following issues? (1) The right version of FFmpeg that supports the documented dash options in (https://ffmpeg.org/ffmpeg-formats.html#dash-2) and the scripts in (https://dashif.org/docs/CR-Low-Latency-Live-r8.pdf). I tried the master branch (v4.3) of FFmpeg's official GitHub repo, but it turns out some options are not supported (e.g., 'frag_type'). (2) Some example scripts for using FFmpeg to generate the .mpd and chunks for low-latency DASH. Can the script in (https://dashif.org/docs/CR-Low-Latency-Live-r8.pdf) work with the right FFmpeg? (3) Video server configuration. The server in (https://github.com/twitchtv/acm-mmsys-2020-grand-challenge) works with a modified dash.js v3.0.1; could you give me some advice on HTTP server configuration to work with dash.js v3.2, which implements L2A and LoL+?
Thank you.
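For reference, here is a minimal LL-DASH packaging sketch using FFmpeg's documented dash muxer options. This is not an authoritative script: a build new enough for 'frag_type' and 'ldash' (FFmpeg >= 4.3, or the fflabs branch) is assumed, and the test source, bitrate, GOP, and output path are all illustrative:

```shell
# Illustrative LL-DASH encode: a synthetic 720p30 test source, 1 s GOP,
# 2 s segments split into 0.5 s CMAF fragments, chunked-transfer friendly.
ffmpeg -re -f lavfi -i testsrc2=size=1280x720:rate=30 \
  -c:v libx264 -b:v 3000k -g 30 -keyint_min 30 -sc_threshold 0 \
  -f dash \
  -seg_duration 2 -frag_duration 0.5 -frag_type duration \
  -ldash 1 -streaming 1 -use_template 1 -use_timeline 0 \
  -window_size 5 -remove_at_exit 1 \
  -utc_timing_url "https://time.akamai.com/?iso" \
  out/live.mpd
```

The output directory then needs to be served by an HTTP server that supports chunked transfer encoding for in-progress segments.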
I tried the FFmpeg in (https://gitlab.com/fflabs/ffmpeg/tree/dashll), using the ./configure parameters from (https://trac.ffmpeg.org/wiki/CompilationGuide/Centos), but some options are still not supported, such as 'framerate' and 'export_side_data', and I get "Unknown encoder 'avc1.640016'"... Maybe the configure parameters are wrong?
@kefanchen - I have a small GitHub project that will compile FFmpeg from source and set up a basic LL-DASH encoder script: https://github.com/peterchave/install-ll-encoder. This has been tested on Ubuntu 18.04 LTS; other distros might need some tweaks. This should hopefully give you a working FFmpeg and a known-good script. Let me know.
Thank you.
Interesting development in low-latency ABR algorithms from Unified Streaming: they have submitted a solution to the Twitch MMSys challenge, along with a fork of dash.js implementing the algorithm.
https://github.com/unifiedstreaming/Learn2Adapt-LowLatency
The solution proposes a stable algorithm for near-second latency operation.
"ABSTRACT Achieving low-latency is paramount for live streaming scenarios, that are nowadays becoming increasingly popular. In this paper, we propose a novel algorithm for bitrate adaptation in HTTP Adaptive Streaming (HAS), based on Online Convex Optimization (OCO). The proposed algorithm, named Learn2Adapt-LowLatency (L2A-LL), is shown to provide a robust adaptation strategy which, unlike most of the state-of-the-art techniques, does not require parameter tuning, channel model assumptions, throughput estimation or application-specific adjustments. These properties make it very suitable for mobile users, who typically experience fast variations in channel characteristics. The proposed algorithm has been implemented in DASH-IF's reference video player (dash.js) and is made publicly available for research purposes. Real experiments show that L2A-LL reduces latency to the near-second regime, while providing a high average streaming bit-rate and without impairing the overall Quality of Experience (QoE), a result that is independent of the channel and application scenarios. The presented optimization framework is robust due to its design principle; its ability to learn allows for modular QoE prioritization, while facilitating easy adjustments to consider other streaming application and/or user classes."
Proposal