w3c / webcodecs

WebCodecs is a flexible web API for encoding and decoding audio and video.
https://w3c.github.io/webcodecs/

Extend EncodedVideoChunkMetadata for Spatial Scalability #756

aboba opened 6 months ago (Open)

aboba commented 6 months ago

Fixes #619

Rebase and update of PR #654

Related: https://github.com/w3c/webrtc-encoded-transform/issues/220


:boom: Error: 400 Bad Request :boom:

PR Preview failed to build. (Last tried on Jan 9, 2024, 10:27 PM UTC).

PR Preview relies on a number of web services to run. There seems to be an issue with the following one:

:rotating_light: [CSS Spec Preprocessor](https://api.csswg.org/bikeshed/) - CSS Spec Preprocessor is the web service used to build Bikeshed specs.

:link: [Related URL](https://api.csswg.org/bikeshed/?url=https%3A%2F%2Fraw.githubusercontent.com%2Fw3c%2Fwebcodecs%2F9efde67f1de0fdf0194f96bc8cb8f1eeb9197d80%2Findex.src.html&md-warning=not%20ready)

```
Error running preprocessor, returned code: 2.
FATAL ERROR:
Couldn't find target frameId 'dict-member':
<span data-dict-member-info="" for="EncodedVideoChunkMetadat/frameId"></span>
✘ Did not generate, due to errors exceeding the allowed error level.
```

_If you don't have enough information above to solve the error by yourself (or to understand which web service the error is related to, if any), please [file an issue](https://github.com/tobie/pr-preview/issues/new?title=Error%20not%20surfaced%20properly&body=See%20w3c/webcodecs%23756.)._
aboba commented 6 months ago

@kalradivyanshu @fippo PTAL.

kalradivyanshu commented 6 months ago

@aboba Looks good. So we just set L3T3 in the encoder; each frame then tells us which spatial and temporal layer it belongs to and which frames are its dependencies; and in the decoder nothing changes, we just make sure all dependencies are fed in before feeding in the frame, and it just works, right?
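Something like this is what I have in mind (a minimal sketch; `frameId` and `dependencies` are the members this PR proposes, not part of the shipped API, and exactly where they hang off the metadata dictionary is my guess from the preview error above):

```js
// Sketch: configure an L3T3 encoder and read per-chunk layer/dependency
// metadata. frameId/dependencies are the members proposed in this PR.
const encoder = new VideoEncoder({
  output: (chunk, metadata) => {
    // Shipping today: metadata.svc.temporalLayerId.
    // Proposed here: an id for this frame plus the ids of the frames it
    // references, so a server can track decodability per receiver.
    console.log(chunk.type,
                metadata.svc?.temporalLayerId,
                metadata.frameId,        // proposed
                metadata.dependencies);  // proposed
  },
  error: (e) => console.error(e),
});

encoder.configure({
  codec: 'vp09.00.10.08',   // VP9 profile 0, just an example codec string
  width: 1280,
  height: 720,
  scalabilityMode: 'L3T3',  // 3 spatial x 3 temporal layers
});
```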

aboba commented 6 months ago

For a frame to be decodable, all of its dependencies need to have been received and decoded without an error callback. From the conference server's perspective, this means tracking not only which frames were sent to each participant, but also the transport status (whether each frame was completely received) and whether it was successfully decoded. Currently the underlying encoder API limits the avenues available for repair to keyframe generation, retransmission, and forward error correction. Alternatives such as Long-Term Reference (LTR) frames or layer refresh (LRR) are not yet supported.
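A minimal sketch of that per-participant bookkeeping, assuming the proposed `frameId`/`dependencies` metadata (the class and method names here are illustrative, not part of any API):

```js
// Per-receiver decodability tracking on a conference server (illustrative).
class ReceiverState {
  constructor() {
    this.decoded = new Set(); // frameIds confirmed decoded by this receiver
  }

  // A frame is safe to forward only if every frame it references has
  // already been received intact and decoded without an error callback.
  canForward(metadata) {
    return (metadata.dependencies ?? []).every(id => this.decoded.has(id));
  }

  // Called when the receiver reports complete receipt and successful
  // decode of a frame.
  onFrameDecoded(frameId) {
    this.decoded.add(frameId);
  }
}
```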

aboba commented 5 months ago

@tonyherre PTAL

kalradivyanshu commented 4 months ago

Thank you so much for this @aboba. What is left in this PR to get it accepted into the spec?

aboba commented 4 months ago

@kalradivyanshu It has been noted that spatial scalability is not widely used today because it is not hardware accelerated and therefore creates power and thermal issues on mobile devices. As a result, applications are using spatial simulcast instead. Also, the current WebCodecs API does not support layer refresh, which means that if a spatial frame is lost, a base-layer keyframe is required, rather than just creating a new spatial frame referencing a received base-layer frame (e.g. moving to a new Long-Term Reference).

@Djuffin has argued that these problems need to be fixed before spatial scalability can become popular in WebCodecs, and therefore that it would make sense to focus on a new encoder API that addresses them rather than ship a (potentially unusable) feature.

kalradivyanshu commented 4 months ago

Oh, OK. A couple of things:

  1. Spatial scalability is critical under network constraints, in my view. In video streaming/communication it is always a trade-off between CPU and network: yes, CPU usage will be higher, but simulcast needs a keyframe whenever a layer switch happens (especially since WebCodecs doesn't support AV1 switch frames, #747). And even with hardware acceleration, keyframes are notoriously much bigger, so they not only clog the network but cause CPU issues as well.

  2. Even if I am using simulcast, losing a frame still means relying on a keyframe, so the problem I mentioned remains: switching layers means another keyframe, so the application either has to generate keyframes regularly or build feedback mechanisms like PLI, which cause scaling issues when broadcasting to a large audience.

While I agree with the issues @Djuffin raised, I honestly feel that since the new API is at least a year away, spatial scalability should be added, or at the very least things like switch frames should be added to make simulcast more usable. Without either of these, the only option is simulcast with a keyframe request for every switch (see the sketch below), which adds a huge load on the encoder, the decoder, and the network.
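To make that cost concrete, here is roughly what every simulcast layer switch looks like today (a sketch; the `encoders` array holding one `VideoEncoder` per simulcast rung is my own framing, while `{ keyFrame: true }` is the existing `VideoEncoderEncodeOptions` flag):

```js
// Without switch frames or layer refresh, every simulcast layer switch
// (or loss repair) forces a full keyframe on the target encoder.
function switchToLayer(encoders, targetIndex, nextFrame) {
  // The receiver cannot join the new rung mid-stream, so force a
  // keyframe, paying its outsized bitrate and encode cost each time.
  encoders[targetIndex].encode(nextFrame, { keyFrame: true });
}
```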

Thank you both for all your work!