alvestrand opened this issue 3 years ago (status: Open)
@alvestrand has this been overtaken by your new proposal?
I think this is the previous iteration of the line of thought that led to my present problem description, yes. So it may make sense to point this bug at that proposal.
Link to the writeup: https://lists.w3.org/Archives/Public/public-webrtc/2022Aug/0032.html
The motivation for #154 was such a one-ended use case. We use encoded transform to process encoded data sent over a PeerConnection (second bullet above) and to receive a custom codec over RTP[^1]. For that use case it is useful to know the RTP timestamp so the "jitter buffer" can detect losses[^2]. The received data gets decoded using WASM (or WebCodecs) and does not get enqueued back into the normal WebRTC pipeline (or a silent frame gets enqueued instead while playout is handled differently).
[^1]: note that this would benefit from BYOC and better integration into SDP. Currently it needs to pick a negotiated but effectively unused payload type from the SDP (such as G711a)

[^2]: it might also be useful to have the receiveTime from rVFC
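For illustration, a minimal sketch of the kind of loss detection the "jitter buffer" could do once the RTP timestamp is exposed on encoded frames. `makeLossDetector` and `samplesPerFrame` are made-up names for this comment, not part of any spec:

```javascript
// Minimal sketch of RTP-timestamp-based loss detection for a worker-side
// "jitter buffer". The RTP timestamp itself is the field this issue asks
// to expose on encoded frames; everything else here is illustrative.
function makeLossDetector(samplesPerFrame) {
  let prevTs = null;
  return function countLostFrames(rtpTimestamp) {
    if (prevTs === null) {
      prevTs = rtpTimestamp;
      return 0;
    }
    // RTP timestamps advance by samplesPerFrame per frame; a larger jump
    // means intervening frames never arrived. `>>> 0` handles 32-bit wrap.
    const delta = (rtpTimestamp - prevTs) >>> 0;
    prevTs = rtpTimestamp;
    return Math.max(0, Math.round(delta / samplesPerFrame) - 1);
  };
}
```

For 20 ms Opus at 48 kHz the timestamp step per frame is 960, so a jump from 960 to 3840 would report two lost frames.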
One use case bug raised (no PR yet): https://github.com/w3c/webrtc-nv-use-cases/issues/77
I wonder if this is the right place to complain about the mandatory controller.enqueue
(https://github.com/webrtc/samples/blob/gh-pages/src/content/insertable-streams/endtoend-encryption/js/worker.js#L81)? My use case is that I get AES-encrypted audio/video frames from WebRTC and feed them into the browser CDM via MSE. This is a one-way process: I never get decrypted frames back, so I have nothing to enqueue. But if I don't enqueue anything, most WebRTC implementations start firing PLIs non-stop.
I've already come up with some rather creative workarounds to plug this, but Firefox in particular is still a problem, and honestly having this handled by the specification would be much preferable.
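To make the shape of the problem concrete, here is a sketch of the one-way pattern this use case wants (`handOffToDecryptor` is a hypothetical placeholder for the MSE/CDM hand-off, not a real API):

```javascript
// Sketch: consume every encoded frame without producing anything back.
// Nothing reaches the packetizer side, which is what makes today's
// implementations treat the stream as broken and fire PLIs.
function makeOneWaySink(handOffToDecryptor) {
  return new TransformStream({
    transform(frame) {
      // Hand the still-encrypted frame to the consumer (e.g. MSE/CDM).
      handOffToDecryptor(frame);
      // Intentionally no controller.enqueue(frame) here.
    },
  });
}
```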
The pragmatic solution is to enqueue silent audio or black frames (of the appropriate size) which keeps webrtc stats mostly working as one would expect
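In transform terms that workaround looks roughly like the sketch below: keep the incoming frame (so the RTP metadata stays intact) but swap its payload for pre-encoded silence before enqueueing. The payload bytes here are a placeholder, not a real Opus packet:

```javascript
// Placeholder bytes standing in for a pre-encoded silent audio packet;
// a real workaround would use an actual silent frame for the codec in use.
const SILENT_PACKET = new Uint8Array([0xf8, 0xff, 0xfe]);

// Replace the frame's payload with silence, then enqueue it so the
// pipeline (and webrtc stats) keep seeing a live stream.
function enqueueSilence(frame, controller) {
  frame.data = SILENT_PACKET.buffer.slice(0); // fresh ArrayBuffer per frame
  controller.enqueue(frame);
}
```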
Right, that's what I call "rather creative workarounds". Creating a stream that produces black frames and encoding them on the fly is far too complicated and wasteful.
@guidou has been working on a "one-ended encoded stream" specification: a sink that you can enqueue frames to even though they don't come from the same source. We probably need a similar abstraction for a stream that you get frames from but never enqueue frames to.
Use cases that are obvious:
These need to be written up somewhere.