Open jamiebuilds opened 1 month ago
track.requestFrame();
will trigger a frame emission, but the emission might happen asynchronously (even in a different thread), so there is no guarantee about when, or whether, that frame gets rendered.
Please also note that, per the spec, video frames would be enqueued in a worker. So you would need to ensure `srcObject` is set on the window side before enqueueing video frames in the worker (coordinating via `postMessage`).
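A minimal sketch of that ordering, assuming a dedicated worker script; `VideoTrackGenerator` and track transfer are from the spec, while the file name and message types (`'track'`, `'sink-ready'`) are illustrative:

```javascript
// window side: attach the sink first, then tell the worker to start.
const video = document.querySelector('video');
const worker = new Worker('frames.js');
worker.onmessage = ({ data }) => {
  if (data.type === 'track') {
    video.srcObject = new MediaStream([data.track]);
    // srcObject is now set; only now should the worker enqueue frames.
    worker.postMessage({ type: 'sink-ready' });
  }
};

// worker side (frames.js): transfer the track up, wait for the signal.
const generator = new VideoTrackGenerator();
postMessage({ type: 'track', track: generator.track }, [generator.track]);
onmessage = ({ data }) => {
  if (data.type === 'sink-ready') {
    const writer = generator.writable.getWriter();
    // ...start writing VideoFrame objects here...
  }
};
```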
This issue had an associated resolution in the WebRTC November 2024 meeting – 19 November 2024 (Issue #114: VideoTrackGenerator/MediaStream should buffer the current frame):
RESOLUTION: Not buffering the last frame is the expected behavior
Right now, at least in Chromium's implementation (which they say is to spec): if you write a frame to `MediaStreamTrackGenerator` and then assign its container `MediaStream` to a `video.srcObject`, you will never see that frame. This is different from the behavior of a `MediaStream` created by `HTMLCanvasElement.captureStream()`.

Besides the behavior being different, it also makes these streams inconvenient to use across multiple `<video>` elements, which is common in a lot of video-calling apps, and you end up needing to hold onto the current frame anyway. Instead, if you want to avoid recreating the same stream again, you have to hold onto the last frame and write it to the stream again when necessary:
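For example (a sketch rather than our exact code; the cache class and `attachSink` helper are illustrative, with the pure bookkeeping separated from the browser calls):

```javascript
// Pure bookkeeping: remembers the most recent frame so it can be replayed
// for sinks that attach later; closes the frame it evicts.
class LastFrameCache {
  constructor() { this.frame = null; }
  update(frame) {
    if (this.frame) this.frame.close();
    this.frame = frame;
  }
  replay() { return this.frame; } // frame to re-write for a new sink, or null
}

// Browser-side wiring (illustrative names; runs only in a page):
// const generator = new MediaStreamTrackGenerator({ kind: 'video' });
// const writer = generator.writable.getWriter();
// const cache = new LastFrameCache();
//
// async function writeFrame(frame) {
//   cache.update(frame.clone()); // keep a clone; write() consumes the frame
//   await writer.write(frame);
// }
//
// function attachSink(video) {
//   video.srcObject = new MediaStream([generator]);
//   const last = cache.replay();
//   if (last) writer.write(last.clone()); // generator does not buffer for us
// }
```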
This difference between `CanvasCaptureMediaStreamTrack` and `MediaStreamTrackGenerator` caused a real regression in our app when we attempted to use it.