Open aiden-jeffrey opened 5 months ago
Would a meta tagging "original" video frames, as opposed to copied / made-up ones, be enough for you? You could then discard things as you see fit. Another option of course would be a property that causes the source to output buffers marked as gap buffers; in the demuxer these would be transformed into straight gap events, with the audio buffers still being demuxed as normal.
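To sketch the second option concretely (the helper names below are made up for illustration; only the GStreamer calls and flags are real):

```cpp
#include <gst/gst.h>

/* Source side: flag buffers that merely repeat the last painted frame.
 * Whether a frame is fresh would come from OnPaint bookkeeping. */
static void
mark_repeated_frame (GstBuffer * buf, gboolean fresh)
{
  if (fresh)
    GST_BUFFER_FLAG_UNSET (buf, GST_BUFFER_FLAG_GAP);
  else
    GST_BUFFER_FLAG_SET (buf, GST_BUFFER_FLAG_GAP);
}

/* Demuxer side: a flagged video buffer becomes a straight gap event on the
 * video pad, while the audio carried alongside is still demuxed as normal. */
static GstFlowReturn
push_video_or_gap (GstPad * video_pad, GstBuffer * buf)
{
  if (GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_GAP)) {
    GstEvent *gap =
        gst_event_new_gap (GST_BUFFER_PTS (buf), GST_BUFFER_DURATION (buf));

    gst_buffer_unref (buf);
    return gst_pad_push_event (video_pad, gap) ? GST_FLOW_OK : GST_FLOW_ERROR;
  }

  return gst_pad_push (video_pad, buf);
}
```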
Mmm, yes, I was thinking that my approach might mess up audio. Out of interest, from an architecture perspective, why was the cefdemux element required? I.e. why doesn't the cefsrc just expose an audio pad as well? Is it standard to stream audio packets on a video/raw pad and then demux?
> I.e. why doesn't the cefsrc just expose an audio pad as well? Is it standard to stream audio packets on a video/raw pad and then demux?
No, it is not standard but a workaround for the fact that GstBaseSrc is designed to expose a single Always source pad.
The alternative solution is a wrapper bin, with one source per output stream and a shared context (in this case the CEF browser), but at the time this was implemented CEF had no support for audio capture, so it was easier to retrofit a demuxer onto the initial implementation :)
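For anyone reading along, the demuxer arrangement looks roughly like this in use (URL, caps and sink elements here are just placeholders, not a canonical pipeline):

```cpp
#include <gst/gst.h>

/* Rough usage sketch: cefsrc pushes video buffers with audio packets
 * attached, and cefdemux splits them into separate video and audio
 * branches downstream. */
int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  GError *err = NULL;
  GstElement *pipeline = gst_parse_launch (
      "cefsrc url=https://example.com ! "
      "video/x-raw,width=1280,height=720,framerate=30/1 ! "
      "cefdemux name=d "
      "d. ! queue ! videoconvert ! autovideosink "
      "d. ! queue ! audioconvert ! audioresample ! autoaudiosink",
      &err);
  if (pipeline == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", err->message);
    g_clear_error (&err);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Run until error or EOS. */
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      (GstMessageType) (GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

  if (msg != NULL)
    gst_message_unref (msg);
  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
```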
I'm looking to add a vsync type of functionality to the cefsrc element that would only push buffers when a fresh one is painted in the RenderHandler.OnPaint method. Ultimately I want to be able to record webgl applications that may have a variable frame rate into a constant frame rate video. In other words, I want one frame in my mp4 file per requestAnimationFrame in js land.

Currently it's clear that (barring some initial paints), there is 1 OnPaint call per animation frame.

I can sort of get there by controlling the duration and pts time in gst_cef_src_create, but I was wondering if you had a better idea. Is the answer something to do with making the element non-live for this vsync use case?