centricular / gstcefsrc

A simple GStreamer wrapper around the Chromium Embedded Framework

Ideas on making src element only generate frames when a fresh buffer is created #84

Open · aiden-jeffrey opened 4 months ago

aiden-jeffrey commented 4 months ago

I'm looking to add vsync-type functionality to the cefsrc element so that it only pushes buffers when a fresh one is painted in the RenderHandler.OnPaint method. Ultimately I want to be able to record WebGL applications that may have a variable frame rate into a constant-frame-rate video. In other words, I want one frame in my MP4 file per requestAnimationFrame in JS land.

Currently it's clear that (barring some initial paints) there is one OnPaint call per animation frame.

I can sort of get there by controlling the duration and PTS in gst_cef_src_create, but I was wondering if you had a better idea. Is the answer something to do with making the element non-live for this vsync use case?
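For context, the kind of timestamping being described here usually looks something like the following in a constant-rate source's create vfunc. This is only a minimal sketch in plain GStreamer terms, not the element's actual code; n_frames, fps_n and fps_d are placeholders for whatever state the element keeps:

```c
#include <gst/gst.h>

/* Sketch only: stamp a buffer for frame number n_frames at a fixed
 * fps_n/fps_d framerate, the way a constant-rate source typically does. */
static void
stamp_buffer (GstBuffer * buf, guint64 n_frames, gint fps_n, gint fps_d)
{
  /* PTS = n_frames * (fps_d / fps_n) seconds */
  GST_BUFFER_PTS (buf) =
      gst_util_uint64_scale (n_frames, fps_d * GST_SECOND, fps_n);
  /* Each frame lasts one frame interval */
  GST_BUFFER_DURATION (buf) =
      gst_util_uint64_scale (GST_SECOND, fps_d, fps_n);
}
```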

MathieuDuponchelle commented 4 months ago

Would a meta to tag "original" video frames, as opposed to copied / made-up ones, be enough for you? You could then discard things as you see fit. Another option of course would be a property that causes the source to output the repeated buffers marked as gap buffers; in the demuxer those would then be transformed into straight gap events, with the audio buffers still being demuxed as normal.
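To illustrate the "discard things as you see fit" side of either suggestion, a downstream pad probe could drop the non-original buffers. This is a hedged sketch that assumes the hypothetical property discussed above exists and marks repeated frames with GST_BUFFER_FLAG_GAP; a custom meta would be checked in the same place instead of the flag:

```c
#include <gst/gst.h>

/* Drop buffers flagged as gaps (i.e. repeated frames), keeping only
 * freshly painted ones. Assumes repeats carry GST_BUFFER_FLAG_GAP. */
static GstPadProbeReturn
drop_repeated_frames (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);

  if (GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_GAP))
    return GST_PAD_PROBE_DROP;

  return GST_PAD_PROBE_OK;
}

/* Attach to the relevant source pad with:
 * gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BUFFER,
 *     drop_repeated_frames, NULL, NULL);
 */
```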

aiden-jeffrey commented 4 months ago

Mmm, yes I was thinking that my approach might mess up audio. Out of interest, from an architecture perspective, why was the cefmux element required? I.e. why doesn't the cefsrc just expose an audio pad as well? Is it standard to stream audio packets on a video/raw pad and then demux?

MathieuDuponchelle commented 4 months ago

I.e. why doesn't the cefsrc just expose an audio pad as well? Is it standard to stream audio packets on a video/raw pad and then demux?

No, it is not standard; it is a workaround for the fact that GstBaseSrc is designed to expose a single "always" source pad.

The alternative solution is a wrapper bin, with one source per output stream and a shared context (in this case the CEF browser), but at the time this was implemented CEF had no support for audio capture, and it was then easier to retrofit a demuxer onto the initial implementation :)
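For readers unfamiliar with the wrapper-bin pattern being contrasted here, a rough sketch in generic GStreamer terms looks like the following. Stock test sources stand in for hypothetical CEF-backed sources sharing one browser context; this is not how gstcefsrc is actually structured:

```c
#include <gst/gst.h>

/* Illustrative only: a bin wrapping one source element per output stream,
 * exposing them through ghost pads named "video" and "audio". */
static GstElement *
make_wrapper_bin (void)
{
  GstElement *bin = gst_bin_new ("wrapper");
  GstElement *vsrc = gst_element_factory_make ("videotestsrc", "videosrc");
  GstElement *asrc = gst_element_factory_make ("audiotestsrc", "audiosrc");
  GstPad *pad;

  gst_bin_add_many (GST_BIN (bin), vsrc, asrc, NULL);

  pad = gst_element_get_static_pad (vsrc, "src");
  gst_element_add_pad (bin, gst_ghost_pad_new ("video", pad));
  gst_object_unref (pad);

  pad = gst_element_get_static_pad (asrc, "src");
  gst_element_add_pad (bin, gst_ghost_pad_new ("audio", pad));
  gst_object_unref (pad);

  return bin;
}
```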