nannou-org / nannou

A Creative Coding Framework for Rust.
https://nannou.cc/

Passing frames into a video encoder #504

Open · alexmorley opened this issue 4 years ago

alexmorley commented 4 years ago

Hi :wave:, awesome project.

I am a total newbie to graphics rendering and was wondering what the nicest path would be for passing the frames rendered in the window into a video encoder so they can be recorded. I've seen the capture_frame part of the window API; is that the way to go?

Currently I'm just using OBS to record the window, but I can't do that and render at full speed.

TIA! :)

mitchmindtree commented 4 years ago

Thanks @alexmorley!

Yes exactly, capture_frame is your best bet. See examples/draw/draw_capture.rs for a demonstration of how to use it to record an image sequence. Once you have your image sequence, I'd recommend using ffmpeg to convert it from a sequence to whichever format you desire.

Keep in mind that if capturing an animation, you might like to use the current frame number as your source of time rather than app.time for a more precise result, e.g. let t = frame.nth() as f32 / 60.0; for a smooth 60 FPS sequence.
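In code, a minimal capture sketch along those lines might look something like this (adapted loosely from draw_capture.rs; exact signatures can differ between nannou versions, and the frames directory and zero-padded naming are just placeholder choices):

    use nannou::prelude::*;

    fn main() {
        nannou::sketch(view).run();
    }

    fn view(app: &App, frame: Frame) {
        // Drive the animation from the frame number rather than `app.time`
        // so the captured sequence is an exact 60 FPS.
        let t = frame.nth() as f32 / 60.0;

        let draw = app.draw();
        draw.background().color(BLACK);
        draw.ellipse().x(t.sin() * 200.0).color(WHITE);
        draw.to_frame(app, &frame).unwrap();

        // Queue an asynchronous read-back of this frame and save it as a
        // zero-padded PNG, e.g. `frames/00042.png`.
        // (Create the `frames` directory beforehand if your nannou version
        // does not do so for you.)
        let path = app
            .project_path()
            .expect("failed to locate project path")
            .join("frames")
            .join(format!("{:05}", frame.nth()))
            .with_extension("png");
        app.main_window().capture_frame(path);
    }

Once the run has finished, ffmpeg can stitch the numbered PNGs into a video (see the call-out sketch below).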

Hmmm it would be nice if we had a tutorial for this in the guide, and maybe an example of automating the process by calling out to ffmpeg.
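As a very rough illustration of that automation (a hypothetical helper, assuming ffmpeg is on the PATH and the frames were saved as zero-padded PNGs as in the sketch above):

    use std::path::Path;
    use std::process::Command;

    // Hypothetical helper: shells out to ffmpeg to stitch `00000.png`,
    // `00001.png`, ... in `frames_dir` into an mp4 at the given frame rate.
    fn encode_sequence(frames_dir: &Path, fps: u32, output: &str) -> std::io::Result<()> {
        let status = Command::new("ffmpeg")
            .current_dir(frames_dir)
            .arg("-framerate")
            .arg(fps.to_string())
            .args(&["-i", "%05d.png", "-pix_fmt", "yuv420p"])
            .arg(output)
            .status()?;
        if !status.success() {
            eprintln!("ffmpeg exited with {}", status);
        }
        Ok(())
    }

Something like that could be called once from the app's exit handler, for example.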

Anyway, hope this helps!

alexmorley commented 4 years ago

> Keep in mind that if capturing an animation, you might like to use the current frame number as your source of time rather than app.time for a more precise result, e.g. let t = frame.nth() as f32 / 60.0; for a smooth 60 FPS sequence.

Yup. That was the plan!

> Hmmm it would be nice if we had a tutorial for this in the guide, and maybe an example of automating the process by calling out to ffmpeg.

I'll have a go and if I have some success I'll make a PR.

OvermindDL1 commented 2 years ago

I'm attempting to encode the frames and stream them to another location, but I can't figure out how to get at the resulting ImageBuffer of the captured frame; it seems to only be dumped into a file, as per:

        // If the user did specify capturing the frame, submit the asynchronous read.
        if let Some((path, snapshot)) = snapshot_capture {
            let result = snapshot.read(move |result| match result {
                // TODO: Log errors, don't print to stderr.
                Err(e) => eprintln!("failed to async read captured frame: {:?}", e),
                Ok(image) => {
                    let image = image.to_owned();
                    if let Err(e) = image.save(&path) {
                        // TODO: Log errors, don't print to stderr.
                        eprintln!(
                            "failed to save captured frame to \"{}\": {}",
                            path.display(),
                            e
                        );
                    }
                }
            });
            if let Err(wgpu::TextureCapturerAwaitWorkerTimeout(_)) = result {
                // TODO: Log errors, don't print to stderr.
                eprintln!("timed out while waiting for a worker thread to capture the frame");
            }
        }

This is located in frame/mod.rs. It seems to only consume the captured image in order to save it to a file, instead of, say, handing it to me to do with as I wish (including writing it to a file if I want, but in my case I want to encode it and stream it out). Saving it as an image, compressing it, writing it to the filesystem, then reading it back and decoding it just to encode it into yet another format for streaming seems quite a bit more costly than being passed the owned image directly (say via a callback or so). :-)

OvermindDL1 commented 2 years ago

Actually, that looks simple enough: I traced the code path the path is passed through, and it could easily be replaced with an FnOnce. There could even be a function that works like capture_frame does now (capture_frame_image?) that takes a path and just passes through a default FnOnce which writes the image out to disk.
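To sketch what I mean (purely hypothetical, not the current API), the capture branch quoted above could hand the owned image to a stored closure instead of a path:

    // Hypothetical variant of the snippet above: `snapshot_capture` carries a
    // user-supplied `FnOnce(image)` rather than a `PathBuf`.
    if let Some((callback, snapshot)) = snapshot_capture {
        let result = snapshot.read(move |result| match result {
            Err(e) => eprintln!("failed to async read captured frame: {:?}", e),
            // Hand the owned image to the user's closure; the current
            // save-to-disk behaviour becomes just the default closure.
            Ok(image) => callback(image.to_owned()),
        });
        if let Err(wgpu::TextureCapturerAwaitWorkerTimeout(_)) = result {
            eprintln!("timed out while waiting for a worker thread to capture the frame");
        }
    }

capture_frame(path) would then keep its current behaviour by passing a default closure that just calls image.save(&path).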

Would a PR be accepted? Should the existing function keep its current behaviour while a new function (capture_frame_fn or so?) is added, or should the existing one be renamed so it is more accurate?