3Dickulus / FragM

Derived from https://github.com/Syntopia/Fragmentarium/
GNU General Public License v3.0

pipe rendered data directly to video encoder #115

Closed claudeha closed 4 years ago

claudeha commented 4 years ago

Is your feature request related to a problem? Please describe.
Intermediate files on disk when preparing final video files are huge (BMP) and/or slow (PNG). Would be nice to be able to pipe the lossless data directly to ffmpeg or similar.

Describe the solution you'd like
An option in the animation render dialog to choose an image sequence (as before), or alternatively a PPM stream piped to a custom ffmpeg command, by default populated with something like: ffmpeg -framerate 25 -f image2pipe -i - -pix_fmt yuv420p -profile:v high -level:v 4.1 -crf:v 20 -movflags +faststart output.mp4. The ffmpeg process could be launched via popen() on POSIX systems; not sure about Windows. Not sure how to capture the output of the command either, so it may just go to the terminal FragM was started from.
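A minimal sketch of the proposed pipe approach, in Python for illustration (FragM itself is C++/Qt, where QProcess or popen() would play the role of subprocess here). The `ppm_frame` and `stream_frames` names are hypothetical; the binary PPM (P6) header format and the ffmpeg image2pipe usage are real.

```python
import subprocess

def ppm_frame(width, height, rgb_bytes):
    """Wrap raw 24-bit RGB pixel data in a binary PPM (P6) header."""
    assert len(rgb_bytes) == width * height * 3
    return b"P6\n%d %d\n255\n" % (width, height) + rgb_bytes

def stream_frames(frames, width, height, command):
    """Pipe PPM frames into an encoder's stdin and return its exit code.

    With ffmpeg, `command` would be something like:
      ["ffmpeg", "-framerate", "25", "-f", "image2pipe", "-i", "-", "out.mp4"]
    """
    proc = subprocess.Popen(command, stdin=subprocess.PIPE)
    for rgb in frames:
        proc.stdin.write(ppm_frame(width, height, rgb))
    proc.stdin.close()   # signals end-of-stream; the encoder finalizes the file
    return proc.wait()
```

No intermediate files touch the disk; the encoder's own stdout/stderr go to whatever terminal the parent was started from, matching the caveat above.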

Describe alternatives you've considered
Writing a PPM stream to a FIFO on the filesystem created by mkfifo. This would block FragM if no process is reading from it, which could be confusing. It also means more manual process-juggling.
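The blocking behavior mentioned above is easy to demonstrate. In this sketch a thread stands in for the consumer (in practice it would be something like `ffmpeg -f image2pipe -i /path/to/fifo out.mp4`); the `demo_fifo` name is made up for illustration:

```python
import os
import tempfile
import threading

def demo_fifo():
    """Write one PPM frame through a named pipe, as FragM might to feed ffmpeg.

    open(path, "wb") blocks until some reader opens the other end of the
    FIFO -- exactly the confusing stall described above if no encoder is
    running. Here the reader is just a thread that collects the bytes.
    """
    path = os.path.join(tempfile.mkdtemp(), "frames.ppm")
    os.mkfifo(path)
    received = []

    def reader():
        with open(path, "rb") as f:
            received.append(f.read())

    t = threading.Thread(target=reader)
    t.start()
    with open(path, "wb") as f:   # blocks here until the reader opens the FIFO
        f.write(b"P6\n1 1\n255\n\x00\x00\x00")
    t.join()
    os.remove(path)
    return received[0]
```

The FIFO also holds no backlog on disk, so a crashed or slow reader stalls the renderer rather than filling the filesystem.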

Additional context Currently working with a cheap-to-render frag, the disk is the big bottleneck for me at the moment.

3Dickulus commented 4 years ago

Again, this is something beyond FragM. You could have a program that watches a folder for the next file and deletes frames as they get used.
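The watcher idea could look something like the sketch below: it consumes sequentially numbered frames in order, hands each one to a callback (standing in for feeding the encoder), and deletes it to cap disk usage. The function name, filename pattern, and timeout policy are all assumptions for illustration.

```python
import os
import time

def consume_frames(folder, handle, start=1, timeout=5.0,
                   pattern="frame-%05d.png"):
    """Watch `folder` for numbered frames; process and delete each in order.

    `handle` receives the raw bytes of each frame (e.g. to pipe into an
    encoder). Gives up after `timeout` seconds with no new frame, and
    returns how many frames were consumed.
    """
    index, deadline = start, time.time() + timeout
    while time.time() < deadline:
        path = os.path.join(folder, pattern % index)
        if os.path.exists(path):
            with open(path, "rb") as f:
                handle(f.read())
            os.remove(path)                    # reclaim disk space immediately
            index += 1
            deadline = time.time() + timeout   # reset the idle timer
        else:
            time.sleep(0.05)                   # poll until the renderer catches up
    return index - start
```

One caveat with this approach: a frame may be deleted while the renderer is still writing it unless the watcher waits for the *next* frame to appear (or for the file size to stabilize) before consuming the current one.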

Rendering to a stream and something goes wrong, oops, need to adjust a setting or something, gotta restart the whole render? No thanks. If needed, just render a low-res demo reel before committing to the big one.

Yes, the files do add up fast; that's why we have PNG, EXR and other formats with reasonable compression. If FragM is being used in a scenario that requires rendering thousands of frames, whoever is doing that should be familiar with the amount of space required to hold a raw copy, a working copy, and a copy outside of the working machine for redundancy (that's the way the big boys do it). These images would likely be getting blended or merged into other/live scenes, so the best quality is needed; once compressed via MPEG etc. the images are no longer viable working stock...

If images are rendered fast enough, just go fullscreen and capture the playback?

I really want to put effort into getting v3.0 ready: clean up frags, make sure everything works properly as-is under GL4, before dreaming up new features. I'm sure GL4+ is going to present some new options ... PCL (point cloud library) and other stuff, but a really solid base will be appreciated later on, and right now there is too much that has not been tested thoroughly.

claudeha commented 4 years ago

Never mind, I copy/pasted my frag into some rendering code I already had around (which I optimized for streaming data asynchronously back from the GPU, pipelined with a few frames of delay), and that combined with FFmpeg encoding is around 9x faster than FragM writing BMPs.
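The pipelining mentioned here can't be shown runnably without a GL context, but the scheduling idea is independent of OpenGL: issue the readback for frame N immediately (in real GL, glReadPixels into a pixel buffer object, which returns without waiting), and only collect the result a few frames later, when the GPU-to-CPU transfer has had time to complete. A small simulation of that delay queue, with hypothetical names:

```python
from collections import deque

def pipelined_readback(render, read_back, n_frames, depth=3):
    """Simulate asynchronous GPU readback with `depth` frames in flight.

    `render(i)` stands in for drawing frame i and issuing a non-blocking
    readback (e.g. glReadPixels into PBO i % depth); `read_back(i)`
    stands in for mapping the PBO once the transfer is done. Collecting
    results `depth` frames late means the CPU never stalls on the GPU.
    """
    in_flight, results = deque(), []
    for i in range(n_frames):
        render(i)                    # issue: cheap, returns immediately
        in_flight.append(i)
        if len(in_flight) > depth:
            results.append(read_back(in_flight.popleft()))  # map oldest PBO
    while in_flight:                 # drain the pipeline after the last frame
        results.append(read_back(in_flight.popleft()))
    return results
```

The cost is `depth` frames of latency and `depth` buffers of memory, which is irrelevant for offline encoding and is presumably where the reported speedup over synchronous BMP writes comes from.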

My GPU is not quite fast enough for realtime rendering at full-HD (achieves 15fps or so).

3Dickulus commented 4 years ago

FragM has a lot of overhead to manage all of its features; ideally one would develop a fragment in FragM and, as you've done, render with some custom code if speed is essential. Personally I would never ever use BMP format, as it's bulky and MS-centric.

edit: this is something that might be held over for v3 stable