AlexanderSchuetz97 opened 4 months ago
Thank you for the recommendation! This seems to be somewhat related to #318. At least the implementation should get way easier.
Would a tcp/udp/unix socket also be acceptable? Since ffmpeg has only one stdin, using a pipe would limit us to one input of this kind, while a TCP socket would allow more flexibility.
Example:
new FFmpegBuilder().addTcpInput(myInput1).done().addTcpInput(myInput2).done()...
Could produce something like:
-i tcp://127.0.0.1:2000?listen -i tcp://127.0.0.1:2001?listen
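For illustration, the expansion from ports to CLI arguments could be sketched like this (a stand-in helper, not part of the library; ports are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

/** Stand-in sketch: expands hypothetical addTcpInput(...) calls into the
 *  ffmpeg CLI arguments shown above (ports are illustrative). */
class TcpInputArgs {
    static List<String> build(int... ports) {
        List<String> args = new ArrayList<>();
        for (int port : ports) {
            args.add("-i");
            args.add("tcp://127.0.0.1:" + port + "?listen");
        }
        return args;
    }
}
```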
Unfortunately, the MP4 demuxer of current head/master ffmpeg does not demux some MP4s from a unix or TCP stream. It is, however, happy to decode them from pipe:0. I have tried both unix and TCP streams; neither worked, only pipe:0 did.
Another problem is that I cannot really use TCP. Customers will ask why I need the port and what is sent over it, and they will complain that it is not encrypted (even if I only bind to localhost).
This leaves me with unix sockets, which don't work, and pipe:0, which only works via ProcessBuilder.
Interesting. Are you aware that you won't be able to use the progress API, as it also opens a temporary TCP port?
If we add this, we must ensure the OS buffers are not overfilled, specifically FFMPEG's stdout. Currently, we block by reading stdout; with this, we would also be concerned about writing stdin at the same time. It may be time to move the stdout/stdin handling onto a separate thread.
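For illustration, a generic pump along these lines (a hypothetical helper, not part of the library) would keep stdin writes off the stdout-reading thread:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

/** Copies an InputStream to an OutputStream on a background thread, so a
 *  blocked write on ffmpeg's stdin cannot stall the thread that drains
 *  ffmpeg's stdout. Hypothetical sketch, not the library's API. */
final class StreamPump implements Runnable {
    private final InputStream in;
    private final OutputStream out;

    StreamPump(InputStream in, OutputStream out) {
        this.in = in;
        this.out = out;
    }

    @Override
    public void run() {
        byte[] buf = new byte[64 * 1024];
        try {
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            out.flush();
        } catch (IOException e) {
            // A broken pipe here usually means ffmpeg exited first.
        } finally {
            try { out.close(); } catch (IOException ignored) { }
        }
    }
}
```

Started with `new Thread(new StreamPump(myInput, process.getOutputStream())).start()`, the main thread can keep blocking on stdout as it does today.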
Position in Code.
I do not use the progress API; it's not needed for my use case.
And yes, pumping the data must be done on a separate thread. I have not implemented additional synchronization in my ProcessBuilder variant. I have not observed any issues, but I cannot rule out that my source storage is simply not fast enough to overfill ffmpeg's input buffer.
What I have observed, though, is that ffmpeg is much more sensitive to the output buffer being full. Since I primarily output to unix sockets, it was necessary to set the receive buffer to a very large size and pump the data from the unix socket into a cache before streaming it to the end client for display. I tried streaming directly to the end client, but the client consumed data too slowly, so the output buffer filled up, which caused broken-pipe errors in ffmpeg.
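For illustration, the unix-socket drain described above might look roughly like this (Java 16+ unix-domain channels; class name, paths, and buffer sizes are illustrative, not from the library):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.net.StandardProtocolFamily;
import java.net.StandardSocketOptions;
import java.net.UnixDomainSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Files;
import java.nio.file.Path;

/** Accepts ffmpeg's unix-socket output with an enlarged receive buffer and
 *  drains it eagerly into a cache, so ffmpeg does not block on a full
 *  socket buffer while the end client consumes slowly. */
final class UnixOutputDrain {
    static byte[] drain(Path socketPath, int rcvBufBytes) throws IOException {
        Files.deleteIfExists(socketPath);
        try (ServerSocketChannel server =
                 ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
            server.bind(UnixDomainSocketAddress.of(socketPath));
            try (SocketChannel ch = server.accept()) {
                // Ask the OS for a large receive buffer (kernel may clamp it).
                ch.setOption(StandardSocketOptions.SO_RCVBUF, rcvBufBytes);
                ByteArrayOutputStream cache = new ByteArrayOutputStream();
                ByteBuffer buf = ByteBuffer.allocate(64 * 1024);
                while (ch.read(buf) != -1) {
                    buf.flip();
                    cache.write(buf.array(), 0, buf.limit());
                    buf.clear();
                }
                return cache.toByteArray();
            }
        } finally {
            Files.deleteIfExists(socketPath);
        }
    }
}
```

In a real setup the cache would be streamed onward to the client at its own pace rather than returned as one array.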
Speed shouldn't be too big an issue, depending on how you load the data from storage: reading slowly would simply mean the buffer on the other side never fills. However, that depends on many factors; you should test it in your environment.
Small update: I got the following code working:
InputStream in = Files.newInputStream(Paths.get(Samples.big_buck_bunny_720p_1mb));
FFmpeg ffmpeg = new FFmpeg();
ffmpeg.setProcessInputStream(in);
FFmpegBuilder builder = ffmpeg.builder().addInput("pipe:").addOutput(Samples.output_mp4).done();
ffmpeg.run(builder);
I'm not quite ready to merge, but the code is already available on my fork.
This looks good to me. Do you call close() on the InputStream?
It's very important that you make clear who should call close() on the InputStream. My personal preference would be for the caller to close the stream after run(builder) returns; that would allow a try-with-resources around ffmpeg.run(builder).
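To illustrate the ownership contract, here is a minimal stand-in (not the library's actual class) where run() consumes the stream but never closes it, leaving that to the caller's try-with-resources:

```java
import java.io.IOException;
import java.io.InputStream;

/** Minimal stand-in for the proposed API, only to illustrate the contract:
 *  run() consumes the stream but never closes it; the caller does, ideally
 *  via try-with-resources after run() returns. */
class FFmpegLike {
    private InputStream processInput;

    void setProcessInputStream(InputStream in) {
        this.processInput = in;
    }

    void run() throws IOException {
        // Real code would pump this into the ffmpeg process's stdin.
        byte[] buf = new byte[8192];
        while (processInput.read(buf) != -1) {
            // stand-in only: discard the data
        }
    }
}
```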
I don't think I have the time to build a branch and import it into my package-management tool; it's quite complicated and time-consuming. I would be willing to try it out if you provide it on a Maven repository (not Maven Central, but an actual Maven repo with a public URL, as that is much easier to import; many larger projects have a snapshot repo for such purposes).
The stream is not closed. In most cases, a try-with-resources should be fine, but I didn't want to limit use cases. I'll make sure to include it in the documentation.
I'm currently on vacation (flying back tomorrow evening). I'll see what I can do about a snapshot repository. Currently I'm thinking of using jitpack, but I'll see if there are better/other options.
Hello, I need to transcode live video: essentially, reading from an InputStream and piping the output into an OutputStream is what I need to do. Unfortunately, writing the file to disk is not possible for my use case. I tried using two unix sockets, which worked for some formats, but some input formats did not work that way; notably, the MP4 demuxer didn't like demuxing from a unix socket. It does, however, work when demuxing from stdin.
Sadly your CLI wrapper has no support for demuxing from stdin, so I have to rely on ProcessBuilder directly for this one task.
I wouldn't mind developing this as a feature and making a PR to this repo if you are interested. My idea would be to add a function to FFmpegBuilder that accepts a supplier of an InputStream. A bit like this:
At some appropriate location in the library I would add this code or something similar. (This may possibly require starting a "background" thread, and I may have to make adjustments for Java 8 compatibility.)
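A hedged sketch of what that glue could look like (class and method names are hypothetical; the real integration point in the library may differ):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.function.Supplier;

/** Hypothetical sketch: once the ffmpeg Process has been started, pump the
 *  supplied InputStream into its stdin on a background thread. */
class StdinPump {
    static Thread start(Supplier<InputStream> supplier, Process process) {
        Thread t = new Thread(() -> {
            try (InputStream in = supplier.get();
                 OutputStream stdin = process.getOutputStream()) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    stdin.write(buf, 0, n);
                }
                // closing stdin signals EOF to ffmpeg's pipe:0 input
            } catch (IOException ignored) {
                // ffmpeg may exit before the stream is fully consumed
            }
        }, "ffmpeg-stdin-pump");
        t.setDaemon(true);
        t.start();
        return t;
    }
}
```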
After these changes, someone could then call builder.addInput("pipe:0") to pipe the input from the InputStream. For my use case, the output would still be a unix socket, since my output muxer can write to a unix socket, while the input would come from the InputStream.
Tell me if you have any interest in this feature and I will start on it and send you a PR shortly.