Closed QuantumKitty closed 1 year ago
Does VideoWriterNV encoding work in multiple processes?
ffmpegcv calls ffmpeg to run the video encoding. ffmpeg drives NVENC with multiple processes, but python-ffmpegcv itself runs in a single process.
In general, ffmpegcv doesn't use the full computing power that plain ffmpeg can.
ffmpegcv converts every frame to RGB or BGR, which causes some performance degradation. It exchanges images with ffmpeg through stdout (a PIPE), which works but is not efficient.

Is there any built-in function in VideoWriterNV for parallel encoding?
No
I want to write multiple videos at the same time using VideoWriterNV.
Yes, you can create multiple VideoWriterNV instances. One VideoWriterNV doesn't occupy all of the NVENC capacity, so you can run several instances (~5), up to the "Max # of concurrent sessions" of your GPU: https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new
You can use nvtop on Linux to watch the NVENC usage (%). Alternatively, you can use plain ffmpeg commands to convert the videos. You can also try ffmpegcv.noblock to speed up your multiple encodings (it may take a bit more CPU). Tell me whether that works for you.
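A minimal sketch of the multi-writer setup described above. The `ffmpegcv.noblock(ffmpegcv.VideoWriterNV, ...)` wrapper and the OpenCV-style `write()`/`release()` calls follow ffmpegcv's documented usage; the file names, `"hevc"` codec string, frame size, and frame count are placeholders for this example:

```python
def encode_stream(writer, frames):
    """Push every frame into an OpenCV-style writer, then release it.
    Returns the number of frames written."""
    count = 0
    for frame in frames:
        writer.write(frame)
        count += 1
    writer.release()
    return count

def main():
    import numpy as np
    import ffmpegcv  # requires ffmpeg on PATH and an NVENC-capable GPU

    # One VideoWriterNV per output file. noblock() moves the PIPE I/O into
    # a helper process so the writers don't block each other.
    writers = [
        ffmpegcv.noblock(ffmpegcv.VideoWriterNV, f"cam{i}.mp4", "hevc", 200)
        for i in range(8)
    ]
    # Dummy BGR frames; replace with the real camera frames.
    frames = [np.zeros((800, 1280, 3), dtype=np.uint8) for _ in range(30)]
    for w in writers:
        encode_stream(w, frames)

# Call main() on a machine with an NVENC GPU to run the actual encoding.
```

Keep the number of simultaneous writers within the concurrent-session limit for your GPU in the support matrix linked above.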
In my case, I have 8 high-speed cameras working at 200fps, processed on one single server. I can retrieve each frame's Mat data (YUV420) in realtime from the cameras, and I am seeking an efficient way to encode all those frames into 8 HEVC videos respectively, without any frame loss or buffer build-up. As you can see, this is challenging with CPU encoding. I therefore want to take advantage of my RTX3090's NVENC. Umm...yeah, OpenCV does not support that, which leads me here.
You suggest using ffmpeg commands to convert videos. I took a brief look at ffmpeg, and it looks like it only accepts a sequence of .jpeg images, rather than YUV420 Mat data in realtime?
In fact ffmpegcv.VideoWriterNV is exactly what I want, but you mention that ffmpegcv passes images to ffmpeg through a stdout PIPE, which is not efficient.
So my question is:
Is the performance degradation in ffmpegcv.VideoWriterNV unavoidable in my case?
Or put another way, is ffmpegcv.VideoWriterNV already the most efficient method in my case?
If yes, I am going to directly use ffmpegcv.VideoWriterNV.
Otherwise I might have to look deep into FFmpeg.
Thanks!
It's better to use OBS Studio to stack your cameras. OBS is the most efficient option. I've recorded 9 cameras at 120fps, 800x1280, stacked into one whole video. Then I use ffmpegcv to crop the ROI of each view out of the video file. In your case, you could also "Stream" from OBS to your server.
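A sketch of that crop step, assuming the `crop_xywh` argument ffmpegcv's readers accept; the grid layout (3x3 tiles of 800x1280) and file name are illustrative only:

```python
def tile_xywh(index, cols, tile_w, tile_h):
    """Return the (x, y, w, h) rectangle of tile `index` in a grid that
    stacks tiles left-to-right, top-to-bottom."""
    row, col = divmod(index, cols)
    return (col * tile_w, row * tile_h, tile_w, tile_h)

def open_view(stacked_file, index, cols=3, tile_w=800, tile_h=1280):
    import ffmpegcv  # requires ffmpeg on PATH
    # Decode only the ROI of one camera view from the stacked recording.
    return ffmpegcv.VideoCapture(
        stacked_file, crop_xywh=tile_xywh(index, cols, tile_w, tile_h)
    )
```

For example, `open_view("stacked.mp4", 4)` would read the middle view of a 3x3 stack.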
Did it work for you?
Nope. I'm using machine vision cameras with hardware triggers. The sources cannot be recognized in OBS. I also have some customized functions, like realtime DNN inference (at a lower fps) alongside video encoding at 200fps, which I guess can't be achieved in OBS. But thanks for recommending it.
A tip may be helpful. The raw pix_fmt yuv420p is much more efficient for video encoding & decoding. The rgb24 and bgr24 formats waste time transcoding to/from yuv420p. ffmpegcv.VideoCapture supports yuv420p reading, but VideoWriter doesn't yet. Your DNN may also need rgb24. I hope you can find the way out. Best wishes.
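To illustrate the trade-off, here is a sketch that reads yuv420p (cheap) and converts to RGB only for the frames the DNN actually sees. The `pix_fmt='yuv420p'` argument is the VideoCapture feature mentioned above; iterating the capture, the 200/25 fps split, and the OpenCV conversion are assumptions for this example:

```python
def should_infer(frame_idx, encode_fps=200, infer_fps=25):
    """Handle every frame, but only convert + run the DNN on every
    (encode_fps // infer_fps)-th frame."""
    stride = max(1, encode_fps // infer_fps)
    return frame_idx % stride == 0

def read_loop(video_file):
    import cv2
    import ffmpegcv  # requires ffmpeg on PATH
    # yuv420p reading avoids a yuv<->rgb transcode on every decoded frame.
    cap = ffmpegcv.VideoCapture(video_file, pix_fmt="yuv420p")
    for idx, frame in enumerate(cap):
        if should_infer(idx):
            # Planar I420 -> RGB only when the DNN needs it.
            rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2RGB_I420)
            # ... run DNN inference on rgb here ...
    cap.release()
```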
Thanks!
I want to write multiple videos at the same time using VideoWriterNV. My plan is to use multiprocessing. My concern is whether VideoWriterNV encoding works in multiple processes, since there is only one NVENC per GPU. If yes, will it achieve a proportional consumption time? (i.e. 1+1+1≈3) Plus, is there any built-in function in VideoWriterNV for parallel encoding that is more efficient than multiprocessing? Thanks!