Closed Anacondo closed 4 months ago
Just for context, I ran a quick test on my system (Ryzen 5950X with 64 GB RAM), and this is how long it took to detect the scenes:
- Video 1 (4K, 2h12m): 15 min
- Video 2 (1080p, 1h47m): 4 min
I tested scene detection with a 4K HDR source and with the same video downscaled to 720p: the speed was 1.33x vs 1.32x, so the bottleneck is CPU decoding speed. One possibility is to decode on the GPU, but in my experience that is unreliable; it fails with some encodes, and different GPU variants decode different formats. However, I can still speed up detection by using more than one process: on my PC decoding uses 25% CPU or less, so we can run at least three processes. I'll do it when I have time, because I need to change the worst code in the program xd
Haha, ok mate, no problem. Thanks a lot! Is there any way I can help? I can try to investigate the ffmpeg command line to add more threads if you point me to the place in the code where it's used.
Sure, but I don't think ffmpeg has that option. I added multithreading to the scene detection when the video is large, with a maximum of 3 ffmpeg processes. Check if this helps.
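For anyone following along, a minimal sketch of what a "split the video into slices and scan up to 3 of them concurrently" approach could look like. This is not the actual commit, just an illustration: the threshold value, slice layout, and function names are assumptions, and it assumes `ffmpeg` is on `PATH`.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

MAX_PROCS = 3  # decoding used ~25% CPU per process, so ~3 fit comfortably


def build_cmd(path, start, duration, threshold=0.4):
    """Build an ffmpeg command that scene-scans one time slice of the video.

    select='gt(scene,T)' keeps only frames whose scene-change score exceeds T,
    and showinfo logs their timestamps (pts_time) to stderr.
    """
    return [
        "ffmpeg", "-hide_banner", "-nostats",
        "-ss", str(start), "-t", str(duration),
        "-i", path,
        "-vf", f"select='gt(scene,{threshold})',showinfo",
        "-an", "-f", "null", "-",
    ]


def detect_segment(path, start, duration):
    """Run one ffmpeg process and collect the selected-frame log lines."""
    result = subprocess.run(build_cmd(path, start, duration),
                            capture_output=True, text=True)
    # showinfo writes one line per selected frame on stderr
    return [line for line in result.stderr.splitlines() if "pts_time" in line]


def detect_scenes(path, total_seconds):
    """Scan MAX_PROCS time slices of the video in parallel."""
    seg = total_seconds / MAX_PROCS
    with ThreadPoolExecutor(max_workers=MAX_PROCS) as pool:
        futures = [pool.submit(detect_segment, path, i * seg, seg)
                   for i in range(MAX_PROCS)]
        return [line for f in futures for line in f.result()]
```

One caveat with any slice-based split: a cut that falls exactly on a slice boundary can be missed or double-counted, so a real implementation would want a small overlap or a merge pass on the boundaries.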
Hello,
I was wondering if there would be a way to achieve this through ffmpeg, as the process takes a very long time with 4K content and above. I've seen other tools like av1an downscale the video to 720p or even 480p for the scene detection, since you don't really need full quality for it, and it was much faster.
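For reference, the av1an-style downscale can be expressed as a single ffmpeg filter chain: scale the frames down first, then run the scene filter on the small frames. A hedged sketch (filter values are just examples, not what av1an actually uses):

```python
def downscaled_scene_cmd(path, height=720, threshold=0.4):
    """ffmpeg command that downscales before scoring scene changes.

    scale=-2:<height> shrinks frames (width chosen automatically, kept even)
    before the select filter, so the per-frame comparison is cheap. Note the
    source still has to be fully decoded before scaling, which is why this
    may not help much when CPU decoding itself is the bottleneck.
    """
    vf = f"scale=-2:{height},select='gt(scene,{threshold})',showinfo"
    return ["ffmpeg", "-hide_banner", "-i", path,
            "-vf", vf, "-an", "-f", "null", "-"]
```

This only cuts the cost of the scene comparison, not of decoding, which matches the 1.33x vs 1.32x measurement above.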
Any idea if something similar could be implemented?
Best regards.