Hello Team,
This is a great tool.
I was trying to test it. As you may know, compression standards often use Y4M/YUV files, and the tool fails to load them: the header does not carry the needed information, and ffprobe fails because `nb_frames` is usually `N/A` for Y4M files. The same happens for `.yuv` files. I couldn't figure out how to feed the width and height of the frames via the CLI or a JSON file.

Related question:
If we feed any video, does it use `ffmpeg` (with libavfilter at its heart) to decode it? And do we always use the PyTorch implementation for image-manipulation tasks (e.g. upscaling/downscaling)? In the past, when we tested the `ffmpeg` implementation of YUV->RGB conversions (and vice versa), it did not give accurate colours, so the standard recommendation was to stick to `HDRTools` for YUV-to-RGB conversions. I have not experimented with PyTorch for HDR image manipulation, so I am curious which component is responsible for the conversion here when YUV is fed in.

Sample: https://media.xiph.org/video/aomctc/test_set/hdr2_2k/MeridianTalk_1920x1080_5994_hdr10.y4m
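For reference, a minimal workaround sketch (my own, not this project's code) that derives the missing frame count from the Y4M file's own header plus its size on disk. It assumes planar chroma tags like `C420`/`C420p10` and that the per-frame `FRAME` markers carry no extra parameters, which holds for the common test clips:

```python
import os

def parse_y4m(path):
    """Return (width, height, bitdepth, nb_frames) for a simple Y4M file.

    Assumes fixed-size frames whose FRAME lines carry no parameters.
    """
    with open(path, "rb") as f:
        header = f.readline()  # e.g. b"YUV4MPEG2 W1920 H1080 F60000:1001 Ip A1:1 C420p10\n"
    params = {}
    for tok in header.decode("ascii").strip().split()[1:]:
        params[tok[0]] = tok[1:]  # tag letter -> value, e.g. "W" -> "1920"
    width, height = int(params["W"]), int(params["H"])
    chroma = params.get("C", "420")  # Y4M defaults to 4:2:0 8-bit when C is absent
    bits = 10 if "p10" in chroma else (12 if "p12" in chroma else 8)
    bytes_per_sample = 2 if bits > 8 else 1
    if chroma.startswith("420"):
        samples = width * height * 3 // 2
    elif chroma.startswith("422"):
        samples = width * height * 2
    else:  # 444
        samples = width * height * 3
    frame_bytes = samples * bytes_per_sample + len(b"FRAME\n")
    nb_frames = (os.path.getsize(path) - len(header)) // frame_bytes
    return width, height, bits, nb_frames
```

This sidesteps ffprobe's `N/A` for `nb_frames` entirely, since everything needed is in the stream header and file size.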
I believe the fastest way to resolve this would be to support raw YUV file ingest where users are asked to supply {pixel format, size, bit depth}.
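If it helps, here is a hypothetical sketch of what that ingest path could look like (the function name and signature are my own invention, not an existing API), reading one planar frame with numpy once the user supplies those three parameters:

```python
import numpy as np

def read_yuv_frame(f, width, height, bitdepth=10, subsampling="420"):
    """Read one planar YUV frame from an open binary file; returns (Y, U, V).

    Samples wider than 8 bits are assumed to be stored little-endian
    in 16-bit words, as in ffmpeg's yuv420p10le.
    """
    dtype = np.uint16 if bitdepth > 8 else np.uint8
    if subsampling == "420":
        cw, ch = width // 2, height // 2
    elif subsampling == "422":
        cw, ch = width // 2, height
    else:  # "444"
        cw, ch = width, height
    y = np.fromfile(f, dtype, width * height).reshape(height, width)
    u = np.fromfile(f, dtype, cw * ch).reshape(ch, cw)
    v = np.fromfile(f, dtype, cw * ch).reshape(ch, cw)
    return y, u, v
```

A CLI or JSON config could then expose something like `size`, `pix_fmt`, and `bitdepth` fields and loop this per frame until EOF.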