nagadomi / nunif

Misc; latest version of waifu2x; 2D video to stereo 3D video conversion
MIT License

Correct YUV(YCbCr/YPbPr) colorspace conversion for video processing #164

Closed nagadomi closed 2 months ago

nagadomi commented 3 months ago

Discussed in https://github.com/nagadomi/nunif/discussions/157

Maybe the current implementation uses BT.601 for yuv420p/yuv444p. Since HD MP4 generally uses BT.709, the color conversion may not be correct.

Maybe color_range (pc/tv) is also related to this problem.

This area, including the ffmpeg/pyav implementation, is a nightmare, so I am not sure whether it can be handled correctly.
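
For context, here is a minimal sketch (not the nunif/pyav code) of why this matters: the same 8-bit Y'CbCr sample decodes to different RGB values depending on which matrix (BT.601 vs BT.709) and which color_range (tv/limited vs pc/full) is assumed.

```python
# Minimal sketch, not the nunif implementation: YUV -> RGB with BT.601 vs BT.709
# matrices and tv (limited) vs pc (full) range handling.
import numpy as np

# Y'CbCr -> R'G'B' matrices (Cb/Cr centered at 0).
BT601 = np.array([[1.0,  0.0,       1.402],
                  [1.0, -0.344136, -0.714136],
                  [1.0,  1.772,     0.0]])
BT709 = np.array([[1.0,  0.0,       1.5748],
                  [1.0, -0.187324, -0.468124],
                  [1.0,  1.8556,    0.0]])

def yuv_to_rgb(y, u, v, matrix=BT709, color_range="tv"):
    """Convert one 8-bit Y'CbCr sample to float RGB in [0, 1]."""
    if color_range == "tv":
        # limited range: Y in [16, 235], Cb/Cr in [16, 240]
        y = (y - 16.0) / 219.0
        u = (u - 128.0) / 224.0
        v = (v - 128.0) / 224.0
    else:
        # full range: Y in [0, 255], Cb/Cr in [0, 255]
        y = y / 255.0
        u = (u - 128.0) / 255.0
        v = (v - 128.0) / 255.0
    return np.clip(matrix @ np.array([y, u, v]), 0.0, 1.0)

# The same pixel decodes to visibly different colors under the two assumptions.
print(yuv_to_rgb(81, 90, 240, BT601, "tv"))  # ~ pure red if the video is BT.601
print(yuv_to_rgb(81, 90, 240, BT709, "tv"))  # shifted color if decoded as BT.709
```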

3gMan commented 3 months ago

This is what the double color space conversion does in iw3 when selecting yuv420p. RGB didn't seem much better for banding, but I didn't check actual colors like this. [Screenshot: original on the left; iw3 conversion on the right using yuv420p, the slower preset, CRF 15.]

Besides the fairly drastic color change, there is also more granularity and a less smooth transition.

Using the iw3 dmap in Resolve, the final product comes out virtually identical in color and grading to the original left image. Just giving some data points in case it helps.

nagadomi commented 3 months ago

I think the darker image on the right is due to colorspace and/or color_range (this issue).

Also the boundary between the background and the person on the left side of the screen is visible as sharp edges. However, I think it is due to the stereo generation process and not related to colorspace or encoding. This seems to be the result of forward_fill.
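
For illustration, here is a rough sketch (not the actual iw3 code) of the idea behind scanline forward fill: pixels that become occluded after depth-based warping are filled by repeating the last valid pixel along the row, which tends to produce hard, smeared edges rather than a smooth transition.

```python
# Hedged sketch of hole filling by forward fill along a scanline.
import numpy as np

def forward_fill_row(row, valid):
    """row: (W, 3) RGB scanline, valid: (W,) bool mask of non-occluded pixels."""
    out = row.copy()
    last = None
    for x in range(row.shape[0]):
        if valid[x]:
            last = out[x]
        elif last is not None:
            out[x] = last  # repeat the previous pixel -> visible sharp edge
    return out
```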

3gMan commented 3 months ago

I think the darker image on the right is due to the colorspace (this issue).

Also the boundary between the background and the person on the left side of the screen is visible as sharp edges. However, I think it is due to the stereo generation process and not related to colorspace or encoding. This seems to be the result of forward_fill.

I agree it's the same issue. About the sharp edges, I'm not sure. I used rowflow V3 when exporting from iw3, but it could also be the specific camera method I am using in Resolve. I haven't noticed it while watching.

romanr commented 2 months ago

Using iw3 dmap in resolve

How do you do that, how do you get the dmap, using the command line? Do you use any camera footage as the source?

3gMan commented 2 months ago

Using iw3 dmap in resolve

How do you do that, how do you get the dmap, using the command line? Do you use any camera footage as the source?

"Export Disparity" under stereo format. This creates a png image for each depth frame which takes up a good amount of space. Select all dmap images, and drag onto timeline in Resolve. You can then delete that timeline video, and you are left with a fully usable depthmap clip in the "media" tab of resolve.

romanr commented 2 months ago

depthmap clip

Do you use Fusion and 3D camera to achieve final SBS video?

3gMan commented 2 months ago

depthmap clip

Do you use Fusion and 3D camera to achieve final SBS video?

Yes sir!

nagadomi commented 2 months ago

I added a colorspace (--colorspace) option. The default is unspecified, which does nothing, the same as previous versions. See https://github.com/nagadomi/nunif/blob/dev/iw3/docs/colorspace.md for details. This is a very complicated feature (and sorry for my English skills on this complicated spec). I wanted auto to be the default value, but I am not very confident that it works correctly with every video.

Please let me know if you have any problems, errors, or questions about this feature.