This is what the double color space conversion does in iw3 when selecting yuv420p. RGB didn't seem much better for banding, but I didn't check the actual colors like this. The original is on the left; the iw3 conversion is on the right, using yuv420p, the slower preset, and CRF 15.
Besides the fairly drastic color change, there is also noticeable graininess and a less smooth transition.
Using the iw3 dmap in Resolve, the final product comes out virtually identical in color and grading to the original left image. Just to give some data points in case it helps.
I think the darker image on the right is due to colorspace and/or color_range (this issue).
Also, the boundary between the background and the person on the left side of the screen is visible as sharp edges. However, I think it is due to the stereo generation process and not related to colorspace or encoding. This seems to be the result of forward_fill.
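For context on why forward fill produces that kind of edge, here is a minimal sketch of the general technique, assuming a NumPy image and a hole mask (this is not iw3's actual forward_fill; the names, shapes, and hole convention are illustrative): pixels left empty by the stereo warp are filled by repeating the last valid pixel along each row, which gives flat streaks with a hard boundary next to the person.

```python
import numpy as np

def forward_fill_row_holes(warped, hole_mask):
    """Fill occluded pixels by repeating the last valid pixel in each row.

    warped:    (H, W, C) float array produced by the stereo warp
    hole_mask: (H, W) bool array, True where no source pixel landed
    (illustrative signature, not iw3's actual function)
    """
    out = warped.copy()
    h, w = hole_mask.shape
    for y in range(h):
        last = None
        for x in range(w):
            if hole_mask[y, x]:
                if last is not None:
                    # repeating the previous pixel creates a flat streak with a sharp edge
                    out[y, x] = last
            else:
                last = warped[y, x]
    return out
```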
I agree it's the same issue. The sharp edges I'm not sure about. I used rowflow V3 when exporting from iw3, but it could also be the specific camera method I am using in Resolve. I haven't noticed it while watching.
Using iw3 dmap in resolve
How do you do that? How do you get the dmap, using the command line? Do you use any camera footage as the source?
"Export Disparity" under stereo format. This creates a png image for each depth frame which takes up a good amount of space. Select all dmap images, and drag onto timeline in Resolve. You can then delete that timeline video, and you are left with a fully usable depthmap clip in the "media" tab of resolve.
depthmap clip
Do you use Fusion and a 3D camera to achieve the final SBS video?
Yes sir!
I added a Colorspace (--colorspace) option. The default is unspecified, which is the same as in previous versions: it does nothing.
See https://github.com/nagadomi/nunif/blob/dev/iw3/docs/colorspace.md for details.
This is a very complicated feature (and sorry for my English skills on this complicated spec).
I wanted auto to be the default value, but I am not very confident that it works correctly with every video.
Please let me know if you have any problems, errors, or questions about this feature.
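A quick usage sketch (the -i/-o flags are the usual iw3 CLI options; unspecified and auto are the values mentioned in this thread, see the linked colorspace.md for the full list and exact behavior):

```sh
# default (unspecified): no colorspace handling, same behavior as previous versions
python -m iw3.cli -i input.mp4 -o output_sbs.mp4

# auto: let iw3 choose the colorspace handling automatically (see colorspace.md)
python -m iw3.cli -i input.mp4 -o output_sbs.mp4 --colorspace auto
```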
Discussed in https://github.com/nagadomi/nunif/discussions/157
Maybe the current implementation uses BT.601 for yuv420p/yuv444p. Since HD MP4 generally uses BT.709, the color conversion may not be correct.
Maybe color_range (pc/tv) is also related to this problem.
This area, including the ffmpeg/pyav implementation, is a nightmare, so I am not sure whether it can be handled correctly.
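To see why the matrix choice matters, here is a small NumPy sketch (not iw3 code) that encodes an RGB color with the BT.601 matrix and decodes it back with the BT.709 matrix; the mismatch alone visibly shifts colors, before any tv/pc range error is added. The Kr/Kb coefficients are the standard values; full range is used to keep the example short.

```python
import numpy as np

def rgb_to_ycbcr(rgb, kr, kb):
    """Full-range RGB -> YCbCr with the given luma coefficients."""
    kg = 1.0 - kr - kb
    r, g, b = rgb
    y = kr * r + kg * g + kb * b
    cb = (b - y) / (2.0 * (1.0 - kb))
    cr = (r - y) / (2.0 * (1.0 - kr))
    return np.array([y, cb, cr])

def ycbcr_to_rgb(ycbcr, kr, kb):
    """Inverse of rgb_to_ycbcr for the given coefficients."""
    kg = 1.0 - kr - kb
    y, cb, cr = ycbcr
    r = y + 2.0 * (1.0 - kr) * cr
    b = y + 2.0 * (1.0 - kb) * cb
    g = (y - kr * r - kb * b) / kg
    return np.array([r, g, b])

BT601 = (0.299, 0.114)   # Kr, Kb
BT709 = (0.2126, 0.0722)

rgb = np.array([0.8, 0.3, 0.2])        # a saturated color in 0..1
ycc = rgb_to_ycbcr(rgb, *BT601)        # encoded assuming BT.601
wrong = ycbcr_to_rgb(ycc, *BT709)      # decoded assuming BT.709
print(rgb, "->", np.round(wrong, 3))   # noticeably shifted; same-matrix round trip is lossless
```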