mpv-player / mpv

dither-depth=auto is broken on vo=gpu-next. #11862

Closed: ghost closed this 8 months ago

ghost commented 1 year ago

Reproduction steps

Use vo=gpu and dither-depth=auto to watch any(?) video, preferably with an 8-bit monitor or with 8-bit output forced in the GPU driver. Observe what the log or info page says about dithering, then contrast with what happens when you use vo=gpu-next and dither-depth=auto.

Expected behavior

dither-depth=auto should always result in 8-bit dithering for an 8-bit output.

Actual behavior

On my 8-bit monitor, 8-bit dithering works fine with vo=gpu, regardless of the gpu-api setting. But when switching to vo=gpu-next, the behavior changes: with gpu-api=d3d11, the bit depth is incorrectly set to 10, and with gpu-api=vulkan, no dithering is performed at all. Both work fine if dither-depth=8 is specified manually.

Log file

vo=gpu: https://pomf2.lain.la/f/xhm706vw.txt
vo=gpu-next --gpu-api=d3d11: https://pomf2.lain.la/f/takytfgg.txt
vo=gpu-next --gpu-api=vulkan: https://pomf2.lain.la/f/npq8090s.txt

Sample files

https://pomf2.lain.la/f/k1fahjqe.webm

hooke007 commented 1 year ago

You have to manually set depth for vulkan because https://github.com/mpv-player/mpv/blob/0bfafd2451b25d25d3a8249373dc63b5d5e94379/video/out/vulkan/context.c#L275-L278

haasn commented 1 year ago

Working as intended. You are getting a 16-bit backbuffer with vulkan and a 10-bit backbuffer with d3d11. Dithering is not needed at 16 bit depth.
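
For illustration, here is a minimal sketch of the decision haasn describes (invented names, not actual mpv/libplacebo code): dithering is only worthwhile when the backbuffer holds less precision than the renderer works at, so a 16-bit backbuffer skips it entirely, while a 10-bit one dithers to 10 rather than 8.

/* Illustration only: invented names, not mpv/libplacebo source. */
#include <stdbool.h>
#include <stdio.h>

#define WORKING_PRECISION 16  /* assumed internal processing depth */

/* Dither only when the backbuffer actually loses precision. */
static bool should_dither(int backbuffer_depth)
{
    return backbuffer_depth > 0 && backbuffer_depth < WORKING_PRECISION;
}

int main(void)
{
    printf("16-bit backbuffer: dither=%d\n", should_dither(16)); /* 0 */
    printf("10-bit backbuffer: dither=%d\n", should_dither(10)); /* 1 */
    return 0;
}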

ghost commented 1 year ago

That makes sense for intermediate rendering steps, but couldn't naively sending a 16-bit texture to be displayed at the end cause banding if it's not explicitly dithered to 8 bits first? That happens to me in Krita anyway.

ghost commented 1 year ago

Off-topic, but will libplacebo's blue and white noise dithering ever be exposed in mpv?

hooke007 commented 1 year ago

bluenoise → dither=fruit

natural-harmonia-gropius commented 1 year ago

cause banding

auto results in banding for me too; 8 is fine.

Raza-0007 commented 1 year ago

Working as intended. You are getting a 16-bit backbuffer with vulkan and a 10-bit backbuffer with d3d11. Dithering is not needed at 16 bit depth.

I have a similar issue: on my 8-bit (6-bit + FRC) display, with dither-depth=auto, vo=gpu-next dithers at 10 bit on all videos, while vo=gpu dithers at the expected 8 bit.

Is this a bug or is this how gpu-next is supposed to dither?

Here are my tests, keeping everything else exactly the same except the vo and gpu-api:

--vo=gpu-next --dither-depth=auto --gpu-api=vulkan -> dithering at 10 bits
--vo=gpu-next --dither-depth=auto --gpu-api=d3d11 -> dithering at 10 bits

--vo=gpu --dither-depth=auto --gpu-api=vulkan -> dithering at 8 bits
--vo=gpu --dither-depth=auto --gpu-api=d3d11 -> dithering at 8 bits

This is on Windows 10, mpv 0.35.0-436-g1c82d6ae (June 11 2023) bundled with the latest SMPlayer.

I can post logs if needed, but since it happens on all videos, it should be easy to replicate on any 8-bit display.

~Raza

hooke007 commented 1 year ago

@Raza-0007 The answer to your question is here: https://github.com/mpv-player/mpv/issues/10881

ghost commented 1 year ago

@haasn's last reply there seems to contradict his first reply here. Am I dumb?

Raza-0007 commented 1 year ago

@hooke007 Thanks. I did not consider checking as far back as Nov 2022; I thought this was a recent change.

I am not an expert on video rendering engines, but I thought that dithering to 10 bits on an 8-bit display would cause quality issues, something like over-dithering. But reading the other discussion, @haasn seems to say that it should not cause any issues. I will of course take his word for it, as he obviously knows more about this than I do.

@sizumam This is what I thought: that with dither-depth=auto, it should either choose the exact bit depth of the display or otherwise default to 8 bit. This is why I thought it was a bug in gpu-next, as it was dithering to 10 bits on auto on my 8-bit display. If it does not cause quality issues, then I am OK with it.

Raza-0007 commented 1 year ago

I did another test with an 8-bit (H.264) encoded video on my 8-bit display: vo=gpu-next is still dithering to 10 bits!

Again, I am not an expert, but isn't this a bit pointless? Neither the video nor the display is capable of holding more than 8 bits of information, so what would be gained by dithering it to 10 bits?

And yet again, if someone can verify that it does not affect video quality, then I am OK with it.

~Raza

hooke007 commented 1 year ago

I thought this was a recent change.

As I said, what you did is what I did in that issue. It's not a recent change.

Raza-0007 commented 1 year ago

@hooke007 What I meant was that I did search for this issue before posting, but I did not go back beyond the last 3 months, as I thought it was a recent change. That's why I missed your Nov 2022 post.

ghost commented 1 year ago

Again, I am not an expert, but isn't this a bit pointless? Neither the video nor the display is capable of holding more than 8 bits of information, so what would be gained by dithering it to 10 bits?

It would reduce rounding errors in intermediate steps. But that should just be a matter of using a 10-bit FBO, and dithering should still be to 8 bits at the end, I would think...
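
To make the distinction concrete, here is a hypothetical sketch (invented names, not mpv's actual architecture) of the separation being suggested: intermediate passes keep high precision, and quantization down to the output depth happens exactly once, at the end.

/* Hypothetical sketch: invented names, not mpv/libplacebo source. */
#include <stdio.h>

struct pass_setup {
    int fbo_depth;      /* working precision for intermediate passes */
    int dither_depth;   /* applied once, on the final output pass */
};

static struct pass_setup plan_passes(int display_depth)
{
    struct pass_setup p;
    p.fbo_depth = 16;               /* keep headroom for intermediate math */
    p.dither_depth = display_depth; /* quantize only at the very end */
    return p;
}

int main(void)
{
    struct pass_setup p = plan_passes(8);
    printf("FBO depth: %d, final dither depth: %d\n",
           p.fbo_depth, p.dither_depth);
    return 0;
}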

haasn commented 1 year ago

That makes sense for intermediate rendering steps, but couldn't naively sending a 16-bit texture to be displayed at the end cause banding if it's not explicitly dithered to 8 bits first? That happens to me in Krita anyway.

Do you see visual banding? Test on a 16-bit gradient, e.g.:

Gradient-16bit (test image): https://github.com/mpv-player/mpv/assets/1149047/f424d0fb-dcdb-4b10-aada-f71fa8d592ac

If you do, that means it's a driver/platform bug. The system should not advertise a texture format higher than the monitor depth if it does not perform internal dithering.
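
For anyone wanting to reproduce this locally, here is a minimal generator sketch (my own, assuming nothing beyond standard C): it writes a horizontal 16-bit grayscale ramp to gradient16.pgm, which ffmpeg-based players such as mpv can typically open. Comparing playback with --dither-depth=no and --dither-depth=8 should make any banding, or any driver-side dithering, easy to spot.

/* My own test-pattern sketch, not from this thread: writes a horizontal
 * 16-bit grayscale gradient to gradient16.pgm. PGM stores samples
 * big-endian when maxval > 255. */
#include <stdio.h>

int main(void)
{
    const int w = 1920, h = 240;
    FILE *f = fopen("gradient16.pgm", "wb");
    if (!f)
        return 1;
    fprintf(f, "P5\n%d %d\n65535\n", w, h);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            unsigned v = (unsigned)((unsigned long long)x * 65535 / (w - 1));
            fputc(v >> 8, f);     /* high byte first */
            fputc(v & 0xff, f);
        }
    }
    return fclose(f) ? 1 : 0;
}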

haasn commented 1 year ago

If the driver really doesn't dither internally despite advertising 16-bit backbuffers, then maybe we could work around it like this:

diff --git a/src/vulkan/swapchain.c b/src/vulkan/swapchain.c
index c3d9295c..f3d10031 100644
--- a/src/vulkan/swapchain.c
+++ b/src/vulkan/swapchain.c
@@ -192,10 +192,12 @@ static bool pick_surf_format(pl_swapchain sw, const struct pl_color_space *hint)
                 break; // accept
             continue;

-        // Accept 16-bit formats for everything
+        // Only accept 16-bit formats for non-SDR
         case VK_FORMAT_R16G16B16_UNORM:
         case VK_FORMAT_R16G16B16A16_UNORM:
-             break; // accept
+            if (pl_color_transfer_is_hdr(space.transfer))
+                break; // accept
+            continue;

         default: continue;
         }

Of course, this would still pick 10-bit backbuffers if available. There's almost fundamentally no way around this, because we don't know the true monitor bit depth and have to rely on the compositor not lying to us.

natural-harmonia-gropius commented 1 year ago

I don't know if a screenshot can show banding.

win11 d3d11 nvidia 536.40

depth=8: [screenshot]
depth=auto (10 bit): [screenshot]

haasn commented 1 year ago

Screenshot is irrelevant, unless you can do a capture of the video signal coming into the monitor (e.g. HDMI capture card). Do you see banding visually?

natural-harmonia-gropius commented 1 year ago

Do you see banding visually?

Yes, very noticeable in dark areas.

ghost commented 1 year ago

vo=gpu + gpu-api=d3d11 + dither-depth=no = banding!
vo=gpu + gpu-api=d3d11 + dither-depth=auto = no banding
vo=gpu + gpu-api=d3d11 + dither-depth=8 = no banding
vo=gpu + gpu-api=vulkan + dither-depth=no = banding!
vo=gpu + gpu-api=vulkan + dither-depth=auto = no banding
vo=gpu + gpu-api=vulkan + dither-depth=8 = no banding
vo=gpu-next + gpu-api=d3d11 + dither-depth=no = no banding
vo=gpu-next + gpu-api=d3d11 + dither-depth=auto = no banding
vo=gpu-next + gpu-api=d3d11 + dither-depth=8 = no banding
vo=gpu-next + gpu-api=vulkan + dither-depth=no = banding!
vo=gpu-next + gpu-api=vulkan + dither-depth=auto = banding!
vo=gpu-next + gpu-api=vulkan + dither-depth=8 = no banding

haasn commented 1 year ago

And does the above patch fix it?

Raza-0007 commented 1 year ago

Of course, this would still pick 10-bit backbuffers if available. There's almost fundamentally no way around this, because we don't know the true monitor bit depth and have to rely on the compositor not lying to us.

@haasn Since you are here, I just wanted to ask, as my original issue was not related to the 16-bit backbuffer. Below are two log files of an 8-bit video that I am playing on an 8-bit display. All options are the same except the --vo, with --dither-depth=auto for both.

vo=gpu-next.txt

vo=gpu.txt

--vo=gpu-next chooses to dither to 10 bits, while --vo=gpu dithers to 8 bits.

From what I have read, you are saying that --gpu-next does this by design and there is no quality loss in the output by dithering to 10 bits for this setup. Is this correct?

haasn commented 1 year ago

From what I have read, you are saying that --gpu-next does this by design and there is no quality loss in the output by dithering to 10 bits for this setup. Is this correct?

Assuming your graphics driver / compositor is working correctly, yes.

ghost commented 1 year ago

And does the above patch fix it?

@haasn no. All 12 results are the same on my machine with that patch.

[ 0.255][v][vo/gpu-next/libplacebo] Picked surface configuration 22: VK_FORMAT_R16G16B16A16_UNORM + VK_COLOR_SPACE_SRGB_NONLINEAR_KHR

https://pomf2.lain.la/f/p9zb7tm.txt

hooke007 commented 1 year ago

Screenshot is irrelevant, unless you can do a capture of the video signal coming into the monitor (e.g. HDMI capture card).

His screenshots showed clear banding in dark areas...

Raza-0007 commented 1 year ago

Assuming your graphics driver / compositor is working correctly, yes.

Thank You!

haasn commented 1 year ago

And does the above patch fix it?

@haasn no. All 12 results are the same on my machine with that patch.

[ 0.255][v][vo/gpu-next/libplacebo] Picked surface configuration 22: VK_FORMAT_R16G16B16A16_UNORM + VK_COLOR_SPACE_SRGB_NONLINEAR_KHR

https://pomf2.lain.la/f/p9zb7tm.txt

Are you sure you had the patch applied? It should be impossible to pick that format combination with the above patch.

llyyr commented 1 year ago

https://github.com/mpv-player/mpv/blob/master/video/out/vulkan/context.c#L277

--dither-depth=auto is not implemented for the vulkan context, you should use d3d11 if you want it to be auto detected, or set it to 8 manually.
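
In rough terms (a hedged sketch with invented names, not the actual mpv source), the fallback described in the manual for vo=gpu amounts to the following, which is why the Vulkan context ends up dithering to 8 on auto:

/* Hedged illustration: invented names, not the actual mpv source. */
#include <stdio.h>

static int resolve_dither_depth(int opt /* -1 = no, 0 = auto, N = fixed */,
                                int detected /* 0 = unknown */)
{
    if (opt > 0)
        return opt;                  /* explicit --dither-depth=N */
    if (opt < 0)
        return 0;                    /* --dither-depth=no: disabled */
    return detected ? detected : 8;  /* auto: assume 8 when unknown */
}

int main(void)
{
    /* The vo=gpu Vulkan context never reports a detected depth, hence 8: */
    printf("auto, vulkan: %d\n", resolve_dither_depth(0, 0));
    printf("auto, d3d11 reporting 8: %d\n", resolve_dither_depth(0, 8));
    return 0;
}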

hooke007 commented 1 year ago

--dither-depth=auto is not implemented for the vulkan context, you should use d3d11 if you want it to be auto detected, or set it to 8 manually.

It was already answered here https://github.com/mpv-player/mpv/issues/11862#issuecomment-1614454558

haasn commented 1 year ago

https://github.com/mpv-player/mpv/blob/master/video/out/vulkan/context.c#L277

--dither-depth=auto is not implemented for the vulkan context, you should use d3d11 if you want it to be auto detected, or set it to 8 manually.

This is irrelevant; it affects only --vo=gpu, but OP was talking about --vo=gpu-next.

Raza-0007 commented 1 year ago

I believe whoever is in charge of the mpv manual should update this section on --dither-depth:

https://mpv.io/manual/stable/#options-dither-depth

Since the --dither-depth behavior differs for each supported --vo and --gpu-api, a few lines of explanation in the manual would put people's minds at ease; they would know that what they see in the log is by design and not a bug in mpv.

For example, it is generally believed that dithering at any bit depth other than the connected display's native bit depth leads to ugly output. But if this is not true, and there is no quality loss in dithering to 10 bits with --vo=gpu-next, then this should be stated in the manual.

Also, the manual currently says that with --dither-depth=auto, if the bit depth cannot be detected, 8 bits per component are assumed, which is obviously not true for --vo=gpu-next, as it always defaults to 10 bits, so this section needs to be updated.

~Raza

haasn commented 1 year ago

Also, the manual currently says that with --dither-depth=auto, if the bit depth cannot be detected, 8 bits per component are assumed, which is obviously not true for --vo=gpu-next, as it always defaults to 10 bits, so this section needs to be updated.

This is not true, though. vo_gpu_next only dithers to 10 bits when the surface is detected as 10-bit. The bit about the bit depth not being detectable only applies to OpenGL. On Vulkan, we always know the exact bit depth of the backbuffer.

The manual description is entirely consistent with the observed behavior.

Raza-0007 commented 1 year ago

This is not true, though. vo_gpu_next only dithers to 10 bits when the surface is detected as 10-bit.
On Vulkan, we always know the exact bit depth of the backbuffer.

But then why is --vo=gpu-next with --gpu-api=vulkan detecting both my internal and external displays as having a bit depth of 10 bits, when both are only 8 bit capable displays? This is the part that has been bothering me since I discovered it in the log entry.

haasn commented 1 year ago

This is not true, though. vo_gpu_next only dithers to 10 bits when the surface is detected as 10-bit. On Vulkan, we always know the exact bit depth of the backbuffer.

But then why is --vo=gpu-next with --gpu-api=vulkan detecting both my internal and external displays as having a bit depth of 10 bits, when both are only 8 bit capable displays? This is the part that has been bothering me since I discovered it in the log entry.

Because your compositor is advertising a 10-bit backbuffer to us. Most compositors will dither internally down to 8 bits before sending the image out for scanout.

Raza-0007 commented 1 year ago

@haasn Thanks for the explanation. Since you have already confirmed that all this up-dithering and down-dithering does not affect video quality, I am OK with it.

And thanks for your wonderful work on gpu-next and libplacebo.

~Raza

Touceire commented 1 year ago

does not affect video quality

But it absolutely does. mpv --no-config --vo=gpu-next --gpu-api=vulkan --dither-depth=auto causes terrible banding while mpv --no-config --vo=gpu-next --gpu-api=vulkan --dither-depth=8 is fine.

Raza-0007 commented 1 year ago

@Touceire I have not noticed any banding issues, but then I have --deband=yes enabled among the options I pass to mpv, so that might be smoothing out the banding for me. I will do some tests with --deband disabled to see if I notice banding issues.

llyyr commented 1 year ago

This is irrelevant; it affects only --vo=gpu, but OP was talking about --vo=gpu-next.

Right, I incorrectly assumed this was broken because of that TODO, but if that's not the case, then I can also reproduce this on Linux. When dither-depth=auto is set with gpu-api=vulkan, no dithering happens at all. But if I set gpu-api=opengl, I do see dithering in the gpu-next/libplacebo output:

[ 0.170][d][vo/gpu-next/libplacebo] [ 86] // pl_shader_dither

Here are both logs: dither_auto_vulkan.log dither_auto_opengl.log

Another thing to add is that plplay does do 8-bit dithering by default on the same file, and even picks the right surface configuration, while mpv does not.

plplay:

Available surface configurations:
    0: VK_FORMAT_B8G8R8A8_SRGB                  VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
    1: VK_FORMAT_B8G8R8A8_UNORM                 VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
Picked surface configuration 1: VK_FORMAT_B8G8R8A8_UNORM + VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
Codec: hevc (HEVC (High Efficiency Video Coding))
Using hardware frame format: vaapi
Spent 38.476 ms translating SPIR-V
Spent 10.853 ms generating shader LUT
Dithering to 8 bit depth

mpv:

[ 0.080][v][vo/gpu-next/libplacebo] Picked surface configuration 13: VK_FORMAT_R16G16B16A16_UNORM + VK_COLOR_SPACE_SRGB_NONLINEAR_KHR

haasn commented 1 year ago

@llyyr And with the above patch to disable R16G16B16A16?

haasn commented 1 year ago

Another thing to add is that plplay does do 8-bit dithering by default on the same file, and even picks the right surface configuration, while mpv does not.

Impossible to verify without a log, but there's a good chance this is because of it using X instead of Wayland (depending on how your GLFW is configured), and Xwayland not supporting high-bit-depth backbuffers.

llyyr commented 1 year ago

With the patch, it dithers to 10 bit output.

[ 0.183][v][vo/gpu-next/libplacebo] Dithering to 10 bit depth

there's a good chance this is because of it using X instead of Wayland (depending on how your GLFW is configured), and Xwayland not supporting high-bit-depth backbuffers.

This is correct; I was able to verify it by running plplay with the SDL2 backend and making sure it was running natively under Wayland: SDL_VIDEODRIVER=wayland plplay -a sdl2-vk -v [video] > plplay.log

Dithering to 10 bit depth

plplay.log

Available surface configurations:
    0: VK_FORMAT_B8G8R8A8_SRGB                  VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
    1: VK_FORMAT_B8G8R8A8_UNORM                 VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
    2: VK_FORMAT_R8G8B8A8_SRGB                  VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
    3: VK_FORMAT_R8G8B8A8_UNORM                 VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
    4: VK_FORMAT_R4G4B4A4_UNORM_PACK16          VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
    5: VK_FORMAT_B4G4R4A4_UNORM_PACK16          VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
    6: VK_FORMAT_R5G6B5_UNORM_PACK16            VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
    7: VK_FORMAT_B5G6R5_UNORM_PACK16            VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
    8: VK_FORMAT_R5G5B5A1_UNORM_PACK16          VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
    9: VK_FORMAT_B5G5R5A1_UNORM_PACK16          VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
    10: VK_FORMAT_A1R5G5B5_UNORM_PACK16          VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
    11: VK_FORMAT_A2R10G10B10_UNORM_PACK32       VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
    12: VK_FORMAT_A2B10G10R10_UNORM_PACK32       VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
    13: VK_FORMAT_R16G16B16A16_UNORM             VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
    14: VK_FORMAT_R16G16B16A16_SFLOAT            VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
Picked surface configuration 11: VK_FORMAT_A2R10G10B10_UNORM_PACK32 + VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
Codec: h264 (H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10)
Using hardware frame format: vaapi
Spent 38.619 ms translating SPIR-V
Spent 11.706 ms generating shader LUT
Dithering to 10 bit depth

haasn commented 1 year ago

With the patch, it dithers to 10 bit output.

And that gets rid of visible banding? (i.e. does the driver dither from 10 to 8 bits internally?)

llyyr commented 1 year ago

I see more banding when mpv is doing 10-bit dithering than when it is doing 8-bit dithering, so I don't think the driver is doing any dithering internally.

haasn commented 1 year ago

In that case, i.e. assuming the driver isn't dithering internally, then in theory mpv doing 10-bit dithering vs 16-bit dithering (or even no dithering at all) shouldn't make any difference.
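
A quick numeric sketch of why (my own illustration, not mpv code): dither noise one 10-bit LSB wide almost never crosses an 8-bit code boundary, so once a non-dithering panel or driver truncates 10 bits to 8, the staircase returns as if no dithering had been applied at all.

/* My own illustration, not mpv/libplacebo code: dither a smooth signal
 * to the 10-bit grid, then truncate to 8 bits the way a non-dithering
 * panel/driver would. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    srand(1);
    /* Sweep a shallow ramp across two adjacent 8-bit codes (127/128). */
    for (int x = 0; x < 8; x++) {
        double v10 = 510.0 + 0.5 * x;   /* ideal value, in 10-bit units */
        double sum = 0;
        const int trials = 100000;
        for (int i = 0; i < trials; i++) {
            double d = rand() / (double)RAND_MAX - 0.5; /* 1-LSB dither */
            int q10 = (int)(v10 + d + 0.5);  /* dithered 10-bit output */
            sum += q10 >> 2;                 /* panel truncates 10 -> 8 */
        }
        /* A smooth result would average 127.500 .. 128.375; instead the
         * means plateau at 127 and 128: the banding is back. */
        printf("ideal=%.3f  mean 8-bit output=%.3f\n", v10 / 4, sum / trials);
    }
    return 0;
}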

Sounds like this may be a severe bug in whatever Wayland compositor you're using. Which one is it?

llyyr commented 1 year ago

I think my compositor does dither internally to 8 bits when I set render_bit_depth 10 (ctrl+f 'render_bit_depth' in case it's relevant: https://man.archlinux.org/man/sway-output.5.en).

mpv doing 8-bit dithering vs 10-bit dithering makes no difference in the output to my screen when render_bit_depth is set to 10.

In my previous test, it was set to 8.

haasn commented 1 year ago

FWIW I can't reproduce this on sway with amdgpu/mesa. I get offered only 10-bit formats, no 16-bit, and don't get visible banding at all, even when completely disabling dithering (on the 10-bit sample).

So on my end, the wayland compositor / graphics driver work together to correctly dither output. Tested on sway, weston, plasma.

haasn commented 1 year ago

I tested sway with render_bit_depth 8 (though this was presumably also the default in my previous test) and I still get dithering when picking a 10-bit backbuffer on an 8-bit screen. The only configuration that gives me banding is combining an 8-bit backbuffer with no dithering.

haasn commented 1 year ago

After a substantial amount of debugging with @llyyr on IRC, I came to the conclusion that everything works correctly on my end: wlroots gets a 16-bit buffer from vulkan, then forwards it for scanout to the display. Even if the display is artificially forced into 8-bit mode, I see no banding, confirming that the amdgpu kernel driver is performing dithering as expected (and as confirmed in the amdgpu DC source code).

So, in conclusion, I have no idea why it doesn't work for some people. Some bug somewhere, but only god knows where.

ghost commented 1 year ago

Sorry for the necro, but I have a curiosity question regarding driver dithering after reading this thread.

Even if driver dithering is working as intended, is there any reason why the end-user shouldn't set --dither-depth to their display's bit-depth regardless? Is there any reason to believe that the compositor's built-in dithering is "superior" to whatever dither shaders mpv offers? (or vice-versa?)

On d3d11, --dither-depth=auto is already "layering" a 10-bit fruit dither pattern alongside driver dithering, so there shouldn't be any visual oddities with using --dither-depth=8 on an 8 bpc display, no? At the very least, there technically should be no perceptible differences even on synthetic test content. But if compositor dithering is indeed superior to mpv's available dither shaders, then from what I'm understanding, dithering should be kept at --dither-depth=no instead of auto or <insert display bpc here>.

llyyr commented 1 year ago

Is there any reason to believe that the compositor's built-in dithering is "superior" to whatever dither shaders mpv offers?

No, driver dithering is probably always worse than what mpv can offer, but it's extremely difficult to tell the difference. Though at this point, after testing on both Intel and AMD GPUs across Windows/X11/Wayland, I've considered that maybe my driver is dithering and is just so bad at it that I think it's not doing anything at all.