Closed Obegg closed 2 years ago
Quality is subjective. Personally I find the flickering associated with error diffusion on moving images to be one of the most intrusive artifacts. For this reason I would absolutely never use error diffusion for anything other than still images. Furthermore, note that error diffusion is not currently gamma-aware. So when dithering to a very low bit depth (<= 4), I would always use fruit, even for still images. Also, the difference is subjectively basically non-existent at higher bit depths (>= 8), so you might as well go with whatever is fastest.
(Yes, I'm aware that the above considerations combine to "never use error diffusion", but that is at least part of the reason why I think it's such a pointless anti-feature. Though I think it would have a marginal use case if we made it gamma-aware, then it would be the best method of dithering still images to very low bit depths)
This changed my point of view; I did not know this, and I never noticed any artifacts.
Based on that, I take it that the "best" is dither=no.
Is dithering really that bad and unnecessary?
Does turning it off make the video "original to source", so to speak?
@Obegg at 12 bit dithering is pretty pointless, but to be on the safe side I would still use --dither=ordered on --vo=gpu-next. It's extremely fast, to the point of being basically free. (On --vo=gpu I would stick with --dither=fruit.)
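Expressed as mpv.conf fragments, the advice above looks like this (a sketch; dither-depth=12 assumes the display link really runs at 12 bit, as discussed below):

```
# On vo=gpu-next, ordered dithering is essentially free:
vo=gpu-next
dither=ordered

# On the older vo=gpu, fruit is the safer pick instead:
#vo=gpu
#dither=fruit
```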
Does turning it off make the video "original to source", so to speak?
No, turning it off introduces rounding artifacts / banding. But at 12 bit they're imperceptible.
Thank you! I now use:
vo=gpu-next
dither=ordered
dither-depth=12
Hi @haasn
I would absolutely never use error diffusion for anything other than still images. Furthermore, note that error diffusion is not currently gamma aware
Just checking in - I remember seeing a commit to libplacebo's error diffusion. Is it gamma-aware now, and would it be beneficial to use?
No, nothing has changed.
(Watch/compare the screenshots at full size. Settings: hwdec=no, vo=gpu-next.)
dither off, deband off: [screenshot]
dither on (dither-depth=8, dither=fruit): [screenshot]
dither + deband on (deband=yes, deband-iterations=1, deband-threshold=55, deband-range=20, deband-grain=5): [screenshot]
dither off: [screenshot]
dither on (dither-depth=8, dither=fruit): [screenshot]
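For anyone wanting to reproduce the comparison, here are the settings from this post collected into one mpv.conf fragment (the deband numbers are this poster's own tuning, not mpv defaults):

```
vo=gpu-next
hwdec=no
dither=fruit
dither-depth=8
deband=yes
deband-iterations=1
deband-threshold=55
deband-range=20
deband-grain=5
```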
I can't see any difference here; I even zoomed in, got close to the monitor, and quickly switched between the two. What do you see? Edit: I used my TV to compare the two. There is something changing in the dark areas - yes, I see it now.
Glad to hear that. What do I see? dither on/off is a night-and-day difference, especially in motion. Deband goes one step further. Not a must, but nice to have.
@geextahslex Thank you! Great to see such a difference across the different combinations!
I don't know what's so special about those deband settings (deband-iterations=1, deband-threshold=55, deband-range=20, deband-grain=5); I actually use the default values that profile=gpu-hq provides (which are dither=auto and deband=yes).
I actually didn't know there was such a difference. I first believed it would be best to turn off "enhancers" such as deband etc. to preserve the "creative intent", but when I see and compare results like these, I'd rather trust the mpv developers and use profile=gpu-hq.
(The only thing I actually use on top of the gpu-hq profile is scale=ewa_lanczos; other than that, everything is default.)
I felt the same way and had them turned off. But then I realised how much of an impact there actually is. Maybe try going with "vo=gpu-next"
Uh, didn't I mention I already use vo=gpu-next? I do use it.
@Obegg oh okay, I thought "profile=gpu-hq"
This is my mpv.conf
profile=gpu-hq
vo=gpu-next
target-colorspace-hint=yes
video-sync=display-resample
scale=ewa_lanczos
cscale=ewa_lanczos
glsl-shader="~~/FSRCNNX_x2_16-0-4-1.glsl"
[FSRCNNX x8]
profile-cond=p["frame-drop-count"]>10 or p["mistimed-frame-count"]>10 or p["vo-delayed-frame-count"]>10
glsl-shaders-remove="~~/FSRCNNX_x2_16-0-4-1.glsl"
glsl-shader="~~/FSRCNNX_x2_8-0-4-1.glsl"
profile-restore=copy
[No Shaders]
profile-cond=p["frame-drop-count"]>30 or p["mistimed-frame-count"]>30 or p["vo-delayed-frame-count"]>30
glsl-shaders-remove="~~/FSRCNNX_x2_8-0-4-1.glsl"
profile-restore=copy
(I left out the other settings that are not related to video.) Basically, it uses the best shader; if it detects performance issues, then it falls back to the second-best shader, and if there are still issues, then it won't use any shader at all.
@Obegg Wow, I didn't even know that you could tune it for performance. mpv is truly magnificent ^^
at 12 bit dithering is pretty pointless but to be on the safe side I would still use --dither=ordered on --vo=gpu-next. It's extremely fast, to the point of basically being free. (On --vo=gpu I would stick with --dither=fruit)
Any reason you advise against ordered dithering on vo=gpu but recommend it on gpu-next?
I have a few questions regarding dither.
1) Most screens are 8-bit or 10-bit, but on Windows there's an option that says 12-bit, so should I set dither-depth=12?
2) There are a lot of different algorithms for error-diffusion, such as sierra-lite, floyd-steinberg, jarvis-judice-ninke, and atkinson. Which one is the "best" (without thinking about performance)? It doesn't matter if it's very demanding on the CPU/GPU, as long as it's the "best".
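For reference, mpv selects error diffusion in two steps: dither=error-diffusion enables the mode, and the separate error-diffusion option picks the kernel. A minimal sketch (kernel names as listed in the question; per the discussion above, this is heavy and arguably only sensible for still images at very low bit depths):

```
vo=gpu-next
dither=error-diffusion
error-diffusion=sierra-lite   # or floyd-steinberg, jarvis-judice-ninke, atkinson, ...
dither-depth=auto
```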