yshui / picom

A lightweight compositor for X11 with animation support
https://picom.app/

Noticeable gradient banding for high blur strength #602

Closed bakana808 closed 1 year ago

bakana808 commented 3 years ago

Hi everyone,

When using the dual_kawase blur with strength = 20, there is a noticeable gradient banding (or "color banding") as a result of my monitor not being able to represent all the intermediate colors in the blur:

(I can see it in this screencap but I'm not sure how hard it is to see for anyone else)

I was wondering if there is a way to alleviate this, perhaps with some kind of dithering effect applied after the blur, or the ability to set an overlay image over the transparency so that I could overlay noise (which is probably what Windows 10 does in its Fluent apps to solve the same problem).

absolutelynothelix commented 3 years ago

that's a good idea. i think it's not that hard to do dithering on the blurred image with a shader.

DeathKhan commented 3 years ago

Bump, having the same banding.

PickNicko13 commented 2 years ago

Especially eye-catching on darker images; it seems to be an even bigger issue with dark window backgrounds (a blending issue?).

tryone144 commented 2 years ago

These color bands are especially apparent at large blur radii, which result in very fine gradients. They are more noticeable on darker images (usually the case with a transparent background).

absolutelynothelix commented 2 years ago

@tryone144, i think it definitely should be configurable, because no matter how you implement it, it will affect performance, and for some machines (old thinkpads, etc.) it may be noticeable. i don't think there is a need to implement it for xrender (idk who uses it, kinda a fallback for very old machines?) or for the legacy backends (since sooner or later they will be removed afaik).

tryone144 commented 2 years ago

Ping @yshui:

yshui commented 2 years ago

@tryone144

  1. Agreed, yes.
  2. I think it's fine, high blur strength would be really slow on xrender already.
  3. I don't have a preference here. Pick the one that's easier to do, maybe?

yshui commented 1 year ago

@tryone144 i just thought of this: i think before we do any dithering, we should try using 16-bit textures to store the intermediary results.

Edit: never mind, didn't help.

yshui commented 1 year ago

Did some research on dithering algorithms. Ordered dithering looks... poor, and error-diffusion is tricky to implement on GPU.

mpv is using a compute shader for error diffusion, maybe we could consider requiring OpenGL >= 4.3 for dithering.
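
For reference, ordered dithering is basically a fixed per-pixel threshold added before the 8-bit rounding. A minimal GLSL sketch of the idea (assuming GLSL 3.30; the function name and hookup are illustrative, not code from picom):

// illustrative 4x4 ordered (Bayer) dithering sketch, not picom's code
const float bayer4x4[16] = float[16](
     0.0,  8.0,  2.0, 10.0,
    12.0,  4.0, 14.0,  6.0,
     3.0, 11.0,  1.0,  9.0,
    15.0,  7.0, 13.0,  5.0
);

vec3 ordered_dither(vec3 color, ivec2 pixel_coord) {
    // pick this pixel's threshold from the tiled 4x4 Bayer matrix
    int idx = (pixel_coord.y & 3) * 4 + (pixel_coord.x & 3);
    float threshold = (bayer4x4[idx] + 0.5) / 16.0 - 0.5; // roughly [-0.47, 0.47]

    // nudge the color by less than one 8-bit step before it is rounded to 8 bits
    return color + threshold / 255.0;
}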

absolutelynothelix commented 1 year ago

@yshui, maybe you don't remember me already, but you've awakened me :D

i resurrected my arch installation that i haven't booted for about two years and maybe i'll try to play with this since i had some experience with dithering (floyd-steinberg in particular, but afaik there are issues with implementing it as a shader) and i'm interested in this issue (at least for now, so no promises + i don't have much free time lately).

absolutelynothelix commented 1 year ago

the first, dumb and pretty straightforward, approach is to apply some noise. it won't get rid of color banding completely, but i can see the difference (and it adds kind of an aesthetic effect). it could be a fallback for old devices (if mpv's approach is used and opengl >= 4.3 is required).

mikejzx commented 1 year ago

> the first, dumb and pretty straightforward, approach is to apply some noise. it won't get rid of color banding completely, but i can see the difference (and it adds kind of an aesthetic effect). it could be a fallback for old devices (if mpv's approach is used and opengl >= 4.3 is required).

While that approach may seem dumb/naïve at first glance, I think it probably would be one of the better ways to go about doing it.

Unordered dithering algorithms such as Floyd-Steinberg will probably not produce good results because of "jittering" artifacts that are a result of the unpredictable error diffusion. The addition or movement of a single pixel will cause a large region of the image to be affected.

The following GIF shows the artifacts that appear when using a non-ordered dithering algorithm such as Floyd-Steinberg. It is taken from Joel Yliluoma's page on his dithering algorithm. From that page:

> ... A single yellow pixel was added to the image and moved around. The animation has been quantized to 16 colors and dithered using Floyd-Steinberg dithering. An entire cone of jittering artifacts gets spawned from that single point downwards and to the right.

This is not a problem with ordered dithering algorithms (e.g. Bayer dithering), but ordered dithering instead creates a very noticeable dithering pattern across the image, and generally I think most would agree that it wouldn't fit very well on a desktop (see the bottom-right image).

The "dumb" approach of simply overlaying a noise texture and have it repeat over the blurred regions will not suffer from these issues, and should still be effective in reducing gradient banding. Additionally, this approach should be much more performant than trying to do dithering in real time. Of course for this to work properly, the noise texture should be large enough and random enough such that no seams are visible when tiled.

Perhaps the difficult thing to work out would be how to actually blend the noise texture in: whether it should be multiplied with the pixels, subtly added on top, etc.
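
For illustration, a rough GLSL sketch of the tiled-noise overlay; the names (noise_tex, noise_size, apply_noise) are hypothetical, not existing picom uniforms, and the noise texture is assumed to be a small pre-generated one set to GL_REPEAT:

uniform sampler2D noise_tex; // hypothetical: small pre-generated noise texture, GL_REPEAT
uniform vec2 noise_size;     // hypothetical: noise texture dimensions in pixels

vec4 apply_noise(vec4 blurred, vec2 frag_coord) {
    // sample with pixel coordinates so the noise tiles 1:1 with the screen
    float n = texture(noise_tex, frag_coord / noise_size).r;

    // zero-centered additive noise of about two 8-bit steps
    vec3 offset = vec3(n - 0.5) * (2.0 / 255.0);
    return vec4(blurred.rgb + offset, blurred.a);
}

Additive zero-mean noise seems like the safer blend: multiplicative noise scales with the pixel value and barely perturbs the dark regions where the banding is most visible, while addition perturbs every level equally.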

Also, I believe KDE's compositor (KWin) uses noise to reduce colour banding. You may want to check it out for ideas. From what I've seen, it seems to achieve some pretty decent results.

absolutelynothelix commented 1 year ago

@mikejzx, yep, i did some research on this topic overnight and it's generally a bad idea to do real-time dithering based on image contents (due to performance issues and artifacts, as you mentioned); it's better to apply some random noise. from what i've found, blue noise is a good source of non-distracting noise for dithering.

absolutelynothelix commented 1 year ago

i'm not that good at glsl, but here is a snippet i used on shadertoy to demonstrate applying noise:

const float NOISE_STRENGTH = 0.015; // roughly 3.8 / 255.0, i.e. a few 8-bit steps

// cheap hash-based pseudo-random value in [0, 1) derived from the uv coordinates
float random(in vec2 coords) {
    return fract(sin(dot(coords.xy, vec2(12.9898, 78.2330))) * 43758.5453);
}

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord / iResolution.xy;
    vec4 col = texture(iChannel0, uv);

    // add zero-centered noise in [-NOISE_STRENGTH, NOISE_STRENGTH] to every channel
    fragColor = col + mix(-NOISE_STRENGTH, NOISE_STRENGTH, random(uv));
}

and the "texture" i used to test it on (load with https://www.shadertoy.com/view/lsGGDd):
https://i.ibb.co/PC3x3Sr/bands.png

(source, i've adapted and simplified it a bit)

tryone144 commented 1 year ago

@yshui

> Edit: never mind, didn't help.

That's what I expected. The banding occurs when drawing the final image to the screen. Doesn't matter if we have higher precision internally.

@mikejzx already mentioned the relevant points. :smile:

@mighty9245 interesting approach with calculating a random value for each pixel. I have played with a pre-calculated noise texture (screen dimensions, turbulence noise), adding the noise value (distributed around 0) to the fragment color. The main drawback was the relatively expensive computation needed to generate this texture (and the added memory consumption).

TL;DR: Adding/subtracting some kind of random noise should be the easiest approach with a pleasing visual appearance. Dithering probably introduces more artifacts than the additional computational overhead is worth.

Regardless of what we choose, the xrender backend is more than likely too slow for any of the above approaches.

yshui commented 1 year ago

I did say I don't like how ordered dithering looks, but I will try to see if I can put something together quickly to see what it looks like in practice.

Computationally ordered dithering should be pretty cheap.

yshui commented 1 year ago

Actually, I was wrong, it looks pretty decent. See the dither branch.

The approach I've taken is using a 16-bit intermediary back buffer (the ->back_texture) and applying dither at the final step. Otherwise, if I only used dither in the blur shaders, other composition work would still cause precision loss (alpha blending, etc.), and banding would still be visible.

With this approach I don't see any banding anymore, and the dithering itself is also unnoticeable (I have a 4k screen, so maybe that helps). I don't know how expensive this is though.
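
Conceptually (this is a sketch of the idea, not the actual code in the dither branch), the present step boils down to quantizing the 16-bit back buffer to 8 bits with a per-pixel rounding bias:

// conceptual sketch only; back_color is read from the 16-bit back texture,
// threshold is a per-pixel value in roughly [-0.5, 0.5), e.g. from a Bayer matrix
vec3 dither_to_8bit(vec3 back_color, float threshold) {
    // bias the rounding per pixel so a smooth 16-bit gradient becomes a fine
    // mix of adjacent 8-bit levels instead of wide, visible bands
    return floor(back_color * 255.0 + 0.5 + threshold) / 255.0;
}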

yshui commented 1 year ago

fun side effect: i can use this to emulate a low color depth screen! :smile:

(this is what blur looks like on an 8-color screen; i turned up the dither grain size as well)
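
The low-color-depth emulation falls out of the same quantize-with-bias idea; a hypothetical generalization with a configurable number of levels per channel:

// hypothetical generalization: quantize each channel to `levels` values with
// the same dither bias; levels = 2.0 gives the 8-color (2x2x2) look above,
// levels = 256.0 is the normal 8-bit case
vec3 dither_quantize(vec3 color, float levels, float threshold) {
    return floor(color * (levels - 1.0) + 0.5 + threshold) / (levels - 1.0);
}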

yshui commented 1 year ago

hmm, btw, why do blur textures have an alpha channel?

absolutelynothelix commented 1 year ago

i decided to test the dithering, and i believe it's enabled with --dithered-present, but it fails for me with:

workstation:~/Downloads/picom/build/src$ ./picom --dithered-present
[ 12/01/2022 21:21:03.279 gl_init ERROR ] Framebuffer attachment failed at line 929: GL_FRAMEBUFFER_UNSUPPORTED
[ 12/01/2022 21:21:03.279 glx_init ERROR ] Failed to setup OpenGL
[ 12/01/2022 21:21:03.280 initialize_backend FATAL ERROR ] Failed to initialize backend, aborting...

am i having a bad gpu (gtx 1650) or what?

yshui commented 1 year ago

Hmm, does nvidia not support 16-bit textures for framebuffers? i thought it was a required format.

OK, only RGBA16 is required, RGB16 is not. I need to check both of them.

yshui commented 1 year ago

can you upload a trace from apitrace?

absolutelynothelix commented 1 year ago

yep, i've used my favorite debug tool called commenting out things, and after commenting out this it at least started, but i don't see any dithering (does it not work without it, or is it just unnoticeable?)

absolutelynothelix commented 1 year ago

> can you upload a trace from apitrace?

sure, but i've never used apitrace before. iirc there was an instruction to debug picom with apitrace, can you link it if it exists? or i'll figure it out on my own a bit later

yshui commented 1 year ago

let me try something first before you run apitrace.

yshui commented 1 year ago

@mighty9245 ok, can you try again with the latest change?

absolutelynothelix commented 1 year ago

@yshui, fixed :)

[ 12/01/2022 22:31:42.531 gl_init INFO ] Using back buffer format 0x805b

and a difference showcase from an average fhd screen enjoyer:

incredible, i can't even notice the dithering, it just removes the banding and that's it. very good.

also, if someone wants to use my noisy approach, maybe for some kind of aesthetics, i believe you can put it into a custom shader.
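
for reference, a rough sketch of how that could look as a window shader (assuming the --window-shader-fg interface from the picom man page, i.e. a window_shader() entry point with tex and texcoord; note this adds noise to the window contents, not to the blur behind it):

#version 330
// rough sketch: the noise snippet wrapped as a picom window shader for
// --window-shader-fg; this perturbs the window contents, not the blurred
// background behind the window
in vec2 texcoord;      // texture coordinate of the current pixel, in pixels
uniform sampler2D tex; // the window contents

vec4 default_post_processing(vec4 c);

const float NOISE_STRENGTH = 0.015;

float random(in vec2 coords) {
    return fract(sin(dot(coords, vec2(12.9898, 78.2330))) * 43758.5453);
}

vec4 window_shader() {
    vec4 c = texelFetch(tex, ivec2(texcoord), 0);
    c.rgb += mix(-NOISE_STRENGTH, NOISE_STRENGTH, random(texcoord));
    return default_post_processing(c);
}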

yshui commented 1 year ago

we could add a custom present shader. by the looks of it, it would be simple to do.

tryone144 commented 1 year ago

> The approach I've taken is using a 16-bit intermediary back buffer (the ->back_texture) and applying dither at the final step. Otherwise, if I only used dither in the blur shaders, other composition work would still cause precision loss (alpha blending, etc.), and banding would still be visible.

Nice :+1: Compositing with a higher depth resolution and only applying dithering when compressing down to 8 bits for drawing the final result sounds logical. This should keep the visual fidelity while making the color steps unnoticeable. Otherwise we would just use higher precision internally but lose it when presenting (and still have color banding), or the dithering would just be a different kind of noise to disturb the color bands.

> also, if someone wants to use my noisy approach, maybe for some kind of aesthetics, i believe you can put it into a custom shader.

Since it's self-contained, this should be no problem once we have a way to specify a full-screen present shader.

> we could add a custom present shader. by the looks of it, it would be simple to do.

Adding --present-shader analogous to --window-shader-fg? A config refresh causes a backend re-init, so that should work fine.

yshui commented 1 year ago

Closing as it's been merged to next.