canonical / mir

The Mir compositor

Use better scaling filters #3172

Open · Saviq opened 11 months ago

Saviq commented 11 months ago
The images show what appear to be texture minification artifacts. The downsampling filter currently in use is a simple bilinear filter (OpenGL's `GL_LINEAR`), which is prone to this kind of aliasing. I reproduced the effect on a QEMU machine and took some screenshots with a zoomed-in detail for better comparison:

[screenshot: filtering1]
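
Concretely, the plain bilinear case corresponds to sampler state along these lines (a simplified sketch with placeholder names, not Mir's actual renderer code):

```cpp
#include <GLES2/gl2.h>

// Sampler state for plain bilinear filtering: GL_LINEAR with no mip chain,
// which is what produces the minification aliasing shown above.
void use_plain_bilinear(GLuint texture_id)
{
    glBindTexture(GL_TEXTURE_2D, texture_id);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
```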

The problem also occurs when multiple outputs display cloned content at different scales. If the client uses the highest scale to set the buffer's resolution, the compositor downsamples the buffer when the surface is rendered on the lower-scale outputs. The issue affects server-side decorations for the same reason.

I ran some experiments using different types of texture sampling to try to improve the image quality for the case of a buffer being created for scale 10 and rendered at scale 1. Of course, it is unlikely that someone will use cloned outputs with such large differences in scale, but the artifacts are visible even if one output has a scale of 4 while another is unscaled.

The best results seem to be achieved by enabling mipmapping with trilinear filtering and a LOD bias of -1:

[screenshot: filtering2]

The rationale is that mipmapping prefilters the texture, while the negative bias shifts sampling towards the next higher-resolution mip level to avoid a loss of sharpness. Other biases also work, but -1 seems to be the sweet spot for clearer text.
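
For reference, the biased-trilinear variant boils down to something like this on GLES2 (a simplified sketch with placeholder names, not the code actually used in the experiment):

```cpp
#include <GLES2/gl2.h>

// Mipmapped trilinear sampling for the client buffer's texture. The mip chain
// must be regenerated whenever the buffer content changes.
void use_biased_trilinear(GLuint texture_id)
{
    glBindTexture(GL_TEXTURE_2D, texture_id);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glGenerateMipmap(GL_TEXTURE_2D);
}

// GLES2 has no GL_TEXTURE_LOD_BIAS sampler state, so the -1 bias is applied
// per sample in the fragment shader (GLSL ES 1.00):
char const* const biased_fragment_shader = R"(
precision mediump float;
uniform sampler2D tex;
varying vec2 v_texcoord;
void main()
{
    /* The third argument is the LOD bias: -1.0 shifts sampling one level
       towards the higher-resolution mip, restoring some sharpness. */
    gl_FragColor = texture2D(tex, v_texcoord, -1.0);
}
)";
```

One caveat: textures imported as `GL_TEXTURE_EXTERNAL_OES` don't support mipmapping, so such buffers would first have to be copied into a regular `GL_TEXTURE_2D`.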

I also tried supersampling (using the GL_OES_standard_derivatives extension) with different patterns (2x2, 2x2 rotated grid, quincunx) combined with biased trilinear filtering, but the text is blurrier when compared to the biased mipmapping:

[screenshot: filtering3]
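
For completeness, the 2x2 grid variant looks roughly like this (a sketch only; the exact sample offsets and the combination with biased trilinear filtering are omitted):

```cpp
// GLSL ES 1.00 fragment shader for a 2x2 grid supersample: the screen-space
// derivatives of the texture coordinate give the pixel footprint, and four
// samples spread inside that footprint are averaged.
char const* const supersampling_fragment_shader = R"(
#extension GL_OES_standard_derivatives : require
precision mediump float;
uniform sampler2D tex;
varying vec2 v_texcoord;
void main()
{
    vec2 dx = dFdx(v_texcoord) * 0.25;  /* quarter-pixel steps in texture space */
    vec2 dy = dFdy(v_texcoord) * 0.25;
    gl_FragColor = 0.25 * (texture2D(tex, v_texcoord + dx + dy) +
                           texture2D(tex, v_texcoord + dx - dy) +
                           texture2D(tex, v_texcoord - dx + dy) +
                           texture2D(tex, v_texcoord - dx - dy));
}
)";
```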

In summary, if downsampling artifacts become a problem, biased trilinear filtering could be used. It is faster than supersampling and produces better results.

Originally posted by @hbatagelo in https://github.com/MirServer/mir/issues/3171#issuecomment-1859518890

Saviq commented 11 months ago

@hbatagelo I've filed a new issue with your findings.

hbatagelo commented 11 months ago

@Saviq Thanks for filing the new issue.

About your comment in #3171, I understand that with proper tracking the client could provide a buffer scaled according to the output the surface is currently in, and downsampling/upsampling would be unnecessary. However, it's still not clear to me how the client would notify the compositor in the case of cloned/mirrored outputs at different scales.

Consider a scenario where the client has a single surface cloned into outputs of scale 10 and 1. The client receives two "enter" events, one for each output, and now intends to attach scaled buffers to the surface: a high-resolution buffer for the first output and a low-resolution buffer for the second. What requests should the client make to associate these buffers with the surface in a way that allows the compositor to track the buffers with the correct outputs?

RAOF commented 11 months ago

> About your comment in #3171, I understand that with proper tracking the client could provide a buffer scaled according to the output the surface is currently in, and downsampling/upsampling would be unnecessary. However, it's still not clear to me how the client would notify the compositor in the case of cloned/mirrored outputs at different scales.

Yeah, it's not possible with current Wayland protocols for clients to submit buffers for multiple scales. (IIRC we had support for this in mirclient, or at least thought about it).
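
For reference, the best a client can do today is attach one buffer at one scale, typically the largest of the outputs the surface has entered (a rough sketch with the wayland-client API; buffer allocation is elided and `buffer_for_scale()` is a made-up helper):

```cpp
#include <wayland-client.h>
#include <algorithm>
#include <cstdint>

// Made-up helper: returns a wl_buffer rendered for the given scale
// (allocation via wl_shm or linux-dmabuf elided).
wl_buffer* buffer_for_scale(int32_t scale);

// The surface gets exactly one buffer and one buffer scale, so a client whose
// surface is on two outputs picks a single scale, typically the largest one.
void commit_for_largest_scale(wl_surface* surface, int32_t scale_a, int32_t scale_b)
{
    int32_t const scale = std::max(scale_a, scale_b);  // e.g. max(10, 1)
    wl_surface_set_buffer_scale(surface, scale);
    wl_surface_attach(surface, buffer_for_scale(scale), 0, 0);
    wl_surface_damage_buffer(surface, 0, 0, INT32_MAX, INT32_MAX);
    wl_surface_commit(surface);
}
```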

Fortunately it's a reasonably rare case for surfaces to be on multiple outputs with different scale factors (at least once we fix the enter/leave events).

It would also be a reasonably simple protocol extension to add support for multiple buffer sizes. Maybe we should provide one?

hbatagelo commented 11 months ago

Thanks @RAOF for the clarification.

After fixing #342, scaling artifacts would likely only be noticeable when outputs have scale factors that differ by a large amount, and only for windows displayed on cloned outputs or spanning side-by-side outputs. While a protocol extension would address these cases, I wonder whether placing the responsibility on the client to provide multiple buffers would increase memory bandwidth usage. In all likelihood, the client would simply provide a downsampled copy of its high-resolution buffer, since the same content is displayed at different scales. I believe these rare cases would be better addressed by server-side filtering, which has the additional benefit of also working for server-side decorations.