Firestar99 closed this pull request 1 year ago.
On Windows, AMDVLK and Nvidia have exactly the same results, so I would declare this PR ready to merge.
But I feel like rendering into an sRGB vs. a UNORM image is not properly understood. What you output in the shader should always be in linear color space; Vulkan will take care of converting it to sRGB for you if necessary. This is especially important when blending, as converting to sRGB yourself means the colors get blended in sRGB space, which is of course wrong: they should be blended in linear color space. There, too, Vulkan takes care of the sRGB conversion for you. If you blend on an sRGB render target, it will read the dst color, convert it from sRGB to linear, blend it with your src color in linear color space, convert the result from linear back to sRGB, and finally write the color as sRGB into your render target. If you do the color space conversion in the fragment shader "for the convenience of users with linear color space render targets", then you will blend in sRGB space and thus break blending. As the image format is linear, the GPU won't do any sRGB-to-linear conversion, because it's already linear. So if you want this feature, you have to break blending.
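To make the difference concrete, here is a minimal sketch (plain Rust, not code from this PR) using the standard sRGB encode function, comparing a blend done on linear values (encoded once at the end, which is what the fixed-function blender does on an sRGB target) with a blend done on already-encoded values:

```rust
// Standard sRGB transfer function (linear -> sRGB); names are illustrative.
fn srgb_encode(linear: f32) -> f32 {
    if linear <= 0.0031308 {
        linear * 12.92
    } else {
        1.055 * linear.powf(1.0 / 2.4) - 0.055
    }
}

fn main() {
    let (src, dst, alpha) = (0.8_f32, 0.2_f32, 0.5_f32);

    // Blending in linear space (what Vulkan does on an sRGB render target):
    // blend the linear values, then encode once on the write.
    let blended_in_linear = srgb_encode(src * alpha + dst * (1.0 - alpha));

    // Blending in sRGB space (what happens if the fragment shader already
    // outputs sRGB into a UNORM target): encode first, then blend the encoded values.
    let blended_in_srgb = srgb_encode(src) * alpha + srgb_encode(dst) * (1.0 - alpha);

    // The two results differ, which is why semi-transparent (blended) pixels
    // change appearance depending on where the conversion happens.
    println!("linear-space blend: {blended_in_linear:.3}, sRGB-space blend: {blended_in_srgb:.3}");
}
```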
In your sample you render directly into the swapchain image, in sRGB in one window and linear in the other. The OS is telling you that it will interpret the image in the SrgbNonLinear color space, so it makes a lot of sense to present images in the sRGB color space. I would therefore expect the sRGB window to have the correct colors, but for some as-yet-unknown reason the colors are off and instead the linear version looks correct. Compare to the browser version: https://www.egui.rs/#Colors
Continuing assuming this works: now, for a game-engine internal image which will get post-processed later on, you may render into a linear image, like an RGBA16_SFLOAT, but after post-processing the final image will be in sRGB format, so the conversion happens on that write. If you want them to look the same, you basically have to render into your own linear image and then blit it onto the swapchain's sRGB image to do the color space conversion (or better, render a full-screen triangle as the blit, since copy_dst is not guaranteed to be supported on swapchain images).
Above suggestion would go to the fragment shader. The idea was from egui_wgpu's shaders.
I think it's good to let the user specify the color space and handle the special case in the shader. Similar to https://github.com/emilk/egui/blob/master/crates/egui-wgpu/src/egui.wgsl
Something is wrong with my explanation above, cause it's the sRGB window that changed color, not the linear one...
It is also a typical use case to render directly to the swapchain, I'd say.
In my game engine, I post-process the RGBA16 scene image and write that image to the swapchain. And last, I place the GUI over the swapchain as well.
The GUI could go to either, but I prefer having the option to choose.
You cannot implement fragment shader-based sRGB conversion without breaking blending. I've edited my comment above to explain why: https://github.com/hakolao/egui_winit_vulkano/pull/47#issuecomment-1737346484
(and it took me a while to make sure what I wrote is hopefully correct)
I also force-pushed a version which labels the windows with sRGB or linear to avoid confusion.
I figured out why the sRGB looks weird and UNORM looks fine.
Usually everyone does all their color and blending calculations in linear color space, so that the color you get out at the end of your lighting calculation is physically correct. And in Vulkan you can just write your shader expecting that the inputs and outputs are always in linear color space; it doesn't matter whether the texture being read or written is in a linear or an sRGB format.
But egui does not seem to care about physically correct lighting, it only cares about perceived brightness. Going along with that, the texture * color multiply as well as the blending is done entirely in the "perceived" sRGB color space instead of the linear one. That works well with APIs which only have linear "UNORM" textures, like basic OpenGL or wgpu.
Now if you were to use a Vulkan sRGB texture, you would write your color in sRGB and Vulkan would convert your assumed-linear output to sRGB, basically doing the linear-to-sRGB conversion twice, which is why the sRGB window is so much brighter. So in theory, if you took that doubly converted sRGB window's output and converted it once from sRGB to linear, you'd be left with a single sRGB conversion and it should look like the UNORM window.
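To see the "twice" effect numerically, a tiny sketch (plain Rust, same encode function as in the earlier snippet; not code from this PR):

```rust
// Standard sRGB transfer function (linear -> sRGB).
fn srgb_encode(x: f32) -> f32 {
    if x <= 0.0031308 { x * 12.92 } else { 1.055 * x.powf(1.0 / 2.4) - 0.055 }
}

fn main() {
    let linear = 0.2_f32;
    let once = srgb_encode(linear); // correct single conversion, ~0.48
    let twice = srgb_encode(once); // double conversion, ~0.73: noticeably brighter
    println!("linear {linear} -> encoded once {once:.2}, encoded twice {twice:.2}");
}
```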
How do we fix it? Luckily Vulkan has separate Image and ImageView objects, which allows us to take any Image of either UNORM or SRGB format and view it as a UNORM, and then draw into it as if it were a UNORM, so Vulkan never converts the final written colors to sRGB. Now of course there would be an issue if you expected to get the UI's colors in linear color space, as it can only output them in sRGB without messing up blending, so we may want to limit the API to sRGB images only to prevent people from shooting themselves in the foot. But as you're most likely creating your swapchain image in sRGB format anyway, since the interpreted color space is always ColorSpace::SrgbNonLinear (except if you go HDR with extensions), that shouldn't be a huge deal?
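As a rough sketch of that idea (assuming a vulkano 0.34-style API; exact type and field names may differ between versions, so treat this as illustrative rather than as the PR's code):

```rust
// Sketch only: view an existing image (e.g. a B8G8R8A8_SRGB swapchain image)
// through a UNORM ImageView, so shader writes are not sRGB-encoded by the GPU.
use std::sync::Arc;

use vulkano::format::Format;
use vulkano::image::view::{ImageView, ImageViewCreateInfo};
use vulkano::image::Image;
use vulkano::{Validated, VulkanError};

fn unorm_view(image: Arc<Image>) -> Result<Arc<ImageView>, Validated<VulkanError>> {
    // Start from the image's own view parameters, then override the format.
    let create_info = ImageViewCreateInfo {
        format: Format::B8G8R8A8_UNORM,
        ..ImageViewCreateInfo::from_image(&image)
    };
    // Caveat: a view format that differs from the image format requires the image
    // to be created with the mutable-format flag; for swapchain images that in
    // turn needs VK_KHR_swapchain_mutable_format (see the later comment below).
    ImageView::new(image, create_info)
}
```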
Typically most images are already in sRGB color space. And I think it is recommended that people use an sRGB swapchain target too. However, egui recommends Bgra8Unorm. I'd like this integration to enable both options. In the case of Bgra8Unorm, you need to convert only in the fragment shader: https://github.com/emilk/egui/blob/master/crates/egui-wgpu/src/egui.wgsl. In that file, in the case of Bgra8Unorm you'd be running fs_main_gamma_framebuffer, and the other one in the case of sRGB. Both require fragment-level conversion.
In my wgpu game engine, I use images with egui that are Bgra8Unorm, as is the swapchain image, as recommended by egui. But for the scene images, I use HDR-capable textures (rgba16). I render the rgba16 images (post-processed) onto the bgra8 swapchain, then render egui on bgra8 as well. In my vulkano projects, I've done the same.
As far as I know, this is all a requirement of how colors are handled in egui.
I would prefer that we don't break the API and that we enable both options for the target. The user needs to decide how to use the library and what color space the images they paste into egui are in.
I know I'm persistent with this, but again: you cannot just convert your fragment output color from sRGB to linear, as this will break blending. See the screenshot below, where I've applied the exact same conversion to both the sRGB (left) and linear (right) targets, and it breaks both blending tests (even though the conversion should only be enabled for sRGB). I would assume that the reference implementation in wgpu is broken as well; it's just that, since wgpu only has linear color space images, this code path and fragment shader are never used. If you run the Vulkan backend through RenderDoc you will always see it render into the linear UNORM swapchain image, and I assume you cannot force it to render into an sRGB image to see the bug happen, because wgpu doesn't support them.
I've also now exactly copied their "gamma" fragment shader, which was used on my machine to render to that linear swapchain image, to make sure there aren't any deviations anymore. So the current state of the PR should work correctly with color spaces.
Though my solution for sRGB images doesn't work either, as having an ImageView format different from the Image's format requires the image to have the mutable-format flag, which for swapchains requires the extension VK_KHR_swapchain_mutable_format. So I'm back at square one with supporting sRGB images, unless users can accept broken blending, maybe through an explicit flag?
I don't mind persistence, let me do another round of investigation tomorrow! And thanks for your effort. I'm mostly coming from the perspective of wanting to keep the API flexible and making sure we can get the correct output in one of the color spaces while the other can be close. Optimally both, of course.
Now, if you wish to find a correction that's even better for the SRGB case, feel free to do so, but I'm not sure if it's possible, or whether some precision loss there is what causes the blending error.
I've committed your suggested changes and did some slight code cleanup to deduplicate some code where I almost missed a required change. Also, should I run cargo fmt? I usually get so many changes then that I didn't commit any of it.
But my last commit f6617d2c37b09bbcfbba7a9c964233457d496c50 introduces some API breaking changes and is slightly opinionated:
- GuiConfig::allow_srgb_render_target, default false, which is required if someone wants to use an sRGB render target. Otherwise it panic!'s with "Using an output format with sRGB requires GuiConfig::allow_srgb_render_target to be set! This ensures the user is aware that rendering to an sRGB render target will break blending, causing discoloration of UI elements"
- To add allow_srgb_render_target without breaking anything, I would have to default it to true, making it pointless, as by default you just use the first available surface format, which is sRGB. So either break things or not have that check at all (or delay it to the next vulkano 0.34 release?)
- demo_app, which does both

Yea, do run cargo fmt!
Regarding "due to double color space conversion": without the sRGB to linear conversion onto an sRGB image this would be correct. But with that conversion, the actual reason is blending in linear color space and not in sRGB as egui expects.
I may get the terminology wrong, and I don't always know whether "blending" refers to the operation performed by the GPU with our given pipeline instructions or to what we currently do in the fragment shader. Here's only what I meant by the double conversion, call it what you want.
Egui colors are in sRGB. During sampling, the GPU converts them to linear space. Thus, to get back to sRGB, we convert to sRGB. But because we now write from linear space (where the shader runs) to an sRGB target, the GPU will convert that to sRGB again. To correct that, we convert to linear once more to offset that automatic conversion, and after the write we end up visually in the right color space. I'm assuming there's some precision loss or something in there.
Don't mind a better description! Just meant to make the message less severe, because I don't think it's too bad of a difference, based on how the demo apps look with both options.
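If it helps, here is that chain written out as a sketch (plain Rust, the standard sRGB transfer functions; not code from this PR):

```rust
// Standard sRGB transfer functions.
fn srgb_encode(x: f32) -> f32 {
    if x <= 0.0031308 { x * 12.92 } else { 1.055 * x.powf(1.0 / 2.4) - 0.055 }
}
fn srgb_decode(x: f32) -> f32 {
    if x <= 0.04045 { x / 12.92 } else { ((x + 0.055) / 1.055).powf(2.4) }
}

// What ends up in an sRGB render target for one channel of one texel,
// following the steps described above.
fn written_value(texel_srgb: f32) -> f32 {
    let sampled_linear = srgb_decode(texel_srgb); // GPU: sampling an sRGB texture decodes to linear
    let in_srgb_space = srgb_encode(sampled_linear); // shader: back to sRGB, egui's math happens here
    let shader_output = srgb_decode(in_srgb_space); // shader: extra decode to offset the write conversion
    srgb_encode(shader_output) // GPU: writing to an sRGB target encodes again
    // Net: in_srgb_space survives the write unchanged (up to precision loss),
    // but fixed-function blending still operates on shader_output, i.e. in
    // linear space, which is the blending concern raised above.
}

fn main() {
    // Without blending, the value round-trips apart from floating point error.
    println!("{:.6} -> {:.6}", 0.5, written_value(0.5));
}
```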
Can you run cargo fmt with nightly (cargo +nightly fmt)? It's annoying, but rustfmt.toml is used with some nightly options.
cargo +nightly fmt and suddenly cargo fmt has a lot fewer changes... I just replaced the commit.
Blending is configured in GraphicsPipelineCreateInfo::color_blend_state and defines what the GPU does with the color and alpha your fragment shader outputs. By default it just writes them to the texture, effectively doing nothing. But you can also configure it to do transparency effects, like the typical src_color * src_alpha + dst_color * (1 - src_alpha), with src being the output of your fragment shader and dst being the previous color value of that pixel.
Vulkan expects our shader to always output in linear color space and always does the blending in linear color space, independent of whether the render target is linear or sRGB. And if it's sRGB, Vulkan will do all the conversions necessary so everything stays in linear color space. But egui expects blending to happen in sRGB, which makes things weird. And the only way to get there is to always write sRGB from your shader and have a linear texture so Vulkan doesn't convert it.
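For reference, a rough sketch of that blend configuration (field names follow vulkano 0.34; older releases name these differently, so treat this as illustrative rather than as this integration's actual code):

```rust
use vulkano::pipeline::graphics::color_blend::{
    AttachmentBlend, BlendFactor, BlendOp, ColorBlendAttachmentState, ColorBlendState,
};

// Classic alpha blending, matching the src_color * src_alpha + dst_color * (1 - src_alpha)
// formula above, for every color attachment. This state is passed to
// GraphicsPipelineCreateInfo::color_blend_state when the pipeline is created;
// the blend itself then runs in fixed-function hardware after the fragment shader.
fn alpha_blend_state(num_color_attachments: u32) -> ColorBlendState {
    ColorBlendState::with_attachment_states(
        num_color_attachments,
        ColorBlendAttachmentState {
            blend: Some(AttachmentBlend {
                // src = fragment shader output, dst = value already in the render target
                src_color_blend_factor: BlendFactor::SrcAlpha,
                dst_color_blend_factor: BlendFactor::OneMinusSrcAlpha,
                color_blend_op: BlendOp::Add,
                src_alpha_blend_factor: BlendFactor::One,
                dst_alpha_blend_factor: BlendFactor::OneMinusSrcAlpha,
                alpha_blend_op: BlendOp::Add,
            }),
            ..Default::default()
        },
    )
}
```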
still missing the README.md example update, otherwise ready
And one lint issue; it seems to be a remnant of older Rust versions... https://github.com/hakolao/egui_winit_vulkano/actions/runs/6338437170/job/17215621104?pr=47
both readme and lint are fixed, but I won't be available until tomorrow if something else were to break
This is how the sRGB color test looks on the master branch right now on Linux KDE Plasma X11, RADV. This is with an sRGB colorspace render target, but it looks similar with a UNORM RT.
This branch changes the shaders to fix the colorspace transformations. This image is, however, in linear color space, as opposed to sRGB.