godotengine / godot

Godot Engine ā€“ Multi-platform 2D and 3D game engine
https://godotengine.org
MIT License

Depth texture returned by RenderSceneBuffersRD is upside down and in sRGB colorspace #90148

Closed jake-low closed 2 months ago

jake-low commented 6 months ago

Tested versions

System information

Godot v4.3.dev5 - macOS 14.4.1 - Vulkan (Forward+) - dedicated AMD Radeon Pro 555X - Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz (12 Threads)

Issue description

When sampling a depth texture in a compute shader (as part of a CompositorEffect), there are two surprises that aren't found when using a depth texture in a normal GDShader.

  1. The texture seems to be upside-down. To sample it correctly you need to flip the y coordinate of the screen UV, so the origin is effectively in the top left rather than the bottom left.
  2. The texture seems to be in the sRGB color space. In order to get correct depth values you need to convert back to linear space (depth = pow(depth, 2.2)) before constructing your NDC vec3.
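Taken together, the two workarounds amount to a coordinate flip plus an inverse-gamma step before reconstructing NDC. A minimal CPU-side sketch of that math (Python, with made-up sample values; the function name and the 2.2 exponent follow the description above, not any Godot API):

```python
def corrected_depth_sample(screen_uv, raw_sample):
    """Apply the two workarounds described above.

    screen_uv: (u, v) in [0, 1], origin bottom-left as in GDShader.
    raw_sample: hypothetical value read from the compositor depth
                texture at the flipped coordinate.
    """
    u, v = screen_uv
    flipped_uv = (u, 1.0 - v)        # workaround 1: flip the y coordinate
    depth = raw_sample ** 2.2        # workaround 2: undo the sRGB-like encoding
    return flipped_uv, depth

uv, depth = corrected_depth_sample((0.25, 0.1), 0.5)
```

This is only the symptom-level fix; as the discussion below shows, the underlying issue is which projection matrix and depth convention the compute shader assumes.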

I'm opening one bug (not two) because I think it's possible that these two symptoms share a common cause.

I discovered this while trying to build a compositor effect by following @BastiaanOlij's RERadialSunRays example (thank you very much for putting that example together!). I think it's possible that the sRGB color space bug went unnoticed because that example only checks if depth < 1.0 which will work the same with or without correcting for gamma.

Regarding the texture being upside down, it occurs to me that it could be that I've made a mistake in passing the camera transform and projection matrices into my shader, and causing the camera view space to appear inverted that way. But I checked the engine source code and I think I'm doing things correctly. Please correct me if I'm wrong though. šŸ™‚

Steps to reproduce

Use the sample project below. In it, there is a CompositorEffect which uses a compute shader to draw an SDF shape using raymarching. Around the shape are several ordinary mesh objects (which will appear in the depth buffer). The raymarcher samples the depth buffer and stops marching once it reaches that depth in the scene, to avoid drawing the SDF shape on top of objects that should occlude it. The expected result looks like this (the raymarched shape is the pink cone in the center, the other shapes are MeshInstance3Ds).

[screenshot: expected result, with the raymarched cone correctly occluded]

The two surprising behaviors I observed are both highlighted in // BUG comments in RaymarchShader.glsl. You can remove the workarounds to see what happens without them.

Without flipping the y coordinate of the screen UVs before sampling the depth texture, you get this: (the occluded part of the cone is wrong because it's using the upside down version of the depth texture)

[screenshot: result without the y-flip, showing incorrect occlusion of the cone]

And without applying the inverse gamma transformation, you get this: (the cone appears to intersect the cube even though it should be behind it, because the depth value is wrong)

[screenshot: result without the inverse gamma transformation, with the cone intersecting the cube]

Minimal reproduction project (MRP)

CompositorEffectDepthBufferBug.zip

BastiaanOlij commented 6 months ago

Ok, it's important to realise that compositor effects give you access to the buffers as used by the rendering engine. This is the actual depth buffer that is bound to the fixed pipeline when rendering geometry.

The depth buffer you access in a normal GDShader is actually a copy made after opaque rendering, which is flipped for ease of use. That copy adds overhead and limits how the depth buffer can be used.

And yes, in Vulkan images are rendered "upside down", that's just the way the rendering engine works and you'll have to adjust for that. All buffers are oriented the same.

I'm not sure why there is a linear to sRGB conversion, @clayjohn maybe you can shed light on why this could be happening? Again, we're using the depth buffer as is, so it should be treated as a data buffer, not as a color buffer, but possibly this is affected by hints in the shader code. Indeed, my sun rays example wouldn't care, because we're just masking based on depth. If this is the case and we're just missing a hint, I'll be sure to update the example.

jake-low commented 6 months ago

Thank you for the explanation on why the Y coordinate is inverted compared to what I expected from GDShader. Glad to know that it's not an error on my part in how I created the sampler or something.

Re: the sRGB conversion, I will fiddle around with layout hints in the shader, and if I'm able to resolve the issue that way I will report back here.

Thanks again for your help!

xcodian commented 3 months ago

I'm experiencing these issues in v4.3-beta2 and this is making it extremely hard for me to work with the depth buffer in compositor effects.

The provided reproduction example is completely broken for me in v4.3-beta2 (no errors, just no raymarched shape visible), and I'm getting extremely weird values when trying to linearise the depth buffer myself...

float sRGBtoLinear(float sRGB) {
    // 0.94786729857 == 1.0 / 1.055; this is the power-curve branch of the
    // standard sRGB decode (the linear segment below 0.04045 is omitted)
    return pow((sRGB + 0.055) * 0.94786729857, 2.4);
}

/* ... */

float rawDepth = sRGBtoLinear(texture(depthTexture, screenUv).x);
vec4 clipSpace = vec4(screenUv * 2.0 - 1.0, rawDepth, 1.0);
vec4 viewSpace = matrices.inv_projection_matrix * clipSpace;
viewSpace.xyz /= viewSpace.w;

float worldSpaceDepth = length(viewSpace.xyz); // getting weird values out

/* ... */

Is my math wrong here or is something seriously wrong with how we're given the depth buffer in compositor effects? The same code works perfectly in the gdshader GLSL dialect.

Any explanation on how to get world-space depth with the raw Vulkan depth buffer we're given would be greatly appreciated.
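One way to sanity-check the reconstruction path in the snippet above is to run the same matrix math on the CPU with a known projection. A sketch in Python, assuming a plain GL-style symmetric perspective with depth in [-1, 1] (it deliberately omits any Godot-specific reverse-z or y-flip corrections; the helper names are hypothetical):

```python
import math

def perspective(fov_y_deg, aspect, z_near, z_far):
    """Standard GL-style perspective projection matrix (NDC depth in [-1, 1])."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (z_far + z_near) / (z_near - z_far),
                   2.0 * z_far * z_near / (z_near - z_far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def inverse_perspective(fov_y_deg, aspect, z_near, z_far):
    """Analytic inverse of the matrix above."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [aspect / f, 0.0, 0.0, 0.0],
        [0.0, 1.0 / f, 0.0, 0.0],
        [0.0, 0.0, 0.0, -1.0],
        [0.0, 0.0, (z_near - z_far) / (2.0 * z_far * z_near),
                   (z_far + z_near) / (2.0 * z_far * z_near)],
    ]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Project a known view-space point, then reconstruct it the same way the
# shader snippet does: clip = (ndc.xy, ndc.z, 1); view = invP * clip;
# view /= view.w.
proj = perspective(70.0, 16.0 / 9.0, 0.05, 100.0)
inv_proj = inverse_perspective(70.0, 16.0 / 9.0, 0.05, 100.0)

view_point = [1.0, 2.0, -10.0, 1.0]       # 10 units in front of the camera
clip = mat_vec(proj, view_point)
ndc = [c / clip[3] for c in clip]         # perspective divide

back = mat_vec(inv_proj, [ndc[0], ndc[1], ndc[2], 1.0])
back = [c / back[3] for c in back]
view_space_depth = math.sqrt(sum(c * c for c in back[:3]))
```

With a matching projection/inverse pair the round trip recovers the original view-space point exactly, so "weird values" usually mean the matrix used for the inverse does not match the conventions the depth value was produced under, which is exactly what the following comments pin down.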

clayjohn commented 3 months ago

I haven't fully looked into this. But looking quickly at the MRP, I can see it is using the raw "view_projection" matrix that we use for Stereo rendering. That is essentially the modified projection matrix that combines the eye offset and the basic camera matrix from the camera. It doesn't include the depth correction needed for rendering in the RD backend, the reverse-z (which is why the MRP doesn't work anymore), the y-flip that we do internally, or the jitter offset (if using TAA).

It may be enough to just include these corrections, but I haven't tested yet. We construct a "correction" matrix internally to do all of that at once right before we upload the matrix to the UBO.

Projection correction;
correction.set_depth_correction(p_flip_y);
correction.add_jitter_offset(taa_jitter);
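Conceptually, the depth-correction step is the standard GL-to-Vulkan NDC remap: flip y and map depth from [-1, 1] to [0, 1]. A minimal Python sketch of that remap (treat the exact matrix Godot builds internally as an assumption; this only illustrates the convention change):

```python
def depth_correction(ndc, flip_y=True):
    """GL-style NDC -> Vulkan-style NDC: y' = -y, z' = 0.5 * z + 0.5.

    x and w are unchanged. The exact form of Godot's internal correction
    matrix is an assumption; this shows only the standard remap.
    """
    x, y, z = ndc
    return (x, -y if flip_y else y, 0.5 * z + 0.5)

# GL near plane (z = -1) lands at Vulkan depth 0, far plane (z = +1) at 1.
near = depth_correction((0.0, 1.0, -1.0))
far = depth_correction((0.0, 1.0, 1.0))
```

With reverse-z layered on top, near and far swap, so the near plane ends up at depth 1.0, which matches the "near clip" value of 1.0 used in the fix below.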

clayjohn commented 3 months ago

It seems I was able to get the MRP working just fine by adding in the depth correction like so:

var correction: Projection = Projection.create_depth_correction(true)
projection_matrix = correction * projection_matrix;

And by setting the "near clip" value to 1.0 (since we use reverse-z projection).

[two screenshots: the MRP rendering correctly with the depth correction applied]

@BastiaanOlij We need to quickly decide if render_scene_data.get_view_projection() should return the view projection before or after corrections are applied. I suspect we could apply corrections before passing the projection to the user so that they don't have to think about it. However, we could also document the need for depth corrections in the RD backend and leave it up to the user to apply the corrections themselves.

BastiaanOlij commented 3 months ago

@clayjohn I think we need to be very careful here. Remember, render_scene_data is first and foremost built for our internal rendering logic. We have now exposed its data for outside use, so there is a chance of breakage if we change these. I do think we can change this particular one; I did a search and it looks like all logic is internalised.

Your conclusion is right, we do not apply the depth correction and such until we load the data into the UBO, so the data held within the class is before such corrections. We would thus have to apply them in these getters before returning the values.

BastiaanOlij commented 3 months ago

@jake-low I've put a fixed version of your MRP on the PR linked above; it works properly with that PR applied. I noticed that you had a few memory leaks as well. I'm not 100% happy with the fix, because I couldn't find whether we had a buffer update command, so there is probably still room for improvement there.

Edit: found the function I was overlooking, so it now uses buffer_update to load new data into the new buffer.

akien-mga commented 2 months ago

Fixed by #93630.