godotengine / godot-proposals

Godot Improvement Proposals (GIPs)
MIT License

Add a render mode to allow depth testing and screen reading transparent objects #10847

Open Mopzilla opened 1 week ago

Mopzilla commented 1 week ago

Describe the project you are working on

I am working on a project that heavily uses transparency.

Describe the problem or limitation you are having in your project

Transparency is very difficult to work with, especially when transparent objects overlap. I have issues with transparent objects blending improperly with each other and with the sky texture.

I want a screen effect that occurs when the camera is contained within a water volume which distorts the screen, as well as layering other effects.

The issue is I cannot read the screen texture to distort it, as transparent shaders are not included in the screen texture.

I also cannot depth test the water volume to determine how thick the fog effect should be, as you cannot depth test transparent shaders.

Describe the feature / enhancement and how it helps to overcome the problem or limitation

Add a render_mode flag that allows a shader to depth test transparent shaders and include transparent shaders in screen reads by reading after the transparent layer.

This way, in my screen-space shader, I can accurately read the depth and screen color of every object in the scene.

Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams

I am unsure how it would work in the engine, but my assumption is performance would only be lost when actually using the render mode.

For example, add two render modes: depth_test_alpha and screen_read_alpha. With depth_test_alpha, transparent objects are included in the depth texture; with screen_read_alpha, transparent objects are included in the screen texture.
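To illustrate, a screen-space shader using the proposed modes might look like the sketch below. Note that depth_test_alpha and screen_read_alpha are hypothetical names from this proposal; they do not exist in Godot today.

```glsl
shader_type spatial;
// Hypothetical render modes from this proposal -- not part of Godot's
// current shading language.
render_mode unshaded, depth_test_alpha, screen_read_alpha;

uniform sampler2D depth_tex : hint_depth_texture;
uniform sampler2D screen_tex : hint_screen_texture;

void fragment() {
	// With the proposed modes, both of these textures would include
	// transparent geometry instead of only the opaque pass.
	float depth = texture(depth_tex, SCREEN_UV).r;
	vec3 scene_color = texture(screen_tex, SCREEN_UV).rgb;
	ALBEDO = scene_color;
}
```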

If this enhancement will not be used often, can it be worked around with a few lines of script?

I can't actually think of any solution whatsoever, which is why I am writing this. If there were a work-around that was somewhat convenient I would use it (if one exists let me know!)

Is there a reason why this should be core and not an add-on in the asset library?

I can't think of any downsides except a performance cost, which would only apply when the render mode is actively used. And there are so many flaws with transparency handling in-engine that it would be nice to have this built in.

Calinou commented 1 week ago

I don't know if this is technically feasible given how Godot's renderer works, especially since it currently only supports forward rendering and not deferred rendering.

Mopzilla commented 1 week ago

I see. Before I posted this proposal, I looked at some other posts, and someone said they just decided to implement it themselves in the engine, so I figured it was something easy to implement that was missing for some other reason. At least the screen reading, that is.

Jesusemora commented 1 week ago

We have many options for transparency. You have to be more specific about what you want and what your problem is, because it could be a genuine limitation, or you just haven't done your research.

Mopzilla commented 1 week ago

I want a screen effect that occurs when the camera is contained within a water volume which distorts the screen, as well as layering other effects.

The issue is I cannot read the screen texture to distort it, as transparent shaders are not included in the screen texture.

I also cannot depth test the water volume to determine how thick the fog effect should be, as you cannot depth test transparent shaders.

I think there is a solution with using the compositor, but that is far above my current knowledge (though I am trying to understand it).

If there was something simple like a render mode that exposed this functionality then it would make creating these effects extremely easy for less experienced developers.

I updated the original post to better describe the issue I am having.

PureAsbestos commented 5 days ago

You want a post-processing effect for this. See this page in the docs for more info. You don't actually need to get the depth of transparent objects to do this (unless you have transparent things underwater maybe, but that's a separate problem).

Mopzilla commented 4 days ago

That docs page is what I used to build the shader. The issue is (again, this could be the wrong way to go about what I want) that inside my quad shader that covers the camera, I want to read the depth so I can determine how transparent a given fragment should be.

Objects up close are at 0 ALPHA, and objects farther than a certain range are at 1 ALPHA.

The problem arises because I cannot get the depth of transparent materials: if the only thing between my camera and the sky (or a distant object) is a transparent waterfall, it gets covered because the quad shader displays 1 ALPHA.

If the surface of the waterfall shader was included in the depth test then my quad screen shader would render at 0.2 ALPHA on that fragment, allowing the waterfall to be visible.

I am very new to shader programming, and this could be the wrong way to go about what I want to achieve (a screen effect that adds fake fog: the farther a fragment is from the camera, the higher the alpha value for that fragment).
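For reference, the fog quad described above can be sketched as a spatial shader following the depth-reconstruction approach from the advanced post-processing docs. The fog_start/fog_end uniforms are my own illustrative names; the key limitation is noted in the comments.

```glsl
shader_type spatial;
render_mode unshaded;

// NOTE: hint_depth_texture only contains opaque geometry, which is
// exactly the limitation this thread is about -- a transparent
// waterfall will not appear in this depth buffer.
uniform sampler2D depth_texture : hint_depth_texture, filter_nearest;
uniform vec3 fog_color : source_color = vec3(0.6, 0.7, 0.8);
uniform float fog_start = 5.0;
uniform float fog_end = 50.0;

void fragment() {
	float depth = texture(depth_texture, SCREEN_UV).r;
	// Reconstruct the view-space position from the depth buffer
	// (see the "Advanced post-processing" page in the Godot docs).
	vec3 ndc = vec3(SCREEN_UV * 2.0 - 1.0, depth);
	vec4 view = INV_PROJECTION_MATRIX * vec4(ndc, 1.0);
	view.xyz /= view.w;
	float linear_depth = -view.z;
	ALBEDO = fog_color;
	// Near fragments get 0 alpha, distant fragments get 1 alpha.
	ALPHA = clamp((linear_depth - fog_start) / (fog_end - fog_start), 0.0, 1.0);
}
```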

thompsop1sou commented 1 day ago

I've also worked with screen-reading effects that both needed to use the depth texture and needed to be able to see transparent objects. Godot doesn't have great options for this right now (as you've discovered), but they are currently in development (see the Rendering Compositor issue, which Calinou mentioned above).

If you need a way to work around this right now (before the Rendering Compositor is finished), here are a few options:

  1. If you only need to access the color texture (hint_screen_texture) and not the depth texture or the normal-roughness texture, then you can just use a 2D screen-reading effect. You'll put this on a full-screen TextureRect or Sprite2D. Because this is a 2D effect, all of the 3D rendering will be already completed in the texture that you access using hint_screen_texture, including rendering of transparent objects. In addition, you can stack screen-reading effects using the BackBufferCopy node (see the docs here). This doesn't work if you need access to the depth texture or the normal texture because hint_depth_texture and hint_normal_roughness_texture are not available in 2D shaders.
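A minimal sketch of such a 2D screen-reading effect, to be placed on a material covering a full-screen TextureRect or Sprite2D (the distortion_strength uniform is just an illustrative name):

```glsl
shader_type canvas_item;

// In a 2D shader, hint_screen_texture samples the finished frame,
// so transparent 3D objects are already included.
uniform sampler2D screen_tex : hint_screen_texture, filter_linear_mipmap;
uniform float distortion_strength = 0.01;

void fragment() {
	// Simple animated horizontal wobble, e.g. for an underwater look.
	vec2 offset = vec2(sin(SCREEN_UV.y * 40.0 + TIME), 0.0) * distortion_strength;
	COLOR = texture(screen_tex, SCREEN_UV + offset);
}
```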

  2. If you're willing to do a little low-level code (compute shaders, GLSL, and using the RenderingDevice), you can use the new CompositorEffect resource. (This was introduced in Godot 4.3. If I understand correctly, it is one step toward a complete Rendering Compositor.) This allows you to add a post-processing effect at several points in the 3D rendering pipeline, including after transparent objects have been rendered. You can access any of the screen textures that you might use in a normal shader (hint_screen_texture, hint_depth_texture, or hint_normal_roughness_texture). However, you'll have to write the low-level code to explicitly pass these to a compute shader. The docs go over a complete example of how this works here.

Note: Even though a compositor effect does have access to all of the screen textures after the transparent pass, transparent objects will only automatically write to the color texture. To get them to write to the depth texture, you'll need to set them to depth_draw_always. Unfortunately, there is no way to get them to draw to the normal texture (see this issue).
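As a rough sketch of the second option, a CompositorEffect script might be structured like this (node setup and the compute dispatch itself are omitted; method names follow the Godot 4.3 class reference, but treat this as an outline rather than a working effect):

```gdscript
class_name PostTransparentEffect
extends CompositorEffect

func _init() -> void:
	# Run after the transparent pass, so transparent objects are
	# already in the color buffer when this callback fires.
	effect_callback_type = CompositorEffect.EFFECT_CALLBACK_TYPE_POST_TRANSPARENT

func _render_callback(_p_effect_callback_type: int, p_render_data: RenderData) -> void:
	var scene_buffers: RenderSceneBuffersRD = p_render_data.get_render_scene_buffers()
	if scene_buffers == null:
		return
	var size := scene_buffers.get_internal_size()
	if size.x == 0 or size.y == 0:
		return
	for view in scene_buffers.get_view_count():
		# Color and depth buffers for this view; these RIDs are what
		# you would bind as storage/sampled images in a compute shader.
		var color_image := scene_buffers.get_color_layer(view)
		var depth_image := scene_buffers.get_depth_layer(view)
		# ... build uniform sets and dispatch your compute shader here ...
```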

  3. There's also a way to pass the screen textures around manually via viewports, but it takes quite a bit of setup. However, the setup doesn't really involve low-level code, so this might be a better option if you'd prefer to stay away from that. In general, you'll do something like the following:
    1. You'll need two cameras. One will capture color information (so we'll call it the "color camera") and the other will capture depth information (so we'll call it the "depth camera"). You'll want the two cameras to have all of the same properties except that they should be on different visual layers. You'll also need to make it so that their 3D transforms match (you could make the depth camera a child of the color camera).
    2. Add two SubViewport nodes to your setup. You'll want to hook up one viewport to the color camera and the other viewport to the depth camera. You can do this by writing a little script, which can be put on the cameras or on the viewports. You'll make use of the method RenderingServer.viewport_attach_camera() to attach the cameras to the viewports (see here).
    3. Pass the resulting textures from these viewports as sampler2D uniforms to your post-processing shader. See here for an example of how to do this. After this is done, you will access the color texture and depth texture via these uniforms. (You will no longer need to use hint_screen_texture and hint_depth_texture. Basically, you've manually passed in these textures instead of using Godot's built-in versions.)
    4. Finally, you'll need to write a custom shader for all the objects that you want to be correctly visible in these textures that you just created. In this object shader (which, again, needs to be put on all your 3D objects), you'll overwrite the fragment() function so that it outputs regular color information for the color camera but outputs depth information for the depth camera. You can do this by testing which layer the current camera is on using the shader built-in CAMERA_VISIBLE_LAYERS. If it's on the color layer, output regular color; if it's on the depth layer, output depth information. (I think CAMERA_VISIBLE_LAYERS is only available in the vertex() function by default, but you can pass it as a varying to the fragment() function.)
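The camera/viewport wiring from the steps above can be sketched in GDScript roughly as follows. The node names (ColorCamera, DepthCamera, ColorViewport, DepthViewport, ScreenQuad) and the uniform names are assumptions for illustration; the RenderingServer and viewport calls are real Godot 4 API.

```gdscript
extends Node

@onready var color_camera: Camera3D = $ColorCamera
# Depth camera is a child of the color camera so their transforms match.
@onready var depth_camera: Camera3D = $ColorCamera/DepthCamera
@onready var color_viewport: SubViewport = $ColorViewport
@onready var depth_viewport: SubViewport = $DepthViewport

func _ready() -> void:
	# Step 2: attach each camera to its own viewport.
	RenderingServer.viewport_attach_camera(
		color_viewport.get_viewport_rid(), color_camera.get_camera_rid())
	RenderingServer.viewport_attach_camera(
		depth_viewport.get_viewport_rid(), depth_camera.get_camera_rid())

	# Step 3: hand the viewport textures to the post-processing material
	# as sampler2D uniforms (replacing hint_screen_texture/hint_depth_texture).
	var mat: ShaderMaterial = $ScreenQuad.material_override
	mat.set_shader_parameter("color_tex", color_viewport.get_texture())
	mat.set_shader_parameter("depth_tex", depth_viewport.get_texture())
```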

If you decide to go with this third option, I'd be happy to provide more details. I've done it before, so I know it can work. But it is a little finicky to get everything set up correctly.