Edit: Hello to any future readers! If you were linked here from a comment in the official docs, be aware that while this info is generally accurate, it's a bit simplified and might not be technically correct in all aspects. Also, by the time you read this, it may be out of date.
Unfortunately, I don't think any of these behaviors are bugs. These are pretty common limitations with this postprocessing method, and they arise from interactions between different rendering systems in Godot.
Consider these facts about how Godot implements certain things:

- Opaque objects are rendered before transparent objects, and a Sprite3D with the default Alpha Cut of Disabled is treated as a transparent object.
- The screen textures (`hint_screen_texture`, `hint_depth_texture`, `hint_normal_roughness_texture`) are all created once, after all opaque objects are rendered and before all transparent objects are rendered. So transparent objects can use the screen textures, but cannot be in the screen textures.

Combine those facts and I believe all the behavior can be explained: the Sprite3D is transparent, so it never appears in the screen texture the quad samples, and other transparent objects are simply drawn over or under the quad depending on sort order rather than being processed by it.
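For future readers, this is roughly what the quad setup under discussion looks like. A minimal sketch following the Advanced Postprocessing docs, assuming a 2x2 QuadMesh on the MeshInstance3D (the purple tint is just an arbitrary example effect):

```glsl
shader_type spatial;
// Unshaded and fog-free so lighting and fog don't alter the effect.
render_mode unshaded, fog_disabled;

uniform sampler2D screen_texture : hint_screen_texture, repeat_disable, filter_nearest;

void vertex() {
	// Pin the quad to the near plane so it always covers the screen
	// (z = 1.0 because Godot 4.3 uses reversed depth).
	POSITION = vec4(VERTEX.xy, 1.0, 1.0);
}

void fragment() {
	vec3 screen = texture(screen_texture, SCREEN_UV).rgb;
	// Tint whatever was captured in the screen texture purple.
	ALBEDO = mix(screen, vec3(0.5, 0.0, 0.5), 0.5);
}
```

Because this shader reads `hint_screen_texture`, the quad itself lands in the transparent pass, which is exactly why it can see opaque objects but not the Sprite3D.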
In your case, I think the easiest workaround is to set `Alpha Cut` to `Discard` in your Sprite3D. This will make the Sprite3D render either fully transparent or fully opaque pixels, and go in the opaque queue, so it shows up in the screen texture.
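(If you ever need the same behavior from a custom material instead of the Sprite3D property, the spatial-shader equivalent is alpha scissor. A minimal sketch, with a hypothetical `sprite_tex` uniform:)

```glsl
shader_type spatial;

// Hypothetical texture uniform standing in for the sprite's texture.
uniform sampler2D sprite_tex : source_color, filter_nearest;

void fragment() {
	vec4 tex = texture(sprite_tex, UV);
	ALBEDO = tex.rgb;
	ALPHA = tex.a;
	// Pixels below the threshold are discarded and the material stays in
	// the opaque queue, much like Alpha Cut = Discard.
	ALPHA_SCISSOR_THRESHOLD = 0.5;
}
```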
I think in some cases you can also work around this by setting the `VisualInstance3D > Sorting > Offset` of the quad to a large negative number. This will render most transparent objects over the postprocessing quad, so while they won't be affected by the postprocess, they will show up in the scene.
You could also consider using a CompositorEffect, which does have the ability to create postprocessing that works well with transparency.
These limitations are already somewhat documented but they could perhaps be made more clear on the Advanced Postprocessing page itself.
(Also, I don't think that the issue you linked is related - these limitations are in effect with perspective cameras too. There are some issues with orthographic cameras in shaders, but they're not related to this issue IMO.)
Firstly, thank you so much for the detailed explanation. The relief of hearing from someone knowledgeable is a gift. And you nailed it: Transparency is the caveat I was missing, and it makes perfect sense.
That being said, why is this the recommended post processing method? I thought it was strange to have a quad in the scene at all, when all I really want is to take the buffer from the camera, and put it through a shader.
You mentioned "this postprocessing method," but it's the only one I've been able to find documented or in any tutorials online. Is there another method that simply adds a shader layer after the render stage?
> why is this the recommended post processing method?
At the time that article was originally written, and up until the Compositor was implemented in 4.3, Godot really did not have a dedicated custom postprocessing solution. The method documented in "Advanced Postprocessing" is something of a hack.
These days the most robust way to implement custom postprocessing would probably be a compositor effect, but you need to understand lower-level GLSL and rendering code. There's also a good collection of resources here. I believe there are vague plans to implement a friendlier abstraction for custom postprocessing that uses the compositor but doesn't require as much boilerplate, but I don't think they are concrete yet. The official compositor tutorial implements something like that, since it lets you swap the GLSL code on demand.
Woah, really!? My intuition was wayyy off here. I went into this thinking "yup, there's some pixels, I'll just slap a shader on it. Easy peasy."
I've seen some suggestions to double-camera it: have a camera render to a 2D viewport, then add shaders to that.
Oh, if you only need the color texture and not the depth or normal+roughness textures, you can also do postprocessing after the 3D scene is rendered, using a ColorRect control node. That approach is documented here: https://docs.godotengine.org/en/stable/tutorials/shaders/custom_postprocessing.html
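With the ColorRect anchored to Full Rect so it covers the whole viewport, the entire effect can be one small canvas shader. A sketch in the spirit of that tutorial, using grayscale as a stand-in effect:

```glsl
shader_type canvas_item;

uniform sampler2D screen_texture : hint_screen_texture, repeat_disable, filter_nearest;

void fragment() {
	vec3 screen = texture(screen_texture, SCREEN_UV).rgb;
	// Grayscale via standard luma weights.
	COLOR = vec4(vec3(dot(screen, vec3(0.299, 0.587, 0.114))), 1.0);
}
```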
And yeah, there are also some tricks you can do with viewports in some cases, to do "compositing" without the compositor.
Okay. So, screen shader options are:

1. A full-screen quad with a spatial shader (the "Advanced Postprocessing" method)
2. A ColorRect with a screen-reading canvas shader
3. A CompositorEffect
4. Viewport tricks
You said that option 2 doesn't allow depth, normal, or roughness. It seems like I should be able to render the scene with normals if I'm able to render it with colors. Same for depth and roughness. Is it more that it would be inefficient because each would require a separate render of the scene?
Option 2 (ColorRect) can't use depth, normal, or roughness because the data is no longer available at that point in the rendering process. For workarounds, you might find this comment and the linked project helpful. They explain it better than I can.
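As I understand it, the gist of the workaround there is to render the scene again into a SubViewport with a material override that encodes the missing data as color. Roughly along these lines (a sketch, not that project's exact shader):

```glsl
shader_type spatial;
render_mode unshaded;

void fragment() {
	// Encode the view-space normal into the 0..1 color range. Sampling
	// this SubViewport's texture then gives you a makeshift normal buffer.
	ALBEDO = NORMAL * 0.5 + 0.5;
}
```

So with this approach, each extra buffer effectively does cost another render of the scene.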
Perfect link. Thank you!
And for the record, I think that the current state of the postprocessing docs can be much improved. I would really like a page that compares the pros, cons, and limitations of each approach, especially now that the Compositor is implemented. It's just a large piece of work to do that.
I think you're very right on this one. With each step I take forwards, it seems I find myself even further confused as to how anything gets done efficiently.
My suspicion is that most people are applying shaders to individual objects and moving on. I don't know what the performance cost is of adding a grayscale shader to every object in a scene, rather than doing that in post-process. I would assume negligible for that type of shader, but that it adds up very quickly for things that are more complex.
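(By "a grayscale shader to every object" I mean something like this per-material version of the earlier effect; a sketch assuming a plain textured material with a hypothetical `albedo_tex` uniform:)

```glsl
shader_type spatial;

// Hypothetical albedo texture; assumes a simple textured material.
uniform sampler2D albedo_tex : source_color;

void fragment() {
	vec3 c = texture(albedo_tex, UV).rgb;
	// Same luma weights as the postprocess version, applied per object.
	ALBEDO = vec3(dot(c, vec3(0.299, 0.587, 0.114)));
}
```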
Another related discussion:
https://github.com/thompsop1sou/custom-screen-buffers/issues/1
Tested versions
v4.3.stable.official [77dcf97d8]
System information
Godot v4.3.stable - macOS 14.0.0 - GLES3 (Compatibility) - Apple M1 Pro - Apple M1 Pro (10 Threads)
Issue description
https://github.com/user-attachments/assets/22a2d77f-c372-49b0-9326-43ea82b35902
A 3D scene with a Sprite3D and a MeshInstance3D set up as a full-screen quad with a screen-reading shader.
Bugs include:

- Objects being culled as the camera moves around the preview window
- The Sprite3D never being affected by the screen shader (never being purpleized)
Steps to reproduce
Navigate the preview window; notice things being culled, and notice the Sprite3D never being purpleized.
Minimal reproduction project (MRP)
screen_shader.zip