Do you mean the readable depth buffer texture? Or a general zbuffer? How do you sample it exactly?
I use a screen filter and sample the texture. You can see the filter here: http://www.6dof.space/Inferno0.0.2/Scripts/filter.js
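For context, the sampling step in such a filter typically looks something like this (a generic WebGL sketch, not the actual filter.js code; the uniform names are assumptions):

```js
// Hypothetical fragment-shader excerpt, kept as a JS string WebGL-style.
// Samples the depth texture and converts the stored value back to linear
// view-space depth.
var depthSampleGlsl = [
    "uniform sampler2D uDepthMap;  // readable depth texture",
    "uniform float uNear;          // camera near plane (assumed uniform)",
    "uniform float uFar;           // camera far plane (assumed uniform)",
    "",
    "float linearDepth(vec2 uv) {",
    "    float z = texture2D(uDepthMap, uv).r;  // stored depth in [0, 1]",
    "    float ndc = z * 2.0 - 1.0;             // back to NDC [-1, 1]",
    "    return (2.0 * uNear * uFar) / (uFar + uNear - ndc * (uFar - uNear));",
    "}"
].join("\n");
```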
Ah yes, I see. So we simply omit any blended draw calls for this depth map right now: https://github.com/playcanvas/engine/blob/master/src/scene/forward-renderer.js#L1904
I wonder what the best solution API-wise would be.
@willeastcott @daredevildave @Maksims ?
Another way is just to make the posteffect system more flexible, so you can simply draw the HUD after the posteffect, which would probably be the proper option anyway.
Yeah, it's tricky to handle depth maps with blended calls. I don't think there is any perfect solution.
Rather than extending the posteffect system to allow the HUD to be drawn afterwards, I would prefer the ability to run multiple scenes at the same time, with the postFX applied on a per-scene basis. This makes some sense because the PlayCanvas interface already has methods for putting objects into scenes, so no additional UI development would be required.
I've run into this issue as well. I'm trying to create a water shader with a foam line. The water material has blending so that it can be translucent when alphaToCoverage isn't available. My approach to rendering the foam lines using the depth texture is similar to the one described here:
https://lindseyreidblog.wordpress.com/2017/12/15/simple-water-shader-in-unity/
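In rough outline, the foam trick from that article looks like this (a hypothetical GLSL excerpt kept as a JS string; all names here are assumptions, and the depth sample is assumed to already be linearized):

```js
var foamGlsl = [
    "uniform sampler2D uDepthMap;  // opaque scene depth (assumed linear)",
    "uniform float uFoamWidth;     // width of the foam band",
    "varying float vViewDepth;     // linear depth of the water fragment",
    "varying vec2 vScreenUv;       // screen-space UV of the fragment",
    "",
    "void main(void) {",
    "    float sceneDepth = texture2D(uDepthMap, vScreenUv).r;",
    "    // Distance between the water surface and whatever is behind it;",
    "    // the smaller it is, the closer we are to the shoreline.",
    "    float diff = sceneDepth - vViewDepth;",
    "    float foam = 1.0 - clamp(diff / uFoamWidth, 0.0, 1.0);",
    "    gl_FragColor = mix(vec4(0.0, 0.3, 0.5, 0.6), vec4(1.0), foam);",
    "}"
].join("\n");
```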
Since blended objects are filtered out of the depth texture, I can't use this technique. @guycalledfrank, of the options you listed, either of the last two would be perfect for my case.
Are there any workarounds for now?
I would recommend bearing with us while we put the finishing touches to a major rework of the engine. You can find the current state of the PR holding this update here. After we deploy this, I suspect it'll be much easier to do what you want. @guycalledfrank can probably confirm (or deny!).
In fact there's still a problem, because of this: https://github.com/playcanvas/engine/pull/1057/files#diff-a244156671d2beae826c3398f7d6fc21R195
Basically, each layer is split into an opaque and a transparent part (based on blendType), and only the opaque part ends up in the depth map.
On WebGL2 you would be able to get the desired behaviour by putting your blended objects into a separate layer between the World (opaque) and Depth layers. They will write to the native z-buffer and get captured.
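A minimal sketch of that setup, assuming the layer API from the upcoming PR (the layer name, entity wiring, and exact insertion index are illustrative, not engine guarantees):

```js
var layers = app.scene.layers;
var worldLayer = layers.getLayerById(pc.LAYERID_WORLD);

// A layer for blended-but-depth-writing objects, inserted right after
// World's opaque sublayer so it renders before the Depth layer captures
// the native z-buffer.
var blendedDepthLayer = new pc.Layer({ name: "BlendedDepth" });
layers.insert(blendedDepthLayer, layers.getOpaqueIndex(worldLayer) + 1);

// Route the blended entity to the new layer.
waterEntity.model.layers = [blendedDepthLayer.id];
```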
On WebGL1, however, it's not as easy, because there's this separate depth pass...
Perhaps we can split into opaque/transparent by the depthWrite flag itself, instead of blendType. In that case it should be expected (though it might not be obvious) that blended objects with depthWrite = true won't be properly sorted.
Tested: it seems to break some scenes. Apparently we actually want SOME transparent objects to be both sorted (back to front) and captured to the depth buffer.
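For the record, the tested change amounts to something like this (illustrative pseudocode over a hypothetical draw-call list, not the actual engine diff):

```js
// Bucket draw calls for the depth pass by depthWrite instead of blendType.
// The side effect: blended objects with depthWrite = true land in the
// opaque bucket and lose their back-to-front sorting, which is what broke
// some scenes.
var depthPassDrawCalls = drawCalls.filter(function (dc) {
    return dc.material && dc.material.depthWrite;
});
```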
The easiest suggestion would be: create objects with redWrite = greenWrite = blueWrite = alphaWrite = false and depthWrite = true, and let them get into the depth buffer. Then add the actual visible transparent objects to another layer.
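A sketch of that workaround using the color-write flags on the material (the proxy/visible split is up to you):

```js
// Depth-only proxy: writes depth but no color, so it gets captured into
// the depth buffer without being visible.
var depthProxy = new pc.StandardMaterial();
depthProxy.redWrite = false;
depthProxy.greenWrite = false;
depthProxy.blueWrite = false;
depthProxy.alphaWrite = false;
depthProxy.depthWrite = true;
depthProxy.update();

// The visible, blended version of the same geometry then goes on another
// layer with a regular transparent material.
```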
As for:

> posteffect system to allow the HUD to be drawn afterwards

That should be super-easy to do with the upcoming PR.
I'll close this ticket, as the suggested solution ('draw the HUD after the posteffect') is already possible. Please reopen if further work is needed.
If you have an additive-type material (blendType: 1, i.e. pc.BLEND_ADDITIVE), then it will never write to the depth buffer, even with depthWrite: true.
Why would you want to? In my case, I have a HUD that overlays the scene. I sample the depth map to ensure a post effect does not overlay the HUD. If the HUD objects use additive-type blending, then the depth map cannot tell the difference between the HUD and the background.