A depth pre-pass is where you pre-populate the depth buffer with the depths of your objects, without actually shading them. Later, when you render the objects for real, the depth test is set to equal, so a fragment is only shaded if it's the one actually visible at that pixel. It's essentially an optimisation with the following trade-offs:
Positive: Each pixel is only shaded once, meaning that the number of times you sample a texture or run a PBR calculation in a frame is O(width_pixels * height_pixels) instead of O(width_pixels * height_pixels * overdraw), where overdraw ranges from 1 in the best case to the scene's full depth complexity in the worst. At 1920x1080 with an average overdraw of 4, that's the difference between shading roughly 2 million fragments and roughly 8 million.
Negative: we have to clip and rasterise everything twice.
Negative: doing a depth pre-pass for alpha-clipped objects is harder and a worse trade-off, because even the pre-pass has to run a fragment shader that samples the texture and discards clipped fragments (as sketched below).
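To illustrate that last point: the pre-pass fragment shader for an alpha-clipped primitive can't be empty. It has to do something like the following (a sketch with made-up uniform and varying names, not our actual shader), which is exactly the per-fragment texture work the pre-pass was supposed to avoid:

```ts
// Sketch of what an alpha-clipped *pre-pass* fragment shader would have
// to do: sample the base colour texture and discard clipped fragments.
// Uniform/varying names here are hypothetical.
const alphaClipDepthFrag = `#version 300 es
precision mediump float;
uniform sampler2D u_baseColour;
uniform float u_alphaCutoff;
in vec2 v_uv;
void main() {
  if (texture(u_baseColour, v_uv).a < u_alphaCutoff) {
    discard;
  }
}`;
```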
I'm a big fan of depth pre-passes in general, but I'm a bit more hesitant here. As we're already clipping and rasterising everything twice (once per view), doing it twice more might not be worth it. Additionally, because you can't run a pipeline in WebGL without a fragment shader, we'd be running an empty one anyway.
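For reference, that "empty" shader really is trivial. Something like this pair (a sketch, not our actual shaders; the attribute and uniform names are assumptions):

```ts
// Sketch of the depth-only shader pair. The fragment shader has an empty
// body: colour writes are masked off during the pre-pass, so its output
// is never used, but WebGL still requires the stage to exist.
const depthOnlyVert = `#version 300 es
layout(location = 0) in vec3 a_position;
uniform mat4 u_modelViewProjection;
void main() {
  gl_Position = u_modelViewProjection * vec4(a_position, 1.0);
}`;

const depthOnlyFrag = `#version 300 es
void main() {}`;
```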
I think we need to do some profiling to see if this is worth it.
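One way to get GPU-side numbers in WebGL2 is the EXT_disjoint_timer_query_webgl2 extension. Support is patchy across browsers, so treat this as a sketch that needs a fallback, not a dependable harness:

```ts
// Sketch: time a render pass on the GPU with EXT_disjoint_timer_query_webgl2.
// The extension is often unavailable, so real profiling code needs a fallback.
function timePass(
  gl: WebGL2RenderingContext,
  pass: () => void,
  onResult: (ms: number) => void,
): void {
  const ext = gl.getExtension('EXT_disjoint_timer_query_webgl2');
  if (!ext) {
    pass(); // extension unsupported: just run the pass untimed
    return;
  }
  const query = gl.createQuery()!;
  gl.beginQuery(ext.TIME_ELAPSED_EXT, query);
  pass();
  gl.endQuery(ext.TIME_ELAPSED_EXT);

  // Results arrive asynchronously, typically a frame or two later.
  const poll = () => {
    const available = gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE);
    const disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
    if (!available) {
      requestAnimationFrame(poll);
      return;
    }
    if (!disjoint) {
      const nanoseconds = gl.getQueryParameter(query, gl.QUERY_RESULT);
      onResult(nanoseconds / 1e6);
    }
    gl.deleteQuery(query);
  };
  requestAnimationFrame(poll);
}
```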
I expect we'd do it something like this (see the sketch after the list):
Depth pre-pass all the opaque (PBR or unlit) primitives of all the models. Because this doesn't use any aspect of the primitives' materials, we can do them all in one batch.
Render all the alpha-clipped primitives normally (we can't batch these in a depth pre-pass like the opaque primitives, so pre-passing them isn't worth it).
Render all the opaque primitives with the depth test set to equal, so fragments are only shaded where they match the depth buffer.
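Putting those three steps together, a frame might look something like this. This is a sketch, not the renderer's actual code: the draw callbacks are hypothetical, and it shows one view (we'd run it once per view):

```ts
// Sketch of the proposed frame, per view. The three draw callbacks are
// hypothetical stand-ins for the renderer's actual draw submission.
function renderFrame(
  gl: WebGL2RenderingContext,
  drawOpaqueDepthOnly: () => void, // all opaque primitives, one batch
  drawAlphaClipped: () => void,    // normal pipelines, with `discard`
  drawOpaqueShaded: () => void,    // full PBR/unlit pipelines
): void {
  gl.enable(gl.DEPTH_TEST);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  // 1. Depth pre-pass for every opaque primitive. Colour writes are
  //    disabled; no material state is needed, so one trivial pipeline.
  gl.colorMask(false, false, false, false);
  gl.depthMask(true);
  gl.depthFunc(gl.LESS);
  drawOpaqueDepthOnly();

  // 2. Alpha-clipped primitives render normally: they shade, discard
  //    clipped fragments, and write depth so they can occlude step 3.
  gl.colorMask(true, true, true, true);
  gl.depthFunc(gl.LESS);
  drawAlphaClipped();

  // 3. Opaque primitives shade only where their depth matches the
  //    pre-pass result. The vertex transforms must produce bit-identical
  //    positions to step 1 (e.g. `invariant gl_Position`) for the EQUAL
  //    test to be reliable.
  gl.depthMask(false);
  gl.depthFunc(gl.EQUAL);
  drawOpaqueShaded();
}
```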
Shading should then be mostly constant time for a given framebuffer width and height, unless there's a ton of overdraw from the alpha-clipped objects (which could be the case for scenes with a ton of foliage).