This is the current implementation:
the user requests the device, with an optional depth/stencil buffer and optional antialiasing
the engine requests these (possibly multisampled) color and depth/stencil surfaces from the context, and in the simplest case, all rendering takes place directly into them.
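The current flow can be sketched using the standard WebGL context-creation attributes. The attribute names below (antialias, depth, stencil) are the real WebGLContextAttributes; the deviceOptions object and its field names are hypothetical engine-side options, purely for illustration.

```javascript
// Map hypothetical engine device options to real WebGL context attributes.
// With antialias: true, the browser allocates a multisampled default
// framebuffer and resolves it itself when compositing the page.
function contextAttributes(deviceOptions) {
    return {
        antialias: !!deviceOptions.antialias,   // multisampled default framebuffer
        depth: !!deviceOptions.depthBuffer,     // depth attachment on the default framebuffer
        stencil: !!deviceOptions.stencilBuffer  // stencil attachment on the default framebuffer
    };
}

// In a browser this would be used as:
// const gl = canvas.getContext('webgl2',
//     contextAttributes({ antialias: true, depthBuffer: true }));
```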
Limitations:
this default framebuffer is usually double buffered by the browser (one buffer gets rendered to by the engine while the other is used to composite the previous frame into the web page). If all surfaces are multisampled, this uses more memory than necessary. Often, specifically on tiled architectures, the multisampled depth does not even need to be stored in memory at all.
when using post-processing, after the world gets rendered, the multisampled color gets resolved to a single-sampled buffer, and post-processing takes place on it. At the end though, we have to render this single-sampled result back into the multisampled framebuffer, and the browser needs to resolve it again later, costing us a lot of GPU bandwidth. Typically, the multisampled buffer is resolved once during post-processing and stays single sampled.
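To make the memory cost concrete, here is a rough back-of-the-envelope comparison (illustrative numbers only; function names are made up, real allocations vary by browser and GPU, and tiled GPUs may never store MSAA surfaces in memory at all): with the current layout, both color buffers and the depth/stencil buffer are multisampled, while the proposed layout keeps one internal multisampled target plus a double-buffered single-sampled color buffer.

```javascript
// Rough per-surface byte count, assuming 4 bytes/pixel for both RGBA8
// color and DEPTH24_STENCIL8, with MSAA storage scaling linearly in the
// sample count (illustrative only).
function surfaceBytes(width, height, samples) {
    return width * height * 4 * samples;
}

// Current: double-buffered multisampled color + multisampled depth/stencil.
function currentLayoutBytes(width, height, samples) {
    return 2 * surfaceBytes(width, height, samples) +  // two MSAA color buffers
           surfaceBytes(width, height, samples);       // MSAA depth/stencil
}

// Proposed: one internal MSAA color + MSAA depth/stencil, plus a
// double-buffered single-sampled color buffer from the context.
function proposedLayoutBytes(width, height, samples) {
    return surfaceBytes(width, height, samples) +      // internal MSAA color
           surfaceBytes(width, height, samples) +      // internal MSAA depth/stencil
           2 * surfaceBytes(width, height, 1);         // double-buffered resolved color
}
```

At 1920x1080 with 4x MSAA this works out to roughly 99.5 MB for the current layout versus roughly 82.9 MB for the proposed one, before even counting the tiled-GPU case where the internal MSAA surfaces may need no memory at all.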
Suggested implementation:
the user requests the device, with an optional depth/stencil buffer and optional antialiasing
those flags are used to create an internal render target (not double buffered).
and the engine requests a single-sampled, color-only buffer from the context (this is internally double buffered).
the engine renders into the internal, optionally multisampled target, resolves it for post-processing, and at the end the result lands in the single-sampled color buffer, never having to unresolve and resolve it again. Note that after the resolve, the scene's depth buffer is likely no longer available (as it cannot be resolved on all platforms) - typically only the UI renders on top of post-processing, and this is not a limitation in that case.
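The proposed flow can be sketched in WebGL2 as follows. Only the gl.* calls and constants are real API; the function names and overall structure are an illustrative assumption, not the engine's actual code. The internal target is created with multisampled renderbuffers, and at the end only color is resolved into the single-sampled default framebuffer, since depth is not resolvable on all platforms.

```javascript
// Create an internal render target; samples > 1 makes it multisampled.
function createInternalTarget(gl, width, height, samples) {
    const fb = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fb);

    // Multisampled color attachment.
    const color = gl.createRenderbuffer();
    gl.bindRenderbuffer(gl.RENDERBUFFER, color);
    gl.renderbufferStorageMultisample(gl.RENDERBUFFER, samples, gl.RGBA8, width, height);
    gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, color);

    // Multisampled depth/stencil attachment (never double buffered).
    const depthStencil = gl.createRenderbuffer();
    gl.bindRenderbuffer(gl.RENDERBUFFER, depthStencil);
    gl.renderbufferStorageMultisample(gl.RENDERBUFFER, samples, gl.DEPTH24_STENCIL8, width, height);
    gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_STENCIL_ATTACHMENT, gl.RENDERBUFFER, depthStencil);

    return fb;
}

// Resolve the internal target into the single-sampled default framebuffer.
// Only COLOR_BUFFER_BIT is blitted: scene depth stays behind in the
// internal target and is generally not available after this point.
function resolveToDefault(gl, fb, width, height) {
    gl.bindFramebuffer(gl.READ_FRAMEBUFFER, fb);
    gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, null);
    gl.blitFramebuffer(0, 0, width, height, 0, 0, width, height,
                       gl.COLOR_BUFFER_BIT, gl.NEAREST);
}
```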
Possible complication:
we need to investigate if this can work with WebXR. Do we need to receive depth information in the depth buffer from XR, and perhaps output it again? @Maksims - any idea about this?
From what I can see, there are two ways XR interacts with depth:
the depth-sensing feature. From these XR APIs we get depth data in a raw format, which can then be uploaded to a texture, or a texture used to render into the depth buffer. This should work without any changes.
When XR is initialized, an XRWebGLLayer is created, which gives us a color / depth / stencil framebuffer, which can then be set on WebGL to be used as the default framebuffer. When the XRWebGLLayer is created, an ignoreDepthValues option can be specified. If it is set to false, the default framebuffer needs to output depth as well, which is not what this proposal wants to do. So in this case, we would likely need to allocate a single-sampled depth attachment as well, and resolve / shader-copy depth into it.
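The decision above could be captured as a small engine-side helper. Only ignoreDepthValues is a real XRWebGLLayer creation option; the function itself is a hypothetical sketch of the check, not existing engine code.

```javascript
// Hypothetical check: does the proposed single-sampled default framebuffer
// additionally need a depth attachment (filled by a resolve / shader-copy
// of scene depth) so the XR compositor can read valid depth values?
function xrNeedsDepthOutput(layerInit) {
    // The spec default for ignoreDepthValues is false, so an omitted value
    // is treated here the same as an explicit false: depth must be output.
    return layerInit.ignoreDepthValues !== true;
}
```

A session that passes ignoreDepthValues: true opts out, so the fast path of this proposal stays available; anything else would trigger the extra single-sampled depth attachment and copy.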