mrdoob / three.js

JavaScript 3D Library.
https://threejs.org/
MIT License

WebGLRenderer: Allow for binding, rendering into mipmap storage of textures #29779

Open · gkjohnson opened this issue 4 weeks ago

gkjohnson commented 4 weeks ago

Description

Rendering custom mipmaps can be valuable for a number of use cases, including post processing and stylization, but it's not something that three.js currently supports.

I think there are a few conceptual disconnects currently. One is that "generateMipmaps" indicates both that mipmap storage should be allocated and that the mip chain contents should be generated. When generating custom mipmaps, though, these concepts should be separate, i.e. you may want options like "generateMipmapStorage" and "generateMipmapContents", or a single setting that enumerates the three states. Another is that you cannot currently render into just any texture's mip storage.

cc @CodyJasonBennett

Solution

These are some solutions that come to mind; there are no doubt others. I can't say they're optimal or that they align with what's possible in WebGPU, but I'll list them here to start the discussion:

**Generating Mipmap Storage w/o Contents**

The `generateMipmaps` setting could be changed to take three options so that attaching storage does not implicitly mean generating contents: `NO_MIPMAPS` (current `false`), `MIPMAP_STORAGE`, or `MIPMAP_CONTENTS` (current `true`). A rough sketch of what that could look like follows.
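
For illustration only (these constant names are hypothetical, not existing three.js API):

```js
// Hypothetical constants, for illustration only:
// NO_MIPMAPS      - no mip storage allocated (current generateMipmaps: false)
// MIPMAP_STORAGE  - mip storage allocated, contents left to the user
// MIPMAP_CONTENTS - mip storage allocated and auto-generated (current generateMipmaps: true)
const rt = new THREE.WebGLRenderTarget( 512, 512, {
	generateMipmaps: THREE.MIPMAP_STORAGE // hypothetical value
} );
```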

**Rendering to Mipmaps (#29844)**

Currently `setRenderTarget` supports taking an `activeMipmapLevel`, but as far as I can tell this only works if the user has specified textures in the `texture.mipmaps` array, the target is a 3D texture, or it is a cube map. The active mipmap level could also apply to automatically generated mipmap storage using `framebufferTexture2D`, as sketched below.
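
Assuming #29844 lands, usage could look roughly like this (a sketch; `downsampleScene` and `camera` are placeholders, and `setRenderTarget( target, activeCubeFace, activeMipmapLevel )` is the existing signature):

```js
// Sketch: render a downsample pass into each auto-allocated mip level in turn.
const rt = new THREE.WebGLRenderTarget( 512, 512, { generateMipmaps: true } );
const levels = Math.log2( 512 ) + 1; // 10 levels for a 512x512 target

for ( let level = 1; level < levels; level ++ ) {

	renderer.setRenderTarget( rt, 0, level ); // third argument is activeMipmapLevel
	renderer.render( downsampleScene, camera ); // samples level - 1, writes level

}

renderer.setRenderTarget( null );
```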

**Writing to Regular Texture Mipmaps**

The above solutions only really apply to render targets, but generating custom mipmaps for regular textures, normal maps, data textures, etc. is just as relevant. A simple solution would be to allow setting a regular, non-render-target texture as a depth-buffer-less renderable target, e.g. the hypothetical sketch below.
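
In hypothetical terms (this overload does not exist; `data` and `mipScene` are placeholders):

```js
// Hypothetical: a plain, non-render-target texture is accepted by
// setRenderTarget and bound as a depth-buffer-less color attachment.
const normalMap = new THREE.DataTexture( data, 256, 256 );
renderer.setRenderTarget( normalMap, 0, 1 ); // hypothetical overload: render into mip level 1
renderer.render( mipScene, camera );
renderer.setRenderTarget( null );
```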

Alternatives

**Generating Mipmap Storage w/o Contents**

To do this currently you can create a render target, initialize it with `generateMipmaps = true`, and then disable the flag so the storage remains available. This still incurs the overhead of generating mipmaps on creation, however:

```js
const map = new THREE.WebGLRenderTarget( 32, 32, { generateMipmaps: true } );
renderer.initRenderTarget( map );
map.texture.generateMipmaps = false;
```

**Rendering to Mipmaps / Writing to Regular Texture Mipmaps**

Using `copyTextureToTexture`, custom mipmaps can be generated with render targets and then copied into the appropriate mipmap level. The additions in #29769 allow for copying any existing mipmap data, as well. This solution incurs unneeded copying overhead and an additional render target, however.
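
A rough sketch of that copy-based workaround (assuming the `copyTextureToTexture( srcTexture, dstTexture, srcRegion, dstPosition, srcLevel, dstLevel )` signature after #29769; `downsampleScene`, `camera`, and `dstTexture` are placeholders):

```js
// Render the custom mip contents into a scratch target sized to the destination mip...
const scratch = new THREE.WebGLRenderTarget( 16, 16 ); // mip level 1 of a 32x32 texture
renderer.setRenderTarget( scratch );
renderer.render( downsampleScene, camera );
renderer.setRenderTarget( null );

// ...then copy the result into mip level 1 of the destination texture.
renderer.copyTextureToTexture( scratch.texture, dstTexture, null, null, 0, 1 );
```
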
Additional Context

WebGPU does not support automatic generation of mipmaps: https://github.com/gpuweb/gpuweb/issues/386

The answer to this [stackoverflow question](https://stackoverflow.com/questions/79109103/how-to-copy-specific-mip-map-level-from-a-source-texture-to-a-specific-mip-map-l/79134417#79134417) shows that it's possible to render into a mip level's storage while sampling from the immediate parent mip by setting `TEXTURE_MAX_LEVEL`, `TEXTURE_BASE_LEVEL`, and `TEXTURE_MAX_LOD`. Setting these can probably be left to the user.
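
In raw WebGL2 terms, the technique from that answer looks roughly like this (a sketch; `texture`, `fb`, `level`, and `maxLevel` are placeholders):

```js
// Restrict sampling of the texture to the parent mip so that rendering
// into `level` while reading `level - 1` is not a feedback loop.
gl.bindTexture( gl.TEXTURE_2D, texture );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_BASE_LEVEL, level - 1 );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_MAX_LEVEL, level - 1 );

// Attach the child mip as the render target and draw a full-screen pass.
gl.bindFramebuffer( gl.FRAMEBUFFER, fb );
gl.framebufferTexture2D( gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, level );
// ... draw ...

// Restore the full mip range once the chain is complete.
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_BASE_LEVEL, 0 );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_MAX_LEVEL, maxLevel );
```
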
CodyJasonBennett commented 4 weeks ago

Perhaps it's best to accept a number for pure storage rather than overloading an array of texture.mipmaps like webgl_materials_cubemap_render_to_mipmaps. We don't implement transform feedback, but it would raise a similar desire for buffer attributes that are only GPU-facing. WebGL has a similar overload (or `mappedAtCreation: false` in WebGPU) for when you don't want to allocate CPU memory just to specify a size; in our case, for many mips (see the sketch after the permalink below).

https://github.com/mrdoob/three.js/blob/841ca14e89f3ec925e071a321958e49a883343c0/examples/webgl_materials_cubemap_render_to_mipmaps.html#L117-L118
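
Something like this hypothetical option shape (`mipmapCount` is illustrative, not an existing three.js option):

```js
// Hypothetical: allocate four mip levels of GPU storage without any
// CPU-side data, analogous to mappedAtCreation: false in WebGPU.
const rt = new THREE.WebGLRenderTarget( 512, 512, {
	mipmapCount: 4
} );
```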

Implementation-wise, it would be nice to support TEXTURE_MAX_LEVEL, TEXTURE_BASE_LEVEL, TEXTURE_MAX_LOD as texture/render target properties to avoid ping-ponging (https://jsfiddle.net/cbenn/g617aq93) and MRT if able for this API. I've recently used them in conjunction for SSILVB/XeGTAO, although I ran into a plethora of platform issues with NPOT via ANGLE/DirectX (https://issues.chromium.org/issues/40877668). These are texture APIs that remain useful with WebGPU code, and I believe all of the early SPIR-V and now WGSL examples for mipmap generation use TEXTURE_BASE_LEVEL, although you can port SPD (MIT) with some difficulty.

gkjohnson commented 3 weeks ago

> Perhaps it's best to accept a number for pure storage rather than overloading an array of texture.mipmaps like webgl_materials_cubemap_render_to_mipmaps.

The "mipmaps" field isn't documented all that well and I don't fully understand how to use it currently. But as far as I know it's used for storing and uploading mipmap data when it's already generated and stored in a file format. The cubemap case looks like an odd workaround / hack to get mipmaps generated.

In terms of specifying a number of levels, are there common use cases for not just generating mipmaps down to a 1x1 pixel when they're needed?

> Implementation-wise, it would be nice to support TEXTURE_MAX_LEVEL, TEXTURE_BASE_LEVEL, TEXTURE_MAX_LOD as texture/render target properties to avoid ping-ponging

I expected this could be set by the user with gl.texParameteri, but I suppose that won't work for WebGPURenderer with a WebGL fallback.

> and MRT if able for this API

That should work if all the MRT attachments are attached to the framebuffer with the appropriate mipmap levels, I believe.

CodyJasonBennett commented 3 weeks ago

> In terms of specifying a number of levels, are there common use cases for not just generating mipmaps down to a 1x1 pixel when they're needed?

Yes, Hi-Z is one example, which does a min/max reduction (depending on the use of reverse depth), and the rest of the pipeline is very particular about the actual size and number of levels, especially with NPOT. For lower-spec devices, just a few coarse levels are enough. Many other techniques use hierarchical structures, which don't simply blur or carry data over but merge or interpolate it. The fanciest, I suppose, would be Radiance Cascades, which is worth a read itself. I'm not sure if PMREM would count, but maybe that's a decent place to try it.
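
For illustration, a minimal min-reduction pass might look like the following GLSL (a sketch, assuming the parent level is exposed as level 0 via TEXTURE_BASE_LEVEL as discussed above; reverse depth would use max instead):

```js
const hizFragmentShader = /* glsl */`#version 300 es
precision highp float;
uniform sampler2D uPrevLevel; // base level clamped to the parent mip
out vec4 outColor;

void main() {

	// Each output texel reduces a 2x2 block of the parent level.
	ivec2 src = ivec2( gl_FragCoord.xy ) * 2;
	float d0 = texelFetch( uPrevLevel, src, 0 ).r;
	float d1 = texelFetch( uPrevLevel, src + ivec2( 1, 0 ), 0 ).r;
	float d2 = texelFetch( uPrevLevel, src + ivec2( 0, 1 ), 0 ).r;
	float d3 = texelFetch( uPrevLevel, src + ivec2( 1, 1 ), 0 ).r;
	outColor = vec4( min( min( d0, d1 ), min( d2, d3 ) ) );

	// NPOT levels need extra care here (clamping / extra fetches at the edges).

}`;
```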

> I expected this could be set by the user with gl.texParameteri, but I suppose that won't work for WebGPURenderer with a WebGL fallback.

I expect this needs API changes in core, but the actual implementation is supported by both WebGL and WebGPU. Here's the WebGPU sample I was thinking of, where these are parameters of both the attachment and the texture view. In WebGL, these are binding calls at one level for the framebuffer and at another for the texture. Maybe we can first assume a reasonable level for sampling based on the level we are rendering to, rather than leave it to configuration. https://github.com/gpuweb/gpuweb/issues/386#issuecomment-592600828

> That should work if all the MRT attachments are attached to the framebuffer with the appropriate mipmap levels, I believe.

I think it's reasonable to expect the configurations of all attachments and textures to be the same for anything MRT. I've implemented this before externally by hacking `__webglTexture` and `__webglFramebuffer`, although I can't speak to the changes required to better support MRT in general. Happy to make a quick demo, which is hopefully more proper.

gkjohnson commented 3 weeks ago

> Yes, Hi-Z is one example, which does a min/max reduction (depending on the use of reverse depth), and the rest of the pipeline is very particular about the actual size and number of levels, especially with NPOT

I guess the question is more whether it's a problem if memory is just allocated down to a 1x1 mipmap in these cases; depending on the need, you would still be able to generate only the first few mip levels. Of course it's ideal not to allocate memory that goes unused, but it might be a fair trade for a more ergonomic, easier-to-integrate change.

> I think it's reasonable to expect the configurations of all attachments and textures to be the same for anything MRT. ... Happy to make a quick demo, which is hopefully more proper.

An example would be nice, but it looks like it would amount to calling framebufferTexture2D multiple times on a bound framebuffer, once for each color attachment, roughly as sketched below? It shouldn't be too difficult to add support for rendering into just a single mipmap attachment in setRenderTarget, though. Also, I'll note that I'm assuming a depth buffer will not be used when rendering into mipmaps; I'm not sure if one is needed for other use cases.
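
Roughly, as a raw WebGL2 sketch (`fb`, `textures`, and `level` are placeholders):

```js
// Attach each MRT color texture at the same mip level and enable all
// draw buffers; no depth attachment is bound, per the assumption above.
gl.bindFramebuffer( gl.FRAMEBUFFER, fb );
textures.forEach( ( tex, i ) => {

	gl.framebufferTexture2D( gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + i, gl.TEXTURE_2D, tex, level );

} );
gl.drawBuffers( textures.map( ( _, i ) => gl.COLOR_ATTACHMENT0 + i ) );
```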