mrdoob / three.js

JavaScript 3D Library.
https://threejs.org/
MIT License

Improving quality of rough environmental reflections #26796

Closed. donmccurdy closed this issue 11 months ago.

donmccurdy commented 1 year ago

Description

I've attached a model using metalness=100% and roughness=15%, and compared screenshots in several engines.

model_slice_clean.glb.zip
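In three.js terms, that corresponds to roughly the following material setup (a minimal sketch):

```js
import * as THREE from 'three';

// The material settings under discussion: fully metallic, slightly rough.
const material = new THREE.MeshStandardMaterial( {
	metalness: 1.0,
	roughness: 0.15
} );
```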

Screenshots: Khronos Sample Viewer (slice_khronos), PlayCanvas (slice_playcanvas), Babylon.js (slice_babylon), three.js (slice_threejs)

You'll likely notice first that in Khronos Sample Viewer and PlayCanvas, the surface appears considerably less "rough". I'm not especially concerned about that here – the glTF specification doesn't include environment maps today, and implementations have some leeway in how to preprocess them.

I am wondering about the visible blocky edges in the three.js render, though. The same issue is visible with other environments, particularly RoomEnvironment. I understand we use a low-res 256x256px cubemap for PMREM by default — I tried increasing that resolution in local builds, but didn't see a meaningful reduction of the problem.
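For context, the environment here goes through the standard PMREM path, roughly like this sketch (the HDR filename is a placeholder, and the import path varies by setup):

```js
import * as THREE from 'three';
import { RGBELoader } from 'three/addons/loaders/RGBELoader.js';

const renderer = new THREE.WebGLRenderer();
const scene = new THREE.Scene();

// PMREMGenerator prefilters the environment into the mip/roughness
// chain that MeshStandardMaterial samples for IBL.
const pmremGenerator = new THREE.PMREMGenerator( renderer );

new RGBELoader().load( 'environment.hdr', ( hdrTexture ) => {

	scene.environment = pmremGenerator.fromEquirectangular( hdrTexture ).texture;
	hdrTexture.dispose();
	pmremGenerator.dispose();

} );
```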

Solution

Are there settings available, or settings we could add, to improve the rough reflections for cases like these? Can/should IBL be pre-processed offline to improve the result? For scenes with large reflective surfaces, I think the current reflections may be less clean than users would prefer.


WestLangley commented 1 year ago

/ping @elalish

elalish commented 1 year ago

I can definitely answer any of your questions here in detail, but first I'd like to take issue with this:

> You'll likely notice first that in Khronos Sample Viewer and PlayCanvas, the surface appears considerably less "rough". I'm not especially concerned about that here – the glTF specification doesn't include environment maps today, and implementations have some leeway in how to preprocess them.

The glTF spec may not include environment maps, but that's only a means of data ingest - it does define our BRDF, which says how roughness should affect sampling of any light source. To me this shows a very clear (and serious) bug on the part of the sample viewer and PlayCanvas, as they are not representing surface roughness accurately at all, which is probably the single most important part of PBR.

As to the much smaller error that three.js exhibits from HDR pixel interpolation, agreed that it's not ideal. My guess is that three.js is doing its interpolation in HDR, while Babylon is probably doing it in SDR (due to clamping or tone mapping their lighting). I don't believe they've ever fixed their even more serious bug, where they clamp their input HDR environment maps, thus using only the SDR part of their range. This gives them wildly incorrect results for realistic HDRs that include e.g. the sun.
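To illustrate the clamping problem with made-up numbers (a toy sketch, not Babylon's actual pipeline):

```js
// A tiny, very bright source (e.g. the sun) dominates a wide prefilter
// blur, but only if its true HDR radiance survives to the blur.
const sunRadiance = 5000;                    // HDR texel value
const clamped = Math.min( sunRadiance, 1 );  // SDR-style clamp

const blurFootprint = 10000;                 // texels averaged by the prefilter

console.log( sunRadiance / blurFootprint );  // 0.5    -> a visible bright lobe
console.log( clamped / blurFootprint );      // 0.0001 -> the sun all but vanishes
```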

The fact is that pixel-level interpolation of HDR data doesn't work well, but I don't know of a reasonable alternative. Really all pixel-interpolation should happen after tone mapping, but rasterizers don't really make that possible. I would say of all these renders, three.js is by far the most accurate, despite this issue. Certainly it's possible to change the mapping of roughness -> PMREM resolution. Increasing that will smooth this out, but you hit diminishing returns rapidly, as making the surface a little flatter dramatically magnifies the effect. And of course a huge part of frame rate performance is the size of the PMREM texture cache.

donmccurdy commented 1 year ago

Thanks @elalish!

We are at roughness=0.15 here — I guess I don't have a good mental model of how 15% rough "should" look. If you're confident that we're correctly representing roughness and the other two renders are not, I'm happy with that.

Also for comparison, here is MetalRoughSpheres

Khronos Sample Viewer (screenshot, 2023-09-18)

three.js (screenshot, 2023-09-18)


I agree the Babylon.js render appears to not be color managed correctly. I've included it for the roughness comparison — they seem to get a similarly-rough result without the blockiness, but if that's done by clamping or tone-mapping early then that's not going to help us much I suppose... 🤔


> Certainly it's possible to change the mapping of roughness -> PMREM resolution ... but you hit diminishing returns rapidly ...

Even if it's a "change X and recompile three.js" situation, I'd be interested in how to test this.

elalish commented 1 year ago

It's not parameterized for that super cleanly at the moment. I think you'd need to change this relationship and make a corresponding change here.
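For readers following along: the relationship in question maps roughness onto a mip of the prefiltered chain, where each successive mip halves in resolution. A hypothetical, simplified sketch of that shape (not the actual shader code, whose exact breakpoints live in cube_uv_reflection_fragment.glsl.js):

```js
// Hypothetical log-shaped roughness -> mip mapping (illustration only):
// roughness 1.0 hits the coarsest mip, roughness near 0 stays at
// mip 0 (full resolution).
function roughnessToMipSketch( roughness, maxMip = 8 ) {

	const mip = maxMip + Math.log2( Math.max( roughness, Math.pow( 2, - maxMip ) ) );
	return Math.min( Math.max( mip, 0 ), maxMip );

}

console.log( roughnessToMipSketch( 0.15 ) ); // ~5.3 -> already a low-resolution mip
```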

donmccurdy commented 1 year ago

Apologies if this is something where I should just go read some manuals. Please feel free to shoo me away if so. :)

I pulled a low-res snapshot of the processed IBL out of spector.js...

(image: low-res snapshot of the processed CubeUV texture)

... and now I guess I have a more basic question: why do higher-roughness values correspond to lower-resolution mips at all? Instead of N cubemaps with power-of-two resolution differences, could this (hypothetically) be N cubemaps, all at 6x256x256px? I'd expect that to have a much higher preprocessing cost, but not much difference in final texture size and framerate.

EDIT: I'll read through your "Fast, Accurate Image-Based Lighting" paper. If I'd read it more carefully back in 2020, perhaps I would not have these questions. 😇

elalish commented 1 year ago

Lower roughness values get higher resolution because smoothing and resolution are fundamentally related: we're following the same mathematics as mipmaps. This kills two birds with one stone by also suppressing aliasing, mipmap-style. With your approach you'd likely need mipmaps for each roughness level (a common approach before mine).

Yes, the higher preprocessing cost is a very big deal - the main purpose of this technique when I wrote it was to make PMREM generation online instead of offline. Also, the unused texture portions don't actually cost you much of anything in your texture cache, since those parts don't get loaded out of memory, so it still helps with framerate as well.
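To put rough numbers on the cache point (per cube face, with an assumed count of eight roughness levels):

```js
// Total texels in a power-of-two mip chain vs. N equal-resolution levels.
const base = 256;
const levels = 8; // assumed number of roughness levels

let mipChain = 0;
let size = base;

for ( let i = 0; i < levels; i ++ ) {

	mipChain += size * size;
	size = Math.max( 1, size / 2 );

}

const equalRes = levels * base * base;

console.log( mipChain ); // 87380  -> barely 4/3 of the base level alone
console.log( equalRes ); // 524288 -> 8x the base level
```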

donmccurdy commented 1 year ago

Mipmaps are traditionally selected for sampling so that the space between samples is approximately 1 texel. When a surface is viewed directly and at full scale, a high resolution mip is sampled. But we don't quite have that same connection between screen space and texel space here.
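(The rule of thumb I mean, as a sketch:)

```js
// Conventional mip selection: pick the level at which one screen pixel
// spans roughly one texel of the chosen mip.
function mipLevelSketch( texelsPerPixel ) {

	return Math.max( 0, Math.log2( texelsPerPixel ) );

}

console.log( mipLevelSketch( 1 ) ); // 0 -> full-resolution mip
console.log( mipLevelSketch( 4 ) ); // 2 -> quarter-resolution mip
```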

Borrowing this chart from your paper...

(chart from the paper: PMREM mip resolution as a function of roughness)

...it seems like perhaps this particular choice of roughness (r=0.15) has landed in a place where the mip resolution has dropped off quickly to around 32x32, but the surface is still smooth enough to show recognizable reflections.

Context: This also happens to be a prominent visual on a page — there are no other relevant roughness levels! — so the rest of our envmap is unfortunately going to waste.

I don't want to complicate things if this is a niche problem and the possible solutions are problematic, but I am hoping there might be a clean way to get better results for roughness in the range [0.1, 0.2]. Suppose we can allow a single 1K texture for the environment, roughly scaling the current texture up to the nearest powers of two. Within that budget, we could at least double the size of the mips associated with roughness values on this range, with either of the arrangements below:

(diagram: two candidate arrangements that double the mip sizes for that roughness range)

Ignoring my very arbitrary mip placement, if I work out the mapping... does this sound like a reasonable thing to try?

elalish commented 1 year ago

You're welcome to try, but my sense is this will be a lot of work for a very minimal gain. Keep in mind that our PMREM already allows for variable size environment maps, which means if you know your minimum roughness is 0.15, then you can just upload a 64x128 image and get basically full quality with a tiny texture cache (and great framerate). Any difference to what resolution is supported at what roughness will also require increasing the blur sampling to avoid aliasing while keeping the proper sigma. You definitely want to maintain the steady power-of-two reduction in resolution or the filtering time will explode.
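A sketch of that workflow, assuming the input has been pre-downsized offline (the filename is a placeholder):

```js
import * as THREE from 'three';
import { RGBELoader } from 'three/addons/loaders/RGBELoader.js';

const renderer = new THREE.WebGLRenderer();
const scene = new THREE.Scene();
const pmremGenerator = new THREE.PMREMGenerator( renderer );

// A 64x128 equirect already carries all the detail that survives
// prefiltering at roughness >= ~0.15, with a much smaller texture cache.
new RGBELoader().load( 'environment_64x128.hdr', ( hdr ) => {

	scene.environment = pmremGenerator.fromEquirectangular( hdr ).texture;

} );
```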

The main reason I see this as a minimal gain is that with your example, if you just zoom in, those environment map pixels will get arbitrarily large, so while you might make one camera shot look better, others will still be highly pixelated. And in HDR, there's almost no amount of smoothing that will remove the square interpolation artifacts. You can see this problem clearly with roughness = 0 even if you upload a 4k environment map if you happen to have a flat, or worse yet, concave surface. There's just only so much we can do.

sciecode commented 1 year ago

I'm not 100% on this, but the blurriness differences could be a side effect of roughness remapping. Some renderers remap roughness to a more user-friendly input, perceptualRoughness:

roughness = perceptualRoughness * perceptualRoughness

Filament has a section (4.8.3.3, "Roughness remapping and clamping") on perceptualRoughness which explains the remapping in more detail; the reasoning is that it maps better to what we perceive as roughness.

Top: perceptualRoughness. Bottom: roughness. (image)

I don't believe three.js uses this remapping, so if you use 0.0225 roughness instead of 0.15, I think it would compare more directly to what Khronos and PlayCanvas show. (wrong)

PS: completely unrelated to the blockiness

sciecode commented 1 year ago

It appears I'm mistaken.

three.js: top perceptualRoughness, bottom roughness (image)

Khronos Sample Viewer (image)

sciecode commented 1 year ago

After digging a bit more, three.js indeed already uses remapped roughness for its normal workflow, so the roughness parameter in all materials should already be considered what Filament refers to as perceptualRoughness. I took this opportunity to make sure most shaders are consistent about this remapping, and sure enough, all the PBR roughness usages I could find are consistent:

https://github.com/mrdoob/three.js/blob/53c642e6905a1e53b0dd1c37ce14ec46f8b8fdc2/src/renderers/shaders/ShaderChunk/lights_physical_pars_fragment.glsl.js#L151

https://github.com/mrdoob/three.js/blob/53c642e6905a1e53b0dd1c37ce14ec46f8b8fdc2/src/renderers/shaders/ShaderChunk/envmap_physical_pars_fragment.glsl.js#L29

However, the one place where I couldn't find this being considered is when sampling from ENVMAP_CUBE_UV using textureCubeUV. Internally the code uses material.roughness directly, and I can't confirm whether that is the appropriate behaviour or not. Perhaps @elalish could confirm.

https://github.com/mrdoob/three.js/blob/53c642e6905a1e53b0dd1c37ce14ec46f8b8fdc2/src/renderers/shaders/ShaderChunk/cube_uv_reflection_fragment.glsl.js#L168

elalish commented 1 year ago

Indeed, I went through pretty carefully to match up roughness values with the appropriate levels of filtering. You can read more about it in my paper.

mrdoob commented 11 months ago

@sciecode Did you manage to get anywhere with the investigation you were doing?

sciecode commented 11 months ago

I've reviewed our environment BRDF implementation for any possible mistakes that might've slipped past, but everything looks in order: it's aligned both with the academic literature and with what other production renderers are currently doing. As far as what's being discussed in this topic, I wasn't able to find anything problematic.

On a somewhat related topic, I've noticed some visual inaccuracies in the usage of UE4's analytical split-sum BRDF approximation.

https://github.com/mrdoob/three.js/blob/4527c3e8498ebdc42e7eb40fcfa90435b9c95514/src/renderers/shaders/ShaderChunk/lights_physical_pars_fragment.glsl.js#L372-L392

It's used as a substitute for the environment BRDF LUT (used by the Sample Viewer), mostly for mobile compatibility, as using a 16-bit float texture on mobile presents a series of problems.
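For reference, the family of curve fits in question is Karis' analytic approximation from "Physically Based Shading on Mobile"; a JS transcription of that widely cited fit is below (check the lines linked above for three.js's exact variant):

```js
// Analytic approximation of the split-sum environment BRDF LUT.
// Returns [ scale, bias ] to apply to F0: specular = F0 * scale + bias.
function dfgApprox( dotNV, roughness ) {

	const c0 = [ - 1.0, - 0.0275, - 0.572, 0.022 ];
	const c1 = [ 1.0, 0.0425, 1.04, - 0.04 ];
	const r = c0.map( ( v, i ) => roughness * v + c1[ i ] );
	const a004 = Math.min( r[ 0 ] * r[ 0 ], Math.pow( 2, - 9.28 * dotNV ) ) * r[ 0 ] + r[ 1 ];

	return [ - 1.04 * a004 + r[ 2 ], 1.04 * a004 + r[ 3 ] ];

}

console.log( dfgApprox( 1.0, 0.15 ) ); // near-normal view, low roughness
```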

I've been working on a slightly more accurate model that aims to improve on this approximation. I'll be prepping a PR in the near future, along with a blog post that presents the reasoning and a comparison between the models.

I've already seen a significant improvement in quality, and it might bring the roughness response curve closer to that of other renderers, but it has no effect on the blockiness Don refers to.