kennyalive closed this issue 4 years ago
On the left is the original implementation, on the right is the suggested fix. The selected region shows one area where the issue is clearly visible.
Here's the same rendering using the EWA filter.
The only official pbrt scene that uses trilinear filtering, simple/spheres-differentials-texfilt.pbrt, also produces a result that matches the EWA rendering more closely when rendered with the suggested fix. In this scene, though, the '2*width' filter doesn't look that bad: removing more frequencies also removes some aliasing that is still visible in both the EWA and the new trilinear results.
Left: (width) filter. Middle: EWA. Right: (2 * width) filter.
Here's a visualization of the selected texture lod, comparing HW rasterization with trilinear filtering against HW raytracing that does a PBRT-style lod calculation using the (width) filter. The results are similar.
And here is the same comparison as above, but with the raytracing side using the (2 * width) filter. Here we do see a difference: approximately one mip level.
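The one-mip-level offset follows directly from how the lod is derived from the footprint size. A minimal sketch of the selection math (the function name is hypothetical; pbrt's MIPMap::Lookup computes something along these lines for a square power-of-two texture):

```cpp
#include <algorithm>
#include <cmath>

// Map a texture-space footprint size in [0, 1] to a mip lod.
// Level 0 is the finest level; level nLevels-1 is a single texel.
float MipLod(float width, int nLevels) {
    float lod = nLevels - 1 + std::log2(std::max(width, 1e-8f));
    return std::clamp(lod, 0.0f, float(nLevels - 1));
}
```

Since log2(2 * width) = 1 + log2(width), doubling the filter width shifts the (unclamped) lod up by exactly one level, which is the difference visualized above.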
Thanks--this is great!
A texture footprint with characteristic size (width) (as computed in MIPMap::Lookup) corresponds to a texture sampling rate of 1 sample per screen pixel. To satisfy the Nyquist limit we need a filter of size (2 * width), and that's what the original code uses. However, this computation does not take into account that after computing the lod and selecting mip levels, we apply bilinear filtering to sample each mip. This effectively increases the filter size to (4 * width), which results in overly blurry images.
There are at least two possible solutions. The first is to use a (2 * width) filter size and then use point sampling within each mip level. The second, proposed here, is to select the mip levels based on the (width) filter size and then rely on bilinear filtering to produce the correct result.
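The second solution can be sketched as follows: select two mip levels from a width-based lod, bilinearly sample each, and lerp between them. This is a simplified single-channel illustration, not pbrt's actual code; the structs and helper names are hypothetical, and bilinear filtering inside each level supplies the extra factor of two of filter support discussed above.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Mip { int res; std::vector<float> texels; };  // square level

// Clamp-to-edge texel fetch.
float Texel(const Mip& m, int x, int y) {
    x = std::clamp(x, 0, m.res - 1);
    y = std::clamp(y, 0, m.res - 1);
    return m.texels[y * m.res + x];
}

// Bilinear filtering within one mip level, (u, v) in [0, 1]^2.
float Bilerp(const Mip& m, float u, float v) {
    float x = u * m.res - 0.5f, y = v * m.res - 0.5f;
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;
    return (1 - fx) * (1 - fy) * Texel(m, x0, y0) +
           fx * (1 - fy) * Texel(m, x0 + 1, y0) +
           (1 - fx) * fy * Texel(m, x0, y0 + 1) +
           fx * fy * Texel(m, x0 + 1, y0 + 1);
}

// Trilinear lookup using the proposed fix: the lod is derived from
// 'width' directly (not 2 * width); bilinear filtering per level
// accounts for the remaining half of the filter footprint.
float Trilinear(const std::vector<Mip>& mips, float u, float v, float width) {
    int nLevels = (int)mips.size();
    float lod = nLevels - 1 + std::log2(std::max(width, 1e-8f));
    lod = std::clamp(lod, 0.0f, float(nLevels - 1));
    int l0 = (int)std::floor(lod);
    int l1 = std::min(l0 + 1, nLevels - 1);
    float t = lod - l0;
    return (1 - t) * Bilerp(mips[l0], u, v) + t * Bilerp(mips[l1], u, v);
}
```

With the first solution the `Bilerp` calls would be replaced by nearest-texel fetches and the lod would use (2 * width); both variants have the same total filter support, but only this one keeps sub-texel interpolation at the final step.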
I didn't verify this, but the second method should give better results: the final bilinear filtering step adapts to the specific sample position, whereas with point sampling the final result is baked into the mip level and we lose the ability to select the proper position between texels.
The proposed solution also matches HW filtering results. For example, in this project https://github.com/kennyalive/vulkan-raytracing, the RTX raytracing code computes the texture lod using just 'width' and then applies a bilinear filter, and the result closely matches the rasterization version. The color encoding of lod levels used in that demo makes differences in lod selection visible; by modifying the code to use (2 * width) it can be shown that it produces the wrong result.