NVlabs / nvdiffrast

Nvdiffrast - Modular Primitives for High-Performance Differentiable Rendering

Dealing with texture undersampling / oversampling #101

Closed. RatTac closed this issue 1 year ago.

RatTac commented 1 year ago

Hello @s-laine and team,

thanks again for this brilliant tool 😄

Use case

I use nvdiffrast to optimize a set of PBR textures (albedo, metalness, roughness) and I am currently stuck on one phenomenon. For my use case I observe that optimizing the textures works well if the textures are "oversampled" in the rendered images, i.e. triangles occupy more pixels than the texture provides texels.

However, if the texture resolution gets large, not all texels get projected into the images (neighboring pixels in the rendered images are several texels apart from each other in the texture atlas). The textures are thus "undersampled": not all texels end up in the rendered images, and some texels are "skipped".

I notice that optimization works well in the oversampled case, but in the undersampled case really bad artifacts show up at the "skipped" texels in the textures.

[Image: result with the texture at 512x512; the rendered images are "oversampled".]
[Image: result with the texture at 1024x1024; the rendered images are "undersampled".]

The rendered images look fine in both cases (as the skipped texels are not used in the rendered images), but the optimized texture in the undersampled case is not usable. Is there a way to also optimize the "skipped" texels using nvdiffrast?

What I tried so far

I also tried to experiment with the mipmapping options to account for this. For example, using the UV image derivatives (uv_d) I computed, for each pixel, how many texels are skipped at the given resolution:

# Runs inside a method of my renderer class; torch and nvdiffrast.torch (as dr)
# are imported at module level, self.__uvs / self.__uvs_d hold the interpolated
# UVs and their image-space derivatives, and texture_map is the [H, W, C] texture.
x = torch.amax(self.__uvs_d, -1)     # max UV image-space derivative per pixel
x *= texture_map.size(-2)            # scale by texture resolution -> texels per pixel
x = torch.log2(x.clamp(min=1.0))     # mip level (1 texel step -> 0, 2 -> 1, 4 -> 2, ...)

return dr.texture(
    texture_map[None, ...],
    self.__uvs,
    mip_level_bias=x,
    filter_mode="linear-mipmap-linear",
    boundary_mode="wrap",
)[0, ...]

This selects the correct mip level depending on the texture resolution. However, it only constrains the value at the selected mip level, and thus only the average of the corresponding texels at mip level 0. The texels at the original resolution can therefore drift arbitrarily as long as their average matches the value required at the coarser mip level.
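To illustrate the problem, here is a small self-contained toy check (independent of my actual pipeline) showing that a loss applied only at a coarser mip level cannot pin down the individual texels: the box filter hands the same gradient to every texel of a block, so any zero-mean deviation within a block receives no corrective signal.

import torch
import torch.nn.functional as F

# Toy check: a loss on the 2x2-box-filtered texture (i.e. mip level 1) yields
# identical gradients for all four texels of a block, so zero-mean deviations
# inside a block are left unconstrained.
tex = torch.rand(1, 1, 4, 4, requires_grad=True)   # stand-in for mip level 0
target = torch.rand(1, 1, 2, 2)                    # stand-in target at mip level 1
loss = F.mse_loss(F.avg_pool2d(tex, 2), target)
loss.backward()
print(tex.grad)   # each 2x2 block holds four equal gradient values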

Desired Result

Let's assume we have a rasterized line of projected neighboring texels [t0 t5 t10] (i.e. texels t1-t4 and t6-t9 are skipped due to undersampling). Then, if my reference image colors are [c0, c1, c2], nvdiffrast currently only optimizes t0 -> c0, t5 -> c1, t10 -> c2. I would like t1-t4 to be optimized as a linear interpolation from c0 to c1, and similarly t6-t9 as a linear interpolation from c1 to c2. This would essentially make both images shown above identical (apart from the different resolution).
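As a toy sketch of the desired fill (made-up colors, plain 1D row of texels):

import torch

# Toy illustration: t0, t5, t10 are constrained to c0, c1, c2 by the render;
# the skipped texels in between should follow linear ramps between them.
c0, c1, c2 = 0.2, 0.8, 0.5            # hypothetical target colors
t = torch.empty(11)
t[0:6]  = torch.linspace(c0, c1, 6)   # t0..t5: ramp from c0 to c1
t[5:11] = torch.linspace(c1, c2, 6)   # t5..t10: ramp from c1 to c2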

Is that at all possible? Is the antialiasing operation helpful for this case? I hope I could explain my use case 😄

s-laine commented 1 year ago

If I read this correctly, you aren't using a mipmap-enabled texture mode while optimizing. You should first try doing that, using the per-pixel uv derivatives as done in the earth example. The effect is that when the texture is "undersampled", larger averages over texels are used when rendering, and the gradients accordingly affect multiple texels.
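Roughly along the lines of the earth sample (a sketch only; glctx, pos_clip, faces, uvs, uv_faces, tex and the output size H, W come from your own setup):

import nvdiffrast.torch as dr

# Rasterize with image-space derivatives, interpolate the UVs together with
# their screen-space derivatives, and let dr.texture() pick mip levels from them.
rast, rast_db = dr.rasterize(glctx, pos_clip, faces, resolution=[H, W])
texc, texd = dr.interpolate(uvs[None, ...], rast, uv_faces,
                            rast_db=rast_db, diff_attrs='all')
color = dr.texture(tex[None, ...], texc, uv_da=texd,
                   filter_mode='linear-mipmap-linear', boundary_mode='wrap')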

That said, optimizing a texture can be tricky if the texture and rendering resolutions are far off. If the rendered/optimized images show individual texels only sporadically, the texels can end up acquiring all sorts of artifacts that don't show up when rendered in normal size. One way to mitigate this is to simply render in larger resolution or zoomed closer to the textures. A better downsampling filter in mipmap construction may also help — the built-in default filter in nvdiffrast is the simplest possible 2x2 box filter.
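A rough sketch of the custom-filter idea (this assumes a square, power-of-two texture in [1, H, W, C] layout, that uvs and uv_da come from your rasterization/interpolation, and that dr.texture() accepts a custom mipmap stack through its mip argument):

import torch
import torch.nn.functional as F
import nvdiffrast.torch as dr

def build_mip_stack(tex, num_levels):
    # tex: [1, H, W, C]. Build a mip chain using a separable 4-tap binomial
    # filter before each 2x decimation, instead of a plain 2x2 box filter.
    k = torch.tensor([1.0, 3.0, 3.0, 1.0], device=tex.device) / 8.0
    mips, t = [], tex
    for _ in range(num_levels):
        x = t.permute(0, 3, 1, 2)                    # -> [1, C, H, W]
        c = x.shape[1]
        kx = k.view(1, 1, 1, 4).repeat(c, 1, 1, 1)   # horizontal taps
        ky = k.view(1, 1, 4, 1).repeat(c, 1, 1, 1)   # vertical taps
        x = F.conv2d(x, kx, stride=(1, 2), padding=(0, 1), groups=c)
        x = F.conv2d(x, ky, stride=(2, 1), padding=(1, 0), groups=c)
        t = x.permute(0, 2, 3, 1).contiguous()       # -> [1, H/2, W/2, C]
        mips.append(t)
    return mips

mip_stack = build_mip_stack(tex, num_levels=4)
color = dr.texture(tex, uvs, uv_da=uv_da, mip=mip_stack,
                   filter_mode='linear-mipmap-linear', boundary_mode='wrap')

Since the chain is built with differentiable ops, gradients still reach the base texture through it during optimization.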

Edit: The paper also talks about this exact case quite a bit, in Sections 3.5 and 4.2 and Figure 6.

RatTac commented 1 year ago

Thanks a lot for the comment! This was really helpful! I could solve my problem by shading the texture atlas in texel space rather than shading the images in pixel space. This improves the results significantly; no sampling artifacts remain. I got the idea from your paper, where this is mentioned in one paragraph to illustrate the generality of the framework.
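Sketching the general idea only (placeholder names throughout: glctx, uvs, uv_faces, positions, normals, faces, tex_res and shade() are not from my actual code): the mesh is rasterized in UV space so that every covered texel receives interpolated geometry, which is then shaded per texel rather than per pixel.

import torch
import nvdiffrast.torch as dr

# Rasterize in UV space: use the [0,1] UVs themselves as clip-space positions,
# so the "image" is the texture atlas and every covered texel gets a sample.
uv_clip = torch.cat([uvs * 2.0 - 1.0,
                     torch.zeros_like(uvs[..., :1]),
                     torch.ones_like(uvs[..., :1])], dim=-1)[None, ...]
rast, _ = dr.rasterize(glctx, uv_clip, uv_faces, resolution=[tex_res, tex_res])

# Interpolate geometry per texel. This assumes faces and uv_faces list the same
# triangles in the same order, so the triangle ids in rast match both buffers.
pos_tex, _ = dr.interpolate(positions[None, ...], rast, faces)
nrm_tex, _ = dr.interpolate(normals[None, ...], rast, faces)

# shade() is a placeholder for the PBR model being optimized; the shaded atlas
# can then be sampled (e.g. with mipmapping) when rendering the images that
# enter the loss, so gradients reach every covered texel.
shaded_atlas = shade(albedo, roughness, metalness, pos_tex, nrm_tex, lights)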

amanshenoy commented 1 year ago

@RatTac Just wanted to ask: how exactly did you shade the texture atlas in texel space? Is it some sort of colored dilation operation? If that's the case, would that not be highly dependent on the way the mesh has been unwrapped?