Closed fyviezhao closed 2 years ago
The coverage information produced by the rasterization operation is not differentiable w.r.t. vertex positions, because an infinitesimal change in vertex positions does not affect which triangle lands under each pixel center. The barycentric coordinates (u, v) are differentiable, though, because the point on the triangle under the pixel center changes continuously w.r.t. vertex positions. Therefore, interpolation yields gradients for vertex positions, because moving a vertex changes the pixel color continuously. These gradients apply only to covered pixels, of course, and ignore the effect of coverage at the silhouettes.
To get gradients w.r.t. vertex positions from pixel coverage at the silhouettes, you need to apply the antialiasing operation at the end of the rendering pipeline, i.e., before the loss function call in your example. This smooths the silhouette edges based on the exact geometric location of an edge between two pixel centers, creating the needed continuous dependency between vertex positions and final pixel colors there.
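To make the placement concrete, here is a pseudo-code sketch using nvdiffrast's PyTorch API; `glctx`, `pos_clip`, `tri`, `vtx_color`, and `target` are placeholder names for illustration, not values from this thread:

```python
import nvdiffrast.torch as dr

# pos_clip: [1, num_vertices, 4] clip-space positions, tri: [num_triangles, 3] int32
rast_out, _ = dr.rasterize(glctx, pos_clip, tri, resolution=[512, 512])
color, _ = dr.interpolate(vtx_color, rast_out, tri)

# Antialiasing goes last, just before the loss: it smooths silhouette edges and
# thereby creates gradients w.r.t. vertex positions from pixel coverage.
color = dr.antialias(color, rast_out, pos_clip, tri)
loss = ((color - target) ** 2).mean()
```

Without the `dr.antialias` call, the loss would still be differentiable inside covered regions, but the silhouettes would contribute no position gradients.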
The bit about antialiasing in the documentation covers this in more detail, and if you want to dig even deeper, see section 3.6 in our paper.
@s-laine Thanks for your quick reply! So I just need to add `nvdiffrast.antialias` before my loss function, and everything will work fine, right?
I have one more concern: what if each triangle has the same color attribute during the training process, i.e., the three vertices of a triangle have the same RGB value? Would the image-space loss still be differentiable w.r.t. the vertex positions? Since now an infinitesimal change in vertex positions would not change the covered pixel color. Do I need to worry about this? FYI: my training pipeline only changes the 2D mesh vertices, while both the triangle index (i.e., the mesh topology) and the color attribute at each vertex are fixed.
Yes, the antialiasing step should provide the gradients you need. There is the caveat that if the triangles at the silhouettes are too small compared to the pixels, the gradients may be excessively sparse/noisy, so you may need to adjust the rendering resolution accordingly.
The values of the color attributes do not affect differentiability. However, if the color is the same everywhere, the related position gradients will indeed be zero as you say, which may make the optimization problem harder, but you will have to experiment to see what works and what doesn't. Even in this case, the silhouette edges will still obtain gradients to vertex positions from antialiasing.
Yes, I tried adding antialiasing at the end of the rendering pipeline and it worked well. Thanks again for your detailed explanation.
I am working on a 2D image alignment problem: given a source and a target image, I want to use linear blend skinning to deform the triangulated source image to match the target. Specifically, I use a neural network to predict the per-vertex offsets of the source and obtain a skinned 2D mesh that is ready to be rasterized. Following the triangle.py sample, I use `nvdiffrast.rasterize` along with `nvdiffrast.interpolate` to rasterize the deformed 2D mesh (i.e., a collection of colored triangles), obtaining the deformed image. My question is: is the loss between the rasterized deformed image and the target image differentiable w.r.t. the vertex offsets, and further w.r.t. the neural network parameters?
For better understanding, I summarize my problem as the pseudo-code below, any help is appreciated!
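A pseudo-code sketch of the pipeline described above, assuming nvdiffrast's PyTorch API; `net`, `src_feats`, `src_verts_2d`, `to_clip`, and the other names are placeholders, not part of nvdiffrast:

```python
import nvdiffrast.torch as dr

offsets = net(src_feats)            # per-vertex 2D offsets predicted by the network
verts_2d = src_verts_2d + offsets   # skinned (deformed) 2D mesh vertices
pos_clip = to_clip(verts_2d)        # pad each vertex to [x, y, 0, 1] clip space

rast, _ = dr.rasterize(glctx, pos_clip[None, ...], tri, resolution=[H, W])
img, _ = dr.interpolate(vtx_color, rast, tri)
img = dr.antialias(img, rast, pos_clip[None, ...], tri)  # silhouette gradients

loss = ((img - target_img) ** 2).mean()
loss.backward()                     # gradients flow back through the offsets to net
```

Since every step here is a differentiable torch operation, `loss.backward()` propagates gradients through the rasterized image to the vertex offsets and onward to the network parameters.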