NVlabs / nvdiffrast

Nvdiffrast - Modular Primitives for High-Performance Differentiable Rendering

what's the meaning of "Calculate footprint axis lengths." #43

Closed myshiop closed 3 years ago

myshiop commented 3 years ago

Hi, I'm looking at the differentiable rendering details. In the forward pass, the code computes the MIP level from the sample footprint, but I don't understand what area it is calculating. Here is the code:

In texture.cu, lines 742-749:

```cpp
float A = dsdx*dsdx + dtdx*dtdx;
float B = dsdy*dsdy + dtdy*dtdy;
float C = dsdx*dsdy + dtdx*dtdy;
float l2b = 0.5 * (A + B);
float l2n = 0.25 * (A - B) * (A - B) + C * C;
float l2a = sqrt(l2n);
float lenMinorSqr = fmaxf(0.0, l2b - l2a);
float lenMajorSqr = l2b + l2a;
```

s-laine commented 3 years ago

If you think of a pixel-sized circle in the image plane, it projects into an ellipse in texture space. As usual, we only consider the texture coordinate derivatives at the pixel center so that the transform is linear, even though at a larger scale it generally isn't, due to perspective projection. This code calculates the squared lengths of the major and minor axes of this ellipse in texture space.

For anisotropic texture filtering, you'd also need the direction of the major axis in texture space, so this code will not be sufficient for implementing that. But for isotropic fetches the orientation of the footprint ellipse doesn't matter and all you need is the axis lengths, or even just one of them depending on the mip selection logic.

myshiop commented 3 years ago

@s-laine Thank you very much. In my understanding: if I take a pixel P(x, y) in screen space with uv (u, v), we can get ddx and ddy from the adjacent pixels P1 and P2, whose uvs are (u1, v1) and (u2, v2). Can I understand it as looking for an ellipse in texture space that is centered at (u, v) and passes through the two points (u1, v1) and (u2, v2)?

s-laine commented 3 years ago

Yes, this is a pretty close analogy if you ignore the effects of perspective projection. For linear projections, or in fact any case without perspective distortion, this would give exactly the same result as computing the derivatives analytically at the center of P.

In practice this method would be brittle because looking at neighboring pixels gives nonsensical results if you have different surfaces at P, P1, and P2 - near a silhouette, for example.