NVlabs / nvdiffrast

Nvdiffrast - Modular Primitives for High-Performance Differentiable Rendering

Is there any way to obtain the occupancy of each face and the depth map? #133

Closed kamwoh closed 10 months ago

kamwoh commented 10 months ago

Hi,

Is there any way to obtain the occupancy of each face and the depth map after calling the rasterize function?

The current rasterization seems to return only the triangle IDs and the barycentrics.

s-laine commented 10 months ago

NDC depth (i.e., z/w) is returned as the third component of the rasterizer output. If you instead want linear camera-space depth or something else, you can compute that separately per-vertex and use it as a vertex attribute for interpolate to get the depth map.
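
For example, a minimal sketch of that route with a single hard-coded triangle (the context choice, resolution, and tensor names are illustrative, not part of any fixed usage pattern):

```python
import torch
import nvdiffrast.torch as dr

glctx = dr.RasterizeCudaContext()  # dr.RasterizeGLContext() works as well

# One triangle in clip space (x, y, z, w), plus its camera-space depth per vertex.
pos_clip = torch.tensor([[[-0.8, -0.8, 0.5, 1.0],
                          [ 0.8, -0.8, 0.5, 1.0],
                          [ 0.0,  0.8, 0.5, 1.0]]], device='cuda')      # [1, 3, 4]
cam_z    = torch.tensor([[[2.0], [2.0], [3.0]]], device='cuda')         # [1, 3, 1]
tri      = torch.tensor([[0, 1, 2]], dtype=torch.int32, device='cuda')  # [1, 3]

rast, _ = dr.rasterize(glctx, pos_clip, tri, resolution=[256, 256])

# NDC depth (z/w) straight from the rasterizer output.
ndc_depth = rast[..., 2]

# Linear camera-space depth via attribute interpolation.
depth_map, _ = dr.interpolate(cam_z, rast, tri)  # [1, 256, 256, 1]

# The triangle ID channel is zero where nothing was rasterized; use it to mask the background.
mask = rast[..., 3:4] > 0
depth_map = torch.where(mask, depth_map, torch.zeros_like(depth_map))
```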

What do you mean by the occupancy of each face?

kamwoh commented 10 months ago

Thanks for helping. Could you please show an example of how to obtain the NDC depth? As far as I can see, only two tensors are returned by this function:

https://github.com/NVlabs/nvdiffrast/blob/c5caf7bdb8a2448acc491a9faa47753972edd380/nvdiffrast/torch/ops.py#L263

For the occupancy: the soft rasterizer in PyTorch3D returns the signed Euclidean distance (in NDC units) in the x/y plane from each pixel to the closest point of each face.

So I wonder whether nvdiffrast has a similar function?

Thanks!

s-laine commented 10 months ago

The rasterization op returns two tensors; the first one is the main result tensor containing barycentrics, NDC depth, and triangle ID, and the second tensor contains screen-space derivatives of the barycentrics. The NDC depth is the third channel of the main result tensor. Thus, if you call rast, _ = nvdiffrast.torch.rasterize(...), the NDC depth is in rast[:, :, :, 2].

Note that gradients are not propagated from this NDC depth channel to vertex positions. If you need that, you must go the vertex attribute and nvdiffrast.torch.interpolate() route.
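
To illustrate the difference, a small sketch (again with a made-up single triangle) of a depth-like attribute whose gradients do reach the vertex positions:

```python
import torch
import nvdiffrast.torch as dr

glctx = dr.RasterizeCudaContext()
pos = torch.tensor([[[-0.5, -0.5, 0.2, 1.0],
                     [ 0.5, -0.5, 0.2, 1.0],
                     [ 0.0,  0.5, 0.2, 1.0]]], device='cuda', requires_grad=True)
tri = torch.tensor([[0, 1, 2]], dtype=torch.int32, device='cuda')

rast, _ = dr.rasterize(glctx, pos, tri, resolution=[128, 128])

# Gradients arriving at the z/w channel of rast are not propagated to pos;
# only the barycentric channels carry position gradients.
ndc_depth = rast[..., 2]

# Interpolating a per-vertex depth attribute instead keeps the graph connected.
depth, _ = dr.interpolate(pos[..., 2:3].contiguous(), rast, tri)
depth.sum().backward()  # pos.grad is now populated via interpolate() and the barycentrics
```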

Nvdiffrast does only point-sampled rasterization, so there is no concept of occupancy. There is a separate antialiasing op that approximates surface coverage across edges and adjusts pixel colors accordingly. This allows coverage-related gradients to propagate to vertex positions, and thus makes optimization of geometry possible.
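
As a rough sketch of that op in use (another toy single-triangle setup; the constant per-vertex colors are arbitrary):

```python
import torch
import nvdiffrast.torch as dr

glctx = dr.RasterizeCudaContext()
pos = torch.tensor([[[-0.6, -0.6, 0.0, 1.0],
                     [ 0.6, -0.6, 0.0, 1.0],
                     [ 0.0,  0.6, 0.0, 1.0]]], device='cuda', requires_grad=True)
tri = torch.tensor([[0, 1, 2]], dtype=torch.int32, device='cuda')
vtx_col = torch.ones(1, 3, 3, device='cuda')  # constant white per-vertex color

rast, _ = dr.rasterize(glctx, pos, tri, resolution=[128, 128])
color, _ = dr.interpolate(vtx_col, rast, tri)

# Antialiasing blends colors across silhouette edges; this is what lets
# coverage-related gradients flow back to the vertex positions.
color_aa = dr.antialias(color, rast, pos, tri)
color_aa.sum().backward()  # pos.grad now includes edge/coverage gradients
```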

kamwoh commented 10 months ago

Thanks for the explanation!