NVlabs / nvdiffrast

Nvdiffrast - Modular Primitives for High-Performance Differentiable Rendering

How to render several polygons into a single image? #73

Closed lith0613 closed 2 years ago

lith0613 commented 2 years ago

Hi, thanks for your wonderful work! I have several polygons, each with a different texture map, and I want to render them together into a single image. How can I do this in nvdiffrast? For example, I have uploaded two meshes with different textures; how can I render these two meshes into a single image so that they keep their different colors? data.zip

s-laine commented 2 years ago

At the moment there is no support for accessing multiple individual textures in the texture op. The most efficient workaround is to place all textures into a single texture atlas and modify the uv coordinates for each mesh to point to the correct one. This has drawbacks if you need to wrap the textures or mipmap them, as the individual textures can blend together at the texture edges. To mitigate this, it helps to pad each individual texture in the atlas with wrapped content.
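A rough sketch of the atlas route (make_atlas is an illustrative helper, not part of nvdiffrast; it assumes two equal-size textures packed side by side, with no padding):

```python
import torch

def make_atlas(tex_a, tex_b, uv_a, uv_b):
    # tex_a, tex_b: [H, W, C] textures of identical size.
    # uv_a, uv_b:   [..., 2] per-vertex uv coordinates in [0, 1].
    atlas = torch.cat([tex_a, tex_b], dim=1)  # pack side by side -> [H, 2W, C]
    # Squeeze u so mesh A samples the left half and mesh B the right half.
    uv_a = torch.stack([uv_a[..., 0] * 0.5, uv_a[..., 1]], dim=-1)
    uv_b = torch.stack([uv_b[..., 0] * 0.5 + 0.5, uv_b[..., 1]], dim=-1)
    return atlas, uv_a, uv_b
```

Padding each sub-texture with wrapped content, as noted above, would guard against the textures bleeding into each other when wrapping or mipmapping.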

Alternatively, if you have just a couple of meshes/textures, you can render them individually and composite them afterwards so that in each pixel you pick the closest one based on the (z/w) channel of rasterizer output.

lith0613 commented 2 years ago

Hi s-laine, thanks for your help! About the second method:

render them individually and composite them afterwards so that in each pixel you pick the closest one based on the (z/w) channel of rasterizer output. 

Is there a simple script showing the detailed implementation? And I also wonder whether gradients can be backpropagated through this compositing approach.

s-laine commented 2 years ago

For two meshes/textures, you can use torch.where with the condition based on comparing the z/w channel of rasterizer outputs. You can extend this to a small number of meshes by doing it multiple times, although this is not very efficient.
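A minimal sketch of that two-mesh case (names like rast_a and out_a are assumptions; channel 3 of the rasterizer output holds the triangle id offset by one, so zero means no triangle covers the pixel):

```python
import torch

def composite_two(rast_a, out_a, rast_b, out_b, background):
    # rast_a, rast_b: rasterizer outputs [B, H, W, 4] = (u, v, z/w, triangle_id).
    # out_a, out_b:   textured color buffers [B, H, W, C].
    # background:     color used where neither mesh covers the pixel.
    cov_a = rast_a[..., 3:4] > 0   # triangle id of 0 means "empty pixel"
    cov_b = rast_b[..., 3:4] > 0
    # A wins where it is covered and B is not, or where A is closer (smaller z/w).
    a_wins = cov_a & (~cov_b | (rast_a[..., 2:3] < rast_b[..., 2:3]))
    return torch.where(a_wins, out_a, torch.where(cov_b, out_b, background))
```

Masking by coverage matters because empty pixels have z/w = 0 and would otherwise compare as closest.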

Alternatively, you can stack all depth outputs into a single tensor and use torch.argmin to determine which is the closest to the camera at each pixel. Then, you can stack the textured outputs into a single tensor on a new axis, and use torch.gather to extract the correct color for each pixel. I don't have an example script at hand, unfortunately.
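And a sketch of the stacked variant (assuming a list of N rasterizer outputs and matching color buffers, all rendered at the same resolution from the same camera):

```python
import torch

def composite_n(rasts, outs):
    # rasts: list of N rasterizer outputs [B, H, W, 4].
    # outs:  list of N color buffers [B, H, W, C].
    depth = torch.stack([r[..., 2] for r in rasts], dim=0)   # [N, B, H, W]
    cover = torch.stack([r[..., 3] > 0 for r in rasts], dim=0)
    # Push uncovered pixels to +inf so they never win the argmin.
    depth = torch.where(cover, depth, torch.full_like(depth, float('inf')))
    idx = torch.argmin(depth, dim=0)                          # [B, H, W]
    colors = torch.stack(outs, dim=0)                         # [N, B, H, W, C]
    idx = idx[None, ..., None].expand(1, *colors.shape[1:])
    # argmin passes no gradient; like torch.where, the selection acts as a
    # constant mask while gradients flow to the selected colors.
    return torch.gather(colors, 0, idx).squeeze(0)            # [B, H, W, C]
```

A constant background could be handled by appending one more "layer" whose depth is a large finite value.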

As long as the compositing is performed using torch operations, the gradients will be backpropagated correctly to the appropriate inputs.

lith0613 commented 2 years ago

Thanks! I will try your advice.

csyhping commented 5 months ago

@s-laine, I tried your suggestion. In my work, let's say there are an object A and an object B, where B is the ground truth and I want to make sure A ends up in front of it. I'm trying to optimize A's translation, and the render comparison is something like color = torch.where(rast_a[..., 2:3] < rast_b[..., 2:3], out_a, out_b). But the gradient of A's translation is always 0. Do you have any idea why? Thanks.

csyhping commented 5 months ago

And what if I concatenate all meshes into one mesh? Will nvdiffrast treat it as one mesh and rasterize it correctly? (It seems not...)

s-laine commented 4 months ago

Make sure you're executing the antialiasing op to get gradients related to the locations of the silhouette edges in the image. For correct gradients, you'd have to execute antialiasing as the very last step of the rendering pipeline, i.e., after the merging. This requires having the meshes combined into a single mesh, so in practice it's easiest to do that first and run the entire rendering pipeline on the combined mesh.

Note that you cannot simply concatenate the triangle buffers together, but you have to offset the vertex indices in the latter meshes' triangle buffers by their start index in the vertex buffer. Otherwise the triangles in, say, the second mesh will incorrectly point to the vertices of the first mesh.
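A minimal sketch of that merge (assuming both vertex buffers are already in clip space; glctx, H, and W are assumed to exist in the surrounding code):

```python
import torch
import nvdiffrast.torch as dr

def concat_meshes(verts_a, tris_a, verts_b, tris_b):
    # verts_*: [V, 4] clip-space positions; tris_*: [T, 3] int32 index buffers.
    verts = torch.cat([verts_a, verts_b], dim=0)
    # Offset the second mesh's indices by the first mesh's vertex count so
    # its triangles keep pointing at their own vertices in the merged buffer.
    tris = torch.cat([tris_a, tris_b + verts_a.shape[0]], dim=0)
    return verts, tris

# Assumed usage: rasterize and shade the combined mesh once, then run
# antialiasing as the very last step on the composited color.
#   verts, tris = concat_meshes(va, ta, vb, tb)
#   rast, _ = dr.rasterize(glctx, verts[None, ...], tris, resolution=[H, W])
#   ...interpolation / texturing...
#   color = dr.antialias(color, rast, verts[None, ...], tris)
```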

csyhping commented 4 months ago

Got it, thanks! I will try that.