NVlabs / nvdiffrast

Nvdiffrast - Modular Primitives for High-Performance Differentiable Rendering

How to render depth map? #127

Closed. SlimeVRX closed this issue 1 year ago.

SlimeVRX commented 1 year ago

I want to render a depth map from a mesh. Could you please send me some sample code? Thank you very much!

[attached images: video_000000_depth, video_000000_vis_original_size]

SlimeVRX commented 1 year ago

Many thanks!

[attached image]

Here is my sample code:

import imageio  # For reading and writing image files
import numpy as np  # For multi-dimensional arrays and numerical operations
import torch  # PyTorch, used here for GPU tensors
import nvdiffrast.torch as dr  # nvdiffrast's PyTorch interface for differentiable rasterization

# Define a helper function that creates PyTorch tensors on GPU ('cuda' device)
def tensor(*args, **kwargs):
    return torch.tensor(*args, device='cuda', **kwargs)

# Initialize a CUDA context for rasterization
glctx = dr.RasterizeCudaContext()

# Define the vertex positions in clip space. Each vertex has x, y, z coordinates and w (homogeneous coordinate).
# Note: only the first four vertices are referenced by the triangles below.
pos = tensor([[[-0.8, -0.8, 1, 1], [0.8, -0.8, -1, 1], [-0.8, 0.8, -1, 1], [0.8, 0.8, 1, 1], [0.4, 0.4, 1, 1], [0.3, 0.3, 1, 1], [0.2, 0.2, 1, 1]]], dtype=torch.float32)

# Get the per-vertex z (depth) values from the position tensor, shape [1, num_vertices, 1]
depth = pos[..., 2:3].contiguous()

# Define the triangles by indexing into the vertices of the 'pos' tensor
tri = tensor([[0, 1, 2], [2, 3, 1]], dtype=torch.int32)

# Rasterize the triangles to generate pixel coverage and depth (z-buffer)
rast, _ = dr.rasterize(glctx, pos, tri, resolution=[256, 256])

# Interpolate depth values across the rasterized pixels
out, _ = dr.interpolate(depth, rast, tri)

# Compute the minimum and maximum values of the interpolated depth image
# (note: background pixels, where nothing was rasterized, contribute a value of 0)
old_min = torch.min(out)
old_max = torch.max(out)

# Define new minimum and maximum depth values (for normalization)
new_min = 0
new_max = 255

# Normalize the output tensor to range [new_min, new_max]
out = (out - old_min) / (old_max - old_min) * (new_max - new_min) + new_min

# Convert the output tensor to a numpy array, repeat it across color channels, and convert it to uint8 type
out = np.squeeze(out.cpu().numpy())
out = np.repeat(out[:, :, np.newaxis], 3, axis=2).astype(np.uint8)

# Notify the user that the image is being saved
print("Saving to 'tri_.png'.")

# Save the rasterized and interpolated depth values as an image
imageio.imsave('tri_.png', out)

s-laine commented 1 year ago

Your code correctly extracts the linear depth, i.e., the distance from the camera plane, and normalizes it to the range 0-255. I don't quite understand what you're asking us to help with.
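
As a side note, a minimal sketch of reading the screen-space depth directly from the rasterizer output and masking out background pixels (which otherwise enter the normalization above with a value of 0), reusing the rast tensor from the code above:

# The rasterizer output has four channels per pixel: (u, v, z/w, triangle_id),
# where triangle_id is 0 for pixels not covered by any triangle.
ndc_depth = rast[..., 2:3]                                    # z/w per pixel, shape [1, 256, 256, 1]
mask = rast[..., 3:4] > 0                                     # True only where a triangle was rasterized
ndc_depth = torch.where(mask, ndc_depth, torch.zeros_like(ndc_depth))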

SlimeVRX commented 1 year ago

Hi @s-laine!

I have a face mesh, and I want to render a depth map of the face from it.

I read the triangle.py sample code and found a way to render depth; I modified triangle.py as shown above.
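
For reference, a minimal sketch of how the same approach could be applied to a mesh loaded from disk; the file name face.obj, the use of trimesh for loading, and the projection and camera parameters are illustrative assumptions.

import numpy as np
import torch
import trimesh  # assumed available for mesh loading; any loader that yields vertices and faces works
import nvdiffrast.torch as dr

def projection(fov_deg=45.0, near=0.1, far=10.0):
    # Standard OpenGL-style symmetric perspective projection matrix (aspect ratio 1).
    f = 1.0 / np.tan(np.radians(fov_deg) / 2.0)
    return np.array([[f, 0,  0,                            0],
                     [0, f,  0,                            0],
                     [0, 0,  (far + near) / (near - far),  2 * far * near / (near - far)],
                     [0, 0, -1,                            0]], dtype=np.float32)

# Hypothetical mesh path; replace with your own face mesh.
mesh = trimesh.load('face.obj', process=False)
verts = torch.tensor(np.asarray(mesh.vertices), dtype=torch.float32, device='cuda')
faces = torch.tensor(np.asarray(mesh.faces), dtype=torch.int32, device='cuda')

# Center the mesh, scale it to roughly unit size, and push it in front of the camera
# (the camera looks down the -z axis in this convention).
verts = verts - verts.mean(dim=0, keepdim=True)
verts = verts / verts.norm(dim=1).max()
verts[:, 2] -= 2.5

# Transform vertices to clip space: pos_clip = P * [x, y, z, 1]^T.
P = torch.tensor(projection(), device='cuda')
verts_h = torch.cat([verts, torch.ones_like(verts[:, :1])], dim=1)   # [V, 4]
pos_clip = (verts_h @ P.t()).unsqueeze(0)                            # [1, V, 4]

# Rasterize and interpolate the per-vertex view-space depth (-z, i.e. distance along the view axis).
glctx = dr.RasterizeCudaContext()
rast, _ = dr.rasterize(glctx, pos_clip, faces, resolution=[512, 512])
view_depth = (-verts[:, 2:3]).unsqueeze(0).contiguous()              # [1, V, 1]
depth, _ = dr.interpolate(view_depth, rast, faces)                   # [1, 512, 512, 1]

# Keep only covered pixels; the fourth rasterizer channel is triangle_id (0 = background).
mask = rast[..., 3:4] > 0
depth = torch.where(mask, depth, torch.zeros_like(depth))
# From here, 'depth' can be normalized and saved exactly as in the code above.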