NVIDIAGameWorks / kaolin-wisp

NVIDIA Kaolin Wisp is a PyTorch library powered by NVIDIA Kaolin Core to work with neural fields (including NeRFs, NGLOD, instant-ngp and VQAD).

tiny-cuda-nn doesn't return correct gradient w.r.t. input coordinates for hash grids #128

Open Hippogriff opened 1 year ago

Hippogriff commented 1 year ago

Below is an example comparing the gradient returned by tinycudann against numerical gradients for a simple field based on a hash grid. The difference between the two is huge. I also tried a pure PyTorch implementation of the hash grid; that version gives autograd gradients that match the numerical ones.


```python
import json

import tinycudann as tcnn
import torch

# Hash-grid encoding config taken from the tiny-cuda-nn website
with open("config_hash.json") as f:
    config = json.load(f)

field_grid = tcnn.Encoding(3, config["encoding"])

coords = torch.rand((10, 3)).cuda()
coords.requires_grad = True

# Analytic gradient of the mean feature w.r.t. the input coordinates
field = field_grid(coords).mean(1)
grad_outputs = torch.ones_like(field)
field_grad = torch.autograd.grad(field, [coords], grad_outputs=grad_outputs, create_graph=True)

# Numerical gradient: central differences along the x coordinate
e = 1e-7
eps = torch.zeros((10, 3)).cuda()
eps[:, 0] = e
grad_n = (field_grid(coords + eps).mean(1) - field_grid(coords - eps).mean(1)) / (2 * e)

error = grad_n - field_grad[0][:, 0]
print(error)
```

```
tensor([ 0.0433,  0.0139,  0.0417, -0.0146,  0.0047,  0.0325,  0.0156,  0.0366,
         0.0289,  0.0422], device='cuda:0', grad_fn=<SubBackward0>)
```
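For reference, the same autograd-vs-numerical check can be sketched in pure PyTorch. This is a minimal stand-in, not the actual hash-grid implementation from the report: a single dense voxel grid with trilinear interpolation (the resolution `R`, feature count, and step size are arbitrary choices for illustration). It runs in float64 on the CPU, since a finite-difference step of 1e-7 is at the precision limit of float32 arithmetic:

```python
import torch

torch.manual_seed(0)
R = 8                                                # grid resolution (arbitrary)
grid = torch.randn(R, R, R, 2, dtype=torch.float64)  # 2 features per grid vertex

def encode(coords):
    """Trilinear interpolation of `grid` at coords in [0, 1)^3 -> (N, 2) features."""
    x = coords * (R - 1)
    i0 = x.floor().long().clamp(0, R - 2)            # lower corner of the cell
    f = x - i0.to(x.dtype)                           # fractional position inside the cell
    out = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[:, 0] if dx else 1 - f[:, 0])
                     * (f[:, 1] if dy else 1 - f[:, 1])
                     * (f[:, 2] if dz else 1 - f[:, 2]))
                out = out + w[:, None] * grid[i0[:, 0] + dx, i0[:, 1] + dy, i0[:, 2] + dz]
    return out

# Sample points well inside their cells so the finite-difference step below never
# crosses a cell boundary (trilinear interpolation is not differentiable there)
cell = torch.randint(0, R - 1, (10, 3)).double()
frac = 0.25 + 0.5 * torch.rand(10, 3, dtype=torch.float64)
coords = ((cell + frac) / (R - 1)).requires_grad_(True)

# Analytic gradient via autograd
field = encode(coords).mean(1)
grad_a = torch.autograd.grad(field, coords, torch.ones_like(field))[0]

# Numerical gradient: central differences along x (1e-5 is safe in float64)
e = 1e-5
eps = torch.zeros(10, 3, dtype=torch.float64)
eps[:, 0] = e
with torch.no_grad():
    grad_n = (encode(coords + eps).mean(1) - encode(coords - eps).mean(1)) / (2 * e)

err = (grad_n - grad_a[:, 0]).abs().max().item()
print(err)  # agreement down to floating-point round-off
```

With this differentiable reference encoding, the two gradients agree to round-off error, which is the behavior the report expects from the tiny-cuda-nn encoding as well.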