NVIDIA Kaolin Wisp is a PyTorch library powered by NVIDIA Kaolin Core to work with neural fields (including NeRFs, NGLOD, instant-ngp and VQAD).
Hashgrid PyTorch implementation #132
Closed
alvaro-budria closed 1 year ago
The current PyTorch implementation of the hash grid interpolation in wisp is not correct. I modified it and tested it; after training for 50 epochs, it appears to produce the same results as the default CUDA implementation.
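A comparison like the one described above can be sketched as follows. The function names `interp_pytorch` and `interp_cuda` are placeholders, not wisp's actual entry points, which this PR does not name:

```python
import torch

def outputs_match(interp_pytorch, interp_cuda, coords, codebook, atol=1e-5):
    """Run two interpolation backends on identical inputs and check that
    their outputs agree within a tolerance. Both callables are assumed to
    take (coords, codebook) and return a tensor of the same shape."""
    with torch.no_grad():
        a = interp_pytorch(coords, codebook)
        b = interp_cuda(coords, codebook)
    return torch.allclose(a, b, atol=atol)
```

In practice one would also compare gradients and full training curves, as done here with the 50-epoch runs, since two interpolators can agree on a forward pass while diverging during optimization.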
I noticed that in the original wisp implementation, the coordinates are hashed by rolling the prime assigned to each dimension with the index `i`:

`cc[...,0] * PRIMES[(i*3+0) % len(PRIMES)]`

In https://github.com/NVlabs/tiny-cuda-nn/blob/master/include/tiny-cuda-nn/encodings/grid.h, however, the function `lcg_hash` keeps each prime fixed to a dimension, which is what I do here. This PR resolves the issue.