NVIDIAGameWorks / kaolin-wisp

NVIDIA Kaolin Wisp is a PyTorch library powered by NVIDIA Kaolin Core to work with neural fields (including NeRFs, NGLOD, instant-ngp and VQAD).

Hashgrid PyTorch implementation #132

Closed · alvaro-budria closed this 1 year ago

alvaro-budria commented 1 year ago

The current PyTorch implementation of the hash grid interpolation in wisp is not correct. I modified it and tested it by executing

```
WISP_HEADLESS=1 python3 app/nerf/main_nerf.py --config app/nerf/configs/nerf_hash.yaml --dataset-path ../V8_/ --dataset-num-workers 2
```

It produces results comparable to the default CUDA implementation after training for 50 epochs:

| Implementation | Loss | PSNR | Time/iteration |
|---|---|---|---|
| CUDA | $1.213 \cdot 10^{-2}$ | 27.34 | ~1 min/it |
| PyTorch | $1.194 \cdot 10^{-2}$ | 27.22 | ~7 min/it |

I noticed that in the original wisp implementation, the coordinates are hashed by rolling the prime with the level index i, e.g. for dimension 0:

cc[...,0] * PRIMES[(i*3+0)%len(PRIMES)]

but in https://github.com/NVlabs/tiny-cuda-nn/blob/master/include/tiny-cuda-nn/encodings/grid.h, the function lcg_hash keeps each prime fixed to a single dimension, which is what I do here.
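For comparison, here is a minimal PyTorch sketch of the fixed-prime hashing scheme; the function name `hash_corner_indices` and the `codebook_size` argument are illustrative rather than wisp's actual API, and the prime values are the ones used by the instant-ngp / tiny-cuda-nn spatial hash.

```python
import torch

# Primes from the instant-ngp / tiny-cuda-nn spatial hash; the first "prime" is 1,
# so the lowest dimension is left unscrambled.
PRIMES = (1, 2654435761, 805459861)

def hash_corner_indices(cc: torch.Tensor, codebook_size: int) -> torch.Tensor:
    """Hash integer grid-corner coordinates `cc` of shape [..., 3] into
    [0, codebook_size), keeping each prime fixed to its own dimension."""
    cc = cc.long()
    # XOR of per-dimension products, as in tiny-cuda-nn's lcg_hash
    # (tiny-cuda-nn wraps in uint32; this sketch just relies on the final modulo).
    h = (cc[..., 0] * PRIMES[0]) ^ (cc[..., 1] * PRIMES[1]) ^ (cc[..., 2] * PRIMES[2])
    return h % codebook_size
```

Keeping each prime fixed to its dimension means the PyTorch path hashes grid corners the same way as the CUDA kernel, which is why the two implementations now yield comparable numbers in the table above.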

This PR resolves this issue.

orperel commented 1 year ago

Looks great, thanks for fixing this @alvaro-budria ! :)