NVIDIAGameWorks / kaolin-wisp

NVIDIA Kaolin Wisp is a PyTorch library powered by NVIDIA Kaolin Core to work with neural fields (including NeRFs, NGLOD, instant-ngp and VQAD).

Reduce VRAM usage due to ray tensor generation by setting the tensors' dtype to torch.float32 #180

Closed · barikata1984 closed this pull request 1 year ago

barikata1984 commented 1 year ago

Why

Dataset preparation required more than 12 GB of VRAM on the lego dataset when I tried to train a model with the preset nerf_hash config.

What

On main, ray tensors are generated as float64 during dataset preparation, even though they are converted to float32 just a few lines below; the peak allocation therefore briefly holds double-precision copies. This PR changes the dtype at tensor creation to torch.float32. With the preset config, this reduced VRAM usage from roughly 14 GB to roughly 8 GB (exact numbers may vary).
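To illustrate the pattern, here is a minimal sketch (variable names and shapes are hypothetical, not wisp's actual ray-generation code):

```python
import torch

H, W = 800, 800  # lego images are 800x800
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Before: the tensor is created in float64 and only cast a few lines
# later, so peak VRAM briefly holds a double-precision copy.
rays_o = torch.zeros(H * W, 3, dtype=torch.float64, device=device)
rays_o = rays_o.float()  # cast to float32 happens after allocation

# After: request float32 at creation time, so the float64 intermediate
# never exists and the peak allocation for this tensor is halved.
rays_o = torch.zeros(H * W, 3, dtype=torch.float32, device=device)
```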

P.S. The RTMV dataset may need a similar change, but this PR does not include it because I could not test against that dataset due to its size.