nerfstudio-project / nerfacc

A General NeRF Acceleration Toolbox in PyTorch.
https://www.nerfacc.com/

Reduce precision conversion when packing #124

Closed thomasw21 closed 1 year ago

thomasw21 commented 1 year ago

Assume that:

liruilong940607 commented 1 year ago

Thanks Thomas!! I always wanted to clean this up but didn't find the chance to do it.

I think it makes sense to enforce this conversion inside the CUDA kernels, but I'm also fine with your current approach in Python.

The test failed because of a formatting issue. You can fix it by running `python scripts/run_dev_checks.py`.

thomasw21 commented 1 year ago

Cool, I updated the kernels to support that convention. I guess we should also have a conversation about whether `packed_info` in int32 is safe? It feels like the number of samples can be quite large, over 2B ...
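
For readers following along, here is a minimal sketch of why the accumulation dtype matters (my own illustration, not nerfacc's actual code; `num_samples_per_ray` is a hypothetical name): the per-ray start offsets in `packed_info` come from a prefix sum over sample counts, and that sum overflows int32 once the total passes 2^31 - 1 (~2.1B) samples.

```python
import torch

# Hypothetical per-ray sample counts; names and shapes are illustrative only.
num_samples_per_ray = torch.randint(0, 1024, (8192,), dtype=torch.int32)

# Accumulate in int64 (torch.long) so offsets stay valid even when the total
# number of samples exceeds the int32 limit of 2**31 - 1.
cum = torch.cumsum(num_samples_per_ray, dim=0, dtype=torch.long)
starts = cum - num_samples_per_ray  # exclusive prefix sum: start offset per ray

# One (start, count) pair per ray, in the spirit of the packed layout
# discussed in this thread.
packed_info = torch.stack([starts, num_samples_per_ray.long()], dim=-1)
```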

thomasw21 commented 1 year ago

Also, a few tests are broken due to the removal of `ray_pdf_query` in https://github.com/KAIR-BAIR/nerfacc/commit/55acb151620e52a9221c7d84a17c384b1671079c

liruilong940607 commented 1 year ago

Sorry for the late reply. Got distracted by traveling.

Thanks for the update. You raised a good point about the number of samples. In the example code I was using up to 2^20 samples, which is far from the GPU memory limit. I didn't test it, but 2^32 samples is likely feasible with a small network. In any case, I agree int64 is safer for `packed_info`.
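
As a rough back-of-envelope check (my own arithmetic, not from the thread): even a single float32 scalar per sample already costs 16 GiB at 2^32 samples, so feasibility depends heavily on how much per-sample data is kept around.

```python
# Illustrative arithmetic only: memory for one float32 scalar per sample.
n_samples = 2 ** 32
gib = n_samples * 4 / 2 ** 30  # 4 bytes per float32
print(f"{gib:.0f} GiB")        # prints "16 GiB"
```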

I'm going to merge this PR now; feel free to open a new one if you want.