nerfstudio-project / nerfacc

A General NeRF Acceleration Toolbox in PyTorch.
https://www.nerfacc.com/
1.37k stars 113 forks

ngp training is extremely slow #173

Closed shewangmu closed 1 year ago

shewangmu commented 1 year ago

I ran the script train_ngp_nerf.py with --unbounded --aabb="-7.5,-7.5,-7.5,8.5,8.5,8.5" and set the batch size to 65536. Training is extremely slow, almost 2.5 s per iteration, whereas under the same conditions instant-ngp needs only about 4 ms. However, the documentation claims that the NGP model can be trained to equal quality in 4.5 minutes. I am running on an NVIDIA GeForce RTX 3090 with torch 1.13.1+cu117. Volatile GPU-Util is 100%, and memory usage is 12554MiB / 24268MiB. Is this normal? What can I do to speed up training?

liruilong940607 commented 1 year ago

Hi, the way we handle unbounded scenes is currently different from NGP: we cover the infinite space with a contraction method, while NGP uses a multi-scale AABB.
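To illustrate the contraction idea: points inside a unit ball are kept as-is, and points farther away are squashed into a bounded shell, so infinite space fits in a finite volume that a grid can cover. This is a minimal numpy sketch of the mip-NeRF 360 style contraction, not nerfacc's exact implementation:

```python
import numpy as np

def contract(x: np.ndarray) -> np.ndarray:
    """Map points in R^3 into a ball of radius 2.

    Identity inside the unit ball; points outside are mapped onto the
    shell between radius 1 and 2, approaching radius 2 as ||x|| -> inf.
    (Assumes x is nonzero outside the unit ball.)
    """
    norm = np.linalg.norm(x, axis=-1, keepdims=True)
    return np.where(norm <= 1.0, x, (2.0 - 1.0 / norm) * (x / norm))
```

For example, a point at distance 100 from the origin lands just inside radius 2, while nearby points are untouched, so one bounded occupancy grid can represent the whole unbounded scene.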

Besides, if you manually set the aabb, you also need to adjust render_step_size, cone_angle, etc., as those are related to the scale of the aabb.
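As a rough sketch of why the step size must track the aabb scale: if render_step_size stays small while the box grows, each ray takes far more samples per iteration, which matches the slowdown described above. The heuristic and the helper name below are hypothetical, assuming a budget of roughly 1024 steps across the scene diagonal (a rule of thumb similar to what the nerfacc examples use):

```python
import math

def suggested_step_size(aabb, num_samples=1024):
    """Hypothetical heuristic: pick a step so that ~num_samples steps
    span the diagonal of the axis-aligned bounding box.

    aabb: (xmin, ymin, zmin, xmax, ymax, zmax)
    """
    xmin, ymin, zmin, xmax, ymax, zmax = aabb
    diag = math.sqrt((xmax - xmin) ** 2
                     + (ymax - ymin) ** 2
                     + (zmax - zmin) ** 2)
    return diag / num_samples
```

For the aabb in this issue, (-7.5, -7.5, -7.5, 8.5, 8.5, 8.5), the diagonal is 16*sqrt(3) ≈ 27.7, giving a step of roughly 0.027; leaving the default step tuned for a much smaller box would oversample every ray.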

Note we have a much better version for unbounded scenes now, but it has not been released yet. It will be out soon. Please stay tuned.