nerfstudio-project / nerfacc

A General NeRF Acceleration Toolbox in PyTorch.
https://www.nerfacc.com/

Support automatic mixed precision (AMP) training? #113

Open · iYuqinL opened 1 year ago

iYuqinL commented 1 year ago

I tried to apply AMP training with nerfacc, but I found that it produces much worse rendering results: compared to float32 training, there is a 3-point drop in PSNR.

Of course, I fixed a data type issue to get AMP training running in the first place.
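For context, my training loop looks roughly like this. It is a minimal sketch: the linear model, rays, and targets are stand-ins for my actual nerfacc pipeline, but the autocast/GradScaler structure and the float32 cast are the relevant parts:

```python
import torch
import torch.nn.functional as F

# Stand-ins for the actual nerfacc-based model and data.
model = torch.nn.Linear(3, 3).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

for step in range(100):
    rays = torch.rand(1024, 3, device="cuda")
    target = torch.rand(1024, 3, device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        rgb = model(rays)  # placeholder for the volumetric renderer
        # The data type fix: tensors produced inside the autocast region
        # can be float16, so cast to float32 before computing the loss.
        loss = F.mse_loss(rgb.float(), target)
    # GradScaler scales the loss to avoid float16 gradient underflow,
    # then unscales before the optimizer step.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```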

[screenshot: Selection_394]

I wonder if you have plans to support AMP training?

liruilong940607 commented 1 year ago

Hi, I'm not sure how much benefit AMP training would bring, so currently I'm not very motivated to support it.

Another big reason is that I'm not quite familiar with how torch AMP works under the hood, so I'm not sure where it could cause issues.

But happy to discuss!

iYuqinL commented 1 year ago

The biggest benefit of AMP training is faster training (about a 1x speedup, i.e., roughly twice as fast).

I am quite new to PyTorch CUDA extensions, and I don't know whether the problem is that autocasting and gradient scaling don't work for the CUDA implementation. I will take some time to learn about PyTorch extensions and AMP.
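From what I have read so far, custom autograd Functions are invisible to autocast unless they are decorated, so a CUDA kernel can silently receive float16 inputs it was never written for. Here is a minimal sketch of the pattern the PyTorch docs describe; the op itself is a hypothetical stand-in, not nerfacc's actual kernel:

```python
import torch
from torch.cuda.amp import custom_fwd, custom_bwd

class CompositeRGB(torch.autograd.Function):
    """Hypothetical wrapper around a pair of CUDA kernels."""

    @staticmethod
    @custom_fwd(cast_inputs=torch.float32)
    def forward(ctx, sigmas, rgbs):
        # Under autocast, custom_fwd casts floating-point inputs to float32
        # and disables autocast inside forward, so the kernel always sees fp32.
        ctx.save_for_backward(sigmas, rgbs)
        return rgbs * (1.0 - torch.exp(-sigmas))  # placeholder for the CUDA forward

    @staticmethod
    @custom_bwd
    def backward(ctx, grad_out):
        # custom_bwd runs backward with the same autocast state as forward.
        sigmas, rgbs = ctx.saved_tensors
        opacity = 1.0 - torch.exp(-sigmas)
        grad_sigmas = grad_out * rgbs * torch.exp(-sigmas)
        grad_rgbs = grad_out * opacity
        return grad_sigmas, grad_rgbs
```

If the extension's autograd Functions don't do something like this, float16 tensors from the autocast region would flow straight into kernels that expect float32, which might explain the PSNR drop.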

Thank you for your wonderful work.

claforte commented 1 year ago

Hi, I also think this could give a ~2x speedup, or double the effective batch size for text->3D (https://github.com/threestudio-project/threestudio/issues/138)... is there any plan to support this? Otherwise I'll try to find someone who might be able to tackle it.