facebookresearch / NSVF

Open-source code for the paper "Neural Sparse Voxel Fields".
MIT License

Too slow training #54

Open dedoogong opened 3 years ago

dedoogong commented 3 years ago

I tried to train on the provided data (Wineholder) with 8 V100 GPUs. It takes around 10 GB per GPU, and with the default training command/configuration, training would take almost 6-7 days to finish. Is this normal, or is there a way to speed it up? I also tried fp16 training and apex, but they are not easy to run (so many errors). Please help me. Thank you!

MultiPath commented 2 years ago

I think the default parameters are set to train for 300K updates. Do you have any training logs?
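Since NSVF is built on fairseq, the total number of updates should be controllable through fairseq's standard `--max-update` flag. A sketch, assuming the repository's `train.py` entry point; the value 150000 is illustrative, not a recommendation from the authors:

```shell
# Sketch: cap the number of optimizer updates to shorten training.
# --max-update is a standard fairseq flag; keep the remaining flags from
# the README's training command unchanged. Expect some quality loss when
# training for fewer updates than the default 300K.
python -u train.py ${DATASET} --user-dir fairnr --max-update 150000
```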

dedoogong commented 2 years ago

It became faster with fp16, but it still takes 2 days. That's too slow compared to the original NeRF.
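For anyone hitting apex build errors: fairseq ships its own mixed-precision path, so it may be enough to pass fairseq's `--fp16` (or `--memory-efficient-fp16`) flag to the training command rather than installing apex. A sketch, assuming the README's `train.py` entry point; whether NSVF's custom CUDA kernels are fully fp16-safe is an assumption to verify on your own setup:

```shell
# Sketch: use fairseq's built-in mixed precision instead of apex.
# --fp16 is a standard fairseq flag; --memory-efficient-fp16 trades a bit
# of speed for lower optimizer memory. Keep the remaining README flags.
python -u train.py ${DATASET} --user-dir fairnr --fp16
```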

Ballzy0706 commented 1 year ago

Hi, I am suffering from the same problem. Have you solved it yet?