NVlabs / neuralangelo

Official implementation of "Neuralangelo: High-Fidelity Neural Surface Reconstruction" (CVPR 2023)
https://research.nvidia.com/labs/dir/neuralangelo/

Training Time #104

Open Dragonkingpan opened 1 year ago

Dragonkingpan commented 1 year ago

Why is it that, even though you also use the multi-resolution hash encoding of Instant NGP, your training is much slower than Instant NGP? On my A6000 machine, each epoch takes about 10 seconds, and with the 50k epochs required in the paper I would need about 140 hours, which seems far too long compared to Instant NGP's ~10 seconds. I think current radiance-field methods sit at two extremes: either a few seconds or a few days. What the market actually needs is a training time of minutes to a few hours. Hahaha, I'm just complaining casually.
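For reference, the ~140-hour figure follows directly from the two numbers quoted above; this is just a back-of-envelope check, not a number from the paper:

```python
# Sanity check of the training-time estimate in the comment above.
seconds_per_epoch = 10   # observed on an A6000 (from the comment)
num_epochs = 50_000      # training length cited from the paper

total_hours = seconds_per_epoch * num_epochs / 3600
print(f"{total_hours:.0f} hours")  # ~139 hours, i.e. roughly the quoted 140
```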

mli0603 commented 1 year ago

Hi @Dragonkingpan

Thank you for your interest in the project! Very good insights! Our project is primarily developed in Python, so there is a speed disadvantage compared to Instant NGP, which is implemented fully in CUDA. We hope the use of Python makes it more accessible for developers to build upon. Accelerating Neuralangelo to make it more efficient is indeed an interesting direction!

blacksino commented 11 months ago

> Hi @Dragonkingpan
>
> Thank you for your interest in the project! Very good insights! Our project is primarily developed in Python, so there is a speed disadvantage compared to Instant NGP, which is implemented fully in CUDA. We hope the use of Python makes it more accessible for developers to build upon. Accelerating Neuralangelo to make it more efficient is indeed an interesting direction!

Neuralangelo seems to be a rather compact piece of work, so there doesn't appear to be much room for algorithmic acceleration. I wonder if you have any good suggestions. Thanks in advance.