zju3dv / NeuralRecon-W

Code for "Neural 3D Reconstruction in the Wild", SIGGRAPH 2022 (Conference Proceedings)
Apache License 2.0

Training time too long (36 hours per epoch on the Phototourism dataset with a single 32 GB NVIDIA V100 GPU) #43

Closed by purplebutterfly79 1 year ago

purplebutterfly79 commented 1 year ago

When training on a scene from the Phototourism dataset (Pantheon Exterior, without image downscaling) on a single NVIDIA Tesla V100 GPU with 32 GB of memory, a single epoch takes me 36 hours. With the default of 20 epochs, full training would take about 30 days.
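As a quick sanity check, the 30-day figure follows directly from the numbers reported above; a minimal sketch of the arithmetic (values taken from this report, not from the paper):

```python
# Per-epoch time and epoch count as reported in this issue.
hours_per_epoch = 36   # measured on one 32 GB V100
epochs = 20            # default in the training config, per the report

total_hours = hours_per_epoch * epochs
total_days = total_hours / 24
print(total_hours, total_days)  # 720 hours -> 30.0 days
```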

Could you please provide the exact specification of the GPUs used in your experiments? The paper mentions that 8 NVIDIA A100 GPUs were used. How much memory does each GPU have? Is it 80 GB per GPU?

Burningdust21 commented 1 year ago

Hi, we used 8 NVIDIA A100 GPUs with 40 GB of memory each for our experiments. However, you do not need to run all epochs; training for the wall-clock time specified in the paper is sufficient.
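Following the suggestion above, one practical approach is to stop after however many full epochs fit in your wall-clock budget rather than running the default epoch count. A minimal sketch (the helper name and the 72-hour budget are illustrative assumptions, not values from the paper):

```python
import math

# Hypothetical helper: given a wall-clock budget and the measured
# per-epoch time on your own GPU, how many full epochs fit?
def epochs_within_budget(budget_hours: float, hours_per_epoch: float) -> int:
    return math.floor(budget_hours / hours_per_epoch)

# Example: a 72-hour budget at the reported 36 h/epoch on one V100
# (the budget is illustrative; substitute the paper's training time).
print(epochs_within_budget(72, 36))  # -> 2
```

The resulting epoch count can then be passed to the training script in place of the default, e.g. via its epoch/step limit option.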