For training on scenes of the Phototourism dataset (Pantheon Exterior, without image downscaling) on an NVIDIA Tesla V100 GPU with 32 GB of memory, a single epoch takes me 36 hours. With the default of 20 epochs, full training would take around 30 days.
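The time estimate above works out as follows (a quick sketch; the 36-hour per-epoch figure is specific to my single-V100 setup and will vary with hardware):

```python
# Rough training-time estimate based on the numbers in this issue.
hours_per_epoch = 36   # measured on one V100 (32 GB), Pantheon Exterior
epochs = 20            # default number of epochs

total_hours = hours_per_epoch * epochs
total_days = total_hours / 24

print(f"{total_hours} hours = {total_days:.0f} days")  # 720 hours = 30 days
```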
Could you please provide the exact specification of the GPUs used in your experiments? The paper mentions that 8 NVIDIA A100 GPUs were used. How much memory does each have? Is it 80 GB per GPU?
Hi, we used 8 NVIDIA A100 GPUs with 40 GB of memory each for our experiments. However, you do not need to run all epochs; training for the time specified in the paper is sufficient.