For the Mip-NeRF 360 dataset, the model is trained with a downsample factor of 4 for outdoor scenes and 2 for indoor scenes (same as in the paper). Training speed is about 1.5x slower than the paper (1.5 hours on 8 A6000s).
I tested on A6000 cloud GPUs. With ONE GPU, the garden scene takes 1.1-1.2 hours. With multiple GPUs (2-4), the speed drops and it needs more time, around 1.2-1.5 hours.
With one GPU, training takes about 1.2 hours.
With one GPU and limits on batch_size and render_chunk_size, it is even faster, needing only 1.1 hours.
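For reference, limiting those values in a Gin-configured, multinerf-style codebase might look like the sketch below. The launcher, script name, config file, and exact binding values are assumptions and may differ in this repo:

```shell
# Hypothetical launch command, assuming a multinerf-style, Gin-configured
# trainer; script/flag names and values are assumptions, not this repo's API.
accelerate launch train.py \
  --gin_configs=configs/360.gin \
  --gin_bindings="Config.batch_size = 8192" \
  --gin_bindings="Config.render_chunk_size = 8192"
```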
So is multi-GPU training not working effectively, or is there a performance bottleneck somewhere?
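To put a number on this, a small helper (not from this repo, just illustrative arithmetic) can turn the wall-clock times above into a speedup and per-GPU efficiency:

```python
def scaling_efficiency(t_single_h, t_multi_h, n_gpus):
    """Compute parallel speedup and per-GPU efficiency from wall-clock hours.

    speedup > 1 means the multi-GPU run is faster; efficiency near 1.0
    means near-linear scaling.
    """
    speedup = t_single_h / t_multi_h
    efficiency = speedup / n_gpus
    return speedup, efficiency

# Reported numbers above: ~1.1 h on 1 GPU vs up to ~1.5 h on 2-4 GPUs.
speedup, eff = scaling_efficiency(1.1, 1.5, 4)
print(f"speedup: {speedup:.2f}x, efficiency: {eff:.1%}")
# -> speedup: 0.73x, efficiency: 18.3%
```

A speedup below 1.0 means the multi-GPU run is actually slower than a single GPU, which usually points at communication or synchronization overhead dominating the per-step compute.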