Hi,
I am trying to run the code on 4 GPUs. Do you have any suggestions for increasing the training speed per epoch?
Currently, the model takes about 1 minute per epoch with --gpus=4 --batch=64 --gamma=400 --dmtet_scale 0.8 --use_shapenet_split 1 --one_3d_generator 1 --fp32 0.
I also tried 8 GPUs (--gpus=8 --batch=64 --gamma=400 --dmtet_scale 0.8 --use_shapenet_split 1 --one_3d_generator 1 --fp32 0), but the training time per epoch did not change.
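In case it is relevant: my assumption (not confirmed from the code) is that --batch is the total batch size split across GPUs, as in StyleGAN2-style training loops, so doubling the GPU count only halves the per-GPU batch rather than doing more work per step. A minimal sketch of that arithmetic (the function name is mine, purely for illustration):

```python
# Hypothetical sanity check: with a fixed total batch, adding GPUs
# shrinks the per-GPU batch instead of increasing total throughput,
# which could explain why 8 GPUs showed no speedup over 4.
def per_gpu_batch(total_batch, num_gpus):
    # Assumes the total batch divides evenly across replicas.
    assert total_batch % num_gpus == 0
    return total_batch // num_gpus

print(per_gpu_batch(64, 4))  # 16 samples per GPU per step
print(per_gpu_batch(64, 8))  # 8 samples per GPU per step
```

If that assumption is right, would increasing --batch along with --gpus (e.g. --batch=128 with --gpus=8) be the intended way to scale?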