I train for 74,000 iterations (100 epochs) with a batch size of 8 on a dataset of 5,920 training images, which uses about 24 GB of GPU memory for single-GPU training, and the estimated running time is 17 hours. However, when I run on two GPUs with 'bash tools/dist_train.sh MyConfigs/Upernet_Swin.py 2', the estimated running time is actually slightly higher, up to 18 hours.
Does this mean it is effectively training for 200 epochs (the same 74,000 iterations, but with twice the total batch size across the two GPUs), or is the running time estimate wrong?
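For reference, the relevant settings in my config are roughly as follows (a simplified sketch, not the full MyConfigs/Upernet_Swin.py; field names follow MMSegmentation's iteration-based runner convention, and workers_per_gpu is an illustrative value):

```python
# Simplified excerpt of MyConfigs/Upernet_Swin.py (illustrative, not the full config)
data = dict(
    samples_per_gpu=8,   # batch size per GPU
    workers_per_gpu=4,   # assumed value, not central to the question
)
# 5920 images / batch size 8 = 740 iterations per epoch; 740 * 100 epochs = 74,000 iterations
runner = dict(type='IterBasedRunner', max_iters=74000)
```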