Closed baojun701 closed 3 years ago
The current code does not support multi-GPU training; it would need to be modified for that. However, multi-GPU is not recommended here, since the batch size is already around 200. If you want to train faster, you can reduce the number of epochs, e.g. to 150 or 200. Performance should not degrade much.
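If someone does want to try multi-GPU anyway, a common approach (assuming this repo uses PyTorch, which the `CUDA_VISIBLE_DEVICES` setting in `train.sh` suggests) is to wrap the model in `torch.nn.DataParallel`, which splits each batch across the visible GPUs. A minimal sketch, with a placeholder model since the repo's actual model class is unknown:

```python
import torch
import torch.nn as nn

# Placeholder model; substitute the repo's actual model here.
model = nn.Linear(10, 2)

# Setting CUDA_VISIBLE_DEVICES only controls which GPUs are visible;
# the training code must also wrap the model to actually use them.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # splits each batch across GPUs

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Training then proceeds as usual; gradients are gathered automatically.
x = torch.randn(4, 10, device=device)
out = model(x)
print(out.shape)  # batch dimension is preserved: (4, 2)
```

Note that `DataParallel` replicates the model every forward pass; for serious multi-GPU training, `torch.nn.parallel.DistributedDataParallel` with one process per GPU is the recommended route, but it requires launcher changes beyond editing `train.sh`.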
I modified `CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7` in `train.sh`, but it still trains on only one GPU. How can I train on multiple GPUs?