WongKinYiu / PyTorch_YOLOv4

PyTorch implementation of YOLOv4

Multi GPU takes longer #394

Open yunxi1 opened 2 years ago

yunxi1 commented 2 years ago

Hello, when I train in DDP mode I find that one epoch takes 26 min on two GPUs but only 16 min on a single GPU. Do you know why?

Batch size = 16, device = 0,1, GPUs are TITAN RTX. This is the command:

python -m torch.distributed.launch --nproc_per_node 2 train.py
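For reference, below is a minimal, self-contained sketch of the DDP pattern that `torch.distributed.launch` drives (a toy linear model and random tensors stand in for the repo's YOLOv4 model and dataloader; this is not the repo's train.py). One thing it highlights: the DataLoader batch size is per process, so the same `batch_size=16` on two GPUs gives a global batch of 32 per step, and every backward pass adds a gradient all-reduce between the GPUs, which can make a two-GPU epoch slower than expected.

```python
# Minimal DDP sketch launched as:
#   python -m torch.distributed.launch --nproc_per_node 2 ddp_sketch.py
# The toy model/dataset are placeholders, not this repo's code.
import argparse

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    parser = argparse.ArgumentParser()
    # torch.distributed.launch passes --local_rank to each spawned process
    parser.add_argument("--local_rank", type=int, default=0)
    args = parser.parse_args()

    torch.cuda.set_device(args.local_rank)
    dist.init_process_group(backend="nccl")  # NCCL backend for single-node multi-GPU

    # Toy stand-ins for the YOLOv4 model and dataset used in train.py.
    model = nn.Linear(128, 10).cuda(args.local_rank)
    model = DDP(model, device_ids=[args.local_rank])  # gradients all-reduced across ranks
    dataset = TensorDataset(torch.randn(4096, 128), torch.randint(0, 10, (4096,)))

    # DistributedSampler gives each rank a distinct shard; batch_size is per process,
    # so --nproc_per_node 2 with batch_size=16 means a global batch of 32 per step.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=16, sampler=sampler,
                        num_workers=2, pin_memory=True)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for epoch in range(3):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x = x.cuda(args.local_rank, non_blocking=True)
            y = y.cuda(args.local_rank, non_blocking=True)
            loss = criterion(model(x), y)
            optimizer.zero_grad()
            loss.backward()   # triggers the inter-GPU gradient all-reduce
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

This is only meant to illustrate the launch/sampler/per-process-batch mechanics under those assumptions; whether the 26 min vs 16 min gap here comes from communication overhead, data loading, or something else in train.py is exactly the question being asked.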