Closed whatsups closed 3 years ago
Hi @pigcv89, the number of GPUs may matter because nn.BatchNorm2d is used instead of SynchronizedBatchNorm.
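A minimal sketch (plain Python, no torch) of why this matters: under DataParallel the global batch (bs=8) is split across the cards, and plain nn.BatchNorm2d normalizes each slice with statistics from only its local samples, while SynchronizedBatchNorm would use statistics over the whole batch. The numbers below are made up just for illustration.

```python
# Global batch of bs=8 scalar "activations" (hypothetical values),
# split across 4 GPUs as DataParallel would do (2 samples per card).
batch = [2.0, 4.0, 6.0, 8.0, 1.0, 3.0, 5.0, 7.0]
num_gpus = 4
per_gpu = len(batch) // num_gpus  # 2 samples per card

# Per-GPU means, as computed by plain nn.BatchNorm2d on each replica:
# the statistics depend on how the batch is split, i.e. on the GPU count.
local_means = [
    sum(batch[i * per_gpu:(i + 1) * per_gpu]) / per_gpu
    for i in range(num_gpus)
]

# Global mean, as SynchronizedBatchNorm would compute across all cards:
# independent of the number of GPUs.
global_mean = sum(batch) / len(batch)

print(local_means)  # [3.0, 7.0, 2.0, 6.0]
print(global_mean)  # 4.5
```

So with unsynchronized BN, changing the number of cards changes the effective normalization statistics, which is why the original GPU count is part of the experiment setting.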
I follow the default experiment settings (bs=8 with 4 GPU cards), but it looks like each card only takes up about 4 GB of memory. Am I misunderstanding something? Or does 'bs=8' mean 'bs=8 for each GPU'?
@pigcv89 bs=8 in total, not bs=8 for each GPU. You can reduce the number of GPUs and try.
Thanks for your reply. I'll close this issue.
Thanks for your great work! I noticed that in your paper you mentioned: "The model is trained on 4 TITAN-Xp GPUs with batch size 8 for 8 epochs." However, when I train SEAM on 4 2080Ti GPUs with batch size 8, I find that each card only takes up about 4 GB of memory. So I wonder: are 4×12 GB GPUs necessary? Thanks for your reply.