Open SebastienThibert opened 1 month ago
Hi, batch size 8192 is quite big even for 4 GPUs (the original paper used batch size 4096 on 128 TPUs). Are you using CIFAR (32x32) or normal ImageNet-sized images (224x224)?
I use exactly the code from the example, so 32x32 I think.
I just tested it on 4x 24GB GPUs and it indeed fails with OOM. I had to reduce the batch size to 4096 for it to succeed. I think this is expected. Please note that the batch size is per GPU, as it is set directly in the dataloader. You can train with larger batch sizes if you set precision="16-mixed" to enable half precision.
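For reference, a minimal sketch of how half precision could be enabled on the PyTorch Lightning Trainer used in the examples; the `max_epochs`, `devices`, and `strategy` values below are assumptions and may differ from your SageMaker setup:

```python
import pytorch_lightning as pl

# Assumed Trainer configuration: 4 GPUs with DDP, as in the multi-GPU example.
# precision="16-mixed" enables automatic mixed precision, which roughly halves
# activation memory and allows a larger per-GPU batch size.
trainer = pl.Trainer(
    max_epochs=10,
    accelerator="gpu",
    devices=4,
    strategy="ddp",
    sync_batchnorm=True,
    precision="16-mixed",
)
```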
OK, I thought the batch would be split across all the GPUs. Any other tips to increase the batch size?
When I run this example on multiple GPUs using Distributed Data Parallel (DDP) training on AWS SageMaker with 4 GPUs and a batch_size of 8192, I get an OOM error despite the 96 GiB capacity.
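For context, a minimal sketch of the data-loading setup being described, with a dummy dataset standing in for the example's CIFAR data; as noted above, under DDP each process builds its own loader, so the batch size here is per GPU:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy 32x32 data standing in for the CIFAR-10 dataset used in the example.
dataset = TensorDataset(
    torch.randn(50_000, 3, 32, 32),
    torch.zeros(50_000, dtype=torch.long),
)

# Under DDP each of the 4 processes creates its own DataLoader, so
# batch_size=8192 means 8192 samples per GPU (32768 effective in total).
dataloader = DataLoader(
    dataset,
    batch_size=8192,   # per-GPU batch size, not a global batch size
    shuffle=True,
    drop_last=True,
    num_workers=8,
)
```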