lightly-ai / lightly

A python library for self-supervised learning on images.
https://docs.lightly.ai/self-supervised-learning/
MIT License

OOM issue with multiple GPUs using Distributed Data Parallel (DDP) training #1650

Open. SebastienThibert opened this issue 4 hours ago

SebastienThibert commented 4 hours ago

When I run this example on multiple GPUs using Distributed Data Parallel (DDP) training on AWS SageMaker with 4 GPUs and a batch_size of 8192, I get an OOM error despite the 96 GiB of total GPU memory across the 4 GPUs:

Tried to allocate 4.00 GiB. GPU 2 has a total capacity of 21.99 GiB of which 1.21 GiB is free. Including non-PyTorch memory, this process has 20.77 GiB memory in use.
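
For context, a minimal sketch of the kind of setup being described (it follows the SimCLR example from the lightly docs; the CIFAR-10 path, num_workers, and epoch count here are assumptions, not values from the report). The key point for the memory discussion is that batch_size is set directly on the DataLoader, so it is the batch size per GPU process under DDP:

```python
# Sketch of the reported setup, assuming the CIFAR-10 SimCLR example
# (ResNet-18 backbone) trained with DDP on 4 GPUs.
import pytorch_lightning as pl
import torch
import torchvision
from torch import nn

from lightly.loss import NTXentLoss
from lightly.models.modules import SimCLRProjectionHead
from lightly.transforms import SimCLRTransform


class SimCLR(pl.LightningModule):
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet18()
        # Drop the classification head; keep the convolutional backbone.
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])
        self.projection_head = SimCLRProjectionHead(512, 2048, 2048)
        self.criterion = NTXentLoss()

    def training_step(self, batch, batch_idx):
        # The SimCLR transform returns two augmented views per image.
        (x0, x1) = batch[0]
        z0 = self.projection_head(self.backbone(x0).flatten(start_dim=1))
        z1 = self.projection_head(self.backbone(x1).flatten(start_dim=1))
        return self.criterion(z0, z1)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.06)


transform = SimCLRTransform(input_size=32)  # CIFAR-sized (32x32) crops
dataset = torchvision.datasets.CIFAR10(
    "datasets/cifar10", download=True, transform=transform
)

dataloader = torch.utils.data.DataLoader(
    dataset,
    batch_size=8192,  # NOTE: per-GPU batch size, so 4 GPUs process 4 x 8192 samples per step
    shuffle=True,
    num_workers=8,    # assumption, not from the report
    drop_last=True,
)

trainer = pl.Trainer(
    max_epochs=10,     # assumption, not from the report
    devices=4,
    accelerator="gpu",
    strategy="ddp",    # Distributed Data Parallel: one process per GPU
    sync_batchnorm=True,
)
trainer.fit(SimCLR(), dataloader)
```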
guarin commented 4 hours ago

Hi, batch size 8192 is quite big even for 4 GPUs (the original paper used batch size 4096 on 128 TPUs). Are you using CIFAR (32x32) or normal ImageNet sized images (224x224)?

SebastienThibert commented 4 hours ago

I use exactly the code from the example, so 32x32 I think.


guarin commented 3 hours ago

I just tested it on 4x24GB GPUs and it indeed fails with OOM. I had to reduce the batch size to 4096 for it to succeed. I think this is expected. Please note that the batch size is per GPU as it is set directly in the dataloader. You can train with larger batch sizes if you set precision="16-mixed" to enable half precision.
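
For instance, reusing the names from the sketch above, the suggested changes would look roughly like this (4096 is the per-GPU batch size reported to fit on 4x24 GB GPUs; precision="16-mixed" assumes PyTorch Lightning >= 2.0, older versions use precision=16):

```python
# Sketch of the suggested fix: smaller per-GPU batch size plus mixed precision.
dataloader = torch.utils.data.DataLoader(
    dataset,
    batch_size=4096,  # per-GPU; reported to fit on 4x24 GB GPUs
    shuffle=True,
    num_workers=8,
    drop_last=True,
)

trainer = pl.Trainer(
    max_epochs=10,
    devices=4,
    accelerator="gpu",
    strategy="ddp",
    sync_batchnorm=True,
    precision="16-mixed",  # half-precision activations reduce memory per sample
)
trainer.fit(SimCLR(), dataloader)
```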

SebastienThibert commented 1 hour ago

OK, I thought the batch would be split across all the GPUs. Any other tips to increase the batch size?