tianrun-chen / SAM-Adapter-PyTorch

Adapting Meta AI's Segment Anything to Downstream Tasks with Adapters and Prompts
MIT License
968 stars · 83 forks

OutOfMemoryError #12

Closed · saskra closed this issue 1 year ago

saskra commented 1 year ago

What hardware did you train on? Even with four 24 GB graphics cards I get torch.cuda.OutOfMemoryError.

Harry-zzh commented 1 year ago

When I test the model with batch size 1 on a V100 GPU, it raises an OutOfMemoryError.
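
If even batch size 1 overflows a V100, a common generic workaround is to run evaluation without autograd and in mixed precision. The sketch below is not the repository's test script; the model and data loader are passed in as arguments, and the `"image"` batch key is an assumed placeholder.

```python
# Generic PyTorch memory-saving pattern for evaluation (a sketch, not this
# repo's eval loop). Running under inference_mode keeps no autograd graph,
# and autocast stores activations in half precision.
import torch

def low_memory_eval(model: torch.nn.Module, loader) -> list:
    """Run inference without autograd and in mixed precision to cut GPU memory."""
    model.eval().cuda()
    outputs = []
    with torch.inference_mode():
        for batch in loader:
            image = batch["image"].cuda(non_blocking=True)  # placeholder batch key
            with torch.autocast(device_type="cuda", dtype=torch.float16):
                pred = model(image)                          # half-precision activations
            outputs.append(pred.float().cpu())               # move results off the GPU
    return outputs
```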

tianrun-chen commented 1 year ago

Hello! We use 4x A100 (80 GB) GPUs for training, as mentioned in the manuscript and in the README file. The SAM model consumes a lot of memory. You can reduce the batch size to lower memory consumption. We have also updated the config file so that you can use the ViT-L or ViT-B version of SAM for testing, which consumes less memory.
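
As a rough illustration of that advice, one could load one of the repository's YAML configs and override the backbone variant and batch size before launching a run. This is only a sketch: the file name and the key paths (`model.args.encoder_mode`, `train_dataset.batch_size`) are hypothetical placeholders, so check the actual files under `configs/` for the real structure.

```python
# Sketch: derive a lower-memory config from an existing one.
# File names and key paths below are assumptions, not the repo's documented schema.
import yaml

with open("configs/demo-sam-vit-h.yaml") as f:        # hypothetical source config
    cfg = yaml.safe_load(f)

# A smaller SAM backbone (ViT-B) and a smaller batch size both reduce GPU memory.
cfg["model"]["args"]["encoder_mode"] = "vit_b"        # placeholder key path
cfg["train_dataset"]["batch_size"] = 1                # placeholder key path

with open("configs/demo-sam-vit-b-low-mem.yaml", "w") as f:
    yaml.safe_dump(cfg, f)
```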

tianrun-chen commented 1 year ago

> When I test the model with batch size 1 on a V100 GPU, it raises an OutOfMemoryError.

Hello! We use 4x A100 (80 GB) GPUs for training, as mentioned in the manuscript and in the README file. The SAM model consumes a lot of memory. You can reduce the batch size to lower memory consumption. We have also updated the config file so that you can use the ViT-L or ViT-B version of SAM for testing, which consumes less memory.

saskra commented 1 year ago

Thanks for the info, that's quite a lot of memory. Has anyone tried this on AWS?