Closed saskra closed 1 year ago
When I test the model using batch size 1 on a V100 gpu, it raises the OutOfMemory error.
Hello! We used 4x A100 (80 GB) GPUs for training, as mentioned in the manuscript and in the README. The SAM model consumes a lot of memory. You can reduce the batch size to lower memory consumption. Moreover, we have updated the config file so that you can use the ViT-L or ViT-B version of SAM for testing, which consume less memory.
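To see why the smaller SAM variants help, here is a rough back-of-the-envelope calculation of the weight memory for each image-encoder size. The parameter counts are approximate figures for the public SAM checkpoints, and this only counts weights in fp32; activations (and optimizer state during training) add substantially more on top.

```python
# Approximate parameter counts for the SAM image-encoder variants
# (rounded figures for the public checkpoints; weights only).
PARAMS = {"vit_h": 636e6, "vit_l": 308e6, "vit_b": 91e6}

def weights_gib(n_params, bytes_per_param=4):
    """Memory needed just to hold the weights, in GiB (fp32 by default)."""
    return n_params * bytes_per_param / 2**30

for name, n in PARAMS.items():
    print(f"{name}: ~{weights_gib(n):.1f} GiB of weights in fp32")
```

The weights alone are only a few GiB even for ViT-H; during inference most of the memory goes to activations, which is why reducing the batch size (or the SAM variant) helps so much.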
Thanks for the info, that's quite a memory to have. Has anyone tried this on AWS?
What hardware did you train on? Even with 4x 24 GB GPUs I get torch.cuda.OutOfMemoryError.