charliess123 opened 12 months ago
Hello! Thank you for your interest in our work.
I have tried training MQ-Grounding-DINO with a batch size of 1, and it requires approximately 9.6 GiB of memory per GPU.
In my case, it required more than 11 GiB of GPU memory and raised an OutOfMemoryError. I have already set the batch size to 1 and use the tiny backbone of G-DINO, and I also switched the optimizer to SGD to reduce GPU memory usage, but it still doesn't work. Is there any way I can reduce the GPU memory usage further?
Oh, maybe you can try the following:
By the way, make sure that SOLVER.IMS_PER_BATCH=1 (=N if you run on N GPUs) and SOLVER.TUNING_HIGHLEVEL_OVERRIDE="vision_query" (so that only the GCPs are tuned).
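For reference, here is a minimal sketch of how those two overrides would be merged into a yacs-style config. This assumes a GLIP-style CfgNode setup; the node layout and default values below are placeholders, not the repo's real defaults or training script.

```python
# Minimal sketch (not the actual training script): merging the two command-line
# overrides mentioned above into a yacs CfgNode. Defaults here are placeholders.
from yacs.config import CfgNode as CN

cfg = CN()
cfg.SOLVER = CN()
cfg.SOLVER.IMS_PER_BATCH = 4                 # placeholder default
cfg.SOLVER.TUNING_HIGHLEVEL_OVERRIDE = ""    # placeholder default

# Total batch size of 1 (i.e. one image per step on a single GPU),
# and tune only the vision queries (GCPs).
cfg.merge_from_list([
    "SOLVER.IMS_PER_BATCH", "1",
    "SOLVER.TUNING_HIGHLEVEL_OVERRIDE", "vision_query",
])

print(cfg.SOLVER.IMS_PER_BATCH)               # -> 1
print(cfg.SOLVER.TUNING_HIGHLEVEL_OVERRIDE)   # -> vision_query
```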
Hi, I encountered the same problem, and my GPU is a 2080 Ti too. Did you solve it? I'd appreciate your reply!
Hi, thanks for the amazing work. I'm trying to train an mq-groundingdino-t model with the Objects365 data, but it seems like I don't have enough GPU memory. Would you mind telling me how much GPU memory I need to reserve for training with a batch size of 1?
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 80.00 MiB (GPU 0; 10.75 GiB total capacity; 9.34 GiB already allocated; 27.62 MiB free; 9.77 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
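As a side note, the last line of that traceback points at allocator fragmentation. A minimal sketch of trying the documented PYTORCH_CUDA_ALLOC_CONF knob (the 128 MiB value below is just an illustrative choice, not a recommendation from the authors):

```python
# Minimal sketch of the mitigation the error message itself suggests:
# configure the CUDA caching allocator before the first CUDA allocation.
# Equivalent shell form: PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python train.py
import os

os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # illustrative value

import torch  # noqa: E402  (set the env var before CUDA is first used)

# Any subsequent CUDA allocations are governed by the setting above.
x = torch.randn(1024, 1024, device="cuda")
print(f"{torch.cuda.memory_allocated() / 2**20:.1f} MiB allocated")
```

Note that this only mitigates fragmentation; it does not reduce how much memory the model itself needs.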