YifanXu74 / MQ-Det

Official PyTorch implementation of "Multi-modal Queried Object Detection in the Wild" (accepted by NeurIPS 2023)
Apache License 2.0

Unable to start training due to GPU memory #35

Open charliess123 opened 12 months ago

charliess123 commented 12 months ago

Hi, thanks for the amazing work. I'm trying to train an mq-groundingdino-t model on the Objects365 data, but it seems I don't have enough GPU memory. Would you mind telling me how much GPU memory I need to reserve to train with a batch size of 1?

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 80.00 MiB (GPU 0; 10.75 GiB total capacity; 9.34 GiB already allocated; 27.62 MiB free; 9.77 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
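As a side note, the allocator hint in the error message can be applied by setting PYTORCH_CUDA_ALLOC_CONF before CUDA is initialized. It only mitigates fragmentation and will not help if the model genuinely needs more memory than the card has; a minimal sketch, assuming you control the Python entry point (the 128 MiB split size is an illustrative value, not a tuned one):

```python
import os

# Apply the allocator hint from the error message before any CUDA allocation
# happens; 128 MiB is an illustrative split size, not a tuned value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # import after setting the variable so the CUDA caching allocator sees it
```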

YifanXu74 commented 12 months ago

Hello! Thank you for your interest in our work.

I have tried training MQ-Grounding-DINO with a batch size of 1, and it requires approximately 9.6 GiB of GPU memory per GPU.

charliess123 commented 12 months ago

> Hello! Thank you for your interest in our work.
>
> I have tried training MQ-Grounding-DINO with a batch size of 1, and it requires approximately 9.6 GiB of GPU memory per GPU.

In my case, it requires more than 11 GiB of GPU memory and throws that OutOfMemoryError. I have already set the batch size to 1 and I am using the tiny backbone of G-DINO. I also switched the optimizer to SGD to reduce GPU memory usage, but it still doesn't work. Is there any way I can further reduce GPU memory usage? [screenshot: QQ图片20231208161626]

YifanXu74 commented 12 months ago

Oh, maybe you can try the following:

  1. Set DATASETS.RANDOM_SAMPLE_NEG to a lower value (default 85), for example, 60. This controls the number of textual categories fed into the model in one forward pass; a lower value results in a shorter input sequence length.
  2. Maybe you can try gradient checkpointing, but I didn't implement this in the code :( (see the sketch below for what it would roughly look like).
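Since gradient checkpointing is not implemented in this repo, here is a minimal, generic PyTorch sketch of the idea for reference. The CheckpointedEncoder wrapper and its layer list are hypothetical, not MQ-Det modules; you would need to adapt it to the actual model code.

```python
import torch
from torch.utils.checkpoint import checkpoint


class CheckpointedEncoder(torch.nn.Module):
    """Hypothetical wrapper: recomputes each layer's activations during the
    backward pass instead of storing them, trading extra compute for GPU memory."""

    def __init__(self, layers):
        super().__init__()
        self.layers = torch.nn.ModuleList(layers)

    def forward(self, x):
        for layer in self.layers:
            # use_reentrant=False is the recommended mode in recent PyTorch releases
            x = checkpoint(layer, x, use_reentrant=False)
        return x
```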

By the way, ensure that SOLVER.IMS_PER_BATCH=1 (or =N if you run on N GPUs) and SOLVER.TUNING_HIGHLEVEL_OVERRIDE="vision_query" (to tune only the GCPs); a sketch of these overrides follows below.
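For completeness, a minimal sketch of applying these options programmatically with the yacs-style config inherited from GLIP. The import path and config path are assumptions and should be checked against the actual repo layout; normally you would pass the same KEY VALUE pairs on the training command line instead.

```python
# Minimal sketch of applying the suggested overrides with yacs-style configs.
# The import path is assumed from the GLIP code base this repo builds on;
# "path/to/your_config.yaml" is a placeholder.
from maskrcnn_benchmark.config import cfg

cfg.merge_from_file("path/to/your_config.yaml")
cfg.merge_from_list([
    "SOLVER.IMS_PER_BATCH", 1,                           # = number of GPUs used
    "DATASETS.RANDOM_SAMPLE_NEG", 60,                    # shorter text input per forward pass
    "SOLVER.TUNING_HIGHLEVEL_OVERRIDE", "vision_query",  # tune only the GCPs
])
```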

Real-UtopiaNo commented 10 months ago

> Hello! Thank you for your interest in our work. I have tried training MQ-Grounding-DINO with a batch size of 1, and it requires approximately 9.6 GiB of GPU memory per GPU.

> In my case, it requires more than 11 GiB of GPU memory and throws that OutOfMemoryError. I have already set the batch size to 1 and I am using the tiny backbone of G-DINO. I also switched the optimizer to SGD to reduce GPU memory usage, but it still doesn't work. Is there any way I can further reduce GPU memory usage? [screenshot: QQ图片20231208161626]

Hi, I encountered the same problem, and my GPU is a 2080 Ti too. Did you solve it? I'd appreciate your reply!