wusize / ovdet

[CVPR2023] Code Release of Aligning Bag of Regions for Open-Vocabulary Object Detection
https://openaccess.thecvf.com/content/CVPR2023/papers/Wu_Aligning_Bag_of_Regions_for_Open-Vocabulary_Object_Detection_CVPR_2023_paper.pdf

GPU and Batch Size #11

Open krbuettner opened 1 year ago

krbuettner commented 1 year ago

Hello I was wondering if there is information on the GPU count, batch size, and GPU type for the results reported in the paper. Thanks!

wusize commented 1 year ago

By default we used 8 GPUs with a per-GPU batch size of 2 for all experiments in the paper. However, when implementing the detector on MMDet 3.x for the code release, we encountered an issue with training speed when using SyncBN. We sidestepped this issue by using more GPUs (16), adjusting the learning rate following the linear scaling rule, and training for half of the original iterations.

For the configs that do not use SyncBN, we retain the default setting of 8 GPUs.
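
For context, a minimal sketch of the linear-scaling arithmetic described above; the base learning rate and iteration count are illustrative placeholders, not the exact values in the released configs.

```python
# Linear scaling rule: keep lr / effective_batch_size constant.
# Base (paper) setting: 8 GPUs x 2 images per GPU.
base_gpus, base_bs = 8, 2
base_lr, base_iters = 0.02, 90000            # placeholder values
base_total = base_gpus * base_bs             # 16 images per step

# Released 16-GPU setting: the effective batch doubles, so the lr is
# doubled and the iteration count is halved to keep the total number
# of training samples roughly the same.
new_gpus, new_bs = 16, 2
new_total = new_gpus * new_bs                     # 32 images per step
new_lr = base_lr * new_total / base_total         # 0.04
new_iters = base_iters * base_total // new_total  # 45000
```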

krbuettner commented 1 year ago

Thank you for the response. I am asking because I plan to run on fewer GPUs (4) and may need to change the batch size in the codebase. Do you know where I can adjust per-GPU batch size? By default, would the per-gpu batch size just remain 2 if using fewer GPUs?

wusize commented 1 year ago

Hi! Please refer to this config file.
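
(The referenced config file is not reproduced here. As a rough orientation, in MMDetection 3.x-style configs the per-GPU batch size usually sits on the train dataloader; the snippet below is a generic sketch, not ovdet's actual config.)

```python
# Generic MMDetection 3.x-style config fragment (illustrative only).
train_dataloader = dict(
    batch_size=2,        # this is the per-GPU batch size
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(type='CocoDataset'),  # dataset/pipeline fields omitted
)
```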

wusize commented 1 year ago

Increase the per-GPU batch size when you use fewer GPUs. Otherwise, linearly adjust the learning rate and increase the total number of iterations.
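
Concretely, for a 4-GPU run this advice could look like the config override sketched below (assuming the default per-GPU batch size of 2 and a hypothetical base lr of 0.02 over 90k iterations; substitute the values from the actual config).

```python
# Option A: double the per-GPU batch size so the effective batch
# stays at 4 x 4 = 16; lr and iteration count can stay unchanged.
train_dataloader = dict(batch_size=4)

# Option B: keep batch_size=2 (effective batch 4 x 2 = 8), then
# halve the lr and double the iterations per the linear scaling rule.
optim_wrapper = dict(optimizer=dict(lr=0.01))                  # 0.02 / 2
train_cfg = dict(type='IterBasedTrainLoop', max_iters=180000)  # 2 x 90000
```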

krbuettner commented 1 year ago

Great, thank you for the guidance.

kunpeng337 commented 1 month ago

Hello, I encountered some problems while deploying the code, such as not being able to find the configs and checkpoints (and their expected locations) when running test.py, as shown in the screenshot below. I hope you can help me. My email is limi.1232321@gmail.com, and my QQ email is 120001098@qq.com.

[screenshot attachment]