krbuettner opened 1 year ago
By default we used 8 GPUs and set the per-GPU batch size to 2 for all experiments in the paper. However, when implementing the detector on MMDet3.x for the code release, we encountered a training-speed issue when using SyncBN. We sidestepped this issue by using more GPUs (16), adjusting the learning rate following the linear scaling rule, and training for half of the original iterations.
For the configs that do not use SyncBN, we retain the default setting of 8 GPUs.
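The adjustment described above follows directly from the linear scaling rule: the learning rate scales with the total batch size, and doubling the total batch size means half the iterations cover the same number of training samples. A minimal sketch of that arithmetic, using the numbers from the thread (8 GPUs × batch 2 in the paper, 16 GPUs in the release); the variable names are illustrative, not from the codebase:

```python
# Paper setup vs. released-config setup, as described above.
paper_gpus, per_gpu_batch = 8, 2
release_gpus = 16

paper_total_batch = paper_gpus * per_gpu_batch      # 8 * 2 = 16
release_total_batch = release_gpus * per_gpu_batch  # 16 * 2 = 32

# Linear scaling rule: LR scales linearly with total batch size.
lr_scale = release_total_batch / paper_total_batch  # 2.0 -> double the LR

# Twice the samples per iteration -> half the iterations see the
# same total number of training samples.
iter_scale = paper_total_batch / release_total_batch  # 0.5 -> half the iters

print(f"LR x{lr_scale}, iterations x{iter_scale}")
```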
Thank you for the response. I am asking because I plan to run on fewer GPUs (4) and may need to change the batch size in the codebase. Do you know where I can adjust the per-GPU batch size? By default, would the per-GPU batch size just remain 2 if I use fewer GPUs?
Hi! Please refer to this config file.
You can increase the per-GPU batch size when you use fewer GPUs. Alternatively, linearly adjust the learning rate and increase the total number of iterations.
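The second option (keep per-GPU batch size at 2, shrink the total batch, and compensate in the schedule) can be sketched as a small helper. This is a hypothetical function, not part of the released codebase, and the baseline LR and iteration count below are placeholders, not the paper's actual values:

```python
def scale_schedule(base_lr, base_iters, base_total_batch, gpus, per_gpu_batch):
    """Linear scaling rule: scale the LR with the total batch size and
    stretch the schedule so the same number of samples is seen.
    Hypothetical helper for illustration only."""
    total_batch = gpus * per_gpu_batch
    scale = total_batch / base_total_batch
    return base_lr * scale, int(round(base_iters / scale))

# Baseline (illustrative numbers): 8 GPUs x 2 = 16 total batch.
# Moving to 4 GPUs x 2 = 8 halves the LR and doubles the iterations.
lr, iters = scale_schedule(base_lr=1e-4, base_iters=90_000,
                           base_total_batch=16, gpus=4, per_gpu_batch=2)
print(lr, iters)
```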
Great, thank you for the guidance.
Hello, I ran into some problems while deploying the code: when running test.py, the configs and checkpoints could not be found at their expected paths (as shown below). I hope you can help me. My email is limi.1232321@gmail.com, and my QQ email is 120001098@qq.com.
Hello, I was wondering if there is information on the GPU count, batch size, and GPU type for the results reported in the paper. Thanks!