aim-uofa / AdelaiDet

AdelaiDet is an open source toolbox for multiple instance-level detection and recognition tasks.
https://git.io/AdelaiDet

about batch_size of BoxInst #325

Open zhaoxiaodong789 opened 3 years ago

zhaoxiaodong789 commented 3 years ago

Dear Author: If we reduce the batch_size, should we also linearly scale the lr accordingly?

tianzhi0549 commented 3 years ago

Scaling the lr might result in better performance.

zhaoxiaodong789 commented 3 years ago

Hello! Because of limited GPU resources, I have to reduce the batch_size to run your code.

I followed your advice and linearly scaled the lr according to the batch_size: my lr is 0.005 and the batch_size is 8. However, I got the following results (log1.txt, log2.txt), which confuse me. The AP is about 27%, which is ~2% lower than yours.

Could you please help me solve this problem? I think it is very important for those with limited GPU resources.

tianzhi0549 commented 3 years ago

@zhaoxiaodong789 If you halve the batch size, then to get similar performance you should also double the training schedule (i.e., the number of training iterations).
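For reference, here is a minimal sketch of that adjustment with the yacs-style config used by detectron2/AdelaiDet. The config path and the default 1x values (16 images per batch, lr 0.01, 90k iterations) are assumptions based on the standard BoxInst R-50 1x schedule; check your own config file for the actual defaults.

```python
# Sketch: halve the batch size, halve the lr, double the schedule.
# Assumes AdelaiDet's config helper and the BoxInst MS_R_50_1x config file.
from adet.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file("configs/BoxInst/MS_R_50_1x.yaml")  # assumed config path

# Assumed 1x defaults: IMS_PER_BATCH=16, BASE_LR=0.01,
# MAX_ITER=90000, STEPS=(60000, 80000).
cfg.SOLVER.IMS_PER_BATCH = 8          # half the default batch size
cfg.SOLVER.BASE_LR = 0.005            # lr scaled linearly with the batch size
cfg.SOLVER.MAX_ITER = 180000          # schedule doubled (2 x 90k)
cfg.SOLVER.STEPS = (120000, 160000)   # lr-decay milestones doubled as well
```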

zhaoxiaodong789 commented 3 years ago

Dear Author: It's very kind of you to answer my question so quickly!

I'm sorry to say that I used MMDetection2 before, and this is my first time using Detectron2. I followed the official documentation to change some parameters, but it doesn't mention the iteration setting, as shown in the attached screenshot.

I'll change the number of iterations and re-run your code. Thanks very much!
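For anyone else with the same question: these values can also be passed as command-line overrides to the training script via detectron2's standard opts mechanism, e.g. appending `SOLVER.IMS_PER_BATCH 8 SOLVER.BASE_LR 0.005 SOLVER.MAX_ITER 180000 SOLVER.STEPS "(120000, 160000)"` to the `tools/train_net.py` command instead of editing the config file. The numbers here assume you are starting from the 90k-iteration 1x schedule.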

guanyichen commented 3 years ago

@zhaoxiaodong789 Thanks for the hint. This setting works fine in the Google Colab environment with limited GPU resources (NVIDIA Tesla V100, 16 GB).