zhaoxiaodong789 opened this issue 3 years ago
Scaling the lr might result in better performance.
Hello! Because of limited GPU resources, I have to reduce the batch_size to run your code.
I followed your advice and linearly scaled the lr according to the batch_size: my lr is 0.005 with a batch_size of 8. However, I got the following results (log1.txt, log2.txt), which confuse me. The AP is about 27%, roughly 2% lower than yours.
Could you please help me solve this problem? I think it is very important for those who have limited GPU resources.
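For context, the linear scaling rule simply multiplies the reference learning rate by the ratio of the new batch size to the reference batch size. A minimal sketch, assuming the reference config trains with a total batch size of 16 and a base lr of 0.01 (both assumed values, consistent with the 0.005 quoted above; check the project's config for the actual numbers):

```python
# Minimal sketch of the linear scaling rule. REF_BATCH_SIZE and REF_BASE_LR
# are assumptions, not values taken from this project's config.
REF_BATCH_SIZE = 16
REF_BASE_LR = 0.01

def scaled_lr(new_batch_size: int) -> float:
    """Scale the base learning rate linearly with the batch size."""
    return REF_BASE_LR * new_batch_size / REF_BATCH_SIZE

print(scaled_lr(8))  # 0.005, the lr used above
```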
@zhaoxiaodong789 If you halve the batch size, then to get similar performance you should also double the training schedule accordingly.
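A hedged sketch of what this combined adjustment could look like with Detectron2's solver options. The reference values (16 images per batch, lr 0.01, 90k iterations, decay steps at 60k/80k, 1k warmup) are assumptions about a typical 1x schedule, not taken from this project; the actual numbers should come from its own config:

```python
# Halve the batch size, scale the lr linearly, and double the schedule.
from detectron2.config import get_cfg

cfg = get_cfg()
# cfg.merge_from_file("path/to/the/project/config.yaml")  # hypothetical path

cfg.SOLVER.IMS_PER_BATCH = 8           # half of the assumed reference 16
cfg.SOLVER.BASE_LR = 0.005             # 0.01 * 8 / 16, linear scaling rule
cfg.SOLVER.MAX_ITER = 180000           # 2x the assumed 90k-iteration schedule
cfg.SOLVER.STEPS = (120000, 160000)    # lr-decay milestones doubled as well
cfg.SOLVER.WARMUP_ITERS = 2000         # warmup stretched by the same factor
```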
Dear Author: It's very kind of you to answer my question so quickly!
I'm sorry to say that I used MMDetection2 before, and this is my first time using Detectron2. I followed the official documentation to change some parameters, but it doesn't mention the iteration count, as shown in the picture below.
I'll change the iteration count and re-run your code. Thanks very much!
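One way to see why doubling the iteration count compensates for the halved batch size: the total number of images processed during training stays the same. A small check, under the same assumed reference schedule as above:

```python
# Sanity check: halving the batch size while doubling MAX_ITER keeps the
# total number of training images (and hence the number of epochs) unchanged.
# Reference numbers (batch 16, 90k iterations) are assumed, as above.
images_reference = 16 * 90_000   # assumed original schedule
images_reduced = 8 * 180_000     # batch size 8 with doubled iterations
assert images_reference == images_reduced   # 1,440,000 images in both cases
```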
@zhaoxiaodong789 Thanks for the hint. This setting works fine in a Google Colab environment with limited GPU resources (NVIDIA Tesla V100, 16 GB).
Dear Author: If we reduce the batch_size, should we still linearly scale the lr?