It seems like your learning rate is too small.
Also, you are using RepeatDataset with the repeat number set to 3, so you should set max_epochs=5 to train your model for 15 effective epochs.
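For context, the VOC configs wrap the training set in a RepeatDataset, roughly like this (an abridged sketch from memory; dataset paths omitted):

```python
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type='RepeatDataset',
        times=3,  # one runner "epoch" = 3 passes over VOC07+12
        dataset=dict(type='VOCDataset')),  # ann_file/img_prefix omitted
)
```

With times=3, runner.max_epochs=5 corresponds to 5 × 3 = 15 actual passes over the data.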
Hi @AronLin! Thanks for your reply and time.
I used the learning rate as described here (https://github.com/open-mmlab/mmdetection/blob/2a856efb6f79afe8daa5ac97bf1711ca41b5fbb0/docs/1_exist_data_model.md#:~:text=Important%3A%20The%20default,4%20imgs/gpu.), so I think lr=0.005 should be fine.
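Applying the linear scaling rule from that page to my setup (3 GPUs × 2 imgs/gpu = 6 images per iteration, versus the default 8 × 2 = 16 at lr=0.02) would give lr = 0.02 × 6/16 = 0.0075, so lr=0.005 is only slightly more conservative than the scaled default.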
I did the experiment with the same configuration as above on a single GPU with lr=0.001, and got an mAP of 65.2 when training for max_epochs=5. The command I used was:

```shell
python tools/train.py configs/pascal_voc/faster_rcnn_r50_caffe_dc5_voc0712.py --work-dir work_dirs/res50dc5_voc_singlegpu --cfg-options data.samples_per_gpu=2 optimizer.lr=0.001 runner.max_epochs=5
```
I also ran the same experiment with a VGG16 backbone, again on 3 GPUs, and hit the same issue: after training for max_epochs=30 the mAP was only around 39. I think the problem occurs when I train on multiple GPUs. What do you think about it?
Thanks again for the help!!
I trained faster_rcnn_r50_fpn_1x_voc0712 with 3 GPUs and got 80.9 mAP. I did not change any settings except the number of GPUs. Here is the command:

```shell
GPUS=3 GPUS_PER_NODE=3 CPUS_PER_TASK=5 ./tools/slurm_train.sh openmmlab voc ./configs/pascal_voc/faster_rcnn_r50_fpn_1x_voc0712.py --work-dir my_work_dir
```
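If you are not on a Slurm cluster, the equivalent launch with the standard distributed script should be (the work-dir name is just a placeholder):

```shell
./tools/dist_train.sh configs/pascal_voc/faster_rcnn_r50_fpn_1x_voc0712.py 3 --work-dir my_work_dir
```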
You can try to train your model without setting --cfg-options and directly use the original settings. Maybe setting lr=0.01, samples_per_gpu=2, and max_epochs=4 will be better; too many iterations can lead to overfitting.
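For reference, the defaults in the 1x VOC schedule resolve to roughly the following (a sketch from memory, so double-check against the config files in your checkout):

```python
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
# 4 runner epochs × RepeatDataset times=3 = 12 effective passes over the data
runner = dict(type='EpochBasedRunner', max_epochs=4)
```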
**Describe the bug**
Training of Faster R-CNN with a ResNet-50-DC5 backbone on VOC07 converges very slowly: I obtained only 0.208 mAP after 12 epochs.
**Reproduction**
Dataset: PASCAL VOC07

**Environment**

**Configs**
**Bug fix**
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here; that would be much appreciated!