eriklindernoren / PyTorch-YOLOv3

Minimal PyTorch implementation of YOLOv3
GNU General Public License v3.0

very low mAP on coco val2014 when training from scratch #818

Open fyw1999 opened 1 year ago

fyw1999 commented 1 year ago

The code is elegant and concise, but the training performance on coco val2014 is poor. The mAP is only 0.00912 after 24 epochs when I train the model from scratch.

Flova commented 1 year ago

Did you use the imagenet pretrained backbone weights (weights/darknet53.conv.74)?

Training totally from random initialization is not feasible on coco in 24 epochs. Even if you use the pretrained backbone, more than 50 epochs are needed. Ideally you train for a couple hundred epochs (the default value is 300).
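
For reference, here is a minimal sketch of loading the ImageNet-pretrained backbone before training. The `Darknet` class and its `load_darknet_weights` method are assumed from this repo's `models.py`; import paths differ between versions, so check `train.py` for the canonical command-line way of passing pretrained weights.

```python
# Sketch only: class/method names assumed from this repo's models.py;
# verify against the version of the repo you are running.
from models import Darknet  # assumed import path

model = Darknet("config/yolov3.cfg")
# Load only the ImageNet-pretrained Darknet-53 backbone (the first 74 conv layers);
# the detection head stays randomly initialized and is learned on COCO.
model.load_darknet_weights("weights/darknet53.conv.74")
```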

fyw1999 commented 1 year ago

> Did you use the imagenet pretrained backbone weights (weights/darknet53.conv.74)?
>
> Training totally from random initialization is not feasible on coco in 24 epochs. Even if you use the pretrained backbone, more than 50 epochs are needed. Ideally you train for a couple hundred epochs (the default value is 300).

Thanks for your quick reply! I forgot to load the imagenet pretrained backbone weights before, but now I have loaded them and trained the model for 60 epochs, and the mAP is 0.03232. Is this a normal value? Can you give me some suggestions? Thank you very much!

Flova commented 1 year ago

I also started a training run after you opened the issue, and I also have an mAP of ~3 at the same epoch. I would train for a couple hundred more epochs and maybe try to tune the hyperparameters a bit. I don't train coco from scratch that often with this repo; I mostly train on in-house datasets and get mAPs in the high 90s for those, but the default hyperparameters should work for coco, so I will check that.

fyw1999 commented 1 year ago

> I also started a training run after you opened the issue, and I also have an mAP of ~3 at the same epoch. I would train for a couple hundred more epochs and maybe try to tune the hyperparameters a bit. I don't train coco from scratch that often with this repo; I mostly train on in-house datasets and get mAPs in the high 90s for those, but the default hyperparameters should work for coco, so I will check that.

Thank you for your attention to this issue. I have now trained from the imagenet pretrained backbone for 114 epochs, but the mAP is only 0.03480. I suspect that even after 300 training epochs the performance will not be good.

Flova commented 1 year ago

That is strange. You could try to deactivate the data augmentation and see what happens.

Flova commented 1 year ago

I am currently testing a hyperparameter set that achieves 11.6 mAP at epoch 3. I'll keep you updated.

fyw1999 commented 1 year ago

> I am currently testing a hyperparameter set that achieves 11.6 mAP at epoch 3. I'll keep you updated.

I trained the code with the default hyperparameters on my own dataset, and it works well. So I think the default hyperparameters are not suitable for the coco dataset. What dataset did you test with? Is it coco? Thanks for sharing.

Flova commented 1 year ago

I am currently trying to find better hyperparameters for coco and already have a few promising sets. Sadly, training on coco takes quite some time. Even if you are running 4 nodes with different hyperparameters, progress is slow :/ I'll keep you updated.

fyw1999 commented 1 year ago

> I am currently trying to find better hyperparameters for coco and already have a few promising sets. Sadly, training on coco takes quite some time. Even if you are running 4 nodes with different hyperparameters, progress is slow :/ I'll keep you updated.

I think the reason for the poor performance on the coco dataset is that the learning rate decays so fast that it is too small after 50 epochs. In your code, the lr is multiplied by 0.1 once the epoch is greater than 50 and by 0.01 once it is greater than 56.
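
For illustration, here is a small self-contained sketch of the step schedule described above (full LR up to epoch 50, then x0.1, then x0.01 after epoch 56), expressed with a plain `LambdaLR`. The base LR of 1e-3 and the Adam optimizer are assumptions matching the defaults discussed in this thread, not a copy of the repo's training loop:

```python
import torch

model = torch.nn.Linear(10, 10)  # stand-in for the YOLO network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def step_decay(epoch):
    # The decay described above: a few epochs past 56 the LR
    # is only 1% of its starting value (1e-5 here).
    if epoch > 56:
        return 0.01
    if epoch > 50:
        return 0.1
    return 1.0

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=step_decay)

for epoch in range(70):
    # ... one epoch of training would go here ...
    optimizer.step()   # placeholder step so the optimizer/scheduler order is valid
    scheduler.step()
    if epoch in (50, 51, 56, 57):
        print(epoch, scheduler.get_last_lr())
```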

Flova commented 1 year ago

Exactly. This is one of the reasons. I implemented a few fixes and training is still going on. They include:

  • Fix for burn in (leads to better "initialization" in the beginning). It was skipped due to an off-by-one error...
  • Linear interpolation for LR decay
  • Decay LR based on optimizer steps and not the number of batches, as they differ due to gradient accumulation; this leads, as you suggested, to a slower decay
  • Usage of SGD with Nesterov momentum instead of Adam (brings a surprising benefit with some hyperparameters)
  • Higher initial learning rate (0.01 for SGD)
  • Multiplication of the loss by the mini-batch size to account for split gradients

I will create a PR soon, but I am currently on vacation and my training is still running.
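
To make the list above concrete, here is a rough sketch of what those changes could look like in plain PyTorch: SGD with Nesterov momentum at lr=0.01, a burn-in (warm-up) phase, linear interpolation between decay boundaries, and a scheduler that is stepped per optimizer update rather than per batch. The burn-in length, decay boundaries, and accumulation interval below are illustrative assumptions, not the values from the eventual PR:

```python
import torch

model = torch.nn.Linear(10, 10)  # stand-in for the YOLO network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, nesterov=True)

burn_in = 1000                                # warm-up length in optimizer steps (assumed)
decay_points = [(40000, 0.1), (45000, 0.01)]  # (optimizer step, LR factor), assumed

def lr_factor(step):
    if step < burn_in:
        # darknet-style polynomial burn-in towards the full learning rate
        return (step / burn_in) ** 4
    factor, prev = 1.0, burn_in
    for boundary, target in decay_points:
        if step >= boundary:
            factor, prev = target, boundary
        else:
            # linear interpolation towards the next decay target instead of a hard step
            frac = (step - prev) / (boundary - prev)
            return factor + frac * (target - factor)
    return factor

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)

accumulate = 4  # gradient-accumulation interval in batches (assumed)
for batch_idx in range(20):
    # forward pass, loss * mini_batch_size, loss.backward() would go here
    if (batch_idx + 1) % accumulate == 0:
        optimizer.step()
        optimizer.zero_grad()
        scheduler.step()   # decay per optimizer step, not per batch
```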

Flova commented 1 year ago

[Screenshot of training curves: Screenshot_20230326_220921_Chrome]

Sorry for the bad phone screenshot, my laptop broke during my vacation... Green is SGD with a higher LR etc., and blue is the beginning of the training with the fixed burn-in and a longer, interpolated LR decay.

J-LINC commented 1 year ago

> Exactly. This is one of the reasons. I implemented a few fixes and training is still going on. They include:
>
>   • Fix for burn in (leads to better "initialization" in the beginning). It was skipped due to an off-by-one error...
>   • Linear interpolation for LR decay
>   • Decay LR based on optimizer steps and not the number of batches, as they differ due to gradient accumulation; this leads, as you suggested, to a slower decay
>   • Usage of SGD with Nesterov momentum instead of Adam (brings a surprising benefit with some hyperparameters)
>   • Higher initial learning rate (0.01 for SGD)
>   • Multiplication of the loss by the mini-batch size to account for split gradients
>
> I will create a PR soon, but I am currently on vacation and my training is still running.

May I ask if you are using the Adam optimizer from the .cfg without making any changes?

Flova commented 1 year ago

Adam is the default at the moment afaik

J-LINC commented 1 year ago

> Adam is the default at the moment afaik

Wow, can you show me the modification strategy you mentioned above in yolov3.cfg? I see you mentioned that changing Adam to SGD might work better.

J-LINC commented 1 year ago

> Adam is the default at the moment afaik

I found that there are only four data augmentation operations, which rotate the image and change its saturation and so on, and they are the first entries in the cfg file. What else is there? I also found that there seems to be only one strategy for adjusting the learning rate, which is multiplying it by 0.1 at the configured steps.
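
If you want to check those fields yourself, here is a small self-contained sketch that reads the [net] section of a darknet-style cfg file and prints the augmentation and learning-rate entries. The field names (angle, saturation, exposure, hue, learning_rate, burn_in, policy, steps, scales) follow the standard yolov3.cfg layout and are assumptions about your config file, not this repo's API:

```python
# Minimal cfg reader, independent of the repo's own parser.
def read_net_section(cfg_path):
    net, in_net = {}, False
    with open(cfg_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if line.startswith("["):
                in_net = (line == "[net]")
                continue
            if in_net and "=" in line:
                key, value = (part.strip() for part in line.split("=", 1))
                net[key] = value
    return net

if __name__ == "__main__":
    net = read_net_section("config/yolov3.cfg")  # path assumed
    for key in ("angle", "saturation", "exposure", "hue",
                "learning_rate", "burn_in", "policy", "steps", "scales"):
        print(f"{key} = {net.get(key)}")
```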

J-LINC commented 1 year ago

> Adam is the default at the moment afaik

I'm sorry, I'm asking a few too many questions, but I really want to know, since it might be useful for me when training on large datasets.

maximelianos commented 1 year ago

I can confirm the current settings work well for COCO, using the pretrained darknet weights. On the COCO test set this checkpoint gets mAP 0.52318.

lmz123321 commented 10 months ago

Setting the learning rate to 1e-3 and disabling the LR decrease helps in my case.