ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Hunt for the highest mAP #2313

Closed tiwarikaran closed 3 years ago

tiwarikaran commented 3 years ago

Question

How to increase mAP?

Additional context

I trained YOLOv5 for around 1500 epochs on a single class. The images I downloaded were from Google, and most of them were fairly small (the majority were < 640 px). The mAP I got after training was ~0.3. Any ideas how to increase this? I am relatively new to the YOLO family.

Any help, any comment, any single idea would be of much help.

THANKS!

These are the labels: [labels image]

These are the predictions: [predictions image]

We can talk more here https://www.linkedin.com/in/karan-tiwari-a86673200/

github-actions[bot] commented 3 years ago

👋 Hello @tiwarikaran, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt
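
If you're starting from scratch, that means cloning the repository first and installing from inside it:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt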

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

- Google Colab and Kaggle notebooks with free GPU
- Google Cloud Deep Learning VM. See GCP Quickstart Guide
- Amazon Deep Learning AMI. See AWS Quickstart Guide
- Docker Image. See Docker Quickstart Guide

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 3 years ago

👋 Hello @tiwarikaran! Thanks for asking about improving training results. Most of the time good results can be obtained with no changes to the models or training settings, provided your dataset is sufficiently large and well labelled. If at first you don't get good results, there are steps you might be able to take to improve, but we always recommend users first train with all default settings before considering any changes. This helps establish a performance baseline and spot areas for improvement.

If you have questions about your training results we recommend you provide the maximum amount of information possible if you expect a helpful response, including results plots (train losses, val losses, P, R, mAP), PR curve, confusion matrix, training mosaics, test results and dataset statistics images such as labels.png. All of these are located in your project/name directory, typically yolov5/runs/train/exp.

We've put together a full guide for users looking to get the best results on their YOLOv5 trainings below.

Dataset

[COCO Analysis chart]

Model Selection

Larger models like YOLOv5x will produce better results in nearly all cases, but have more parameters and are slower to run. For mobile applications we recommend YOLOv5s/m, for cloud or desktop applications we recommend YOLOv5l/x. See our README table for a full comparison of all models.

To start training from pretrained weights simply pass the name of the model to the --weights argument. Models download automatically from the latest YOLOv5 release.

python train.py --data custom.yaml --weights yolov5s.pt
                                             yolov5m.pt
                                             yolov5l.pt
                                             yolov5x.pt

[YOLOv5 Models comparison chart]

Training Settings

Before modifying anything, first train with default settings to establish a performance baseline. A full list of train.py settings can be found in the train.py argparser.
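
For reference, the full argument list can be printed from the command line, and a typical baseline run needs only a handful of flags (the dataset name below is illustrative):

$ python train.py --help
$ python train.py --data custom.yaml --weights yolov5s.pt --img 640 --batch 16 --epochs 300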

tiwarikaran commented 3 years ago

Hello @glenn-jocher, your wonderful and insightful comment has helped me in many unexpected ways. I thought I'd write that out here in case a fellow student like me finds the knowledge useful. Thanks again for this.

I ran the same code this afternoon and it gave me wonderful results. Here's a snippet of the metrics from the latest experiment I ran.

[metrics screenshot]

Also, when I saw your profile I learnt you're one of the many people who've made YOLO what it is today. I would say using YOLOv5 is easier than many of the AutoML libraries out there. YOLOv4 said that this should be accessible to everyone; v5 stands firmly on those grounds.

PS YOLO (you only look once) is hands down the coolest name for an algorithm that exists in Computer Vision.

glenn-jocher commented 3 years ago

@tiwarikaran thanks buddy! Yes, looking at your results everything looks good, though as the guide says, since your val losses have not started overfitting you can actually benefit from even longer training, e.g. 2k or 3k epochs. With small datasets especially, much longer training is sometimes required to get the best results.
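
For example, restarting with a longer schedule is just a matter of raising --epochs (the value and dataset name below are illustrative):

$ python train.py --data custom.yaml --weights yolov5s.pt --epochs 3000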

glenn-jocher commented 3 years ago

@tiwarikaran also you should be very careful with your labels, I noticed not all instances are labelled in all images, which is going to produce worse results.

kinoute commented 3 years ago

Epochs. Start with 300 epochs. If this overfits early then you can reduce epochs. If overfitting does not occur after 300 epochs, train longer, i.e. 600, 1200 etc epochs.

@glenn-jocher Is there no way to apply early stopping? I can see we're getting a best.pt model at the end of training, which seems to be selected according to an equation mixing Precision, Recall, mAP@.5 and mAP@.5:.95 here:

https://github.com/ultralytics/yolov5/blob/c2026a5f35fd632c71b10fdbaf9194e714906f02/train.py#L376-L379
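
For reference, the fitness that best.pt maximizes boils down to a weighted sum of those four metrics. A minimal sketch, matching utils/metrics.py around that commit (treat the exact weights as illustrative):

import numpy as np

def fitness(x):
    # x: array of shape (n, 4) holding [P, R, mAP@0.5, mAP@0.5:0.95] per epoch
    w = np.array([0.0, 0.0, 0.1, 0.9])  # mAP@0.5:0.95 dominates the score
    return (x[:, :4] * w).sum(1)  # one fitness value per row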

Is it similar to what other kinds of models do in image classification, where training stops if the validation loss stops decreasing after X epochs (known as "patience")?

To reformulate my question: is it safe to always use the best.pt model, since it shouldn't show any sign of overfitting, or should we maybe implement an early-stopping feature that tracks the validation loss instead?

Thanks!

glenn-jocher commented 3 years ago

@kinoute I think the 'early stopping' term is widely used in TF and officially supported there, but you are right that there is no comparable functionality here. So if you train with --epochs 1000 and overfitting occurs after 100 epochs, it will keep training all the way to 1000.

best.pt will work as intended, saved at the maximum fitness. We used to define fitness as inverse val loss (so best.pt would be saved at the min val loss epoch), but based on user feedback we changed this to the current combination of metrics. The two are rarely the same epoch, but they are usually in the same vicinity. Still, users don't like to see best.pt test to a lower mAP than the best mAP observed in training.

The most important point in the above user guide, though, is that you want to observe some overfitting in your results. If validation loss is lowest at the final epoch, as in https://github.com/ultralytics/yolov5/issues/2313#issuecomment-787132909, then you are not achieving your best performance and should restart your training with more epochs.
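
For readers who do want early stopping, a minimal patience-based check is easy to bolt onto a training loop. This is a hypothetical sketch, not a YOLOv5 feature at the time; the class name and usage are illustrative:

class EarlyStopper:
    def __init__(self, patience=30):
        self.patience = patience       # epochs to wait after the last improvement
        self.best_loss = float("inf")  # lowest validation loss seen so far
        self.bad_epochs = 0            # consecutive epochs without improvement

    def step(self, val_loss):
        # Returns True when training should stop.
        if val_loss < self.best_loss:
            self.best_loss = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

Inside a training loop you would call stopper.step(val_loss) once per epoch and break out when it returns True.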

github-actions[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

kinoute commented 3 years ago

@glenn-jocher Not sure if it's already the case but your comment https://github.com/ultralytics/yolov5/issues/2313#issuecomment-787128847 is great and it might be a good idea to add it to the Wiki!