ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Sudden performance decrease in training #5721

Closed gboeer closed 2 years ago

gboeer commented 2 years ago

Search before asking

Question

Hi, first, thanks for the great yolo implementation you provide.

In my most recent training I noticed some behavior I haven't seen before. The loss decreased nicely for many epochs and the performance metrics increased accordingly. Then the performance suddenly dropped by a large margin. I suspected an issue with the adaptive learning rate, but it is decreasing as expected. I'm fairly satisfied with the performance of the best model, but I'm curious whether there are other aspects of the training I could look into to debug this behavior.

I'm using the yolov5l6.pt model with pretrained weights and train on a custom dataset.

Additional

[results plot: losses and metrics showing the sudden drop]

github-actions[bot] commented 2 years ago

👋 Hello @Legor, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python>=3.6.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

[CI CPU testing badge]

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 2 years ago

@Legor that's an odd result. This may be due to a sudden spike in a loss component, perhaps from something in your dataset combining in an odd way during a particular augmentation. I suspect the reproducibility is near zero, however (if you retrain, do you get the same drop?), so this would be difficult to debug.

Zengyf-CVer commented 2 years ago

@Legor I have run into your problem many times, so I can share some of my experience. First of all, this sudden drop in AP is usually a problem with the custom dataset itself. This is how I resolve it (see the sketch after the list):

  1. Check for images whose instances do not participate in the training but still have labels. Such images need to be converted into negative samples, i.e. their label .txt file must be empty.
  2. Confirm that every instance is accounted for: instances that do not participate in the training should have their images assigned as negative samples.
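
A quick way to audit this on disk is to count which images have non-empty labels, empty labels, or no label file at all. This is a minimal sketch, assuming the standard YOLOv5 images/ and labels/ layout with matching file stems; the paths are placeholders for your own dataset:

from pathlib import Path

IMG_DIR = Path("dataset/images/train")   # placeholder: adjust to your layout
LBL_DIR = Path("dataset/labels/train")
IMG_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}

annotated, empty, missing = 0, 0, 0
for img in sorted(IMG_DIR.iterdir()):
    if img.suffix.lower() not in IMG_EXTS:
        continue
    lbl = LBL_DIR / (img.stem + ".txt")
    if not lbl.exists():
        missing += 1      # background image: no label file at all
    elif lbl.stat().st_size == 0:
        empty += 1        # background image: empty label file
    else:
        annotated += 1

print(f"annotated: {annotated}, empty labels: {empty}, no label file: {missing}")
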
gboeer commented 2 years ago

Hi, thanks for your comments. @glenn-jocher I can't confirm, as of now, whether this happens again in an identical training run. I will start a new training in time. In fact, I have trained several YOLO models on the same data before and never encountered this behavior until now.

@Zengyf-CVer I do have several negative samples in the dataset as well. To my understanding, those samples simply don't need an annotation file supplied with them; hence, for negative samples I just put the respective image in the image folder, with no (empty) text file for the annotations. I couldn't quite understand your second point. What do you mean by the instances have to be allocated? Do you mean loaded into memory, and if so, why would unallocated images be handled as negatives?

Edit: It seems to me that maybe you meant annotated and not allocated? Because then it would make more sense to me ;)

glenn-jocher commented 2 years ago

@Legor yes, for background images you can simply place images in your images directories; no label files are necessary.

Zengyf-CVer commented 2 years ago

@Legor A simple example: say your dataset has 30 categories but you only train on 20 of them. Some images will then contain only instances of the remaining 10 categories. Pay attention to these: since none of their instances belong to the 20 classes you use, you should turn such images into negative samples, with an empty label file.
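
In other words, you can filter each label file down to the classes you actually train on and leave an empty file when nothing remains. Here is a rough sketch, assuming a YOLOv5-style labels directory; USED_CLASSES and the path are placeholders:

from pathlib import Path

LBL_DIR = Path("dataset/labels/train")   # placeholder path
USED_CLASSES = set(range(20))            # e.g. keep classes 0-19 out of 30

for lbl in LBL_DIR.glob("*.txt"):
    lines = lbl.read_text().splitlines()
    kept = [ln for ln in lines if ln.strip() and int(ln.split()[0]) in USED_CLASSES]
    # An empty label file turns the image into an explicit negative sample.
    lbl.write_text("\n".join(kept) + ("\n" if kept else ""))

Note that if the kept class IDs are not already contiguous starting at 0, they (and the names in your data YAML) would also need to be remapped.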

github-actions[bot] commented 2 years ago

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

realgump commented 2 years ago

Hi, so how did you finally fix this issue? I have run into the same annoying problem during training: the mAP reaches 0.9666 at epoch 96 but then suddenly drops to 0. I use single-class training, so I shouldn't need to worry about categories that don't participate in the training. I have generated my custom dataset with the same method many times, and everything has gone well except this time.
[mAP plot showing the sudden drop to 0]

glenn-jocher commented 2 years ago

@realgump is this reproducible? If you train again does the same thing happen?

I suspect it may be a dataset issue, as I've not seen this on any of the official datasets.

sonovice commented 2 years ago

Just encountered a similar behaviour. I am training with custom data (15k images) that contains many tiny objects. For unknown reasons, all metrics drop significantly around epoch 25 and do not fully recover, even after training for 100 more epochs.

[training metrics plots showing the drop around epoch 25]

This is my training command utilizing GPUs (4x RTX 2080 Ti):

python -m torch.distributed.launch --nproc_per_node 4 train.py \
    --data dataset/dataset.yaml \
    --cfg yolov5l6.yaml \
    --weights "" \
    --img 1280 \
    --hyp datasets/hyper.yaml \
    --save-period 10 \
    --epochs 500 \
    --batch-size 12 \
    --device 0,1,2,3 \
    --name yolo_2022-07-15_DDP_seed_10 \
    --seed 10

The first two training runs had identical configurations, so I changed the random seed for the third (blue) run to rule out a very unfortunate combination of images and augmentations at the same step in training. It helped slightly, but the performance is still worse than before.

Augmentation in general is very limited to only scale (0.4) and translation (0.3), but due to heavy class imbalance I opted for fl_gamma = 1.0.
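
For context, fl_gamma > 0 switches the BCE classification/objectness terms to a focal loss that down-weights easy examples. Below is a minimal sketch of the sigmoid focal loss from Lin et al.; it is an illustration of the idea, not the exact YOLOv5 code:

import torch
import torch.nn as nn

def focal_bce(pred, true, gamma=1.0, alpha=0.25):
    # Standard BCE-with-logits, kept per-element so it can be re-weighted.
    bce = nn.BCEWithLogitsLoss(reduction="none")(pred, true)
    p = torch.sigmoid(pred)
    p_t = true * p + (1 - true) * (1 - p)                  # probability of the true class
    alpha_t = true * alpha + (1 - true) * (1 - alpha)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()     # gamma=0 recovers plain BCE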

Since these runs seem to be more reproducible than the ones above, they might give a hint at where to start looking for the cause of such a performance drop? (Unfortunately, I cannot share the dataset for legal reasons, but I will happily reproduce the runs with any helpful suggestions.)

glenn-jocher commented 2 years ago

@sonovice can you try training with the gradient clipping PR here? https://github.com/ultralytics/yolov5/pull/8598
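
For reference, gradient clipping caps the global gradient norm before each optimizer update. A generic PyTorch sketch of the idea (not the exact change in the linked PR; max_norm is a placeholder value):

import torch

def train_step(model, imgs, targets, loss_fn, optimizer, max_norm=10.0):
    # One training step with gradient-norm clipping applied before the update.
    loss = loss_fn(model(imgs), targets)
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=max_norm)
    optimizer.step()
    return loss.item()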

sonovice commented 2 years ago

@glenn-jocher Unfortunately it did not help: [training metrics plot showing the same drop]

glenn-jocher commented 2 years ago

@sonovice hmm really strange. There might be something wrong with your dataset in that case, especially since we don't see anything similar on the other datasets, i.e. COCO, VOC, Objects365 etc.

sonovice commented 2 years ago

@glenn-jocher I don't want to rule that out, but it's a bit surprising that it works for many epochs up to the point of failure. Could the implementation of Focal Loss and the imbalanced dataset play a role in this? Or the high count of tiny objects?

I have checked examples of all classes visually with fiftyone and did not notice any errors. The dataset is generated artificially, so the possibility of erroneous annotations is rather slim.

Are there any internals that would be worth logging to get a better understanding of this problem?

glenn-jocher commented 2 years ago

@sonovice focal loss is not recommended. And of course I can't speak to any other performance except that of the default hyperparameters. Once you start playing with those you are on your own.

sonovice commented 2 years ago

@glenn-jocher Thank you. I will try configurations without focal loss and also one with no augmentation at all. Will post again when results arrive.

If focal loss is not recommended, are there any other ways to fix class imbalance other than smarter image sampling?
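
One common alternative is class-balanced image sampling, e.g. with PyTorch's WeightedRandomSampler. The sketch below assumes you have already counted per-image class instances from your label files; the helper is not part of the YOLOv5 API:

import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler

def make_balanced_sampler(image_class_counts):
    # image_class_counts: array of shape (n_images, n_classes) with instance counts.
    counts = np.asarray(image_class_counts, dtype=float)
    class_freq = counts.sum(axis=0) + 1e-6
    # Weight each image by the inverse frequency of the classes it contains.
    weights = (counts / class_freq).sum(axis=1) + 1e-6
    return WeightedRandomSampler(torch.as_tensor(weights, dtype=torch.double),
                                 num_samples=len(weights), replacement=True)

YOLOv5's train.py also exposes an --image-weights flag that resamples images each epoch based on per-class performance, which is another option worth trying.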

glenn-jocher commented 2 years ago

@sonovice class imbalance is present in every dataset, and default training already performs well on these datasets. I would simply review the Tips for Best Results tutorial below and ensure you are in alignment there on your dataset statistics.

Tips for Best Training Results

Most of the time good results can be obtained with no changes to the models or training settings, provided your dataset is sufficiently large and well labelled. If at first you don't get good results, there are steps you might be able to take to improve, but we always recommend users first train with all default settings before considering any changes. This helps establish a performance baseline and spot areas for improvement.

If you have questions about your training results we recommend you provide the maximum amount of information possible if you expect a helpful response, including results plots (train losses, val losses, P, R, mAP), PR curve, confusion matrix, training mosaics, test results and dataset statistics images such as labels.png. All of these are located in your project/name directory, typically yolov5/runs/train/exp.
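
To pinpoint the epoch where a run collapses, the per-epoch metrics can also be plotted directly from the run directory. A small sketch, assuming a YOLOv5 version that writes results.csv (column names vary slightly between versions, hence the whitespace stripping):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("runs/train/exp/results.csv")     # adjust to your run directory
df.columns = [c.strip() for c in df.columns]       # headers are padded with spaces
df.plot(x="epoch", y=["metrics/mAP_0.5", "metrics/mAP_0.5:0.95"])
plt.show()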

We've put together a full guide for users looking to get the best results on their YOLOv5 trainings below.

Dataset

[COCO dataset analysis figure]

Model Selection

Larger models like YOLOv5x and YOLOv5x6 will produce better results in nearly all cases, but have more parameters, require more CUDA memory to train, and are slower to run. For mobile deployments we recommend YOLOv5s/m, for cloud deployments we recommend YOLOv5l/x. See our README table for a full comparison of all models.

[YOLOv5 model comparison chart]

Training Settings

Before modifying anything, first train with default settings to establish a performance baseline. A full list of train.py settings can be found in the train.py argparser.
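
For example, a typical baseline run with all default hyperparameters might look like the command below (coco128.yaml is the small example dataset shipped with the repo; substitute your own data YAML):

$ python train.py --data coco128.yaml --weights yolov5s.pt --img 640 --epochs 100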

Further Reading

If you'd like to know more a good place to start is Karpathy's 'Recipe for Training Neural Networks', which has great ideas for training that apply broadly across all ML domains: http://karpathy.github.io/2019/04/25/recipe/

Good luck 🍀 and let us know if you have any other questions!

sonovice commented 2 years ago

@glenn-jocher Thanks for the tutorial. The dataset was in fact assembled following these recommendations.

It turns out the actual cause of the performance drop was indeed the many small (rather tiny) objects combined with too strong scaling augmentation. At some point the model essentially starts picking random pixels in the images. Increasing the model input resolution or slicing the images has helped to overcome this, at the expense of increased training/inference time.

gboeer commented 2 years ago

Hi @sonovice, I'm curious how you debugged this, since my dataset also contains many very small objects. Also, could you briefly explain what you mean by slicing the images?

Greetings

sonovice commented 2 years ago

@Legor I simply split my images into 2x3 slices, run object detection on all 6 slices individually, and merge the outputs with https://github.com/obss/sahi/
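
For anyone wanting to try this, here is a minimal sketch of sliced inference with SAHI on a YOLOv5 checkpoint. Class and argument names follow the SAHI documentation at the time and may differ between versions; the checkpoint path, image path, slice sizes and thresholds are placeholders:

from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov5",
    model_path="runs/train/exp/weights/best.pt",   # placeholder checkpoint path
    confidence_threshold=0.25,
    device="cuda:0",
)

# SAHI slices the image, runs detection on each slice and merges the results.
result = get_sliced_prediction(
    "image.jpg",                                   # placeholder image path
    detection_model,
    slice_height=640,
    slice_width=640,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
result.export_visuals(export_dir="runs/sahi/")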