ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Fine-tune dataset preparation #10216

Closed · HerneSong closed this issue 1 year ago

HerneSong commented 1 year ago

Search before asking

Question

Hello, I have a quick question. I have a pre-trained model with 10 classes, but on one class (pedestrian) the performance is not good enough, and I want to continue fine-tuning the model. My idea is to use a video in which we have cars and pedestrians. I know that even though I only want to fine-tune the pedestrian class, I still have to label every existing object in the video. But the video doesn't contain some of the other classes, say cyclists. Can I train on this dataset? Will this bias the model (i.e. degrade performance on classes that do not appear in the video)?

Additional

No response

github-actions[bot] commented 1 year ago

👋 Hello @HerneSong, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.

Requirements

Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Notebooks with free GPU (Google Colab, Kaggle)
Google Cloud Deep Learning VM (see the GCP Quickstart Guide)
Amazon Deep Learning AMI (see the AWS Quickstart Guide)
Docker Image (see the Docker Quickstart Guide)

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 1 year ago

@HerneSong 👋 Hello! Thanks for asking about improving YOLOv5 🚀 training results.

Most of the time good results can be obtained with no changes to the models or training settings, provided your dataset is sufficiently large and well labelled. If at first you don't get good results, there are steps you might be able to take to improve, but we always recommend users first train with all default settings before considering any changes. This helps establish a performance baseline and spot areas for improvement.

If you have questions about your training results we recommend you provide the maximum amount of information possible if you expect a helpful response, including results plots (train losses, val losses, P, R, mAP), PR curve, confusion matrix, training mosaics, test results and dataset statistics images such as labels.png. All of these are located in your project/name directory, typically yolov5/runs/train/exp.
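
For example, after a run you can inspect these artifacts directly from the shell (a sketch assuming the default runs/train/exp directory; exact file names can vary between YOLOv5 versions):

ls runs/train/exp
# results.csv            per-epoch losses and P/R/mAP metrics
# results.png            training curves plotted from results.csv
# PR_curve.png           precision-recall curve
# confusion_matrix.png   class confusion matrix
# labels.jpg             dataset label statistics
# train_batch0.jpg       training mosaic sample
# weights/               last.pt and best.pt checkpoints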

We've put together a full guide for users looking to get the best results on their YOLOv5 trainings below.

Dataset

Larger, well-labelled datasets give the best results: the guide recommends on the order of 1500+ images and 10000+ instances per class, label consistency (all instances of all classes labelled in every image), and roughly 0-10% background images to reduce false positives.

[Figure: COCO dataset analysis]

Model Selection

Larger models like YOLOv5x and YOLOv5x6 will produce better results in nearly all cases, but have more parameters, require more CUDA memory to train, and are slower to run. For mobile deployments we recommend YOLOv5s/m, for cloud deployments we recommend YOLOv5l/x. See our README table for a full comparison of all models.

[Figure: YOLOv5 model comparison chart]
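
As a concrete illustration, the model size is selected via the --weights argument to train.py; these are standard train.py flags, with data.yaml as a hypothetical dataset file:

python train.py --data data.yaml --weights yolov5s.pt --img 640  # small: fastest, lowest mAP
python train.py --data data.yaml --weights yolov5l.pt --img 640  # large: slower, higher mAP
python train.py --data data.yaml --weights yolov5x.pt --img 640  # extra large: slowest, highest mAP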

Training Settings

Before modifying anything, first train with default settings to establish a performance baseline. A full list of train.py settings can be found in the train.py argparser.
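
For instance, a baseline run and the full argument list look like this (data.yaml is a hypothetical dataset file; both commands use standard train.py arguments):

python train.py --help                                  # print every train.py argument and its default
python train.py --data data.yaml --weights yolov5s.pt   # baseline run with all default settings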

Further Reading

If you'd like to know more, a good place to start is Karpathy's 'Recipe for Training Neural Networks', which has great ideas for training that apply broadly across all ML domains: http://karpathy.github.io/2019/04/25/recipe/

Good luck 🍀 and let us know if you have any other questions!

HerneSong commented 1 year ago

Hello, glenn. I trained my model with 9 backbone layers frozen for 300 epochs, just as the tutorial suggested. The metrics were good and the loss converged. My dataset was VisDrone, which should be big enough, and I trained with the default hyperparameters. But since the scale of VisDrone's pedestrians does not exactly match our inference video scene, I want to improve the pedestrian class specifically.
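
For reference, backbone freezing of the kind described above is done with the --freeze flag of train.py (a sketch assuming hypothetical checkpoint and dataset names; the transfer-learning tutorial uses --freeze 10 to freeze the backbone layers):

python train.py --weights runs/train/exp/weights/best.pt --data VisDrone.yaml --freeze 10 --epochs 300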

And as I asked: "I have a pre-trained model with 10 classes, but on one class (pedestrian) the performance is not good enough, and I want to continue fine-tuning the model. My idea is to use a video in which we have cars and pedestrians. I know that even though I only want to fine-tune the pedestrian class, I still have to label every existing object in the video. But the video doesn't contain some of the other classes, say cyclists. Can I train on this dataset? Will this bias the model (i.e. degrade performance on classes that do not appear in the video)?"

Do you have any suggestions?

glenn-jocher commented 1 year ago

@HerneSong yeah sure, that sounds like a good idea. You can train on multiple datasets; see the link below. As you mentioned, though, make sure all classes are labelled in both datasets.

https://community.ultralytics.com/t/how-to-combine-weights-to-detect-from-multiple-datasets
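
A minimal sketch of a combined dataset YAML (hypothetical paths; the 10 VisDrone class names are shown for illustration; YOLOv5 data YAMLs accept a list of paths for train and val):

cat > combined.yaml <<'EOF'
train: [../VisDrone/images/train, ../video_frames/images/train]  # hypothetical paths
val: [../VisDrone/images/val, ../video_frames/images/val]
nc: 10  # class count and order must match the pre-trained model
names: [pedestrian, people, bicycle, car, van, truck, tricycle, awning-tricycle, bus, motor]
EOF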

Zephyr69 commented 1 year ago

As long as the image space of your fine-tuning dataset diverges from that of the original dataset, you are of course introducing bias. In general, the effect of fine-tuning on the non-tuned classes is unpredictable, since the weights are being adapted only to the classes being tuned. You had better prepare a new validation set to verify the model's performance after fine-tuning.

I don't know whether fine-tuning is strictly needed due to some resource constraint; otherwise I'd train from scratch using the whole dataset instead.
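
To make that check concrete, the fine-tuned weights can be evaluated on a held-out set with val.py, which reports per-class P, R and mAP (a sketch with hypothetical run and dataset names):

python val.py --weights runs/train/exp2/weights/best.pt --data combined.yaml --img 640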

github-actions[bot] commented 1 year ago

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcome!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

glenn-jocher commented 11 months ago

@Zephyr69 yes, you are absolutely right. Introducing biases when fine-tuning on a dataset with different characteristics is a valid concern.

Preparing a new validation set is also a smart move to assess the model's performance after fine-tuning.

Training from scratch using the entire dataset might indeed be a preferable approach, as it allows the model to learn from the full dataset characteristics without any biases introduced from the fine-tuning process.
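
For completeness, training from scratch uses an empty --weights argument plus a model --cfg (a sketch reusing the hypothetical combined.yaml from above):

python train.py --weights '' --cfg yolov5s.yaml --data combined.yaml --epochs 300 --img 640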

Good luck with your training!