👋 Hello @vasnakh, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.
For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.
Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:
$ pip install -r requirements.txt
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
Yes, you can add some negative images. Simply put images with no label or empty label in your dataset.
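To illustrate what "no label or empty label" means on disk, here is a minimal sketch of a YOLO-style dataset layout (the paths and filenames are hypothetical, created under a temp directory for the example):

```python
import tempfile
from pathlib import Path

# Hypothetical YOLO dataset layout: a background (negative) image is simply
# an image with no matching .txt file in labels/, or with an empty .txt.
root = Path(tempfile.mkdtemp()) / "dataset"
(root / "images/train").mkdir(parents=True)
(root / "labels/train").mkdir(parents=True)

# A labelled image: one object per line in YOLO "class x_c y_c w h" format.
(root / "images/train/object_0001.jpg").touch()
(root / "labels/train/object_0001.txt").write_text("0 0.5 0.5 0.2 0.3\n")

# A background image: the image alone, with no label file at all.
(root / "images/train/background_0001.jpg").touch()
```

No other change is needed; the dataset scanner counts such images as "missing" labels and treats them as backgrounds.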
Thank you. It was a bit confusing since I saw the following being printed but now I understand that this is just about the labels.
train: Scanning 'data/dataset/sar_dataset/labels/train' images and labels... 1593 found, 12737 missing, 0 empty, 0 corrupted
@vasnakh yes it's fine to have missing labels. In your example it looks like you've included many background images in relation to your training images. Generally we recommend a number around 0-10%, though you can raise this higher to reduce FPs. In your dataset the ratio is about 800%, so you may want to either add more training images or reduce your backgrounds a bit. Full training recommendations are below.
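As a rough way of checking that ratio on your own dataset, here is a sketch (the function name and directory layout are assumptions for illustration, not part of YOLOv5):

```python
from pathlib import Path

def background_ratio(images_dir, labels_dir):
    """Ratio of background images to labelled images in a YOLO dataset.

    A background image is one whose label .txt is missing or empty.
    Returns backgrounds / labelled, so ~8.0 corresponds to the ~800% above.
    """
    exts = {".jpg", ".jpeg", ".png", ".bmp"}
    images = [p for p in Path(images_dir).iterdir() if p.suffix.lower() in exts]
    labelled = backgrounds = 0
    for img in images:
        label = Path(labels_dir) / (img.stem + ".txt")
        if label.exists() and label.read_text().strip():
            labelled += 1
        else:
            backgrounds += 1
    return backgrounds / max(labelled, 1)
```

With the counts printed in the scan above (1593 found, 12737 missing), this would return 12737 / 1593 ≈ 8.0, i.e. roughly 800%.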
👋 Hello! Thanks for asking about improving training results. Most of the time good results can be obtained with no changes to the models or training settings, provided your dataset is sufficiently large and well labelled. If at first you don't get good results, there are steps you might be able to take to improve, but we always recommend users first train with all default settings before considering any changes. This helps establish a performance baseline and spot areas for improvement.
If you have questions about your training results we recommend you provide the maximum amount of information possible if you expect a helpful response, including results plots (train losses, val losses, P, R, mAP), PR curve, confusion matrix, training mosaics, test results, and dataset statistics images such as labels.png. All of these are located in your project/name directory, typically yolov5/runs/train/exp.
We've put together a full guide for users looking to get the best results on their YOLOv5 trainings below.
Larger models like YOLOv5x and YOLOv5x6 will produce better results in nearly all cases, but have more parameters, require more CUDA memory to train, and are slower to run. For mobile deployments we recommend YOLOv5s/m, for cloud deployments we recommend YOLOv5l/x. See our README table for a full comparison of all models.
To start from pretrained weights, pass the name of the model to the --weights argument. Models download automatically from the latest YOLOv5 release.

python train.py --data custom.yaml --weights yolov5s.pt
                                             yolov5m.pt
                                             yolov5l.pt
                                             yolov5x.pt

To start from scratch, pass the model architecture YAML you are interested in along with an empty --weights '' argument:

python train.py --data custom.yaml --weights '' --cfg yolov5s.yaml
                                                      yolov5m.yaml
                                                      yolov5l.yaml
                                                      yolov5x.yaml
Before modifying anything, first train with default settings to establish a performance baseline. A full list of train.py settings can be found in the train.py argparser.
Image size. COCO trains at native resolution of --img 640, though due to the high amount of small objects in the dataset it can benefit from training at higher resolutions such as --img 1280. If there are many small objects then custom datasets will benefit from training at native or higher resolution. Best inference results are obtained at the same --img as the training was run at, i.e. if you train at --img 1280 you should also test and detect at --img 1280.

Batch size. Use the largest --batch-size that your hardware allows for. Small batch sizes produce poor batchnorm statistics and should be avoided.

Hyperparameters. Reducing loss component gain hyperparameters like hyp['obj'] will help reduce overfitting in those specific loss components. For an automated method of optimizing these hyperparameters, see our Hyperparameter Evolution Tutorial.

If you'd like to know more, a good place to start is Karpathy's 'Recipe for Training Neural Networks', which has great ideas for training that apply broadly across all ML domains: http://karpathy.github.io/2019/04/25/recipe/
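As a sketch of what reducing a loss component gain looks like, the dict below mirrors a few keys from YOLOv5's hyperparameter YAMLs; the baseline values shown are illustrative assumptions, not your actual defaults:

```python
# Sketch: reducing the objectness loss gain to curb overfitting in that
# component. Keys follow YOLOv5's hyp YAML naming; values are assumptions.
hyp = {
    "box": 0.05,  # box regression loss gain
    "cls": 0.5,   # classification loss gain
    "obj": 1.0,   # objectness loss gain
}

# Halving hyp['obj'] down-weights the objectness loss, which can reduce
# overfitting in that specific component, as described above.
hyp["obj"] *= 0.5
```

In practice you would make this edit in the hyp YAML you pass to train.py via --hyp, rather than in Python.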
Dear @glenn-jocher, why do you usually recommend a number around 0-10% for negative samples? Is there a paper or experiment that supports this? Many thanks in advance.
@jaqub-manuel a higher proportion of background images reduces FPs at the expense of reduced TPs. Use as many or as few as you see fit for your use case.
Hello @glenn-jocher, I want to improve an already-trained model with negative images. Can I fine-tune the model by providing only the background images? Or do I need to retrain the model from scratch with labeled and background images?
@kbratsy yes, you can fine-tune the already-trained model by providing only background images. There is no need to retrain the model from scratch with labeled images. Including background images in the fine-tuning process can help reduce False Positives (FP) and improve the performance of your model. Make sure to provide a sufficient number of background images to ensure better results.
Hi @glenn-jocher, thank you for your information. I want to ask one more question. Does the number of epochs matter when fine-tuning the already-trained model on background images? Is a small number of epochs like 1 or 10 enough? Or would it be better to increase the number of epochs (e.g. to 100) to reduce FPs?
@kbratsy, glad to assist you! When fine-tuning an already-trained model with background images, the number of epochs does indeed matter. A small epoch number like 1 or 10 may not be sufficient to effectively fine-tune the model and reduce False Positives (FPs). It is generally recommended to increase the number of epochs, such as 100 or even more, to allow the model to better adapt to the background images and improve its performance. Keep in mind that the optimal number of epochs can vary depending on your specific dataset and training scenario, so you may need to experiment with different values to find the best results.
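Putting that advice together, a fine-tuning launch might look like the following sketch. The dataset YAML, weights path, and run name are assumptions; point them at your own trained checkpoint and a data file that includes the extra background images alongside your labelled ones:

```python
# Sketch of a fine-tuning command line, built as a string for clarity.
cmd = " ".join([
    "python train.py",
    "--data custom.yaml",                        # labelled + background images
    "--weights runs/train/exp/weights/best.pt",  # your already-trained model
    "--epochs 100",                              # more epochs, per the advice above
    "--img 640",                                 # match your original training size
])
print(cmd)
```

Run the printed command from the yolov5/ repository root.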
❔Question
Was wondering if it is possible to include images that have no bounding boxes to help the object detector produce fewer false positives? I am seeing many false positives, and I believe that's because the model hasn't seen the backgrounds well enough. Note that the original image is large and there are few objects in it, so I applied tiling, and the training tiles are all around the objects themselves, not on the edges of the actual/large image. Any suggestions/help is highly appreciated.
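Regarding the tiling described above: one way to make sure the background-only edges of the large image are represented is to tile the full frame rather than only cropping around objects. A minimal sketch of generating such tile coordinates (tile size and overlap are illustrative assumptions):

```python
def tile_coords(width, height, tile=640, overlap=64):
    """Yield (x0, y0, x1, y1) pixel boxes covering an image with overlapping tiles.

    Covering the whole frame, edges included, means object-free tiles become
    the background (negative) images discussed in this thread.
    """
    step = tile - overlap
    for y in range(0, max(height - overlap, 1), step):
        for x in range(0, max(width - overlap, 1), step):
            yield (x, y, min(x + tile, width), min(y + tile, height))
```

For example, a 1280x640 image with these defaults yields three tiles, the last one clipped at the right edge of the frame.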