Closed: @MartinPedersenpp closed this issue 2 years ago.
@MartinPedersenpp 👋 Hello! Thanks for asking about improving YOLOv5 🚀 training results.
Most of the time good results can be obtained with no changes to the models or training settings, provided your dataset is sufficiently large and well labelled. If at first you don't get good results, there are steps you might be able to take to improve, but we always recommend users first train with all default settings before considering any changes. This helps establish a performance baseline and spot areas for improvement.
If you have questions about your training results we recommend you provide the maximum amount of information possible if you expect a helpful response, including results plots (train losses, val losses, P, R, mAP), PR curve, confusion matrix, training mosaics, test results and dataset statistics images such as labels.png. All of these are located in your `project/name` directory, typically `yolov5/runs/train/exp`.
We've put together a full guide for users looking to get the best results on their YOLOv5 trainings below.

Review `train_batch*.jpg` on train start to verify your labels appear correct, i.e. see example mosaic.

Model selection: Larger models like YOLOv5x and YOLOv5x6 will produce better results in nearly all cases, but have more parameters, require more CUDA memory to train, and are slower to run. For mobile deployments we recommend YOLOv5s/m, for cloud deployments we recommend YOLOv5l/x. See our README table for a full comparison of all models.
To start from pretrained weights (recommended for small to medium sized datasets), pass the name of the model to the `--weights` argument. Models download automatically from the latest YOLOv5 release.

```shell
python train.py --data custom.yaml --weights yolov5s.pt
                                             yolov5m.pt
                                             yolov5l.pt
                                             yolov5x.pt
                                             custom_pretrained.pt
```
To start from scratch (recommended for large datasets), pass the model architecture YAML you are interested in to `--cfg`, along with an empty `--weights ''` argument:

```shell
python train.py --data custom.yaml --weights '' --cfg yolov5s.yaml
                                                      yolov5m.yaml
                                                      yolov5l.yaml
                                                      yolov5x.yaml
```
Before modifying anything, first train with default settings to establish a performance baseline. A full list of `train.py` settings can be found in the `train.py` argparser.
Image size: COCO trains at native resolution of `--img 640`, though due to the high amount of small objects in the dataset it can benefit from training at higher resolutions such as `--img 1280`. If there are many small objects then custom datasets will benefit from training at native or higher resolution. Best inference results are obtained at the same `--img` as the training was run at, i.e. if you train at `--img 1280` you should also test and detect at `--img 1280`.

Batch size: Use the largest `--batch-size` that your hardware allows for. Small batch sizes produce poor batchnorm statistics and should be avoided.

Hyperparameters: Reducing loss component gain hyperparameters like `hyp['obj']` will help reduce overfitting in those specific loss components. For an automated method of optimizing these hyperparameters, see our Hyperparameter Evolution Tutorial.

If you'd like to know more, a good place to start is Karpathy's 'Recipe for Training Neural Networks', which has great ideas for training that apply broadly across all ML domains: http://karpathy.github.io/2019/04/25/recipe/
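As a command-line sketch of the image-size advice (file names and the batch-size value here are illustrative, not from this thread):

```shell
# Train at --img 1280 with the largest batch size the hardware allows
python train.py --data custom.yaml --weights yolov5s.pt --img 1280 --batch-size 32

# Run inference at the same --img the model was trained at
python detect.py --weights runs/train/exp/weights/best.pt --img 1280 --source path/to/images
```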
Good luck 🍀 and let us know if you have any other questions!
Thank you @glenn-jocher. Given that I have already trained the two models, I have seen this stock guide many times, and it doesn't really answer my questions: can I retrain using only images of the object that cannot be detected, without messing up the weights for detecting any other object? Or is there another way of training the model that allows it to learn to extract any object from the background without classifying it? I know that you are busy responding to all of the issues submitted every day, but any personal input would be great :D
@glenn-jocher Great read 😁 So seeing that I don't want to add a new class, but just more information in my dataset, I should just be able to add the data and keep training. I would probably need to clear my cache files, but that is probably covered somewhere if I search for it. If you can confirm this, I will close the issue. Thank you for your help.
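As a hedged sketch of that plan (the checkpoint path is assumed for illustration): continue training on the expanded dataset starting from the previously trained weights rather than from scratch, so the model keeps what it has already learned:

```shell
# Fine-tune from the earlier run's best checkpoint on the enlarged dataset
python train.py --data custom.yaml --weights runs/train/exp/weights/best.pt --img 640
```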
@MartinPedersenpp *.cache file is cleared automatically if the underlying dataset changes.
@glenn-jocher Thank you!
@MartinPedersenpp you're welcome! If you have any more questions in the future, feel free to ask. Good luck with your continued training! 😊
Search before asking
Question
I am working on an agnostic / generalized object detection model that should be able to create a bounding box around any object that separates from the background. What I have done so far: custom trained a YOLOv5s model on the COCO dataset + SKU110K dataset with all label classes replaced by one class. It works really well so far, except for one small object (a cardboard box with candy). When its front faces the camera, the confidence is really low. I thought it might have had something to do with the fitting of the model, so I started training YOLOv5x on the same dataset. Again, it works great, except that now it won't even detect this small object.

As far as I understand, if I want to retrain the model to include images of this item, I also have to use the old datasets again for training, or else the weights will get shifted? Is it possible to add more data to the model without training from scratch again? I also thought that continuing to train the YOLOv5s model until it starts overfitting could be an idea, since this should lead to more detections with more false positives included; am I correct?

I also thought about adding the PASCAL VOC dataset or the GroceryStore dataset, but again, COCO works great except for this one small box of candy. Any advice on how to achieve an agnostic model would be greatly appreciated.
Additional
No response