Closed: @sezer-muhammed closed this issue 3 years ago.
@sezer-muhammed you can turn agnostic NMS on, or you can turn multi-label NMS off, both of which will remove multiple classes labelled on the same instance. A bit more information is below:
👋 Hello, thank you for asking about the differences between train.py, detect.py and test.py in YOLOv5.
These 3 files are designed for different purposes and utilize different dataloaders with different settings. train.py dataloaders are designed for a speed-accuracy compromise, test.py is designed to obtain the best mAP on a validation dataset, and detect.py is designed for best real-world inference results. A few important aspects of each:
train.py
- trainloader LoadImagesAndLabels(): designed to load train dataset images and labels. Augmentation capable and enabled. https://github.com/ultralytics/yolov5/blob/fca5e2a48fb526b57bda0c66be6b7ac1aaa8d83d/train.py#L188-L192
- testloader LoadImagesAndLabels(): designed to load val dataset images and labels. Augmentation capable but disabled. https://github.com/ultralytics/yolov5/blob/fca5e2a48fb526b57bda0c66be6b7ac1aaa8d83d/train.py#L199-L202
- image size: 640
- confidence threshold: 0.001
- iou threshold: 0.6
- multi-label: True
- padding: None
test.py
- dataloader LoadImagesAndLabels(): designed to load train, val, test dataset images and labels. Augmentation capable but disabled. https://github.com/ultralytics/yolov5/blob/fca5e2a48fb526b57bda0c66be6b7ac1aaa8d83d/test.py#L89-L90
- image size: 640
- confidence threshold: 0.001
- iou threshold: 0.6
- multi-label: True
- padding: 0.5 * maximum stride
detect.py
- dataloaders (multiple): designed for loading multiple types of media (images, videos, globs, directories, streams). https://github.com/ultralytics/yolov5/blob/fca5e2a48fb526b57bda0c66be6b7ac1aaa8d83d/detect.py#L46-L53
- image size: 640
- confidence threshold: 0.25
- iou threshold: 0.45
- multi-label: False
- padding: None
YOLOv5 PyTorch Hub
- autoShape() class used for image loading, preprocessing, inference and NMS. For more info see YOLOv5 PyTorch Hub Tutorial https://github.com/ultralytics/yolov5/blob/fca5e2a48fb526b57bda0c66be6b7ac1aaa8d83d/models/common.py#L225-L250
Hi again,
I tried turning multi-label off, but it did not work. Then I turned the agnostic variable on, which did not work either.
Finally I turned agnostic on and multi-label off, and here is the result :(
@sezer-muhammed you've incorrectly applied the above advice. With agnostic NMS in place, the image you are showing is not producible.
I will look again; I changed the code a lot, so maybe I broke something. I won't close this issue for now.
Thanks!
@sezer-muhammed yes try with a clean git clone. Agnostic NMS passes all detections through NMS and treats them as a single class, so there can be no inter-class overlaps like what you show.
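The single-class trick can be sketched in plain Python. This is a simplified illustration, not YOLOv5's actual implementation (which operates on GPU tensors in utils/general.py): per-class NMS offsets each box by its class index times a large constant so boxes of different classes can never overlap, while agnostic NMS skips the offset so suppression happens across all classes.

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(dets, iou_thres=0.45, agnostic=False, max_wh=4096):
    """Greedy NMS. dets: list of (x1, y1, x2, y2, conf, cls) tuples.
    Per-class mode shifts each box by cls * max_wh, so boxes of different
    classes never overlap; agnostic mode skips the shift, so a lower-confidence
    box is suppressed by an overlapping box of ANY class."""
    kept = []  # list of (shifted_box, original_detection)
    for d in sorted(dets, key=lambda d: d[4], reverse=True):
        off = 0 if agnostic else d[5] * max_wh
        box = (d[0] + off, d[1] + off, d[2] + off, d[3] + off)
        if all(iou(box, kb) < iou_thres for kb, _ in kept):
            kept.append((box, d))
    return [d for _, d in kept]

# Two near-identical boxes labelled with different classes (0 and 1):
dets = [(10, 10, 100, 100, 0.9, 0), (12, 11, 101, 99, 0.8, 1)]
# Per-class NMS keeps both; agnostic NMS keeps only the higher-confidence one.
```

With `agnostic=False` both detections survive because the class offset separates them; with `agnostic=True` the class-1 box overlaps the class-0 box (IoU ≈ 0.95) and is suppressed, which is exactly why the double-label image above cannot occur.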
DONE! There were two scripts I had written. I was changing the original one, but not the one I was actually using...... :(
Can I choose a particular class when there is an inter-class overlap? Also, my model is behaving a bit oddly: I'm training an object detector for 2 classes (Good and Defective), and it detects the same object as both classes, with confidences of 0.7 and 0.8. Is there any way I can improve the training @glenn-jocher
@Ankit-Vohra 👋 Hello! Thanks for asking about improving YOLOv5 🚀 training results.
Most of the time good results can be obtained with no changes to the models or training settings, provided your dataset is sufficiently large and well labelled. If at first you don't get good results, there are steps you might be able to take to improve, but we always recommend users first train with all default settings before considering any changes. This helps establish a performance baseline and spot areas for improvement.
If you have questions about your training results we recommend you provide the maximum amount of information possible if you expect a helpful response, including results plots (train losses, val losses, P, R, mAP), PR curve, confusion matrix, training mosaics, test results and dataset statistics images such as labels.png. All of these are located in your project/name directory, typically yolov5/runs/train/exp.
We've put together a full guide for users looking to get the best results on their YOLOv5 trainings below.
Larger models like YOLOv5x and YOLOv5x6 will produce better results in nearly all cases, but have more parameters, require more CUDA memory to train, and are slower to run. For mobile deployments we recommend YOLOv5s/m, for cloud deployments we recommend YOLOv5l/x. See our README table for a full comparison of all models.
To start from pretrained weights, pass the name of the model to the --weights argument. Models download automatically from the latest YOLOv5 release:

python train.py --data custom.yaml --weights yolov5s.pt
                                             yolov5m.pt
                                             yolov5l.pt
                                             yolov5x.pt
                                             custom_pretrained.pt

To train from scratch, pass the model architecture yaml you are interested in, along with an empty --weights '' argument:

python train.py --data custom.yaml --weights '' --cfg yolov5s.yaml
                                                      yolov5m.yaml
                                                      yolov5l.yaml
                                                      yolov5x.yaml
Before modifying anything, first train with default settings to establish a performance baseline. A full list of train.py settings can be found in the train.py argparser.
- Image size. COCO trains at --img 640, though due to the high amount of small objects in the dataset it can benefit from training at higher resolutions such as --img 1280. If there are many small objects then custom datasets will benefit from training at native or higher resolution. Best inference results are obtained at the same --img as the training was run at, i.e. if you train at --img 1280 you should also test and detect at --img 1280.
- Batch size. Use the largest --batch-size that your hardware allows for. Small batch sizes produce poor batchnorm statistics and should be avoided.
- Hyperparameters. Reducing loss-component hyperparameters like hyp['obj'] will help reduce overfitting in those specific loss components. For an automated method of optimizing these hyperparameters, see our Hyperparameter Evolution Tutorial.

If you'd like to know more, a good place to start is Karpathy's 'Recipe for Training Neural Networks', which has great ideas for training that apply broadly across all ML domains: http://karpathy.github.io/2019/04/25/recipe/
Hi, as you can see below, some objects are labeled twice by the model. NMS prevents double labels when the labels are the same, but it does not prevent them for different labels.
I looked at the code and changed the merge_nms value to True, but it did not help.
How can I prevent that? I need to apply NMS to classes 0, 1, 2, 3, 4, 5 and 6 together, while class 7 is different from them and does not need NMS against them. But that aside, how can I apply NMS to all of them together?
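YOLOv5 has no built-in option for running NMS over only a subset of classes, but the idea can be sketched in plain Python by generalizing the class-offset trick: instead of offsetting boxes by their class index, offset them by a *group* index, so classes mapped to the same group suppress each other while other groups are unaffected. This is a hypothetical sketch (the `group_nms` function and `GROUPS` mapping are illustrative, not part of YOLOv5):

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def group_nms(dets, groups, iou_thres=0.45, max_wh=4096):
    """Greedy NMS within class groups. dets: (x1, y1, x2, y2, conf, cls);
    groups maps cls -> group id. Boxes in the same group suppress each other
    regardless of class; boxes in different groups are offset far apart and
    never interact."""
    kept = []  # list of (shifted_box, original_detection)
    for d in sorted(dets, key=lambda d: d[4], reverse=True):
        off = groups[d[5]] * max_wh
        box = (d[0] + off, d[1] + off, d[2] + off, d[3] + off)
        if all(iou(box, kb) < iou_thres for kb, _ in kept):
            kept.append((box, d))
    return [d for _, d in kept]

# Classes 0-6 share group 0 (cross-class suppression); class 7 is exempt.
GROUPS = {c: 0 for c in range(7)}
GROUPS[7] = 1

# Overlapping class-0 and class-1 boxes, plus an overlapping class-7 box:
dets = [(10, 10, 100, 100, 0.9, 0),
        (12, 11, 101, 99, 0.8, 1),
        (11, 10, 99, 101, 0.7, 7)]
```

Here the class-1 detection is suppressed by the stronger overlapping class-0 detection, while the class-7 detection survives even though it overlaps both, because it sits in its own group.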