ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Double Labeling Problem #3080

Closed: sezer-muhammed closed this issue 3 years ago

sezer-muhammed commented 3 years ago

Hi, as you can see below, some objects are labeled twice by the model. NMS prevents double labels when the labels are the same, but it does not prevent duplicates when the labels are different.

I looked at the code and changed the merge_nms value to True, but it did not help.

How can I prevent that? I need to apply NMS to classes 0, 1, 2, 3, 4, 5, and 6 together; class 7 is different from them and does not need NMS with them. But that does not really matter, how can I apply NMS to all of them together?

[image attached]

glenn-jocher commented 3 years ago

@sezer-muhammed you can turn agnostic NMS on, or you can turn multi-label NMS off, both of which will remove multiple classes labelled on the same instance. A bit more information is below:
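For reference, a minimal sketch of those two settings, assuming a recent YOLOv5 where the PyTorch Hub (AutoShape) model exposes the `agnostic` and `multi_label` attributes; older checkouts may only accept these through detect.py or non_max_suppression() directly:

```python
# Minimal sketch (not the exact thread code): enabling class-agnostic NMS and
# disabling multi-label NMS on a PyTorch Hub model.
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # downloads yolov5s.pt
model.agnostic = True      # class-agnostic NMS: all classes suppressed together
model.multi_label = False  # at most one class label per box
results = model('https://ultralytics.com/images/zidane.jpg')
results.print()

# Command-line equivalent for detect.py:
#   python detect.py --weights yolov5s.pt --source data/images --agnostic-nms
```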

👋 Hello, thank you for asking about the differences between train.py, detect.py and test.py in YOLOv5.

These 3 files are designed for different purposes and utilize different dataloaders with different settings. train.py dataloaders are designed for a speed-accuracy compromise, test.py is designed to obtain the best mAP on a validation dataset, and detect.py is designed for best real-world inference results. A few important aspects of each:

train.py

test.py

detect.py

YOLOv5 PyTorch Hub

sezer-muhammed commented 3 years ago


Hi again,

I tried turning multi-label off and it did not work. Then I turned on the agnostic variable, and that did not work either.

Finally I turned on agnostic and turned off multi-label, and here is the result :( [image attached]

glenn-jocher commented 3 years ago

@sezer-muhammed you've incorrectly applied the above advice. With agnostic NMS in place, the image you are showing is not producible.

sezer-muhammed commented 3 years ago


I will look again. I changed the code a lot, so maybe I have broken something. I won't close this issue for now.

Thanks!

glenn-jocher commented 3 years ago

@sezer-muhammed yes try with a clean git clone. Agnostic NMS passes all detections through NMS and treats them as a single class, so there can be no inter-class overlaps like what you show.
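To illustrate the mechanism: in class-aware NMS, boxes are shifted by a per-class offset so that boxes of different classes never overlap and never suppress each other; with agnostic NMS the offset is zero and every box competes in a single pool. The sketch below is loosely modeled on the logic in utils/general.py, with illustrative names:

```python
# Simplified sketch of why agnostic NMS removes inter-class overlaps.
import torch
import torchvision

def simple_nms(boxes, scores, classes, iou_thres=0.45, agnostic=False, max_wh=4096):
    # boxes: (N, 4) xyxy, scores: (N,), classes: (N,) integer class ids
    # Per-class offset pushes different classes into disjoint coordinate ranges,
    # unless agnostic=True, in which case all boxes share one pool.
    offset = classes[:, None].float() * (0 if agnostic else max_wh)
    return torchvision.ops.nms(boxes + offset, scores, iou_thres)

# Two heavily overlapping boxes with different class ids:
boxes = torch.tensor([[10., 10., 100., 100.], [12., 12., 102., 102.]])
scores = torch.tensor([0.9, 0.8])
classes = torch.tensor([0, 7])
print(simple_nms(boxes, scores, classes, agnostic=False))  # keeps both boxes
print(simple_nms(boxes, scores, classes, agnostic=True))   # keeps only the higher-scoring box
```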

sezer-muhammed commented 3 years ago


DONE! There were two copies of the code that I had written. I was changing the original one, but not the one I was actually running... :(

Ankit-Vohra commented 3 years ago

Can I choose a particular class when there is inter-class overlap? Moreover, my model is behaving a bit strangely. I'm training an object detector for 2 classes (Good and Defective), and it detects the same object as both classes with confidences of 0.7 and 0.8. Is there any way I can improve the training? @glenn-jocher

glenn-jocher commented 3 years ago

@Ankit-Vohra 👋 Hello! Thanks for asking about improving YOLOv5 🚀 training results.

Most of the time good results can be obtained with no changes to the models or training settings, provided your dataset is sufficiently large and well labelled. If at first you don't get good results, there are steps you might be able to take to improve, but we always recommend users first train with all default settings before considering any changes. This helps establish a performance baseline and spot areas for improvement.

If you have questions about your training results we recommend you provide the maximum amount of information possible if you expect a helpful response, including results plots (train losses, val losses, P, R, mAP), PR curve, confusion matrix, training mosaics, test results and dataset statistics images such as labels.png. All of these are located in your project/name directory, typically yolov5/runs/train/exp.

We've put together a full guide for users looking to get the best results on their YOLOv5 trainings below.

Dataset

COCO Analysis

Model Selection

Larger models like YOLOv5x and YOLOv5x6 will produce better results in nearly all cases, but have more parameters, require more CUDA memory to train, and are slower to run. For mobile deployments we recommend YOLOv5s/m, for cloud deployments we recommend YOLOv5l/x. See our README table for a full comparison of all models.

YOLOv5 Models
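To make the model-selection trade-off concrete, a small illustrative sketch (the Hub entrypoint names below are the published ones; check the README table for the current list):

```python
# Illustrative only: the accuracy/speed trade-off is chosen by model name.
import torch

small = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # fast, fewer parameters
large = torch.hub.load('ultralytics/yolov5', 'yolov5x')  # slower, higher mAP
```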

Training Settings

Before modifying anything, first train with default settings to establish a performance baseline. A full list of train.py settings can be found in the train.py argparser.
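For example, a baseline run with all default settings might look like the sketch below; the dataset YAML and weights are placeholder values, so substitute your own:

```python
# Hedged sketch: launch a default-settings baseline by shelling out to train.py.
# 'coco128.yaml' and 'yolov5s.pt' are example values only.
import subprocess

subprocess.run(
    ["python", "train.py", "--data", "coco128.yaml",
     "--weights", "yolov5s.pt", "--img", "640"],
    check=True,
)
```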

Further Reading

If you'd like to know more a good place to start is Karpathy's 'Recipe for Training Neural Networks', which has great ideas for training that apply broadly across all ML domains: http://karpathy.github.io/2019/04/25/recipe/