ultralytics / yolov5

YOLOv5 πŸš€ in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Doubts about the confusion matrix and YOLO evaluation indicators #13373

Open ayitime opened 3 hours ago

ayitime commented 3 hours ago

Search before asking

Question

I have a question about how the model's precision and recall are computed. The dataset I'm training on contains a single category, and the validation set has a total of 357 ground-truth boxes. I compared the precision and recall printed to the console with the values in the confusion matrix, and found some ambiguity in both.

The precision and recall printed to the console appear to correspond to the optimal F1 point at IoU=0.5. Combining these two values with the number of ground-truth boxes, I can back-calculate tp and fp, which come out to 179 and 301, respectively. These two values should correspond to some particular conf threshold, although I don't yet know what that threshold is.
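For reference, the back-calculation described above can be sketched as follows. This assumes the printed precision and recall describe a single operating point over the 357 labels; the function name and the illustrative P/R values are my own, not part of YOLOv5:

```python
# Hedged sketch: recover the TP/FP counts implied by a printed
# precision/recall pair, assuming both were measured at one operating
# point (e.g. the best-F1 confidence) over n_gt ground-truth boxes.

def back_calculate_tp_fp(precision, recall, n_gt):
    """Return (tp, fp) implied by a precision/recall pair over n_gt labels."""
    tp = recall * n_gt            # recall = tp / n_gt
    fp = tp / precision - tp      # precision = tp / (tp + fp)
    return round(tp), round(fp)

# Illustrative values: with 357 ground-truth boxes, R ~ 0.501 and
# P ~ 0.373 imply roughly tp = 179 and fp = 301.
tp, fp = back_calculate_tp_fp(0.373, 0.501, 357)
```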

However, when I set the confusion matrix's conf threshold to 0.3 and iou to 0.5, something strange happened: the confusion matrix reported tp = 189 and fp = 271. That is, the confusion matrix's tp is greater than the console's tp, while its fp is less than the console's fp. If I change the confusion matrix's conf threshold, tp and fp move together in the same direction, so there is no threshold at which the confusion matrix's tp and fp simultaneously match the console output. As a result, I can't reproduce the console's tp and fp from the confusion matrix.
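For context, a confusion-matrix-style count at a fixed conf threshold can be sketched as below. This is a simplified illustration, not the actual ultralytics code: it filters detections by confidence first, then greedily matches detections to ground truth one-to-one by descending IoU. The console precision/recall, by contrast, are read off the full PR curve, so the two countings need not coincide at any single threshold. All names here are illustrative:

```python
# Simplified sketch (NOT the actual YOLOv5 ConfusionMatrix code) of
# counting TP/FP at a fixed conf threshold with one-to-one IoU matching.
import numpy as np

def box_iou(a, b):
    """Pairwise IoU between boxes a (M, 4) and b (N, 4) in x1y1x2y2 format."""
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = np.maximum(a[:, None, :2], b[None, :, :2])   # intersection top-left
    rb = np.minimum(a[:, None, 2:], b[None, :, 2:])   # intersection bottom-right
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def count_tp_fp(detections, gt_boxes, conf_thres=0.3, iou_thres=0.5):
    """detections: (N, 5) array [x1, y1, x2, y2, conf]; gt_boxes: (M, 4)."""
    dets = detections[detections[:, 4] >= conf_thres]  # confidence filter first
    if len(dets) == 0:
        return 0, 0
    iou = box_iou(gt_boxes, dets[:, :4])               # (M, N) IoU matrix
    matched_gt, matched_det = set(), set()
    # Greedy one-to-one matching over pairs above the IoU threshold,
    # taken in order of descending IoU.
    pairs = np.argwhere(iou >= iou_thres)
    order = np.argsort(-iou[pairs[:, 0], pairs[:, 1]])
    for g, d in pairs[order]:
        if g not in matched_gt and d not in matched_det:
            matched_gt.add(g)
            matched_det.add(d)
    tp = len(matched_det)
    fp = len(dets) - tp
    return tp, fp
```

Under this kind of counting, raising conf_thres removes detections, which can only shrink both tp and fp together; that is consistent with the observation that the two values always move in the same direction.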

I can't figure out the reason behind this so I'm here to ask, looking forward to your answer!


Additional

No response

UltralyticsAssistant commented 2 hours ago

πŸ‘‹ Hello @ayitime, thank you for your interest in YOLOv5 πŸš€! It seems like you have some questions regarding the confusion matrix and evaluation metrics, specifically precision and recall. This is an automated response to let you know that an Ultralytics engineer will assist you shortly to address your query.

In the meantime, if this pertains to a πŸ› Bug Report, please consider providing a minimum reproducible example to aid in debugging your issue. If it's a custom training ❓ question, please share as much additional information as possible, including dataset specifics and training logs. This will enable us to provide more precise guidance.

To ensure you meet all necessary requirements, please verify you have Python>=3.8.0 installed along with all other dependencies listed in requirements.txt. YOLOv5 can also be run in various environments with all dependencies pre-installed, such as Notebooks with free GPU, Google Cloud Deep Learning VM, Amazon Deep Learning AMI, and Docker Images.

We're excited to share that YOLOv8, our latest state-of-the-art object detection model, is also available. With features designed for speed, accuracy, and ease of use, YOLOv8 offers improved performance for a range of tasks.

Thank you for your patience, and stay tuned for further assistance! 😊