ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Using validation code for testing to get evaluation metrics! #12694

Closed: naveenvj25 closed this issue 8 months ago

naveenvj25 commented 9 months ago

Search before asking

Question

Hi @glenn-jocher! I have also used the validation code for detection. Since the YOLOv5 detection code only outputs the detected images, I passed the test images and the trained best weights to the validation code. I got the confusion matrix, mAP, etc., but the detection results differ when cross-checked against the images produced by the YOLO detection code. Why does this mismatch occur when the same weights and image directory are passed to both the detection and validation code?

Additional

I have set the IOU and confidence thresholds to 0.2 and 0.25, respectively, for both scripts. I actually need the confusion matrix, precision, and mAP. A rough sketch of what I am running is below.
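For reference, my validation run looks roughly like the following (a sketch with placeholder paths; my actual dataset YAML and weights differ):

```python
# Rough sketch of my validation run (paths below are placeholders).
import val  # run from the yolov5 repo root

results = val.run(
    data='data/my_dataset.yaml',               # placeholder dataset config
    weights='runs/train/exp/weights/best.pt',  # trained best weights
    batch_size=16,
    conf_thres=0.25,  # confidence threshold
    iou_thres=0.2,    # NMS IOU threshold
    task='test',      # evaluate on the test split
    plots=True,       # also saves the confusion matrix plot under runs/val/
)
```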

glenn-jocher commented 9 months ago

@naveenvj25 hello! Thanks for reaching out with your question. It's great to hear you're diving into the evaluation metrics using the validation code.

The difference you're observing between the detection results and the validation metrics could be due to several factors, such as differences in the non-maximum suppression (NMS) settings, data preprocessing, or even slight variations in the input parameters that you might have overlooked.

To ensure consistency, double-check that all settings are identical between your detection and validation runs. This includes IOU thresholds, confidence thresholds, image sizes, augmentation settings, and any other hyperparameters.
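One way to guarantee the settings match is to drive both scripts from a single set of shared arguments, for example through the repo's Python API (a rough sketch, assuming you run it from the yolov5 repo root and substitute your own paths):

```python
# Run detection and validation with one shared set of NMS settings
# (sketch only; replace the placeholder paths with your own).
import detect, val

common = dict(
    weights='runs/train/exp/weights/best.pt',  # placeholder weights path
    conf_thres=0.25,                           # confidence threshold
    iou_thres=0.2,                             # NMS IOU threshold
)

detect.run(source='path/to/test/images', **common)          # saves annotated images
val.run(data='path/to/data.yaml', batch_size=16, **common)  # computes P, R, mAP, confusion matrix
```

This way a threshold can never silently differ between the two runs.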

If everything matches and you're still seeing discrepancies, it might be helpful to look at individual cases where the detection results differ to understand what's happening. Sometimes, examining specific examples can reveal insights into why the overall metrics are different.

Remember, the validation code is designed to provide a comprehensive evaluation of the model's performance across the entire dataset, while the detection code is typically used for inference on individual images.

If you continue to experience issues, please provide more detailed information about the exact commands and parameters you're using for both detection and validation. This will help us troubleshoot the issue more effectively.

Thanks for contributing to the YOLOv5 community, and keep up the great work! If you need further assistance, don't hesitate to ask. For more detailed guidance, you can always refer to our documentation at https://docs.ultralytics.com/yolov5/. 🚀

naveenvj25 commented 9 months ago

Hello @glenn-jocher! I have set the batch size to 16, IOU to 0.2, and conf_thres to 0.25 for both scripts, and the rest of the hyperparameters are at their defaults. I hope the default parameters are the same for both tasks. Could you please explain how to check the non-maximum suppression (NMS) settings?

glenn-jocher commented 9 months ago

Hello @naveenvj25! It's good to hear that you've kept the IOU and confidence thresholds consistent. The default hyperparameters should indeed be the same for both tasks, provided you haven't made any changes to the code or configuration files.

To check the Non-Maximum Suppression (NMS) settings, you'll want to ensure that the --iou-thres parameter for NMS is the same in both your detection and validation commands. This parameter controls the IOU threshold used for NMS and should be consistent across both tasks to ensure comparable results.

For example, if you're using the detect.py script for detection and the val.py script for validation, both should include the --iou-thres 0.2 argument if you're setting the IOU threshold to 0.2.

If you're using the default settings without specifying these on the command line, each script falls back to its own built-in defaults. Note that the NMS thresholds live in the argparse definitions of detect.py and val.py rather than in the training hyperparameter YAML (e.g. data/hyp.scratch.yaml), and the two scripts' defaults differ (in recent versions detect.py uses conf 0.25 / IOU 0.45 while val.py uses conf 0.001 / IOU 0.6), so it's safest to pass --conf-thres and --iou-thres explicitly to both.
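You can confirm the defaults each script would fall back to with its parse_opt() helper (a quick sketch; run it from the yolov5 repo root with no extra CLI arguments):

```python
# Print the default NMS thresholds baked into each script's argument parser
# (sketch only; run from the yolov5 repo root).
import detect, val

d, v = detect.parse_opt(), val.parse_opt()
print(f'detect.py defaults: conf_thres={d.conf_thres}, iou_thres={d.iou_thres}')
print(f'val.py defaults:    conf_thres={v.conf_thres}, iou_thres={v.iou_thres}')
```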

If you've confirmed that all these settings match and you're still seeing discrepancies, it might be worth revisiting the data preprocessing steps or any custom modifications to the code that could affect the results.

Keep up the great work, and if you have any more questions or need further clarification, feel free to ask. Your contributions help improve YOLOv5 for everyone! 🌟

github-actions[bot] commented 8 months ago

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see our documentation at https://docs.ultralytics.com.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐