Grhanas opened this issue 1 month ago
👋 Hello @Grhanas, thank you for reaching out with your question! 🚀
To get started with understanding your validation metrics, you might want to check out our ⭐️ Tutorials where we cover various aspects including Custom Data Training and more.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us investigate further.
For training-related questions, ensure you're following our Tips for Best Training Results. Including more details like dataset examples and full training logs can be very helpful.
Ensure you are using Python>=3.8.0 with all dependencies from requirements.txt installed, including PyTorch>=1.8. Here's a quick setup guide:

```shell
git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
```
YOLOv5 can be run in a variety of verified environments.
Explore the capabilities of our latest model, YOLOv8, for even better performance in object detection, image segmentation, and classification. Check the YOLOv8 Docs and get started with this command:

```shell
pip install ultralytics
```
This is an automated response, but an Ultralytics engineer will review and assist you soon. Thank you for your patience! 😊
@Grhanas to view precision and recall for the validation set, check the results.png file generated in the runs/train/exp directory after training: it plots validation precision and recall per epoch alongside the losses. The PR_curve.png and F1_curve.png files saved in the same directory show the precision-recall (PR) and F1 curves, which can help you assess model performance and check for overfitting.
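The same per-epoch metrics are also written as plain numbers to results.csv in the run directory, which is handy if you want to inspect precision and recall directly rather than read them off a plot. Below is a minimal sketch of parsing that file; the column names (`metrics/precision`, `metrics/recall`) follow YOLOv5's standard CSV header, and the inline sample data is made up for illustration.

```python
import csv
import io

# Made-up rows mimicking YOLOv5's results.csv format; in a real run,
# read the file at runs/train/exp/results.csv instead.
sample = """epoch,metrics/precision,metrics/recall,metrics/mAP_0.5
0,0.41,0.35,0.30
1,0.55,0.48,0.44
2,0.63,0.57,0.52
"""

def read_pr(csv_text):
    """Return (precision, recall) lists with one entry per epoch."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    # YOLOv5 pads CSV headers with spaces, so strip keys defensively.
    rows = [{k.strip(): v for k, v in r.items()} for r in rows]
    precision = [float(r["metrics/precision"]) for r in rows]
    recall = [float(r["metrics/recall"]) for r in rows]
    return precision, recall

precision, recall = read_pr(sample)
print(precision[-1], recall[-1])  # validation P and R at the final epoch
```

If validation precision keeps improving while recall (or mAP) plateaus and the validation losses start rising, that divergence between train and validation curves is the usual sign of overfitting.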
Question
Hello! I can see the F1-score for the training dataset after completing the training process. Additionally, I have access to the dfl_loss, box_loss, and cls_loss for both the training and validation sets, as shown in "results.png." However, I'm unable to find precision and recall values for the validation set. I would like to understand if my model is experiencing overfitting or underfitting.
Additional
No response