What are the default conf and iou thresholds when YOLOv8 reports precision and recall in training or validation?

JoaoLopesFerreira99 opened this issue 2 months ago

I wrote this script based on the metrics.py script to try to compute the metrics manually (I'll need it when I merge two different YOLO models in a decision-level approach). However, the results I'm getting don't match the ones from the .val() method. Can anyone spot what I am doing wrong?

The results I get are:

They should be:

Thanks in advance.
The val mode uses different conf and iou values than prediction by default. You need to add conf=0.001, iou=0.6 to your prediction code.
So I should set model.conf=0.001, right? Where should I change the IoU then?
By the way, is there any other method than .val() to retrieve metrics of a dataset?
OK, I just set model.conf=0.001 and model.iou=0.6, but I got the same metrics.
No, that is not how you do it. You need to pass these arguments to the prediction call, as in results = model(..., conf=0.001, iou=0.6).
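Concretely, a minimal sketch of that call (the weights path and image source below are placeholders, not values from this thread):

```python
from ultralytics import YOLO

model = YOLO("best.pt")  # placeholder: your trained weights

# The thresholds must be passed to the call itself; assigning
# model.conf or model.iou as attributes has no effect on prediction.
results = model("path/to/images", conf=0.001, iou=0.6)
```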
By the way, is there any other method than .val() to retrieve metrics of a dataset?
No
Precision: 0.176, Recall: 0.938, F1: 0.297, mAP50: 0.938, mAP50-95: 0.680

Okay, the metrics changed, but they are not good either...
The precision is low. You might want to check your confidence threshold and make sure your dataset annotations are accurate; a threshold as low as 0.001 keeps almost every detection, so many false positives are counted and precision drops.
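If the goal is usable predictions rather than benchmark numbers, one option is to evaluate at the low threshold and filter afterwards. A minimal sketch, assuming the standard Results/Boxes attributes; the weights path, image directory, and the 0.25 cut-off are placeholders:

```python
from ultralytics import YOLO

model = YOLO("best.pt")                           # placeholder weights
results = model("images/", conf=0.001, iou=0.6)   # evaluation-style thresholds

for r in results:
    keep = r.boxes.conf > 0.25      # stricter, arbitrary example cut-off
    high_conf = r.boxes[keep]       # Boxes supports boolean-mask indexing
    print(len(high_conf), "detections above 0.25 in", r.path)
```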
Yeah, I understand that. I only set the confidence threshold to 0.001 because that is what Skillnoob suggested. However, I still do not understand what I am missing in my script.
You can look at how ultralytics calculates mAP and the other metrics: https://github.com/ultralytics/ultralytics/blob/c2068df9d981c5ae27fedee550bdaedddcec3c53/ultralytics/engine/validator.py#L39 and https://github.com/ultralytics/ultralytics/blob/c2068df9d981c5ae27fedee550bdaedddcec3c53/ultralytics/models/yolo/detect/val.py#L17
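For reference, here is a self-contained NumPy sketch of the greedy IoU matching that the validator performs before computing AP. This is a simplified reimplementation for illustration, not the library code; the names box_iou and tp_matrix are chosen here, and boxes are assumed to be in xyxy format:

```python
import numpy as np

def box_iou(a, b):
    """Pairwise IoU between two sets of xyxy boxes: (N,4) x (M,4) -> (N,M)."""
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = np.maximum(a[:, None, :2], b[None, :, :2])   # top-left of intersection
    rb = np.minimum(a[:, None, 2:], b[None, :, 2:])   # bottom-right of intersection
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-7)

def tp_matrix(pred_boxes, pred_cls, gt_boxes, gt_cls):
    """True-positive matrix of shape (num_preds, 10), one column per
    IoU threshold in 0.50:0.05:0.95, using greedy one-to-one matching."""
    iouv = np.linspace(0.5, 0.95, 10)
    iou = box_iou(gt_boxes, pred_boxes)                 # (num_gt, num_pred)
    iou = iou * (gt_cls[:, None] == pred_cls[None, :])  # classes must agree
    tp = np.zeros((pred_boxes.shape[0], len(iouv)), dtype=bool)
    for i, thr in enumerate(iouv):
        matches = np.argwhere(iou >= thr)               # candidate (gt, pred) pairs
        if matches.size:
            # sort candidates by IoU, best first
            order = iou[matches[:, 0], matches[:, 1]].argsort()[::-1]
            matches = matches[order]
            # each prediction and each ground truth may match at most once
            matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
            matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
            tp[matches[:, 1], i] = True
    return tp
```

The tp matrix, together with each prediction's confidence and class and the ground-truth classes, is what ap_per_class in ultralytics/utils/metrics.py turns into precision, recall, and mAP. A mismatch in this matching step (missing class gating, a violated one-to-one constraint, or a different box format) is the usual reason manually computed metrics disagree with .val().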