ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

The confidence or category probability value is too small #837

Closed silicon2006 closed 3 years ago

silicon2006 commented 3 years ago

❔Question

When testing, the confidence and category probability values are too small, even when testing on the training dataset. However, the darknet versions of YOLOv4 and YOLOv3 can reach confidences and category probabilities of 0.99 (on the training dataset). Why is this, and how can I fix it?

Additional context

[Screenshot: Snipaste_2020-08-25_10-48-14]

glenn-jocher commented 3 years ago

@silicon2006 we output object confidence times class confidence, where the objectness is trained against the predicted IoU.

I think YOLOv3 and YOLOv4 output something a bit simpler (and less performant).
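The point above can be sketched in plain Python: the final detection score is the product of objectness and class probability, and because the objectness target during training is the box IoU rather than a hard 1.0, the product rarely saturates near 1.0. The values below are illustrative, not real model output.

```python
# Sketch of YOLOv5-style detection scoring (illustrative values only).
# Objectness is trained toward the predicted box's IoU with its label,
# so a well-trained model outputs obj ~ IoU rather than a hard 1.0.
def detection_score(obj_conf: float, cls_conf: float) -> float:
    """Final score = objectness * class probability, as used when filtering detections."""
    return obj_conf * cls_conf

obj = 0.85   # predicted objectness (roughly the predicted IoU after training)
cls = 0.90   # predicted class probability
score = detection_score(obj, cls)
print(round(score, 3))  # 0.765 -- noticeably lower than either factor alone
```

This explains why a perfectly healthy YOLOv5 model reports lower scores than darknet YOLOv3/v4: multiplying two sub-1.0 factors always gives a smaller number than either one.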

Aktcob commented 3 years ago

This is IoU-aware scoring. Rescoring with score = sqrt(pred_iou * cls) can give scores of 0.9+.
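A minimal sketch of this rescoring (the function name is mine, not from the repo): taking the square root of the product is the geometric mean of the two factors, which lands between them instead of below both, so the displayed score climbs back toward 0.9+.

```python
import math

def iou_aware_score(pred_iou: float, cls_conf: float) -> float:
    """sqrt(pred_iou * cls): the geometric mean of the two factors.
    It always sits between the factors, unlike their raw product."""
    return math.sqrt(pred_iou * cls_conf)

# Raw product vs. rescored value for the same predictions:
print(round(0.90 * 0.95, 3))                 # 0.855
print(round(iou_aware_score(0.90, 0.95), 3)) # 0.925
```

Note this is a cosmetic monotonic transform: it changes the displayed confidence, not the ranking of detections, so mAP is unaffected.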

silicon2006 commented 3 years ago

Good idea. Now the score (object confidence times class probability) can reach 0.95+ when testing on the training dataset. Thanks!

github-actions[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

lucasjinreal commented 3 years ago

@Aktcob what is pred_iou at inference time?

glenn-jocher commented 3 years ago

@jinfagang if training is too short, the predicted IoU will be lower, since the actual IoU is lower. This results in lower confidences as well, so low confidences during inference are a good indicator that you have not trained for long enough.

maryamag85 commented 3 years ago

I am using the default pretrained resnet101 and the scores are still too small.

Steinwang commented 2 years ago

> @silicon2006 we output object confidence times class confidence, where the objectness is trained against the predicted IoU.
>
> I think YOLOv3 and YOLOv4 output something a bit simpler (and less performant).

So how can I disable this behavior and output the objectness confidence directly, without scaling by IoU and class confidence?

glenn-jocher commented 2 years ago

@Steinwang objectness targets can be modified in the loss function: https://github.com/ultralytics/yolov5/blob/71621df87589faea19ba4c4098bb68e73201f30c/utils/loss.py#L144-L152
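The idea behind that modification can be sketched in plain Python. This is a paraphrase of the target logic, not a drop-in patch for loss.py: YOLOv5 blends the objectness target between a constant 1.0 and the predicted IoU via a ratio (exposed as `gr` in the codebase), and driving that ratio to zero trains objectness toward a plain 1.0 with no IoU scaling.

```python
# Paraphrased sketch of the objectness-target blend (not a drop-in patch).
# gr = 1.0 is the YOLOv5 default: the target tracks the clamped predicted IoU.
# gr = 0.0 ignores IoU entirely and trains objectness toward a constant 1.0,
# which is the behavior Steinwang asks about.
def objectness_target(iou: float, gr: float = 1.0) -> float:
    """Target = (1 - gr) + gr * clamp(iou, 0): a blend of 1.0 and the IoU."""
    return (1.0 - gr) + gr * max(iou, 0.0)

print(objectness_target(0.7))          # 0.7 -- default: target tracks IoU
print(objectness_target(0.7, gr=0.0))  # 1.0 -- plain objectness target
```

With the constant target, trained objectness scores saturate near 1.0 like darknet YOLOv3/v4, at some cost in how well the score reflects localization quality.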