Closed — glenn-jocher closed this issue 3 years ago
I suppose the way to read this is that at confidence 0.90 there is very little confusion between classes. At confidence 0.25 there is greater confusion, but not necessarily between classes; it is more between detections and background. Then at confidence 0.001 the vast majority of detections are FPs (and actually background).
Oddly, the person-background FN cell stays the same throughout, at around 0.40. I'm not sure what that indicates.
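For anyone puzzled by the background row and column: here is a minimal toy sketch (a hypothetical helper, not the actual YOLOv5 `ConfusionMatrix` code) of how a detection confusion matrix gains a "background" entry, and why lowering the confidence threshold inflates the background-FP column while unmatched ground truths fill the background-FN row:

```python
import numpy as np

def toy_confusion_matrix(matches, unmatched_gt, num_classes, conf_thres=0.25):
    """Build a (num_classes+1) x (num_classes+1) matrix; last index = background.

    matches: list of (pred_class, confidence, gt_class); gt_class is None for a
             detection with no matching ground-truth box (a background FP).
    unmatched_gt: ground-truth classes with no detection at all (background FN).
    """
    bg = num_classes                      # extra row/column for "background"
    m = np.zeros((num_classes + 1, num_classes + 1), dtype=int)
    for pred, conf, gt in matches:
        if conf < conf_thres:             # detection suppressed by threshold;
            if gt is not None:            # its GT box becomes a background FN
                m[bg, gt] += 1
            continue
        m[pred, gt if gt is not None else bg] += 1
    for gt in unmatched_gt:               # GT boxes the model never detected
        m[bg, gt] += 1
    return m

# Two classes: one correct detection, one low-conf FP that gets suppressed,
# one surviving FP on background, one missed GT of class 1.
m = toy_confusion_matrix([(0, 0.9, 0), (1, 0.1, None), (0, 0.3, None)],
                         unmatched_gt=[1], num_classes=2)
```

Dropping `conf_thres` toward 0.001 lets more low-confidence detections through, most of which match no ground truth, so the predicted-class-vs-background column grows; that matches the pattern above.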
Hi,
Firstly, sorry for the late response.
The confidence threshold effects are expected, I think. At conf 0.001 there should be a lot of false alarms, and at conf 0.90 there should be few to no false alarms.
For the second question, I currently do not have an answer. It may be caused by the number of objects in the person class being significantly higher than in the other classes of the Pascal VOC dataset.
Please feel free to ask any further questions.
@kaanakan thanks! The confusion matrix is integrated now and automatically produced at the end of YOLOv5 training. It seems to be working well.
I have implemented YOLO on my own dataset and when I plot the confusion matrix, it displays an additional class named 'background'. I didn't include this class while training. Can someone please explain this?
@arroobamaqsood
> I have implemented YOLO on my own dataset and when I plot the confusion matrix, it displays an additional class named 'background'. I didn't include this class while training. Can someone please explain this?
My two cents while learning ML: the objectness loss is basically a binary cross-entropy that differentiates between an object and the background. This helps with localising and counting the objects in an image. Without the background category being learned, running detection on an image would reduce to plain image classification, which is not what YOLO does.
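To make the objectness idea concrete, here is a small self-contained sketch of binary cross-entropy on a single objectness score (pure Python for illustration; real frameworks use a numerically stable logits-based version such as PyTorch's `BCEWithLogitsLoss`):

```python
import math

def bce(p, y):
    """Binary cross-entropy for one prediction.

    p: predicted objectness probability in (0, 1)
    y: 1 if the anchor/cell contains an object, 0 if it is background
    """
    eps = 1e-12  # guard against log(0)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

# Confidently right is cheap, confidently wrong is expensive:
loss_good = bce(0.9, 1)  # object present, high score -> small loss
loss_bad  = bce(0.9, 0)  # background, high score -> large loss
```

The loss pushes objectness up on cells containing objects and down on background cells, which is exactly the object-vs-background separation discussed above.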
For the matrices above, how do you calculate the accuracy of the model from the confusion matrix?
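If it helps, classification-style accuracy is just the diagonal over the total, and per-class precision/recall come from row and column sums. Here is a sketch with a made-up 3-class matrix (rows = predicted, columns = actual, following the YOLOv5 plot convention); note that for detectors the background row/column makes plain accuracy less meaningful than mAP:

```python
import numpy as np

# Hypothetical confusion matrix for illustration only
cm = np.array([[50,  2,  3],
               [ 4, 40,  1],
               [ 6,  8, 36]])

accuracy = np.trace(cm) / cm.sum()        # correct predictions / all predictions

precision = np.diag(cm) / cm.sum(axis=1)  # per predicted class (row-wise)
recall    = np.diag(cm) / cm.sum(axis=0)  # per actual class (column-wise)
```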
Hi @kaanakan, I am confused about the person-background FN cell staying around 0.40. How can I reduce the background FN confusion in the person class? Does it affect the model results? Could you explain in more detail?
Hi @glenn-jocher. May I confirm whether you used val.py to generate that confusion matrix and tweaked the --conf-thres parameter to 0.001, 0.25, and 0.90? I think I have high background false positives (FP) in my Class 2 despite having pretty high true negatives for it, so I thought something might be wrong. Thus, I am planning to replicate what you did on my end. Here is the image of the confusion matrix that was generated after training. I have two (2) classes.
I hope for your kind response. Thank you.
Hi, I have this confusion matrix implementation integrated into our YOLOv5 PR here: https://github.com/ultralytics/yolov5/pull/1474
I noticed during testing that the results depend significantly on the confidence threshold used. I ran an experiment across 3 different common confidence thresholds, but I'm not sure what conclusion to draw from the results.
(confusion matrix plots at conf 0.001, conf 0.25, and conf 0.90)