rafaelpadilla / review_object_detection_metrics

Object Detection Metrics. 14 object detection metrics: mean Average Precision (mAP), Average Recall (AR), Spatio-Temporal Tube Average Precision (STT-AP). This project supports different bounding box formats as in COCO, PASCAL, Imagenet, etc.

Confusion matrix implementation #56

Closed benoitdes closed 2 years ago

benoitdes commented 3 years ago

Hello @rafaelpadilla,

First, thanks for this repo, it's been very useful! However, for my work I needed to compute confusion matrices to better understand why my model was underperforming on some classes. I saw that on the "old repo" there were two issues related to that (https://github.com/rafaelpadilla/Object-Detection-Metrics/issues?q=confusion+matrix), but no implementation was ever merged. I have implemented a function to compute the confusion matrix based on your repo; would you be interested in me contributing it as a new feature here? I based the logic of my function on this article: https://towardsdatascience.com/confusion-matrix-in-object-detection-with-tensorflow-b9640a927285

See below the logic of the algorithm: [screenshot: steps of the confusion matrix algorithm]
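For reference, the matching logic described in the linked article can be sketched roughly as follows. This is a minimal illustration, not the actual implementation from the PR: the function and field names (`confusion_matrix`, `box`, `score`) are hypothetical, boxes are assumed to be `[x1, y1, x2, y2]`, and the last row/column of the matrix stands for "background" (missed ground truths and spurious detections).

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, xb - xa) * max(0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def confusion_matrix(gts, dets, num_classes, conf_thr=0.5, iou_thr=0.5):
    """Rows = ground-truth classes, columns = detected classes.
    Index num_classes is 'background' (missed GT / false detection)."""
    cm = np.zeros((num_classes + 1, num_classes + 1), dtype=int)
    # Step: discard detections below the confidence threshold.
    dets = [d for d in dets if d["score"] >= conf_thr]
    matched = set()
    for gt in gts:
        # Greedily match each GT to the unmatched detection with highest IoU.
        best_iou, best_j = 0.0, None
        for j, det in enumerate(dets):
            if j in matched:
                continue
            ov = iou(gt["box"], det["box"])
            if ov >= iou_thr and ov > best_iou:
                best_iou, best_j = ov, j
        if best_j is not None:
            matched.add(best_j)
            cm[gt["class"], dets[best_j]["class"]] += 1
        else:
            cm[gt["class"], num_classes] += 1  # missed ground truth
    for j, det in enumerate(dets):
        if j not in matched:
            cm[num_classes, det["class"]] += 1  # spurious detection
    return cm
```

Both thresholds are parameters, matching the point raised below that the confidence and IoU thresholds should be user-defined.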

Let me know,

Benoit

rafaelpadilla commented 3 years ago

@benoitdes,

I think the confusion matrix feature is highly desirable and other users could benefit from it.

But as the current repository aims to expose the metrics through a user interface, I believe we should first implement it in the old repository as an independent script in the root directory, e.g. confusion_matrix.py.

We just have to think about some points:

1) We need to reference the computation of the confusion matrix very well. Is the way it is implemented described in a paper or published somewhere else?

2) The confidence and IoU thresholds in steps 2 and 3 could be defined by the user, right?

3) We have to guarantee that our results are exactly identical to benchmarks on different datasets. Do you know whether tools such as TensorFlow or PyTorch already implement the confusion matrix for object detection?

Please open a PR here, so we can move on with the development of the confusion matrix. :)

Thank you for your contribution.

benoitdes commented 3 years ago

> We need to reference the computation of the confusion matrix very well. Is the way it is implemented described in a paper or published somewhere else?

The very first implementation was the one in TensorFlow, then the one in Python, and finally the one in PyTorch (the 2nd is inspired by the 1st, and the 3rd by the 2nd). I did not find any paper referring to this metric.

> The confidence and IoU thresholds in steps 2 and 3 could be defined by the user, right?

Yes.

> We have to guarantee that our results are exactly identical to benchmarks on different datasets. Do you know whether tools such as TensorFlow or PyTorch already implement the confusion matrix for object detection?

See the repo mentioned above.

I'll open a PR in the other repo with this information.

github-actions[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.