Closed — benoitdes closed this issue 2 years ago
@benoitdes,
I think the confusion matrix feature is much desired and other users can benefit from it.
But as the current repository aims to support the metrics in a user interface, I believe we can first implement it in the old repository as an independent script in the root directory, such as confusion_matrix.py.
We just have to think about some points:
1) We need to properly reference the computation of the confusion matrix. Is the way it is implemented described in a paper or published somewhere else?
2) The confidence and IoU thresholds in steps 2 and 3 could be defined by the user, right?
3) We have to guarantee that our results are exactly identical to some benchmark on different datasets. Do you know whether tools such as TensorFlow or PyTorch already implement the confusion matrix for object detection?
Please open a PR here, so we can move on with the development of the confusion matrix. :)
Thank you for your contribution.
We need to properly reference the computation of the confusion matrix. Is the way it is implemented described in a paper or published somewhere else?
--> The very first implementation was the one in TensorFlow, then the one in plain Python, and finally the one in PyTorch (the 2nd is inspired by the 1st and the 3rd by the 2nd). I did not find any papers referring to this metric.
The confidence and IoU thresholds in steps 2 and 3 could be defined by the user, right?
--> yes
We have to guarantee that our results are exactly identical to some benchmark on different datasets. Do you know whether tools such as TensorFlow or PyTorch already implement the confusion matrix for object detection?
--> cf. the implementations mentioned above
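A minimal sketch of how such an object-detection confusion matrix could be computed, based on the steps discussed above (filter detections by a confidence threshold, then match detections to ground truths by IoU). All names, the (x1, y1, x2, y2) box format, and the greedy best-IoU matching are assumptions for illustration, not the repository's actual code:

```python
import numpy as np

def box_iou(box_a, box_b):
    # Hypothetical helper: IoU of two (x1, y1, x2, y2) boxes.
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def confusion_matrix(gts, dets, num_classes, conf_thr=0.5, iou_thr=0.5):
    """gts/dets are dicts with 'box', 'class' (and 'score' for dets).
    The extra row/column at index num_classes stands for 'background',
    i.e. false negatives (row) and false positives (column)."""
    cm = np.zeros((num_classes + 1, num_classes + 1), dtype=int)
    # Step 2: keep only detections above the user-defined confidence threshold.
    dets = [d for d in dets if d["score"] >= conf_thr]
    matched_gt, matched_det = set(), set()
    # Step 3: greedily match each detection to the unmatched ground truth
    # with the highest IoU, provided it reaches the IoU threshold.
    for i, det in enumerate(dets):
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            if j in matched_gt:
                continue
            iou = box_iou(det["box"], gt["box"])
            if iou >= iou_thr and iou > best_iou:
                best_iou, best_j = iou, j
        if best_j >= 0:
            matched_gt.add(best_j)
            matched_det.add(i)
            cm[gts[best_j]["class"], det["class"]] += 1
    # Unmatched ground truths are false negatives (column = background).
    for j, gt in enumerate(gts):
        if j not in matched_gt:
            cm[gt["class"], num_classes] += 1
    # Unmatched detections are false positives (row = background).
    for i, det in enumerate(dets):
        if i not in matched_det:
            cm[num_classes, det["class"]] += 1
    return cm
```

Greedy best-IoU matching is one of several possible assignment strategies; the results should still be checked against the TensorFlow implementation mentioned above to satisfy point 3.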
I'll open a PR in the other repo with the following information.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hello @rafaelpadilla,
first, thanks for this repo; it's been very useful! However, for my work I needed to compute confusion matrices to better understand why my model was not performing well on some classes. I saw that in the "old repo" there were two issues related to that (https://github.com/rafaelpadilla/Object-Detection-Metrics/issues?q=confusion+matrix) but no implementation was done. Given that I implemented a function to compute the confusion matrix based on your repo, would you be interested if I implemented this as a new feature in this repo? I based the logic of my function on this article: https://towardsdatascience.com/confusion-matrix-in-object-detection-with-tensorflow-b9640a927285
See below for the logic of the algorithm.
Let me know,
Benoit