rafaelpadilla / review_object_detection_metrics

Object Detection Metrics. 14 object detection metrics: mean Average Precision (mAP), Average Recall (AR), Spatio-Temporal Tube Average Precision (STT-AP). This project supports different bounding box formats as in COCO, PASCAL, Imagenet, etc.

Scores are all zero #68

Closed: youonlytrackonce closed this issue 2 years ago

youonlytrackonce commented 3 years ago

Hello,

let me describe the problem we are facing:

The task is multi-object tracking, and we have annotated videos. We can upload a video and its annotations in MOTChallenge format and view them in the CVAT tool. Then we can export the annotations in YOLO format: yolo_gt.zip

We would like to measure the AP, precision, and recall scores of our detector (CenterNet) on this ground-truth dataset. We created the detection annotations in YOLO format: yolo_det.zip
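For reference, this is our understanding of the YOLO-style layout (one `.txt` file per image, ground-truth coordinates relative to the image size, and detections carrying an extra confidence value; the exact convention is the one documented in the repository README):

```
# ground truth: <class_id> <x_center> <y_center> <width> <height>
0 0.512 0.433 0.120 0.250

# detection: <class_id> <confidence> <x_center> <y_center> <width> <height>
0 0.87 0.505 0.440 0.118 0.246
```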

As stated in the title, all the scores come out as zero. Could you please help us? Thank you.

youonlytrackonce commented 3 years ago

Hello again,

We have tracked down the problem.

The accepted image extensions are listed in details.py (`self.dir_images, extensions=['jpg', 'jpge', 'png', 'bmp', 'tiff', 'tif'])`) and in run_ui.py (`self.dir_images_gt, extensions=['jpg', 'jpge', 'png', 'bmp', 'tiff', 'tif'])`). However, the CVAT tool extracts images with the uppercase '.PNG' extension, which your tool does not recognize.

Could you please update your code to accept image extensions in a case-insensitive way (uppercase or lowercase)? Thank you.
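A case-insensitive check could be as simple as comparing lowercased suffixes, something like this sketch (`list_image_files` is a hypothetical helper, not the project's actual function):

```python
from pathlib import Path

def list_image_files(directory, extensions=('jpg', 'jpeg', 'png', 'bmp', 'tiff', 'tif')):
    # Normalize the allowed extensions once, then compare each file's
    # suffix in lowercase, so '.png' and '.PNG' are treated the same.
    allowed = {'.' + ext.lower() for ext in extensions}
    return sorted(p for p in Path(directory).iterdir()
                  if p.is_file() and p.suffix.lower() in allowed)
```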

rafaelpadilla commented 3 years ago

Hi @youonlytrackonce ,

Thank you for your message.

You pointed out something very interesting. I will apply this modification in the next couple of days and will inform you here.

youonlytrackonce commented 3 years ago

Hello again,

Another issue: we can see the scores in the pop-up window, but the precision-recall curves, etc. are not saved in the output directory we pointed to.

thank you,

rafaelpadilla commented 3 years ago

Hi @youonlytrackonce ,

I updated the code to solve the problem with the CVAT extensions. Could you please check whether it is working as expected? :)

About the other issue: I downloaded your detection and ground-truth files and created dummy image files so I could run the metric evaluator. The plots are actually being saved in the output directory specified in the interface. Could you check that again, please?

Best regards! :)

youonlytrackonce commented 3 years ago

Hello,

Thank you for the correction. I will check it as soon as possible.

I can get the PASCAL VOC plots, but I cannot get a text file that reflects the scores shown in the pop-up window.
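In the meantime we dump the numbers ourselves, roughly like this (just a sketch, assuming the evaluator returns a dict of per-class metrics; all names here are hypothetical):

```python
def save_scores(results, out_path='scores.txt'):
    # 'results' is assumed to look like {'car': {'AP': 0.71, ...}, ...}.
    with open(out_path, 'w') as f:
        for class_name, metrics in results.items():
            values = ', '.join(f'{name}: {value:.4f}' for name, value in metrics.items())
            f.write(f'{class_name}: {values}\n')
```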

ooobelix commented 2 years ago

Hello,

I have the same issue!

I'm running the tool from commit 1ce6b8192aca1e102b3843cc12c01619502a7e11.

I specify the ground-truth annotations in PascalVOC format along with the images, and the ground-truth statistics work well. I specify the detection annotations with the correct coordinate format, and the detection statistics work well. But when I specify the output metrics, the results are all 0 and the generated images don't show any statistics.

Please have a look at the attached files.tar.gz to see if there is any problem.

Thanks for your work and your help!

github-actions[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.