Closed · janedoesrepo closed this issue 3 years ago
Hi @janedoesrepo,
In this repository, there is no check that ignores the objects marked as difficult in VOC. However, such objects can easily be identified and left out.
Here the XML files are parsed (VOC and ImageNet use the same format). You could add an if there to check whether the tag <difficult>1</difficult> is present. If it is, just use a continue, so that object is not added to the list ret and is therefore discarded.
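A minimal sketch of that check, assuming a standalone parser (the function name and the returned structure are illustrative, not the repository's actual code):

```python
import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_path):
    """Parse a VOC-style XML annotation, skipping objects tagged as difficult."""
    ret = []
    root = ET.parse(xml_path).getroot()
    for obj in root.findall('object'):
        # Skip ground-truth objects flagged with <difficult>1</difficult>.
        difficult = obj.find('difficult')
        if difficult is not None and difficult.text.strip() == '1':
            continue
        name = obj.find('name').text
        bndbox = obj.find('bndbox')
        box = [int(float(bndbox.find(tag).text))
               for tag in ('xmin', 'ymin', 'xmax', 'ymax')]
        ret.append((name, box))
    return ret
```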
I hope it helps. Let me know if that worked for you.
Hey @rafaelpadilla,
thank you, that was a quick fix. However, the mAP I get with your toolkit is 52.89%, while both the official VOC devkit and this code give me 55.04% mAP.
This is just fyi. My problem with your code was fixed!
Hi,
This is strange. You should get the same results.
Did you consider the "difficult" objects in both VOC and our tool?
If possible, could you share your ground-truth and detection annotations (XML files) so I can investigate the reason for this difference?
@janedoesrepo ,
I ran further tests and the results are exactly the same as the official MATLAB version's.
The only difference is that the MATLAB code and the referred tool ignore objects tagged as "difficult", while our tool makes no distinction for that tag. :)
Hi @rafaelpadilla, sorry, I didn't get notified. I can share the files and results with you later today.
Maybe it’s a mistake on my side.
Maybe the exclusion of difficult objects, as I implemented it in your version, isn't exactly the same as how the devkit handles it. However, the official challenge does/did not consider these objects, so in my opinion something like this is missing from your tool.
Edit: I excluded difficult objects in my experiments.
Regards
@janedoesrepo ,
Thanks for the response. Our tests run different types of detections and files to assert that the results are identical to the official tools', so when a report like this comes in, I like to investigate where the problem is.
So, please, send me your ground-truth and detections files, so I can see what's wrong. :)
Thank you very much for your comments.
Please find the detection files attached. They give 55.04% mAP on the VOC2007 test data from https://pjreddie.com/projects/pascal-voc-dataset-mirror/ with VOCDevkit 2012 and all-point AP.
annotations_voc2007_test.zip detection_files_padilla.zip detection_files_vocdevkit.zip
Update: all coordinates are absolute values in left, top, right, bottom order.
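To make that convention concrete, here is a small sketch of writing one detection file in absolute left-top-right-bottom order; the column layout, the file name, and the values are assumptions for illustration, not taken from the attached files:

```python
# Hypothetical detections: (class_name, confidence, left, top, right, bottom)
# in absolute pixel coordinates, as described in the comment above.
detections = [("dog", 0.87, 45, 120, 310, 420),
              ("person", 0.62, 200, 80, 260, 300)]

# Write one line per detection for a single image (file name is illustrative).
with open("000001.txt", "w") as f:
    for cls, conf, left, top, right, bottom in detections:
        f.write(f"{cls} {conf:.4f} {left} {top} {right} {bottom}\n")
```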
Thanks for the great tool! However, when I evaluate on VOC, it includes objects marked as difficult in the evaluation. That leads to an unwanted decrease in mAP. Is there a built-in way to exclude these objects from the ground truth and the predictions?