ouceduxzk opened this issue 6 years ago
Your result is similar to #11. I think something went wrong during testing. Your test set contains 40,504 unique images, but we evaluate on the COCO minival dataset, which contains 5,000 images, and the provided detection boxes match that split. You are probably evaluating with the wrong human detections for your image set.
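If only the full val2014 keypoint annotations are at hand, one workaround is to filter them down to the minival image-id list before evaluating. The sketch below shows the idea with stdlib JSON handling and toy inline data; the field names follow the COCO annotation format, but the surrounding file names (e.g. person_keypoints_val2014.json, the minival id list) are assumptions about this repo's setup, not code from it.

```python
import json  # used when loading/saving real annotation files


def filter_keypoints_to_subset(ann, keep_ids):
    """Keep only the images/annotations whose image id is in keep_ids."""
    keep = set(keep_ids)
    return {
        "images": [im for im in ann["images"] if im["id"] in keep],
        "annotations": [a for a in ann["annotations"] if a["image_id"] in keep],
        "categories": ann.get("categories", []),
    }


# Toy stand-in for person_keypoints_val2014.json
# (the real file covers 40,504 images; minival keeps 5,000 of them).
ann = {
    "images": [{"id": 1}, {"id": 2}, {"id": 3}],
    "annotations": [
        {"id": 10, "image_id": 1},
        {"id": 11, "image_id": 2},
        {"id": 12, "image_id": 3},
    ],
    "categories": [{"id": 1, "name": "person"}],
}

# Pretend the minival id list contains images 1 and 3.
minival = filter_keypoints_to_subset(ann, [1, 3])
print(len(minival["images"]), len(minival["annotations"]))  # 2 2
```

With the real files you would load the json with `json.load`, filter, and dump the result, then point the evaluation script at the filtered file.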
Thanks for your quick reply. You are right that my val json file is not the same; I am using person_keypoints_val2014.json. Can you provide those json files? They are no longer available on the official COCO dataset website.
The COCO 2014 minival json and its detection-result json are provided.
Thanks, now it looks normal:
DONE (t=0.37s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.697
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.883
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.770
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.662
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.761
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.764
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.927
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.823
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.715
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.830
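For comparing runs like the one above programmatically, the printed COCOeval summary can be parsed back into a dict keyed by metric, IoU range, area, and maxDets. This is a small stdlib-only sketch of that idea (the regex is an assumption matched to the summary lines shown in this thread, not a utility from pycocotools):

```python
import re

# Matches lines like:
#  Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.697
SUMMARY_RE = re.compile(
    r"Average (?:Precision|Recall)\s+\((AP|AR)\)\s+@\[\s*IoU=([\d.:]+)\s*\|"
    r"\s*area=\s*(\w+)\s*\|\s*maxDets=\s*(\d+)\s*\]\s*=\s*([-\d.]+)"
)


def parse_coco_summary(text):
    """Return {(metric, iou, area, max_dets): value} from COCOeval summary text."""
    out = {}
    for m in SUMMARY_RE.finditer(text):
        metric, iou, area, max_dets, value = m.groups()
        out[(metric, iou, area, int(max_dets))] = float(value)
    return out


summary = """
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.697
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.883
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.764
"""
metrics = parse_coco_summary(summary)
print(metrics[("AP", "0.50:0.95", "all", 20)])  # 0.697
```

This makes it easy to diff two evaluation runs or track the main AP number across experiments.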
First of all, thanks for sharing the work. I quickly ran an AP test with the following results; do you know why it is so low?
I have already added the AP calculation and saved the json file.