jin-s13 / COCO-WholeBody

ECCV2020 paper "Whole-Body Human Pose Estimation in the Wild"

Evaluation result of ground truth is not 1.0 #15

Closed: HaoyiZhu closed this issue 3 years ago

HaoyiZhu commented 3 years ago

Hi! Thanks for your wonderful work!

However, when I tried to use your evaluation code, I found a strange thing. I took the 'annotations' part of your 'coco_wholebody_val_v1.0.json' file, set every element's score to 1.0, and then passed it to the evaluate_mAP function as res_file (the gt_file is still 'coco_wholebody_val_v1.0.json'). The confusing thing is that the resulting AP is not equal to 1.0, and is in fact very low.

Is there anything important that I missed? Here are the results: [screenshots attached]
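
For reference, this is roughly what I did (a minimal sketch; it assumes the result file keeps the same fields as the annotations, and the output file name gt_as_results.json is just an example):

```python
import copy
import json

gt_file = 'coco_wholebody_val_v1.0.json'

with open(gt_file) as f:
    gt = json.load(f)

# Copy every ground-truth annotation and attach a perfect confidence score.
results = []
for ann in gt['annotations']:
    det = copy.deepcopy(ann)
    det['score'] = 1.0
    results.append(det)

res_file = 'gt_as_results.json'
with open(res_file, 'w') as f:
    json.dump(results, f)

# res_file is then passed to evaluate_mAP together with gt_file,
# exactly as described above.
```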

luminxu commented 3 years ago

Thanks for your interest in our work.

According to cocodataset/cocoapi, persons that have no keypoint annotations at all ([0, 0, 0] for every joint) are ignored in the ground truth, but these cases are not filtered out of the prediction results automatically during evaluation. They are usually filtered in the dataloader code when ground-truth detection results are used.

If you directly put all the ground-truth cases into the result file, they become false positives during evaluation. If you instead filter out all cases that have no annotations for a given part from the result file, then that part can achieve 1.0 AP.
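
A minimal sketch of that filtering (assuming the COCO-WholeBody part fields such as 'keypoints', 'foot_kpts', 'face_kpts', 'lefthand_kpts', 'righthand_kpts'; the helper names are only illustrative):

```python
def has_valid_keypoints(kpts):
    """Return True if at least one joint is annotated (visibility flag > 0)."""
    return any(v > 0 for v in kpts[2::3])

def filter_part_results(results, part_key='keypoints'):
    """Keep only entries that carry at least one annotated joint for part_key."""
    return [res for res in results if has_valid_keypoints(res[part_key])]

# e.g. drop all-zero entries per part before writing res_file and computing AP:
# body_results = filter_part_results(results, 'keypoints')
# foot_results = filter_part_results(results, 'foot_kpts')
```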

To solve the problem, we have polished our evaluation tools so that all-zero cases in the prediction results are filtered out before computing AP. With the revised version of our tools, you can now get 1.0 AP even if you directly compare the ground truth with itself.

luminxu commented 3 years ago

closed via #16