Closed wangxuanhan closed 6 years ago
The COCO keypoint evaluation is done at the instance level and only the ground-truth visible keypoints for each person are evaluated. Hence there's no need to threshold away the predictions. I suggest reading http://cocodataset.org/#keypoints-eval in detail.
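Concretely, here is a hedged sketch of the OKS (Object Keypoint Similarity) computation, loosely following the cocoapi: only keypoints with ground-truth visibility v > 0 enter the average, so a prediction at a joint that is not annotated as visible cannot change the score. The `sigmas`, coordinates, and `area` below are made-up illustration values, not real COCO data.

```python
import numpy as np

def oks(pred_xy, gt_xy, gt_vis, area, sigmas):
    """Object Keypoint Similarity sketch: only keypoints with v > 0 contribute."""
    d2 = np.sum((pred_xy - gt_xy) ** 2, axis=1)   # squared distance per keypoint
    k2 = (2 * sigmas) ** 2                        # per-keypoint falloff constants
    e = d2 / (2 * area * k2 + np.spacing(1))
    vis = gt_vis > 0
    if not vis.any():
        return 0.0
    return float(np.mean(np.exp(-e[vis])))        # invisible joints are ignored

# Toy example with 3 keypoints: the 3rd is invisible (v=0), so even a
# wildly wrong prediction for it does not change the score.
sigmas = np.array([0.026, 0.025, 0.025])
gt = np.array([[10.0, 10.0], [20.0, 20.0], [30.0, 30.0]])
vis = np.array([2, 2, 0])
pred_good = gt.copy()
pred_bad_third = gt.copy()
pred_bad_third[2] = [999.0, 999.0]

s1 = oks(pred_good, gt, vis, area=100.0, sigmas=sigmas)
s2 = oks(pred_bad_third, gt, vis, area=100.0, sigmas=sigmas)
print(s1 == s2)  # True: the invisible 3rd keypoint never enters the sum
```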
The thresholding for visualization is done on the logits (the softmax inputs), which is why the number looks odd (though you should think of the 2 as a floating-point value in logit space).
I hope these brief answers clarify your question.
@rbgirshick Hi, I used the code to implement keypoint detection and ran into some questions while reviewing the keypoint evaluation code.
The function `heatmaps_to_keypoints` in lib/utils/keypoints.py transforms all heatmaps (e.g., 17 in COCO) into coordinates and returns the results without any refinement (i.e., thresholding). That is to say, it generates predictions for all keypoints even if a particular joint does not exist. So I wonder whether I am missing some processing detail.
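To make the question concrete, here is a minimal sketch of the heatmap-to-coordinate step (the real Detectron function additionally upsamples each heatmap and maps the peak back into the image ROI): it takes the argmax of every heatmap, so a coordinate is produced for all K keypoints whether or not the joint is actually present.

```python
import numpy as np

def heatmaps_to_xy(heatmaps):
    """heatmaps: K x H x W array. Returns a K x 3 array of (x, y, score),
    one row per keypoint, with no thresholding applied."""
    K, H, W = heatmaps.shape
    flat = heatmaps.reshape(K, -1)
    idx = np.argmax(flat, axis=1)           # flat index of each peak
    ys, xs = np.divmod(idx, W)              # convert back to 2-D coordinates
    scores = flat[np.arange(K), idx]        # peak value, used as the score
    return np.stack([xs, ys, scores], axis=1).astype(float)

hm = np.zeros((2, 4, 4))
hm[0, 1, 2] = 5.0   # keypoint 0 peaks strongly at (x=2, y=1)
hm[1, 3, 0] = 0.1   # keypoint 1 "peaks" weakly at (x=0, y=3)
print(heatmaps_to_xy(hm))
```

Even the near-flat second heatmap yields a coordinate, which is exactly the behavior being asked about.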
The function `vis_one_image` in lib/utils/vis.py is used to visualize all prediction results, and its parameter `kp_thresh` is set to 2 and used to filter keypoints for visualization. I am curious why this threshold is the integer 2. Is there any reason behind it? And why is it not used when evaluating keypoints?