This is an official implementation of our CVPR 2021 paper "Deep Dual Consecutive Network for Human Pose Estimation" (https://openaccess.thecvf.com/content/CVPR2021/papers/Liu_Deep_Dual_Consecutive_Network_for_Human_Pose_Estimation_CVPR_2021_paper.pdf)
The comparison with PoseWarper might not be fair. #32
For some unexplained reason, the "neck" joint that PoseWarper uses in training is not the ground truth but the average of the left and right shoulder keypoints. This lowers PoseWarper's results by about 2-3 mAP on the PoseTrack2017 and PoseTrack2018 validation and test sets. In other words, PoseWarper's "true" accuracy would be 2-3 mAP higher than what is reported in their paper.
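To make the issue concrete, here is a minimal sketch (my own illustration, not PoseWarper's actual code, and all keypoint values are made up) of how averaging the shoulders differs from using the annotated neck:

```python
import numpy as np

def neck_from_shoulders(left_shoulder, right_shoulder):
    """Approximate the neck as the midpoint of the two shoulder keypoints,
    which is reportedly what PoseWarper's training target does."""
    return (np.asarray(left_shoulder, dtype=float) +
            np.asarray(right_shoulder, dtype=float)) / 2.0

# Hypothetical (x, y) keypoints for one person, chosen only for illustration.
left_shoulder = (120.0, 80.0)
right_shoulder = (160.0, 84.0)
approx_neck = neck_from_shoulders(left_shoulder, right_shoulder)  # (140.0, 82.0)

gt_neck = np.array([141.0, 70.0])  # hypothetical annotated ground-truth neck

# The shoulder midpoint sits noticeably lower than the annotated neck,
# which is consistent with a drop in the "Head" mAP.
offset = np.linalg.norm(approx_neck - gt_neck)
print(approx_neck, offset)
```

Since the head segment used in the PCKh-style evaluation is anchored at the neck, training against a systematically shifted neck target plausibly explains a consistent mAP gap.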
In your paper, you compare your results against PoseWarper's "faulty" results. From your code, we can see that you correctly use the "neck" ground truth in training. That is probably why your "Head" results are much better on the PoseTrack validation and test sets.
I'm curious: were you really unaware of the "neck" problem in PoseWarper?