xingyizhou / pytorch-pose-hg-3d

PyTorch implementation for 3D human pose estimation
GNU General Public License v3.0

How to map from xy in joint_2d to x'y' in joint_3d_mono? #27

Closed Fangyh09 closed 5 years ago

Fangyh09 commented 6 years ago

Hi, is there any relationship between x, y in pts and x, y in pts_3d_mono?
The model outputs x, y in pts, but the ground truth is in pts_3d_mono, so I want to figure out whether there is a mapping between the two. Thanks! https://github.com/xingyizhou/pytorch-pose-hg-3d/blob/84ad44e7a8aa15307b9a371ce85b3dee8d5ad2dc/src/datasets/h36m.py#L40-L43

xingyizhou commented 6 years ago

Hi, this is a good question. The detailed transformation code from image xy to world xy is here: https://github.com/xingyizhou/pytorch-pose-hg-3d/blob/master/src/utils/eval.py#L70, and there is a more detailed explanation at https://github.com/xingyizhou/pose-hg-3d/issues/3. We also provide the detailed projection formulation in Section 3.2 of https://arxiv.org/pdf/1803.09331.pdf.
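
For reference, here is a minimal sketch of the pinhole back-projection that the linked code and Section 3.2 describe. Everything below (the function name, the intrinsics fx, fy, cx, cy, and the sample values) is illustrative, not the repo's exact API:

```python
import numpy as np

def back_project(u, v, z, fx, fy, cx, cy):
    """Invert the pinhole projection u = fx * X / Z + cx,
    v = fy * Y / Z + cy for an image point (u, v) with known depth z."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Hypothetical intrinsics; the real values come from the H36M camera files.
fx, fy, cx, cy = 1145.0, 1145.0, 512.0, 515.0
print(back_project(600.0, 400.0, 5000.0, fx, fy, cx, cy))
```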

Fangyh09 commented 6 years ago

Thanks @xingyizhou. I want to know whether we can use the extrinsic camera parameters during evaluation.

xingyizhou commented 6 years ago

Yes, you can use the camera parameters to perform the full perspective projection. However, the depth of the root joint must be known to resolve the scale-depth ambiguity.
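
As a sketch of what resolving that ambiguity looks like in practice, assuming the network outputs image coordinates plus root-relative depths (all names here are hypothetical, not the repo's code):

```python
import numpy as np

def to_absolute_3d(pts_2d, rel_depth, root_depth, fx, fy, cx, cy):
    """Lift 2D joints to absolute camera-space 3D.
    pts_2d:     (J, 2) image coordinates
    rel_depth:  (J,)   depth of each joint relative to the root
    root_depth: scalar ground-truth root depth, which supplies the
                missing scale that resolves the scale-depth ambiguity."""
    z = rel_depth + root_depth            # absolute depth per joint
    x = (pts_2d[:, 0] - cx) * z / fx      # pinhole back-projection
    y = (pts_2d[:, 1] - cy) * z / fy
    return np.stack([x, y, z], axis=1)    # (J, 3)
```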

Fangyh09 commented 6 years ago

@xingyizhou
It uses the ground-truth meta[i, root] in L84, so the error is a relative error rather than an absolute error? I think the difference between Protocol I and Protocol II is the Procrustes alignment, not the use of the ground-truth meta[i, root]. Is that right? Thanks~ https://github.com/xingyizhou/pytorch-pose-hg-3d/blob/84ad44e7a8aa15307b9a371ce85b3dee8d5ad2dc/src/utils/eval.py#L84

xingyizhou commented 5 years ago

Hi, sorry for the very delayed reply. I think all evaluations on H36M are based on relative depth. You can check the official H36M ECCV Challenge data and its evaluation script.
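
For illustration, relative-depth evaluation typically means both prediction and ground truth are re-centred on the root joint before measuring MPJPE. A minimal sketch (the function name and default root index are assumptions, not the official script):

```python
import numpy as np

def mpjpe_root_relative(pred, gt, root=0):
    """Root-relative MPJPE in mm: subtract the root joint from both
    (J, 3) poses, then average the per-joint Euclidean distances."""
    pred = pred - pred[root:root + 1]
    gt = gt - gt[root:root + 1]
    return np.linalg.norm(pred - gt, axis=1).mean()
```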