Hi, this is a good question. The detailed transformation code from image xy to world xy is here: https://github.com/xingyizhou/pytorch-pose-hg-3d/blob/master/src/utils/eval.py#L70, and a more detailed explanation is at https://github.com/xingyizhou/pose-hg-3d/issues/3. We also provide the detailed projection formulation in Section 3.2 of https://arxiv.org/pdf/1803.09331.pdf.
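For anyone who doesn't want to chase the links: below is a minimal sketch of that inverse crop transform as it typically appears in hourglass-style codebases. It is not copied from the repo; `center`, `scale`, and `output_res` describe the person crop, and the 200-pixels-per-unit-scale convention is an assumption carried over from MPII-style preprocessing.

```python
import numpy as np

def transform_preds(coords, center, scale, output_res):
    """Map heatmap coordinates back to the original image frame.

    Sketch of the inverse affine transform used in hourglass-style
    pipelines (cf. src/utils/eval.py). Assumes no rotation augmentation
    at test time.
    """
    # Side length of the square crop in original-image pixels
    # (assumed MPII convention: 200 px per unit of `scale`).
    crop_size = scale * 200.0
    # Scale heatmap pixels up to crop pixels, then shift by the crop origin.
    orig = coords / output_res * crop_size
    orig += center - crop_size / 2.0
    return orig
```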
Thanks @xingyizhou. I want to know whether we can use the extrinsic camera parameters during evaluation.
Yes, you can use the camera parameters to perform the full perspective projection. However, the depth of the root joint must be known to resolve the scale-depth ambiguity.
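To make the scale-depth ambiguity concrete, here is a hedged sketch of that projection. `lift_with_root_depth` and `perspective_project` are hypothetical helpers, not functions from the repo; the known ground-truth root depth is what fixes the translation along the optical axis (x/y alignment of the root is assumed handled elsewhere).

```python
import numpy as np

def lift_with_root_depth(pred_rel, root_depth_gt):
    """Turn a (J, 3) root-relative prediction into absolute camera
    coordinates by translating the pose so the root joint sits at the
    known ground-truth depth (hypothetical helper)."""
    pts = pred_rel.copy()
    pts[:, 2] += root_depth_gt
    return pts

def perspective_project(pts_cam, K):
    """Full perspective projection of (J, 3) camera-coordinate points
    with a 3x3 intrinsic matrix K; extrinsics assumed already applied."""
    proj = (K @ pts_cam.T).T           # homogeneous image coordinates
    return proj[:, :2] / proj[:, 2:3]  # divide by depth
```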
@xingyizhou It uses the ground-truth `meta[i, root]` in L84, so the error is a relative error rather than an absolute error? I think the difference between Protocol I and Protocol II is the Procrustes alignment, not the use of the ground-truth `meta[i, root]`. Is that right? Thanks~
https://github.com/xingyizhou/pytorch-pose-hg-3d/blob/84ad44e7a8aa15307b9a371ce85b3dee8d5ad2dc/src/utils/eval.py#L84
Hi, sorry for the very delayed reply. I think all evaluations on H36M are based on relative depth. You can check the official H36M ECCV Challenge data and its evaluation script.
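For reference, a relative-depth evaluation of this kind usually reduces to root-aligned MPJPE. A minimal sketch, not the official challenge script; `root=0` is a hypothetical pelvis index:

```python
import numpy as np

def mpjpe_root_relative(pred, gt, root=0):
    """Mean per-joint position error after root alignment.

    Both (J, 3) poses are translated so their root joints coincide
    before averaging the Euclidean per-joint errors, which is what
    'relative depth' evaluation amounts to.
    """
    pred_rel = pred - pred[root]
    gt_rel = gt - gt[root]
    return np.linalg.norm(pred_rel - gt_rel, axis=1).mean()
```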
Hi, is there any relationship between the x, y in `pts` and the x, y in `pts_3d_mono`? The model outputs x, y in `pts`, but the ground truth is in `pts_3d_mono`, so I want to figure out whether there is a mapping between them. Thanks! https://github.com/xingyizhou/pytorch-pose-hg-3d/blob/84ad44e7a8aa15307b9a371ce85b3dee8d5ad2dc/src/datasets/h36m.py#L40-L43