Nicholasli1995 / EvoSkeleton

Official project website for the CVPR 2020 paper (Oral Presentation) "Cascaded deep monocular 3D human pose estimation with evolutionary training data"
https://arxiv.org/abs/2006.07778
MIT License

Location of the hip/pelvis joint? #49

Closed: skumar-ml closed this issue 2 years ago

skumar-ml commented 2 years ago

Does the 2d-to-3d lifting model actually predict a location for the hip/pelvis joint? OR, does the model predict all of the other 16 joints in relation to the hip joint?

It seems like when we run UnNormalizeData in examples/inference.py, the first joint location (which corresponds to the hip) is always [0,0,0] because we are choosing to ignore that joint in stats['dim_ignore_3d']. Is there a way to avoid setting the hip to (0,0,0)?

In my use case, I would like to avoid "fixing" or "pinning" a joint to the origin (or any other arbitrary point) for rendering purposes.

sunmengnan commented 2 years ago

@sk1840939 how about transferring the depth points of all the joints into real-world coordinates and then plotting them in a color map?

Nicholasli1995 commented 2 years ago

> Does the 2d-to-3d lifting model actually predict a location for the hip/pelvis joint? OR, does the model predict all of the other 16 joints in relation to the hip joint?
>
> It seems like when we run UnNormalizeData in examples/inference.py, the first joint location (which corresponds to the hip) is always [0,0,0] because we are choosing to ignore that joint in stats['dim_ignore_3d']. Is there a way to avoid setting the hip to (0,0,0)?
>
> In my use case, I would like to avoid "fixing" or "pinning" a joint to the origin (or any other arbitrary point) for rendering purposes.

The prediction is a 3D pose relative to the hip joint and does not include the subject's location. If you want to plot the trajectory in 3D, you may consider recording the root location in 3D. Then you can add the prediction back to the root location.
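
For illustration, here is a minimal NumPy sketch of that idea (the array names, shapes, and the assumption that the lifter output has been reshaped to a 17x3 root-relative pose per frame are placeholders, not the repo's API):

```python
import numpy as np

# Hypothetical data: root-relative poses from the lifter, reshaped to
# (num_frames, 17, 3), with the hip (joint 0) pinned to the origin.
relative_poses = np.random.randn(100, 17, 3)
relative_poses[:, 0, :] = 0.0

# Root (hip) locations recorded separately in the same coordinate
# system, shape (num_frames, 3).
root_trajectory = np.random.randn(100, 3)

# Add the root location back to every joint to recover absolute poses,
# which can then be plotted as a 3D trajectory.
absolute_poses = relative_poses + root_trajectory[:, None, :]
```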

skumar-ml commented 2 years ago

Thank you for your reply!

> If you want to plot the trajectory in 3D, you may consider recording the root location in 3D. Then you can add the prediction back to the root location.

Can you expand on "recording the root location in 3D"? Is this information that the model already captures in the output from the cascaded-lifter model? Or, is this information I am responsible for generating?

sunmengnan commented 2 years ago

> Thank you for your reply!
>
> > If you want to plot the trajectory in 3D, you may consider recording the root location in 3D. Then you can add the prediction back to the root location.
>
> Can you expand on "recording the root location in 3D"? Is this information that the model already captures in the output from the cascaded-lifter model? Or, is this information I am responsible for generating?

I guess it's the absolute real-world coordinate of the hip, so you only need to add or subtract the relative positions of the other 16 joints after the network inference.

Nicholasli1995 commented 2 years ago

> Thank you for your reply!
>
> > If you want to plot the trajectory in 3D, you may consider recording the root location in 3D. Then you can add the prediction back to the root location.
>
> Can you expand on "recording the root location in 3D"? Is this information that the model already captures in the output from the cascaded-lifter model? Or, is this information I am responsible for generating?

This information is generated in the prepare_data_dict method, where the root location in the camera coordinate system is available:
https://github.com/Nicholasli1995/EvoSkeleton/blob/b2b355f4c1fa842709f100d931189ce80008f6ef/libs/dataset/h36m/data_utils.py#L416
You can keep this information and use it when you need to visualize the 3D trajectory.
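
As a hedged sketch of keeping that information around (the function and variable names below are placeholders, not the actual fields produced by prepare_data_dict; it only assumes absolute camera-frame poses of shape (N, 17, 3) are available before root-centering):

```python
import numpy as np

def split_root_and_relative(poses_cam):
    """Split absolute camera-frame poses of shape (N, 17, 3) into
    per-frame root (hip) locations and root-relative poses, mirroring
    the root-centering applied during data preparation."""
    roots = poses_cam[:, 0, :].copy()           # (N, 3) hip locations
    relative = poses_cam - roots[:, None, :]    # hip moved to the origin
    return roots, relative

def restore_absolute(relative, roots):
    """Add recorded root locations back to root-relative predictions to
    recover absolute poses for 3D trajectory visualization."""
    return relative + roots[:, None, :]
```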

skumar-ml commented 2 years ago

Thank you! That should solve my issue. Closing the issue, but I will reopen it if there are any related problems.