nghorbani / moshpp

Motion and Shape Capture from Sparse Markers
201 stars · 30 forks

From human keypoint to SMPL parameters? #2

Closed SizheAn closed 2 years ago

SizheAn commented 2 years ago

Hi,

Great work! I'm wondering if it is possible to reconstruct SMPL model parameters when the human keypoints aren't in the same format as the default one. For example, I'm using an in-house dataset with keypoint coordinates obtained from a Kinect V2, which outputs 25 keypoint coordinates in 3D (https://lisajamhoury.medium.com/understanding-kinect-v2-joints-and-coordinate-system-4f4b90b9df16). How do I specify the correspondence from my keypoints to the mesh? Does this code include any similar functions?

Appreciate it!

MichaelJBlack commented 2 years ago

It is common to train a regressor from 2D or 3D joints to SMPL joints. This is done for OpenPose joints, for example, since they are in different places than SMPL joints. You need some training data for this, and then people typically learn a linear regressor. One way to get the training data would be to carefully fit SMPL to your RGB-D data, e.g. using PROX-D, and then learn the mapping from your joints to SMPL joints.
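Concretely, the linear regressor described above can be sketched as below. All shapes (25 Kinect joints in, 24 SMPL joints out) and function names are illustrative assumptions, not part of this repository; the paired training data would come from the PROX-D fits mentioned above.

```python
import numpy as np

def fit_joint_regressor(kinect_joints, smpl_joints):
    """Fit a linear map (with bias) from flattened Kinect joints to SMPL joints.

    kinect_joints: (N, 25, 3) array of Kinect V2 joint positions.
    smpl_joints:   (N, 24, 3) array of corresponding SMPL joint positions.
    Both shapes are illustrative assumptions, not from the repo.
    """
    n = kinect_joints.shape[0]
    X = kinect_joints.reshape(n, -1)            # (N, 75)
    X = np.hstack([X, np.ones((n, 1))])         # append bias column -> (N, 76)
    Y = smpl_joints.reshape(n, -1)              # (N, 72)
    # Ordinary least squares: W minimizes ||X @ W - Y||^2
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # (76, 72)
    return W

def predict_smpl_joints(W, kinect_joints):
    """Apply the learned regressor to new Kinect keypoints."""
    n = kinect_joints.shape[0]
    X = np.hstack([kinect_joints.reshape(n, -1), np.ones((n, 1))])
    return (X @ W).reshape(n, 24, 3)
```

With enough paired frames (more than the 76 input features), the least-squares fit is well-posed; in practice one would also add regularization and hold out frames to validate the mapping.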

SizheAn commented 2 years ago

> It is common to train a regressor from 2D or 3D joints to SMPL joints. This is done for OpenPose joints for example since they are in different places than SMPL joints. You need some training data for this and then people typically learn a linear regressor. One way to get the training data would be to carefully fit SMPL to your rgb-d data, eg using PROX-D and then learn the mapping from your joints to SMPL joints.

Hey Professor,

Thanks for replying. I enjoy reading your papers a lot!

Back to this question: from what you describe, it sounds like the SMPLify pipeline: RGB image -> 2D keypoints -> 3D keypoints -> SMPL parameters. What if I don't have the RGB-D ground truth? During recording I only save the 3D keypoint coordinates; are there any methods that let me specify those keypoints' correspondence to the mesh vertices?

nghorbani commented 2 years ago

For this specific case you can use an inverse kinematics package, VPoser. There is demo code for converting 3D joint centers and 3D body surface landmarks into SMPL-X bodies.
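For intuition, VPoser's IK engine optimizes the full SMPL-X pose (in a learned latent space) so that the model's joints land on the observed 3D targets. The snippet below is only a toy illustration of that inverse-kinematics idea on a planar two-link chain; the function names and the finite-difference gradient scheme are my own sketch, not the repository's API.

```python
import numpy as np

def forward(thetas, lengths):
    """End-effector position of a planar kinematic chain.

    thetas: relative joint angles (radians); lengths: link lengths.
    """
    angles = np.cumsum(thetas)  # absolute angle of each link
    return np.array([np.sum(lengths * np.cos(angles)),
                     np.sum(lengths * np.sin(angles))])

def ik_fit(target, lengths, iters=5000, lr=0.02, eps=1e-5):
    """Fit joint angles so the end effector reaches `target`.

    Gradient descent on the squared distance, with a numerical
    gradient; real IK engines use analytic or autodiff gradients.
    """
    thetas = np.zeros(len(lengths))
    for _ in range(iters):
        grad = np.zeros_like(thetas)
        for i in range(len(thetas)):
            t_plus, t_minus = thetas.copy(), thetas.copy()
            t_plus[i] += eps
            t_minus[i] -= eps
            f_plus = np.sum((forward(t_plus, lengths) - target) ** 2)
            f_minus = np.sum((forward(t_minus, lengths) - target) ** 2)
            grad[i] = (f_plus - f_minus) / (2 * eps)
        thetas -= lr * grad
    return thetas
```

The VPoser demos do the same kind of fitting, but over SMPL-X pose parameters and with a learned body-pose prior keeping the solution plausible.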

SherlockHarris commented 8 months ago

> It is common to train a regressor from 2D or 3D joints to SMPL joints. This is done for OpenPose joints for example since they are in different places than SMPL joints. You need some training data for this and then people typically learn a linear regressor. One way to get the training data would be to carefully fit SMPL to your rgb-d data, eg using PROX-D and then learn the mapping from your joints to SMPL joints.
>
> Hey Professor,
>
> Thanks for replying. I enjoy reading your papers a lot!
>
> Back to this question, from what you describe it sounds like simplify: RGB image -> 2D keypoints -> 3D keypoints -> SMPL parameters. What if I don't have the RGB-D ground truth? When recording I only save the 3D keypoints coordinates, are there any methods that I can specify those keypoints' correspondence to the mesh vertices?

Hello, I would like to know whether you have successfully completed this work. I now need to do similar work with data collected from a Kinect. Here is my email: xsherlockharris@nwafu.edu.cn