mks0601 / I2L-MeshNet_RELEASE

Official PyTorch implementation of "I2L-MeshNet: Image-to-Lixel Prediction Network for Accurate 3D Human Pose and Mesh Estimation from a Single RGB Image", ECCV 2020

Questions about testing on Human3.6M #63

Open · booker-max opened this issue 3 years ago

booker-max commented 3 years ago

Hello, I would like to ask: was the pose_coord_gt_h36m you provided for the Human3.6M test set obtained by using SMPLify-X to estimate the SMPL parameters? [screenshot attached]

mks0601 commented 3 years ago

No. As you can see here, joint_cam comes from the Human3.6M annotation file, which is provided by the dataset's motion capture.
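
For anyone following along, here is a minimal sketch of how camera-space 3D joints (joint_cam, in millimeters) might be read from a Human3.6M-style annotation file. The file name and JSON layout below are assumptions for illustration, not the repo's exact loader.

```python
import json
import numpy as np

def load_joint_cam(ann_path):
    """Load per-image camera-space 3D joint coordinates (mm) from an annotation JSON."""
    with open(ann_path) as f:
        ann = json.load(f)  # assumed layout: {image_id: [[x, y, z], ...], ...}
    return {int(img_id): np.array(joints, dtype=np.float32)
            for img_id, joints in ann.items()}

# usage with a hypothetical file name:
# joint_cam = load_joint_cam('Human36M_subject9_joint_3d.json')
# joint_cam[1234].shape  ->  (num_joints, 3)
```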

booker-max commented 3 years ago

OK, thank you!

booker-max commented 3 years ago

I’m sorry to bother you again. I saw this sentence in your paper: "Alternatively, we obtain groundtruth SMPL parameters by applying SMPLify-X on the groundtruth 3D joint coordinates of Human3.6M dataset." I have a few questions:

  1. Did you use these groundtruth SMPL parameters?

  2. How are the groundtruth 3D joint coordinates used when computing the SMPL parameters? (A rough sketch of such a fitting loop follows below.)

  3. Could you share your code for computing the SMPL parameters? Thank you very much.
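
Regarding questions 2 and 3, the following is a conceptual sketch, not the authors' actual fitting code, of how SMPL parameters can be recovered from groundtruth 3D joints by optimization in the spirit of SMPLify-X. It assumes the smplx package, an SMPL model file at model_path, and a precomputed index mapping (joint_map) from the SMPL output joints to the Human3.6M joint set; the loss weights and iteration count are illustrative only.

```python
import torch
import smplx

def fit_smpl_to_joints(gt_joints, model_path, joint_map, iters=500, lr=1e-2):
    """gt_joints: (J, 3) tensor of target 3D joints in meters (camera space)."""
    smpl = smplx.create(model_path, model_type='smpl')      # load SMPL body model
    betas = torch.zeros(1, 10, requires_grad=True)          # shape coefficients
    body_pose = torch.zeros(1, 69, requires_grad=True)      # 23 joints x 3 (axis-angle)
    global_orient = torch.zeros(1, 3, requires_grad=True)   # root rotation
    transl = torch.zeros(1, 3, requires_grad=True)          # root translation
    optim = torch.optim.Adam([betas, body_pose, global_orient, transl], lr=lr)

    for _ in range(iters):
        optim.zero_grad()
        out = smpl(betas=betas, body_pose=body_pose,
                   global_orient=global_orient, transl=transl)
        pred = out.joints[0, joint_map]             # pick joints in H3.6M order (assumed mapping)
        loss = ((pred - gt_joints) ** 2).sum()      # 3D joint fitting term
        loss = loss + 1e-3 * (betas ** 2).sum()     # simple shape regularizer
        loss.backward()
        optim.step()

    return (betas.detach(), body_pose.detach(),
            global_orient.detach(), transl.detach())
```

Note that the full SMPLify-X pipeline also includes pose and shape priors and (when fitting to images) 2D reprojection terms; fitting directly to groundtruth 3D joints, as in the sentence quoted above, allows a much simpler objective like the one sketched here.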