Closed sunwonlikeyou closed 3 years ago
Could you provide your code?
These are the two ways I tried. :) But neither matched fitted_3d_pose.
1)
smpl_pose = torch.tensor(np.array(smpl_pose), dtype=torch.float32).reshape(1, -1)  # pose parameter
global_rot = smpl_pose[:, :3]  # take the global rotation before slicing it off
smpl_pose = smpl_pose[:, 3:]   # remaining body pose
betas = torch.tensor(betas, dtype=torch.float32)
trans = torch.tensor(items['trans'], dtype=torch.float32)
output = model(betas=betas, body_pose=smpl_pose, global_orient=global_rot, transl=trans, return_verts=True)
smpl_joints = output.joints
2)
smpl_pose = torch.tensor(np.array(smpl_pose), dtype=torch.float32).reshape(1, -1)
global_rot = smpl_pose[:, :3]  # take the global rotation before slicing it off
smpl_pose = smpl_pose[:, 3:]
betas = torch.tensor(betas, dtype=torch.float32)
trans = torch.tensor(items['trans'], dtype=torch.float32)  # translation vector
output = model(betas=betas, body_pose=smpl_pose, global_orient=global_rot, return_verts=True)
smpl_joints = output.joints[0]       # (1, num_joints, 3) -> (num_joints, 3)
smpl_joints += trans.reshape(-1, 3)  # apply the translation manually
Thank you so much
fitted_3d_pose is obtained by multiplying the H36M joint regressor with the mesh vertices.
For example,
joint_regressor = np.load('J_regressor_h36m_correct.npy')
fitted_3d_pose = np.dot(joint_regressor, output.vertices[0].detach().numpy())
Please note that output may be in meter scale, while fitted_3d_pose is in millimeter scale.
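To make the scale mismatch concrete, here is a minimal sketch of the regression plus the meter-to-millimeter conversion, using random stand-ins for the real data (the actual regressor comes from 'J_regressor_h36m_correct.npy' and the vertices from the SMPL layer; the shapes 17 joints and 6890 vertices are the usual H36M/SMPL sizes):

```python
import numpy as np

num_joints, num_verts = 17, 6890
rng = np.random.default_rng(0)
joint_regressor = rng.random((num_joints, num_verts))  # stands in for the H36M regressor
vertices_m = rng.random((num_verts, 3))                # stands in for output.vertices[0], in meters

joints_m = joint_regressor @ vertices_m  # regressed joints, shape (17, 3), still in meters
joints_mm = joints_m * 1000.0            # millimeters, the scale of fitted_3d_pose
```

Only after this conversion is it meaningful to compare the regressed joints against fitted_3d_pose.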
Thank you for your response. In your code (data/Human36M/Human36M.py#L167, get_smpl_coord()), there is a root rotation by the extrinsics (as I guess); after that you compute the SMPL mesh and joint coordinates.
Does the root rotation convert the mesh from the world coordinate system to the image coordinate system? I didn't apply the root rotation: I recovered the mesh from the pseudo-GT pose and shape parameters and computed fitted_3d_pose as you described, but it was wrong.
I didn't do any processing on betas, poses, or trans, because I want to get the 3D joints in the world coordinate system.
Here is my code:
smpl_mesh_coord, smpl_joint_coord = smpl.layer['neutral'](pose, betas)
joints = np.dot(j_regressor, smpl_mesh_coord[0].detach().cpu().numpy())
and joints is
array([[  -2.26274995, -202.07269126,   29.33887128],
       [-132.24332546, -196.92100979,   14.50901273],
       [-100.43902743, -332.34089376, -411.80347005],
       [-169.44907574, -178.52225572, -813.61341234],
       [ 129.08142705, -209.80587144,   41.28862008],
       [  83.19172609, -193.98901343, -418.75548536],
       [  35.31415721, -137.16443902, -853.02031447],
       [ -32.33942816, -138.6967878 ,  256.80618984],
       [ -16.16297266, -159.56645446,  486.55226386],
       [ -26.02612294, -227.97142143,  592.52210028],
       [ -29.88050038, -166.60170137,  681.19847508],
       [ 112.10942874, -159.11400304,  419.34550739],
       [ 197.07801112, -241.13499633,  147.83923118],
       [ 216.93467699, -403.36760939,  -32.91794381],
       [-148.94944358, -150.62589509,  434.8089572 ],
       [-228.95918571, -192.2272681 ,  159.37845371],
       [-264.59957264, -313.9294589 ,  -55.02796612]])
and fitted_3d_pose is
array([[ -89.93064  ,  153.98553  ,  916.4011   ],
       [-219.91147  ,  159.13824  ,  901.5738   ],
       [-188.10794  ,   23.721457 ,  475.269    ],
       [-257.09247  ,  177.4365   ,   73.200966 ],
       [  41.4142   ,  146.24966  ,  928.3441   ],
       [  -4.478338 ,  162.07803  ,  468.3287   ],
       [ -52.343136 ,  218.85074  ,   33.93467  ],
       [-120.002846 ,  217.34323  , 1143.8231   ],
       [-103.8348   ,  196.50772  , 1373.6542   ],
       [-113.70598  ,  128.13538  , 1479.7053   ],
       [-117.552124 ,  189.47165  , 1568.2983   ],
       [  24.435305 ,  196.96953  , 1306.4708   ],
       [ 109.403725 ,  114.94921  , 1034.9662   ],
       [ 129.2722   ,  -47.331383 ,  854.08954  ],
       [-236.62077  ,  205.44627  , 1321.9059   ],
       [-316.63077  ,  163.84592  , 1046.478    ],
       [-352.25516  ,   42.07876  ,  831.9097   ]], dtype=float32)
And when I projected fitted_3d_pose onto the image with the projection matrix (extrinsics, intrinsics), the joint order was a bit off, but fitted_3d_pose seems to fit well.
What went wrong in my process?
Please clarify and itemize your questions, with code; I can't understand your questions clearly. The values returned here are in a 3D camera-centered coordinate system.
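For reference, mapping world-coordinate joints into such a camera-centered frame is a single rigid transform. A minimal sketch, assuming a Human3.6M-style convention where t is the camera center in world coordinates (other annotation formats use X_cam = R @ X_world + t instead, so check which one applies to your data):

```python
import numpy as np

def world_to_camera(points_world, R, t):
    """Map (N, 3) world-coordinate points into the camera frame.

    Assumes the convention X_cam = R @ (X_world - t), with R the (3, 3)
    world-to-camera rotation and t the (3,) camera center in world coords.
    """
    return (R @ (points_world - t).T).T
```

With R = identity and t = 0 this is a no-op, which is a quick sanity check before plugging in real extrinsics.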
Sorry about that. I'd like to get the 3D joints in the world coordinate system from the parameters:
smpl_mesh_coord, smpl_joint_coord = smpl.layer['neutral'](theta, betas)
j_regressor = np.load('J_regressor_h36m_correct.npy', allow_pickle=True)
joints_fitted = np.dot(j_regressor, smpl_mesh_coord[0].detach().cpu().numpy())
but joints_fitted was different from fitted_3d_pose. theta and betas are the SMPL parameters, and I didn't do any processing on the provided data.
And how should trans be used? Is it for generating the 3D joints in the camera coordinate system?
To obtain the H36M joint set:
smpl_mesh_coord, _ = smpl.layer['neutral'](theta, betas, trans)
j_regressor = np.load('J_regressor_h36m_correct.npy')
joint_h36m = np.dot(j_regressor, smpl_mesh_coord[0].detach().cpu().numpy())
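A side note on why passing trans to the layer and adding it to the regressed joints afterwards should agree: each row of a SMPL-style joint regressor typically sums to 1 (each joint is an affine combination of vertices), so translation commutes with the regression. A sketch with dummy data under that assumption:

```python
import numpy as np

num_joints, num_verts = 17, 6890
rng = np.random.default_rng(0)
J = rng.random((num_joints, num_verts))
J /= J.sum(axis=1, keepdims=True)   # rows sum to 1, like a SMPL-style joint regressor

verts = rng.random((num_verts, 3))  # stands in for smpl_mesh_coord[0]
trans = np.array([0.1, -0.2, 0.3])  # stands in for the annotation's trans

joints_translated_mesh = J @ (verts + trans)  # translate the mesh, then regress
joints_then_translate = J @ verts + trans     # regress first, then add trans
```

Both orderings give identical joints, so a mismatch with fitted_3d_pose points to scale or coordinate-frame issues rather than where trans is applied.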
It works. Thank you very much for your prompt reply and valuable comments !!
Hello!
I'm trying to calculate the global 3D pose from the pseudo-GT SMPL parameters. The annotation file contains pose parameters and trans.
As I understand it, the output of SMPL is in global 3D world coordinates (= fitted_3d_pose in the annotation file), so I did this:
vertices, joints = SMPL(pose_param, shape_param)
absolute_3d_joints = joints + trans
But it's not fitted_3d_pose. How can I compute it?