Closed: moondabaojian closed this issue 4 years ago.
32 is the size of the latent embedding that is used to initialize body_pose via the VPoser prior.
left_hand_pose = torch.einsum('bi,ij->bj', [left_hand_pose, self.left_hand_components])
left_hand_pose.shape: (1, 12)
self.left_hand_components: (45, 45)
I am not entirely sure here, but it multiplies the hand pose with the 12 PCA components specified by num_pca_comps: each output row is a linear combination of the rows of self.left_hand_components, weighted by the coefficients in left_hand_pose.
Thank you for your answer
but running
left = torch.einsum('bi,ij->bj', [left_hand_pose, self.left_hand_components])
raises:
RuntimeError: size of dimension does not match previous size, operand 1, dim 0
How can these two tensors (left_hand_pose with shape (1, 12) and self.left_hand_components with shape (45, 45)) be multiplied successfully?
And what should I do to convert between the 32-dimensional body_pose and the 63-dimensional body_pose using the VPoser program?
@moondabaojian For body_pose, https://github.com/vchoutas/smplify-x/blob/e34b20c09f4de38099481e0d25b4f4536db5fc88/smplifyx/fit_single_frame.py#L485 will help.
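At that line, smplify-x decodes the 32-dimensional latent code into the 63 axis-angle body parameters with the trained VPoser decoder (roughly `vposer.decode(pose_embedding, output_type='aa')` in the VPoser v1 API). A minimal shape-only sketch, where the linear layer is only a stand-in for the real learned decoder network:

```python
import torch
import torch.nn as nn

# Stand-in for the trained VPoser decoder from human_body_prior;
# this untrained linear layer only mimics the 32 -> 63 shapes.
decoder = nn.Linear(32, 63)

pose_embedding = torch.zeros(1, 32)              # 32-D latent code optimized by smplify-x
body_pose = decoder(pose_embedding).view(1, -1)  # 63 = 21 body joints x 3 axis-angle values
print(body_pose.shape)  # torch.Size([1, 63])
```

Going the other way (63 axis-angle values back to the 32-D embedding) requires the VPoser encoder, not the decoder.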
I will look into the left_hand_pose again.
@moondabaojian The hand pose is estimated using PCA components. The lines you copied essentially decode the PCA coefficients to axis-angles. To go from the 32-dimensional embedding to the 63 axis-angle parameters you need to use the trained VPoser decoder. For an example you can check the following code: link
@vchoutas @Anirudh257 Thank you for your answers! So does hand_pose need to be multiplied by hand_components to get the complete hand pose? But direct multiplication does not succeed. Should I preprocess hand_pose first?
@moondabaojian If you run the SMPL-X code through the instructions, you will get a pkl file in the output folder. If you load it and check the keys, it should contain the left_hand_pose and right_hand_pose values. These are the parameters in the body_model. If you follow the link I posted earlier, you can use the joints to get the entire pose (body, left hand, right hand, facial expression, etc.) as well. @vchoutas should confirm whether my methodology is correct.
@moondabaojian If you take a look at the code here you can see that we select num_pca_comps rows from left_hand_components. As @Anirudh257 said, you can plug these into the model to get the vertices, the 3D joints and, optionally with return_full_pose=True, the full pose vector (55x3) in axis-angle format.
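The shape mismatch reported above comes from multiplying against the full (45, 45) component matrix: in the smplx model the components are sliced to the first num_pca_comps rows at construction time, so the einsum contracts (1, 12) with (12, 45). A minimal sketch with random stand-in tensors (no actual SMPL-X data):

```python
import torch

num_pca_comps = 12

# Stand-ins for the real buffers: 12 PCA coefficients per batch element,
# and the full 45x45 PCA basis shipped with the model.
left_hand_pose = torch.randn(1, num_pca_comps)
full_components = torch.randn(45, 45)

# smplx keeps only the first num_pca_comps rows, giving a (12, 45) matrix.
left_hand_components = full_components[:num_pca_comps]

# (1, 12) x (12, 45) -> (1, 45): 15 hand joints x 3 axis-angle values.
decoded = torch.einsum('bi,ij->bj', [left_hand_pose, left_hand_components])
print(decoded.shape)  # torch.Size([1, 45])
```

With the unsliced (45, 45) matrix the contraction dimension i would be 12 on one operand and 45 on the other, which is exactly the RuntimeError quoted earlier in the thread.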
Hello, when I was getting the mesh I ran into a problem, in fit_single_frame.py at line 498:
model_output = body_model(return_verts=True, body_pose=body_pose)
Only the body_pose parameter seems to be passed. Do the other parameters (such as betas, expression, right_hand_pose) not play a role in mesh generation?
So is the generated mesh only for display?
Thank you for answering many of my questions!
The other parameters are attributes of the model. body_pose is passed to the forward function here because it is computed from the VPoser latent code. The mesh is needed to obtain some extra keypoints, such as the feet, that are not in the SMPL-X joint hierarchy, and for visualization.
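The distinction can be sketched with a toy module (hypothetical names, not the smplx API): betas-style parameters live on the module as nn.Parameter attributes that the optimizer updates directly, while body_pose is a forward argument because it is produced by the VPoser decoder at every fitting step.

```python
import torch
import torch.nn as nn

class ToyBodyModel(nn.Module):
    """Toy stand-in for smplx: betas is a module attribute (an nn.Parameter
    the optimizer can update in place), while body_pose is a forward argument."""
    def __init__(self):
        super().__init__()
        self.betas = nn.Parameter(torch.zeros(1, 10))  # shape coefficients

    def forward(self, body_pose=None):
        if body_pose is None:
            body_pose = torch.zeros(1, 63)
        # A fake "mesh output": any function of both the attribute and the argument.
        return self.betas.sum() + body_pose.sum()

model = ToyBodyModel()
out = model(body_pose=torch.ones(1, 63))
print(float(out))  # betas are zero, so this is 63.0
```

During fitting, both the module attributes and the externally supplied body_pose receive gradients through this forward call.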
You mean that the parameters other than body_pose are attributes of the model and do not need to be passed to the forward function? Do these model attributes also participate in the iterative fitting in the smplifyx program? If I read the parameters from the output pkl file of the smplifyx program and pass them directly to the lbs function, will I get the same mesh as the smplifyx output?
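When assembling a pose for lbs, note that the full SMPL-X pose is just the concatenation of all per-joint axis-angle blocks: 1 global orient + 21 body + 1 jaw + 2 eyes + 2x15 hand joints = 55 joints x 3 = 165 values. A shape-only sketch (consult the smplx lbs signature for the exact argument order it expects):

```python
import torch

global_orient = torch.zeros(1, 1 * 3)    # pelvis / root rotation
body_pose     = torch.zeros(1, 21 * 3)   # 63 values, decoded from VPoser
jaw_pose      = torch.zeros(1, 1 * 3)
eye_poses     = torch.zeros(1, 2 * 3)    # left + right eye
hand_poses    = torch.zeros(1, 30 * 3)   # 15 joints per hand, both hands

full_pose = torch.cat(
    [global_orient, body_pose, jaw_pose, eye_poses, hand_poses], dim=1)
print(full_pose.shape)  # torch.Size([1, 165]) == 55 joints x 3
```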
@moondabaojian When doing fitting, that is what is happening. I have also added a script where you can see how to load a pkl and display the mesh.
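A minimal round-trip sketch of loading such a pkl with pickle; the key names and shapes below are illustrative only, so check the keys of your own output file:

```python
import pickle
import numpy as np

# Write a toy result file holding the kind of arrays smplify-x stores
# (key names here are illustrative, not guaranteed to match your file).
params = {
    'betas': np.zeros((1, 10), dtype=np.float32),
    'body_pose': np.zeros((1, 63), dtype=np.float32),
    'left_hand_pose': np.zeros((1, 12), dtype=np.float32),
}
with open('result.pkl', 'wb') as f:
    pickle.dump(params, f)

# Load it back and inspect the stored parameters.
with open('result.pkl', 'rb') as f:
    data = pickle.load(f)
for key, value in data.items():
    print(key, value.shape)
```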
Hello! Thanks for the great work! I want to know why the number of parameters in body_pose is 32, and how to convert between 32 and 63. Can I get the 63 body_pose parameters directly from the smplifyx program? And in the following code:
left_hand_pose = torch.einsum( 'bi,ij->bj', [left_hand_pose, self.left_hand_components])
left_hand_pose.shape : (1, 12)
self.left_hand_components : (45, 45)
Does left_hand_pose need to go through some processing? Thank you!