Open anilesec opened 2 years ago
Thanks, @samarth-robo for the response. I think I understand what you mean. This means that the MANO pose and shape parameters are the same for all the frames (as the hand grasp is fixed); however, to get the global orientation of the hand in each frame, we have to use the rotation component of `ContactPose.object_pose()`.

I understood how to get the hand meshes for each frame, but I am interested in the MANO parameters for each frame (the meshes of each frame are trivial for my application). Also, in `mano_fits_10.json` there are 13 parameters, where the last three correspond to the global orientation of the hand. Does this global orientation correspond to the orientation of the hand in the first frame of the sequence, or is it a random orientation?
Thank you!
> Thanks, @samarth-robo for the response. I think I understand what you mean. This means that the MANO pose and shape parameters are the same for all the frames (as the hand grasp is fixed); however, to get the global orientation of the hand in each frame, we have to use the rotation component of `ContactPose.object_pose()`.

correct
> I understood how to get the hand meshes for each frame, but I am interested in the MANO parameters for each frame (the meshes of each frame are trivial for my application). Also, in `mano_fits_10.json` there are 13 parameters, where the last three correspond to the global orientation of the hand. Does this global orientation correspond to the orientation of the hand in the first frame of the sequence, or is it a random orientation? Thank you!
No, it is not a random rotation. It is a necessary output of the optimizer, which optimizes the MANO params for the L2 distance of the 3D joint locations from ground truth.
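Given the split described above (10 pose values followed by 3 global-orientation values), separating the two looks roughly like the sketch below. Note this is a hedged illustration: the `'pose'` key and the inlined JSON string are stand-ins for the actual layout of `mano_fits_10.json`, which you should verify against the file itself.

```python
import json

# Stand-in for json.load(open('mano_fits_10.json')); the key name 'pose'
# and the values are hypothetical, only the 10 + 3 split follows the thread.
raw = json.loads(
    '{"pose": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 0.0, 0.0, 0.1]}'
)
params = raw['pose']

pca_pose = params[:10]    # articulation coefficients in MANO's PCA pose space
global_rot = params[10:]  # axis-angle global hand orientation from the optimizer

print(len(pca_pose), len(global_rot))  # 10 3
```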
Here is the chain of transforms, if you are curious. Denote the output dict of `ContactPose.mano_params()` by `mp`, and start with a hand vertex `m0_p`. The coordinate system `m0` corresponds to MANO's PCA vertex regressor (an internal detail you can understand if you read the MANO paper).

1. `mp['pose']` -> hand vertex in MANO coordinates `m_p`
2. `mp['hTm']` -> hand vertex in hand coordinates `h_p`. `mp['hTm']` is the inverse of `mTc`, and `mTc` is a rigid body transform I remove before constructing the optimization objective, to make the optimization easier.
3. `ContactPose._oTh` -> hand vertex in object coordinates `o_p`. `oTh` is usually identity, but it can be different if the hand is dynamic w.r.t. the object, which happens in some double-handed grasps.
4. `cTo`, the output of `ContactPose.object_pose()` -> hand vertex in camera coordinates `c_p`

This is just FYI. You don't need to worry about applying steps 1, 2, and 3: `ContactPose.mano_meshes()` does that for you.
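Steps 2 to 4 are just 4x4 rigid-transform multiplications. A minimal sketch, assuming step 1 (applying `mp['pose']` through the MANO model) is already done, and using identity/translation placeholders for the real `mp['hTm']`, `ContactPose._oTh`, and `ContactPose.object_pose()` matrices:

```python
import numpy as np

def to_homogeneous(v):
    """Append a 1 to each 3D point so 4x4 rigid transforms can be applied."""
    return np.hstack([v, np.ones((v.shape[0], 1))])

# Hypothetical inputs: one hand vertex already in MANO coordinates (step 1
# done), and placeholder transforms standing in for the real attributes.
m_p = np.array([[0.01, 0.02, 0.03]])
hTm = np.eye(4)                             # placeholder for mp['hTm']
oTh = np.eye(4)                             # ContactPose._oTh, usually identity
cTo = np.eye(4)
cTo[:3, 3] = [0.0, 0.0, 0.5]                # ContactPose.object_pose() per frame

h_p = (hTm @ to_homogeneous(m_p).T).T[:, :3]  # step 2: MANO -> hand coords
o_p = (oTh @ to_homogeneous(h_p).T).T[:, :3]  # step 3: hand -> object coords
c_p = (cTo @ to_homogeneous(o_p).T).T[:, :3]  # step 4: object -> camera coords

print(c_p)  # [[0.01 0.02 0.53]]
```

The same `cTo` multiplication in the last line is step 4 on its own, so it also applies to the vertices returned by `ContactPose.mano_meshes()` (which are already in object coordinates).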
Thanks for elucidating! I will use the per-frame MANO parameters (with the object pose as the global orientation of each frame) and see how it works.
@anilesec Hi, could you share how to get the mano pose for each frame?
@gs-ren you can find the info here https://github.com/facebookresearch/ContactPose/issues/16#issuecomment-1031823614
> @gs-ren you can find the info here #16 (comment)

Could you describe the step 4 process in detail, please? I want to get the pose, 3D points, and vertices in camera coordinates. Thank you! @anilesec
@samarth-robo this issue is continuing from an email conversation.