Closed erezposner closed 5 years ago
Hi @erezposner
MANO 3d joints and vertices are predicted aligned with the camera view, but root centered. This means that if you run
python webcam_demo.py --resume release_models/hands_only/checkpoint.pth.tar
you will see that the predicted joints are reprojected onto the image. I hope this answers your question!
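To make the "root centered but aligned with the camera view" convention concrete, here is a minimal sketch of how such joints could be reprojected onto the image (the function name, shapes, and the pinhole intrinsic matrix `K` are my own illustration, not code from this repository):

```python
import numpy as np

def reproject(joints_root_centered, root_translation, K):
    """Project root-centered 3D joints into the image.

    joints_root_centered: (21, 3) joints relative to the root joint,
        already rotated to match the camera view
    root_translation: (3,) estimated root position in camera space
    K: (3, 3) camera intrinsic matrix
    """
    # Move joints back into camera space, then apply the intrinsics
    joints_cam = joints_root_centered + root_translation
    proj = joints_cam @ K.T
    # Perspective divide yields pixel coordinates
    return proj[:, :2] / proj[:, 2:3]
```

With this convention, only the root translation (and scale) needs to be recovered to overlay the prediction on the input image.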
Best,
Yana
Are the scaling and translation estimated within the network, or via a closed-form solution? Could you kindly point me to this part of the code? Thank you
Thanks! Got it. I have another question, more in the context of the MANO layer: how can one generate multiple perspectives of the same MANO-generated hand, in terms of beta and theta?
If I understand correctly, for the same hand viewed from two different perspectives I would have two different theta vectors. Is that correct? If so, how can I determine the theta vector of a hand viewed from another perspective? Thank you
This is correct: the first 3 parameters of theta are the global axis-angle rotation vector, so this is the part that needs to be modified to generate the hand from a different perspective.
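Changing the viewpoint then amounts to composing an extra rotation into those first 3 parameters. A minimal sketch, assuming theta is a flat 48-dim MANO pose vector and using SciPy's `Rotation` for the axis-angle algebra (the helper name is my own):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rotate_global_pose(theta, extra_rotvec):
    """Compose an extra camera/world rotation into MANO's global rotation.

    theta: (48,) MANO pose; theta[:3] is the global axis-angle rotation
    extra_rotvec: (3,) axis-angle vector of the viewpoint change
    """
    theta = np.asarray(theta, dtype=float).copy()
    global_rot = Rotation.from_rotvec(theta[:3])
    # Apply the viewpoint rotation on top of the existing global rotation
    new_rot = Rotation.from_rotvec(extra_rotvec) * global_rot
    theta[:3] = new_rot.as_rotvec()
    return theta
```

The remaining 45 parameters (the per-joint articulation) stay untouched, since they describe the hand pose itself, which is independent of the viewpoint.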
Got it, Thank you
Hi, I would like to understand the relation between the MANO 3d joint and vertex 3d locations and the camera space.
Let's assume that I capture an RGB image using a calibrated camera and use "Learning joint reconstruction of hands and manipulated objects" to estimate the MANO 3d joints. Are the 3d joints in normalized camera space?
Is the MANO estimation oriented towards the camera? Thank you