patienceFromZhou / simpleHand

This is the project page for the paper "A Simple Baseline for Efficient Hand Mesh Reconstruction", CVPR 2024.
MIT License

Reconstruct both the left and right hands #8

Closed by ZYX-MLer 6 months ago

ZYX-MLer commented 6 months ago

Have you taken into account the scenario involving both hands? If I aim to reconstruct both the left and right hands simultaneously, would I require two Token Generator models? Alternatively, should I directly mirror the feature map of the Token Generator, or is it necessary to mirror the input before feeding it into the Token Generator model?

Thank you.

patienceFromZhou commented 6 months ago

This method can indeed be extended to scenarios involving both hands, but it requires training a new model, since FreiHAND contains very little two-hand interaction data. Because a single image may contain two hands, the point sampling stage needs to select 42 points per image. These 42 points can come from a single shared feature map, or separate feature maps can be set up for the left and right hands.
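A minimal sketch of the 42-point sampling described above, using plain NumPy. This is not the authors' code: the shapes, the nearest-neighbour gather, and the placeholder joint locations are all illustrative assumptions. It shows both options, sampling both hands from one shared feature map, and mirroring the map horizontally so right-hand-only weights can be reused for the left hand.

```python
import numpy as np

# Illustrative dimensions (not the paper's actual sizes): feature dim C,
# spatial size H x W, and 21 keypoints per hand.
C, H, W = 256, 32, 32
rng = np.random.default_rng(0)
feat = rng.standard_normal((C, H, W))          # shared backbone feature map

def sample_tokens(feature_map, uv):
    """Gather one C-dim token per (u, v) grid location (nearest neighbour)."""
    u = np.clip(uv[:, 0], 0, feature_map.shape[2] - 1)
    v = np.clip(uv[:, 1], 0, feature_map.shape[1] - 1)
    return feature_map[:, v, u].T              # (num_points, C)

# Placeholder 2D joint locations for each hand.
right_uv = rng.integers(0, W, size=(21, 2))
left_uv = rng.integers(0, W, size=(21, 2))

# Option A: both hands sample from the same shared feature map -> 42 tokens.
tokens = np.concatenate([sample_tokens(feat, right_uv),
                         sample_tokens(feat, left_uv)], axis=0)
print(tokens.shape)                            # (42, 256)

# Option B: mirror the feature map for the left hand, so a model trained
# with right-hand parameters can be reused for the left.
mirrored = feat[:, :, ::-1]
left_tokens = sample_tokens(mirrored, left_uv)
print(left_tokens.shape)                       # (21, 256)
```

Bilinear sampling (as in typical grid-sample implementations) would replace the nearest-neighbour gather in practice; the token count and concatenation are the point here.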

It is important to note that during the mesh regressor stage, the output of joints and vertices needs to be changed to 21x2 and 778x2, respectively, to simultaneously output both hands. If you use the parameters of both hands for supervision, there is no need to mirror the feature map. However, if you still want to use only the right hand's parameters, you will need to model the left hand using the mirrored feature map.
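A hedged sketch of the widened regressor heads mentioned above: one forward pass predicting 21x2 joints and 778x2 vertices in 3D. The linear heads and feature dimension are hypothetical stand-ins for the actual mesh regressor; only the output shapes follow the comment.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 256                                        # assumed regressor feature dim
x = rng.standard_normal(D)                     # pooled feature for one image

# Output heads widened from single-hand (21 joints, 778 vertices) to
# two hands: 21 * 2 joints and 778 * 2 vertices, each with 3 coordinates.
W_joints = rng.standard_normal((21 * 2 * 3, D)) * 0.01
W_verts = rng.standard_normal((778 * 2 * 3, D)) * 0.01

joints = (W_joints @ x).reshape(2, 21, 3)      # (hands, joints, xyz)
verts = (W_verts @ x).reshape(2, 778, 3)       # (hands, vertices, xyz)
print(joints.shape, verts.shape)               # (2, 21, 3) (2, 778, 3)
```

With both hands' parameters used for supervision, these heads train directly; with right-hand-only supervision, the left-hand branch would instead consume the mirrored feature map as described above.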

BTW, have you managed to run the training code successfully?

ZYX-MLer commented 6 months ago

Thank you for your response. I haven't debugged the code yet.