shubham-goel / 4D-Humans

4DHumans: Reconstructing and Tracking Humans with Transformers
https://shubham-goel.github.io/4dhumans/

Extract head pose from 4D-Humans outputs #144

Open mtran14 opened 2 months ago

mtran14 commented 2 months ago

Hello,

Thank you for open-sourcing such an excellent project. I’m relatively new to 3D body mesh reconstruction, so I apologize in advance if my question seems straightforward.

I’m currently working on a research project that explores the interaction between head and body movements. Could you provide any guidance on how to extract head pose (yaw/pitch/roll, or an orientation matrix) from the body mesh/vertices computed by 4D-Humans? Is this information even recoverable from the model's outputs?
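For reference, here is a rough sketch of what I imagine might work: compose the SMPL kinematic chain from the root through the spine and neck to the head joint to get a global head rotation, then convert it to Euler angles. The `pred_smpl_params` / `global_orient` / `body_pose` key names are my guess from the demo output and may not match exactly, and the yaw/pitch/roll mapping will depend on the coordinate convention, so please correct me if this is off.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# SMPL kinematic chain from the pelvis (root, joint 0) to the head joint:
# 0 (pelvis) -> 3 (spine1) -> 6 (spine2) -> 9 (spine3) -> 12 (neck) -> 15 (head).
# body_pose excludes the root, so joint j maps to body_pose[j - 1].
HEAD_CHAIN = [3, 6, 9, 12, 15]

def head_rotation_from_smpl(global_orient, body_pose):
    """Compose the chain rotations to get the head's global 3x3 rotation.

    global_orient: (3, 3) rotation matrix of the root joint.
    body_pose:     (23, 3, 3) per-joint rotation matrices (root excluded).
    """
    R_head = np.asarray(global_orient, dtype=np.float64)
    for joint in HEAD_CHAIN:
        R_head = R_head @ np.asarray(body_pose[joint - 1], dtype=np.float64)
    return R_head

# Hypothetical usage with the per-frame SMPL parameters predicted by the model
# (key names and tensor shapes assumed; adjust to whatever you actually save):
# smpl_params = out["pred_smpl_params"]
# R_head = head_rotation_from_smpl(
#     smpl_params["global_orient"][0, 0].cpu().numpy(),
#     smpl_params["body_pose"][0].cpu().numpy(),
# )
# Which axis corresponds to yaw/pitch/roll depends on the camera/world frame;
# "yxz" below is just one plausible convention.
# yaw, pitch, roll = R.from_matrix(R_head).as_euler("yxz", degrees=True)
```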

While I could use a separate tool for head pose estimation, its estimates may not align perfectly with the body motion predicted by 4D-Humans, so extracting head pose directly from 4D-Humans' outputs would likely give better consistency.

Thank you in advance for your help!