Hello,
Thank you for open-sourcing such an excellent project. I’m relatively new to 3D body mesh reconstruction, so I apologize in advance if my question seems straightforward.
I’m currently working on a research project that explores the interaction between head and body movements. I was wondering if you could provide any guidance on how to extract head pose (yaw/pitch/roll, or an orientation matrix) from the body mesh/vertices or other outputs produced by 4D-Humans. Is this information even recoverable from the model's outputs?
While I could use separate software for head pose estimation, the estimated poses may not align perfectly with the body movements predicted by 4D-Humans, so extracting the head pose directly from 4D-Humans would likely offer better consistency.
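For context, here is roughly the approach I had in mind, in case it helps clarify the question. This is just a sketch under my own assumptions: that the predicted SMPL parameters come back as rotation matrices (`global_orient` and `body_pose`, as in the demo outputs), and that the standard SMPL kinematic tree applies, so the head's global orientation would be the product of the local rotations along the chain from the pelvis to the head joint. Please correct me if the outputs are structured differently.

```python
import torch
from scipy.spatial.transform import Rotation as R

# Sketch only -- assumes the model returns SMPL parameters as rotation
# matrices: global_orient with shape (B, 1, 3, 3) and body_pose with
# shape (B, 23, 3, 3), following the standard SMPL kinematic tree.
#
# Chain from pelvis to head in SMPL joint indices: 3 (spine1) -> 6 (spine2)
# -> 9 (spine3) -> 12 (neck) -> 15 (head). body_pose excludes the root,
# so joint k corresponds to body_pose index k - 1.
HEAD_CHAIN = [2, 5, 8, 11, 14]

def head_orientation(global_orient, body_pose):
    """Compose local joint rotations along the chain to get the head's
    global rotation matrix, shape (B, 3, 3)."""
    rot = global_orient[:, 0]              # root (pelvis) orientation
    for idx in HEAD_CHAIN:
        rot = rot @ body_pose[:, idx]      # parent_global @ child_local
    return rot

def head_euler_degrees(head_rot):
    """Convert head rotation matrices to yaw/pitch/roll (intrinsic ZYX)."""
    r = R.from_matrix(head_rot.detach().cpu().numpy())
    return r.as_euler('ZYX', degrees=True)  # columns: yaw, pitch, roll

# Hypothetical usage (key names are placeholders for the actual outputs):
# smpl_params = out['pred_smpl_params']
# rot = head_orientation(smpl_params['global_orient'], smpl_params['body_pose'])
# yaw_pitch_roll = head_euler_degrees(rot)
```

Would something along these lines be the intended way to get the head orientation, or is there a more direct output I should be using?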
Thank you in advance for your help!