eth-ait / motion-infilling

Convolutional Autoencoders for Human Motion Infilling (3DV 2020)

positions convert to bvh #2

Closed Yuefengxin closed 1 year ago

Yuefengxin commented 3 years ago

Thanks for sharing such great work! I noticed that the final generated results are joint positions. How can I convert them into a BVH file? Looking forward to your response!

kaufManu commented 3 years ago

Hi @Yuefengxin

Thanks :)! Unfortunately, this is not so straightforward. BVH uses joint angles to represent the motion data. Converting the positions into joint angles requires running Inverse Kinematics (IK), for which this repo has no code. Also, in this IK optimization problem, the rotation of the end-effectors around their own axes is not well defined without additional assumptions.

May I ask why you need this as a BVH file?
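To make the underdetermined-twist point concrete, here is a minimal numpy sketch (the bone directions are made up for illustration). Any rotation that aligns the rest-pose bone direction with the observed one fits the positional data, and composing it with an arbitrary twist about the bone axis fits the data equally well — so positions alone cannot pin down the joint angles:

```python
import numpy as np

def align_rotation(u, v):
    """Minimal rotation mapping unit vector u onto unit vector v
    (Rodrigues' formula; assumes u and v are not antiparallel)."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    axis = np.cross(u, v)
    s = np.linalg.norm(axis)
    c = np.dot(u, v)
    if s < 1e-8:
        return np.eye(3)  # already aligned
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + K + K @ K * ((1 - c) / s**2)

def twist_about(v, angle):
    """Rotation by `angle` around the unit axis v."""
    v = v / np.linalg.norm(v)
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

rest_dir = np.array([0.0, 1.0, 0.0])               # bone direction in rest pose
obs_dir = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)  # observed bone direction

R = align_rotation(rest_dir, obs_dir)
R_twisted = twist_about(obs_dir, 0.7) @ R  # extra twist about the bone axis

# Both rotations reproduce the observed bone direction equally well,
# yet they are different joint angles:
assert np.allclose(R @ rest_dir, obs_dir)
assert np.allclose(R_twisted @ rest_dir, obs_dir)
assert not np.allclose(R, R_twisted)
```

An IK solver has to break this ambiguity with extra assumptions (e.g. twist priors or limits per joint), which is why the conversion is not a drop-in postprocessing step.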

Yuefengxin commented 3 years ago

Thanks very much for your quick response! I ask because I noticed that the visualization in your paper looks like it was rendered from BVH, as in the picture below.

So I thought you might have converted the final result. Since you didn't, could you tell me how you achieved the motion visualization shown in the paper?

kaufManu commented 3 years ago

We used Unity to produce those visualizations in the paper. The positions of the joints (plus information about connectivity) are enough to do this; you do not need the data in BVH format (in fact, if you had it in BVH, you would have to convert it back to positions anyway in order to visualize it). Unfortunately I don't have the Unity script handy at the moment, but the visualization works very similarly to this visualization script: you basically just draw a cylinder for each bone. You should be able to do this with any basic 3D visualization tool (e.g. pyrender).
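As a minimal stand-in for the Unity/pyrender approach (not the authors' actual script), the sketch below draws a skeleton from joint positions plus a parent list using matplotlib, with each bone as a line segment instead of a cylinder. The toy joint coordinates and parent indices are invented for illustration:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt

# Hypothetical toy skeleton: 5 joints and the parent index of each
# joint (-1 marks the root). Real data would come from the model output.
joints = np.array([[0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 2.0, 0.0],
                   [0.5, 2.5, 0.0],
                   [-0.5, 2.5, 0.0]])
parents = [-1, 0, 1, 2, 2]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for j, p in enumerate(parents):
    if p < 0:
        continue  # the root has no incoming bone
    seg = np.stack([joints[p], joints[j]])  # bone from parent to child
    ax.plot(seg[:, 0], seg[:, 1], seg[:, 2], linewidth=3)
ax.scatter(joints[:, 0], joints[:, 1], joints[:, 2])  # mark the joints
fig.savefig("skeleton.png")
```

Swapping the line segments for oriented cylinder meshes (e.g. `trimesh.creation.cylinder` with pyrender) gives the solid-bone look from the paper; the connectivity logic stays the same.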