MoyGcc / vid2avatar

Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via Self-supervised Scene Decomposition (CVPR2023)
https://moygcc.github.io/vid2avatar/

Generate animated SMPL model #19

Closed khelkun closed 1 year ago

khelkun commented 1 year ago

test.py ran successfully after around 11 hours, so I have the 42 canonical PLY meshes and 42 deformed PLY meshes in outputs/Video/parkinglot/test_mesh.

Now I'd like to understand how I could obtain an SMPL animated model exported to an FBX format for example.

In #5 you mentioned how one could eventually obtain an animated SMPL model that could be imported into Blender, for example. It seems you also gave some leads in #8.

I saw a similar explanation in the Supplementary Material of Vid2Avatar, in section "4. Animation". However, I'm a bit confused here, although I haven't yet read the related papers [15, 19].

Could you give me some hints to generate an animated SMPL model?
Might I find relevant information in your team's other projects, especially X-Avatar?
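For anyone else landing here, some background on what "animating the SMPL model" means mechanically: SMPL poses a template mesh with linear blend skinning (LBS), where each vertex follows a weighted blend of per-joint transforms. Below is a toy sketch of that idea only; none of it is vid2avatar code, the two-joint setup is invented for illustration, and real SMPL uses full rigid transforms along a kinematic chain plus pose/shape blend shapes.

```python
import math

def rot_z(theta, p):
    """Rotate point p = (x, y, z) by theta radians about the z-axis."""
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

def lbs(vertices, weights, joint_angles):
    """Toy linear blend skinning: each posed vertex is the weighted sum of
    the positions it would take under each joint's transform. Here every
    joint is just a z-rotation about the origin, to keep the sketch short."""
    posed = []
    for v, w in zip(vertices, weights):
        x = y = z = 0.0
        for wj, theta in zip(w, joint_angles):
            px, py, pz = rot_z(theta, v)
            x += wj * px
            y += wj * py
            z += wj * pz
        posed.append((x, y, z))
    return posed

# A vertex fully bound to joint 0; posing joint 0 by 90 degrees swings
# the vertex from the x-axis onto the y-axis.
posed = lbs([(1.0, 0.0, 0.0)], [[1.0, 0.0]], [math.pi / 2, 0.0])
```

Driving an avatar then amounts to feeding a new sequence of joint angles (the MoCap poses) into this skinning step, frame by frame.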

MoyGcc commented 1 year ago

If I understand correctly, you mean the animated meshes shown in the supplementary video. There, the animation source comes from pre-defined SMPL parameters captured with a MoCap system.

I didn't get time to fully clean that part of the code but will update the repo for that part later. The main idea is to replace this line of code with the driven SMPL model parameters. And yes, there are some references you can borrow from X-Avatar, where we drive the avatars with SMPL-X parameters instead of SMPL, but it should be easy to make the modifications.
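The swap described above could look roughly like this. This is a hypothetical sketch, not the repo's actual dataloader: the function name, the `smpl_params` dict layout, and the parameter keys (`betas`, `global_orient`, `body_pose`, `transl`, following common SMPL conventions) are all assumptions; the point is just that shape stays fixed to the reconstructed identity while pose and translation come from the driving sequence.

```python
def drive_with_mocap(batch, driven_params, frame_idx):
    """Replace the fitted SMPL parameters in a per-frame test batch with
    MoCap-driven ones (hypothetical names, following SMPL conventions).

    batch         -- dict holding a 'smpl_params' dict for the current frame
    driven_params -- list of per-frame dicts with 'global_orient',
                     'body_pose', and 'transl' from the driving motion
    frame_idx     -- index of the frame being rendered
    """
    driven = driven_params[frame_idx % len(driven_params)]  # loop the motion
    new_batch = dict(batch)
    new_batch["smpl_params"] = {
        # keep the identity (shape) estimated from the video ...
        "betas": batch["smpl_params"]["betas"],
        # ... but take pose and root translation from the driving sequence
        "global_orient": driven["global_orient"],
        "body_pose": driven["body_pose"],
        "transl": driven["transl"],
    }
    return new_batch
```

The same substitution should carry over to X-Avatar, with the SMPL parameter keys replaced by their SMPL-X counterparts.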