MoyGcc / vid2avatar

Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via Self-supervised Scene Decomposition (CVPR2023)
https://moygcc.github.io/vid2avatar/

What OpenPose model did you use? #25

Closed: soobinseo closed this issue 1 year ago

soobinseo commented 1 year ago

Hello, thank you for sharing such great code. I appreciate it. In the run_openpose.py script you use OpenPose, but I noticed that no particular model is set in the params. I'm curious which model you used: body_25, COCO, or MPI?

Also, have you tried training with initial pose parameters that are not refined with OpenPose? If so, I'm curious whether it still converges to good results.

MoyGcc commented 1 year ago

Hi, thank you for your interest. We used the body_25 model for OpenPose and will clarify this later in the repo. Thank you for pointing this out.
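
For reference, here is a minimal sketch (not the repository's actual code) of how the model could be pinned explicitly via the OpenPose Python API. `model_pose` is the standard OpenPose flag, and the `model_folder` path is a placeholder:

```python
# Sketch: selecting the pose model explicitly in the OpenPose Python API.
# Assumes pyopenpose is built and importable; paths here are placeholders.
import cv2
from openpose import pyopenpose as op

params = dict()
params["model_folder"] = "./models/"  # placeholder: OpenPose model directory
params["model_pose"] = "BODY_25"      # alternatives: "COCO", "MPI"

opWrapper = op.WrapperPython()
opWrapper.configure(params)
opWrapper.start()

datum = op.Datum()
datum.cvInputData = cv2.imread("frame_0000.png")  # placeholder frame
opWrapper.emplaceAndPop(op.VectorDatum([datum]))
keypoints = datum.poseKeypoints  # shape (num_people, 25, 3): x, y, confidence
```

Note that BODY_25 is also OpenPose's default when `model_pose` is left unset, which would explain why run_openpose.py works without naming a model.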

Yes, we tried this at the very beginning, but the quality was worse than with OpenPose-based refinement. I would say this refinement step is rather crucial for the final results: although we also jointly optimize the poses during avatar training, a bad initialization makes the learning much more difficult.
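
To make the refinement step concrete, here is a hedged sketch of confidence-weighted 2D reprojection fitting against OpenPose keypoints. The `smpl` and `project` callables are assumed stand-ins for a differentiable body model and a camera projection, not functions from this repository:

```python
# Sketch: refining body pose parameters against OpenPose 2D detections by
# minimizing a confidence-weighted reprojection loss. Not the authors' code.
import torch

def refine_pose(pose, betas, smpl, project, kp2d, conf, steps=200, lr=1e-2):
    """pose: (72,) axis-angle body pose; betas: shape parameters;
    smpl(pose, betas) -> (25, 3) joints mapped to the BODY_25 layout (assumed);
    project(joints3d) -> (25, 2) pixel coordinates (assumed);
    kp2d: (25, 2) OpenPose detections; conf: (25,) detection confidences."""
    pose = pose.clone().requires_grad_(True)
    opt = torch.optim.Adam([pose], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        joints2d = project(smpl(pose, betas))
        # Weight each joint's squared pixel error by its detection confidence,
        # so occluded or noisy keypoints contribute less to the fit.
        loss = (conf * ((joints2d - kp2d) ** 2).sum(-1)).mean()
        loss.backward()
        opt.step()
    return pose.detach()
```

The confidence weighting is the usual reason such refinement beats raw initial estimates: unreliable detections are downweighted instead of dragging the pose toward outliers.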