MoyGcc / vid2avatar

Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via Self-supervised Scene Decomposition (CVPR2023)
https://moygcc.github.io/vid2avatar/
MIT License

pretrained shape network #3

Closed xiyichen closed 1 year ago

xiyichen commented 1 year ago

Thank you for releasing the code! I see you mentioned in the supplementary materials that you did some pretraining on the shape network. Can you share this pretrained model? Also, would it still be able to learn the mask well if trained from scratch?

MoyGcc commented 1 year ago

Hi, the pre-trained model can be found in assets/smpl_init.pth.

Yes, it's still possible to learn the mask from scratch, but you first need to disable in_shape_loss: in that case the shape network is initialized as a sphere, so this loss no longer makes sense. In my experience, though, training from scratch can sometimes fail, so a coarse human shape initialization is encouraged in any case.
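To illustrate what this initialization step looks like, here is a minimal, hypothetical PyTorch sketch (not the repo's actual code): it pretrains a small SDF MLP toward a unit-sphere signed distance field and saves the weights as an initialization checkpoint. The repo instead pretrains toward the SMPL body surface and ships the result as assets/smpl_init.pth; the network architecture and all names below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for the shape network; the real vid2avatar
# network has a different (larger) SDF architecture.
sdf = nn.Sequential(
    nn.Linear(3, 128), nn.Softplus(beta=100),
    nn.Linear(128, 128), nn.Softplus(beta=100),
    nn.Linear(128, 1),
)

opt = torch.optim.Adam(sdf.parameters(), lr=1e-3)

# Pretrain toward a unit-sphere SDF: f(x) = |x| - 1. This mimics the
# sphere initialization used when training from scratch; the shipped
# smpl_init.pth replaces the sphere with a coarse SMPL body shape.
for _ in range(500):
    x = torch.randn(512, 3)
    target = x.norm(dim=-1, keepdim=True) - 1.0
    loss = (sdf(x) - target).abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Save as an initialization checkpoint, analogous to assets/smpl_init.pth.
torch.save(sdf.state_dict(), "/tmp/sphere_init.pth")
```

At the start of the main optimization, such a checkpoint would be loaded with `load_state_dict` so the surface starts from a plausible shape rather than random weights.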