lizhe00 / AnimatableGaussians

Code of [CVPR 2024] "Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling"
https://animatable-gaussians.github.io/

Question About Custom Data Training for Arbitrary Self-Shot Video Reconstruction #33

Open YaqiChang opened 5 days ago

YaqiChang commented 5 days ago

First, I'd like to express my gratitude for your outstanding work and congratulations on your acceptance to CVPR 2024!

I am currently testing the provided model on the AvatarRex dataset to animate avatars, and it works wonderfully. However, I am somewhat confused about the process of reconstructing an avatar from an arbitrary self-shot video.

According to GEN_DATA.md, it seems that I need to provide a dataset for a single avatar. Given that the THuman4.0 dataset uses 24 cameras for each avatar, does this imply that I need to create an independent dataset for each new avatar I wish to reconstruct?

This requirement seems to suggest that the reconstruction cost for each avatar might be high. Could you please guide me on whether I have misunderstood the process?

Thank you for your assistance!

lizhe00 commented 5 days ago

Hi, our work is based on multi-view videos for creating an animatable avatar. If you want to reduce the device cost to a single camera, our model also supports a monocular video as input, given the SMPL-X registration. However, the quality will degrade because of the absence of 3D supervision and inaccurate SMPL-X fitting.
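For readers unfamiliar with the term: an SMPL-X registration for a monocular video is essentially a per-frame set of pose, shape, and translation parameters fitted to the subject. The sketch below (a hypothetical layout with random placeholder values, not this repo's actual data format) illustrates the standard SMPL-X parameter dimensions such a registration would contain:

```python
import numpy as np

def make_smplx_registration(num_frames, seed=0):
    """Build a hypothetical per-frame SMPL-X registration for a monocular video.

    Dimensions follow the standard SMPL-X parameterization:
    - betas: 10 shape coefficients (shared across all frames)
    - global_orient: 3 (root rotation, axis-angle)
    - body_pose: 63 (21 body joints x 3, axis-angle)
    - left/right_hand_pose: 45 each (15 hand joints x 3)
    - transl: 3 (root translation)

    Values here are random placeholders; a real registration comes from
    fitting SMPL-X to the video frames.
    """
    rng = np.random.default_rng(seed)
    return {
        "betas": rng.normal(size=(10,)),
        "global_orient": rng.normal(size=(num_frames, 3)),
        "body_pose": rng.normal(size=(num_frames, 63)),
        "left_hand_pose": rng.normal(size=(num_frames, 45)),
        "right_hand_pose": rng.normal(size=(num_frames, 45)),
        "transl": rng.normal(size=(num_frames, 3)),
    }

reg = make_smplx_registration(num_frames=120)
print(reg["body_pose"].shape)  # (120, 63)
```

In practice these parameters are obtained with an off-the-shelf SMPL-X fitting method; errors in that fit are one source of the quality degradation mentioned above.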

YaqiChang commented 2 days ago

Thanks for your explanation!