lizhe00 / AnimatableGaussians

Code of [CVPR 2024] "Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling"
https://animatable-gaussians.github.io/

StyleUNet Conditions #22

Closed taeksuu closed 5 months ago

taeksuu commented 5 months ago

Hi, thank you for the amazing work!

I just have a question regarding the inputs of the StyleUNet.

According to the paper, your StyleUNet takes both the front and back posed position maps as input and outputs the front and back pose-dependent Gaussian maps.

Meanwhile, I noticed that in the code only the front posed position map is used as the condition to predict both the front and back Gaussian maps. https://github.com/lizhe00/AnimatableGaussians/blob/4a618272891c57683e89bacd58dd719ba4456e43/network/avatar.py#L166

I wonder whether this is intentional, since the outputs still look good.

Thank you in advance.

lizhe00 commented 5 months ago

Hi, inputting either the 3-channel (front-only) or the 6-channel (front + back) position map is fine, since the front map already encodes the pose information.
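
To illustrate the point being discussed: switching between the two conditioning schemes only changes the input channel count of the conditioning network, while the output (front + back Gaussian maps) stays the same. The sketch below is hypothetical (a tiny stand-in conv net, not the repo's actual StyleUNet), just to show the 3-channel vs. 6-channel condition:

```python
import torch
import torch.nn as nn

class TinyCondNet(nn.Module):
    """Hypothetical stand-in for a conditioned UNet (not the repo's StyleUNet).

    The only difference between front-only and front+back conditioning
    is `in_channels` of the first conv layer.
    """

    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            # Illustrative output: 6 channels, e.g. front + back xyz maps.
            nn.Conv2d(16, 6, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Dummy posed position maps (batch, xyz channels, H, W).
front = torch.randn(1, 3, 64, 64)  # front posed position map
back = torch.randn(1, 3, 64, 64)   # back posed position map

# Option A: condition on the front map only (3-channel input).
net_front_only = TinyCondNet(in_channels=3)
out_a = net_front_only(front)

# Option B: condition on front + back concatenated (6-channel input).
net_both = TinyCondNet(in_channels=6)
out_b = net_both(torch.cat([front, back], dim=1))

# Both options predict the same output shape (front + back maps).
print(out_a.shape, out_b.shape)
```

Either way the network predicts the full front-and-back output; per the reply above, the front map alone is a sufficient pose condition in practice.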

taeksuu commented 5 months ago

Oh, it makes sense. Thanks for the reply.