YuelangX / LatentAvatar

A PyTorch implementation of "LatentAvatar: Learning Latent Expression Code for Expressive Neural Head Avatar"

About weights of loss terms #6

[Open] qxforever opened this issue 1 year ago

qxforever commented 1 year ago

Thanks for your great work!

I found that the value of $\lambda_{lr}$ differs between the paper and the code: the paper sets $\lambda_{lr} = 0.1$, but in lib/trainer/AvatarTrainer.py it is set to $1.0$.

https://github.com/YuelangX/LatentAvatar/blob/257aea312dc1322ac9fd5994a61ce6def702fcca/lib/trainer/AvatarTrainer.py#L57
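For context, the weighting I am asking about combines the loss terms along these lines (a minimal sketch only, not the actual AvatarTrainer code; the names `loss_hr` and `loss_lr` are my assumptions):

```python
import torch

def combine_losses(loss_hr: torch.Tensor, loss_lr: torch.Tensor,
                   lambda_lr: float = 1.0) -> torch.Tensor:
    # Sketch of the weighted sum in question (term names are assumptions):
    # the code at AvatarTrainer.py#L57 uses lambda_lr = 1.0,
    # while the paper reports lambda_lr = 0.1.
    return loss_hr + lambda_lr * loss_lr
```

If I understand correctly, changing the default from $1.0$ to $0.1$ would match the paper's setting.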

Should I change the code to $\lambda_{lr} = 0.1$ if I want to reproduce your results on the demo dataset?