snap-research / articulated-animation

Code for the paper "Motion Representations for Articulated Animation"
https://snap-research.github.io/articulated-animation/

VoxCeleb examples? #6

Open yaseryacoob opened 3 years ago

yaseryacoob commented 3 years ago

I noticed there are no examples from VoxCeleb in the paper or the code. There is also not enough information or data to replicate the experiments. Can you please share?

AliaksandrSiarohin commented 3 years ago

We did not include vox because there is no benefit in using the current model, as explained in the paper. The checkpoint and config for vox are still provided.
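
For anyone who wants to try the released VoxCeleb assets, here is a minimal sketch for loading and inspecting them. The file paths and dictionary keys are assumptions based on how FOMM-style repositories usually package configs and weights, not something confirmed in this thread; check the repository's README for the exact demo invocation.

```python
import torch
import yaml

# Minimal sketch for inspecting the released VoxCeleb assets.
# File paths and dictionary keys are assumptions (FOMM-style layout),
# not verified against this repository's code.
with open("config/vox256.yaml") as f:
    config = yaml.safe_load(f)
print(config.keys())  # e.g. dataset_params / model_params / train_params

ckpt = torch.load("checkpoints/vox256.pth", map_location="cpu")
print(list(ckpt.keys()))  # e.g. generator / region_predictor / optimizer states
```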

yaseryacoob commented 3 years ago

Since you went through the effort of evaluating it, even if it didn't improve the results, there are insights that could come out of applying the new framework to such data. It is your work, so it is your call. Thanks for responding.

ChengBinJin commented 3 years ago

> We did not include vox because there is no benefit in using the current model, as explained in the paper. The checkpoint and config for vox are still provided.

@AliaksandrSiarohin We evaluated the vox256.pth provided in this repository on our own face-related test set. The quantitative results show that this model is better than the vox-adv-cpk checkpoint from FOMM. Can you speculate on why its results are better?
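
For context, here is a minimal sketch of the kind of checkpoint comparison described above, using a per-frame L1 reconstruction error as in FOMM-style evaluations. The metric choice and the file paths are assumptions, not what was actually used in this comparison.

```python
import numpy as np
import imageio


def l1_reconstruction_error(generated_path, ground_truth_path):
    """Mean absolute pixel error between two equally sized videos.

    A minimal sketch of an L1 reconstruction metric; the actual metric
    and file layout used in the comparison above are not stated."""
    gen = imageio.mimread(generated_path, memtest=False)
    gt = imageio.mimread(ground_truth_path, memtest=False)
    per_frame = [
        np.abs(g.astype(np.float64) - t.astype(np.float64)).mean() / 255.0
        for g, t in zip(gen, gt)
    ]
    return float(np.mean(per_frame))


# Hypothetical usage: lower is better for each checkpoint's reconstructions.
# print(l1_reconstruction_error("recon_vox256/clip_0001.mp4", "test/clip_0001.mp4"))
```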

AliaksandrSiarohin commented 3 years ago

Just a speculation: this model generalises better when estimating affine transformations; you can see an explanation of this phenomenon in the toy-experiment section of the paper. Because we evaluated the model on faces from vox, we did not observe an improvement there, since the vox dataset is already large and generalisation is not a problem. Your dataset, although also face-like, is different, so generalisation may still be the issue for it.
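
To make the "estimating affine transformations" point concrete, below is an illustrative sketch (not the repository's actual code) of recovering a translation and a linear part from a normalized region heatmap via its first and second moments, in the spirit of the paper's PCA-based motion estimation; the function name, shapes, and exact formulation are assumptions.

```python
import numpy as np


def affine_from_heatmap(heatmap, grid):
    """Illustrative sketch (not the repository's code): recover a translation
    and a linear part from a normalized region heatmap via its first and
    second moments, in the spirit of PCA-based affine estimation.

    heatmap: (H, W), non-negative, sums to 1; grid: (H, W, 2) coordinates."""
    shift = (heatmap[..., None] * grid).sum(axis=(0, 1))            # first moment -> translation
    centered = grid - shift
    cov = np.einsum("hwi,hwj,hw->ij", centered, centered, heatmap)  # second moment of the region
    u, s, _ = np.linalg.svd(cov)                                    # principal axes of the region
    linear = u @ np.diag(np.sqrt(s))                                # linear (affine) part
    return linear, shift
```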

ChengBinJin commented 3 years ago

Thank you for your explanation! I really like this work, and Monkey-Net and FOMM too.