Nyquist0 opened this issue 7 months ago
Yes. We use the video encoder and the generator in our code to replace the motion encoder and the motion decoder in the official FaceFormer code.
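For reference, below is a minimal sketch (not the code actually used in the paper) of how such a swap could be wired up in PyTorch: a FaceFormer-style transformer decoder conditioned on audio features, with a video encoder providing the queries and an image generator producing frames in place of the original motion encoder and motion decoder. All module names, dimensions, and the overall wiring here are assumptions for illustration only.

```python
# Hypothetical sketch of a FaceFormer-style baseline adapted to 2D video.
# VideoEncoder / FrameGenerator are illustrative stand-ins, not the real TalkLip modules.
import torch
import torch.nn as nn

class VideoEncoder(nn.Module):
    """Stand-in video encoder: frames -> per-frame feature vectors."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim))

    def forward(self, frames):                   # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        x = self.backbone(frames.flatten(0, 1))  # (B*T, feat_dim)
        return x.view(b, t, -1)                  # (B, T, feat_dim)

class FrameGenerator(nn.Module):
    """Stand-in generator: per-frame features -> RGB frames."""
    def __init__(self, feat_dim=512, out_hw=96):
        super().__init__()
        self.out_hw = out_hw
        self.head = nn.Linear(feat_dim, 3 * out_hw * out_hw)

    def forward(self, feats):                    # feats: (B, T, feat_dim)
        b, t = feats.shape[:2]
        img = torch.sigmoid(self.head(feats))
        return img.view(b, t, 3, self.out_hw, self.out_hw)

class FaceFormerVideoBaseline(nn.Module):
    """FaceFormer-style transformer decoder driven by audio features, but with a
    video encoder and frame generator replacing the motion encoder/decoder."""
    def __init__(self, audio_dim=768, feat_dim=512):
        super().__init__()
        self.video_enc = VideoEncoder(feat_dim)
        self.audio_proj = nn.Linear(audio_dim, feat_dim)
        layer = nn.TransformerDecoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.generator = FrameGenerator(feat_dim)

    def forward(self, ref_frames, audio_feats):  # audio_feats: (B, T, audio_dim)
        tgt = self.video_enc(ref_frames)         # reference/identity features as queries
        mem = self.audio_proj(audio_feats)       # audio features as cross-attention memory
        h = self.decoder(tgt, mem)
        return self.generator(h)                 # predicted frames: (B, T, 3, 96, 96)
```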
Thanks for your reply, that makes sense. Is the rest of the configuration the same as in your TalkLip setup? Also, would it be possible to get the adjusted FaceFormer code that you tested in the experiments section?
Sorry, since it was just a baseline, I did not modify the official code in a careful and organized manner, so the code ended up messy. I am no longer sure which version of the code is correct or which checkpoint is compatible with it. However, I did find the videos synthesized on LRS2 by the FaceFormer baseline mentioned in my paper; I can send them to you if you are interested.
Sure, that would be appreciated. Would you mind sending them by email? lancel@nvidia.com
Hi, I saw the qualitative results in the paper that include FaceFormer. But as far as I know, FaceFormer is a 3D mesh animation algorithm.
May I ask which codebase you used in the paper? Did you take the official FaceFormer code, swap in a video encoder and decoder, and retrain it?
Looking forward to your reply. Best.