-
Thanks for the great work; do you know how we can run audio-driven reenactment with a video as input instead of an image (i.e. sync the lips in that video)?
-
Can I use RGB video from a monocular camera for training? We look forward to hearing from you, thank you.
-
Great work! How would you suggest doing cross-person reenactment (i.e. transferring the face, expression, and pose of person A into a video of person B)?
-
Dear Authors,
Thanks for your excellent package. I have a question: given a target head cropped from an image that shows the full body and hands, after head reenactment, how can I stitch the head back into the original image…
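One generic way to stitch a reenacted crop back into the source frame is feathered alpha blending. The sketch below is only an illustration, not part of the package (whose pipeline may handle blending differently, e.g. with a learned blending network); `paste_head`, the `box` convention, and `feather` are names invented for this example.

```python
import numpy as np

def paste_head(original, head_crop, box, feather=10):
    """Alpha-blend a reenacted head crop back into the source frame.

    original  -- H x W x 3 source image
    head_crop -- h x w x 3 reenacted head region
    box       -- (y0, x0) top-left corner where the crop was taken
    feather   -- width in pixels of the soft border fade
    """
    h, w = head_crop.shape[:2]
    # Build a soft mask that fades to zero toward the crop border,
    # so the pasted region has no visible hard seam.
    mask = np.ones((h, w), dtype=np.float32)
    for i in range(feather):
        a = (i + 1) / (feather + 1)
        mask[i, :] *= a
        mask[-1 - i, :] *= a
        mask[:, i] *= a
        mask[:, -1 - i] *= a
    y0, x0 = box
    region = original[y0:y0 + h, x0:x0 + w].astype(np.float32)
    blended = mask[..., None] * head_crop + (1.0 - mask[..., None]) * region
    out = original.copy()
    out[y0:y0 + h, x0:x0 + w] = blended.astype(original.dtype)
    return out
```

For seamless results on real photos, Poisson blending (e.g. OpenCV's `cv2.seamlessClone`) usually beats a plain alpha mask, at the cost of an extra dependency.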
-
I am following the "Training face inpainting" instructions.
How can I solve the `KeyError: 'arch'` raised by `Gr = obj_factory(checkpoint['arch']).to(device)`?
I have also set the `reenactment_model` path correctly. @YuvalNirkin
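As a debugging aid, it can help to wrap the lookup so the error reports what the loaded checkpoint actually contains; a missing `'arch'` key often means the path points at a raw `state_dict` rather than a full training checkpoint. `get_arch` below is a hypothetical helper, and the `checkpoint` dict would come from `torch.load(path, map_location='cpu')`.

```python
def get_arch(checkpoint):
    """Return the 'arch' entry from a loaded checkpoint dict,
    with a clearer error message than the bare KeyError.

    checkpoint -- dict as returned by torch.load(path, map_location='cpu')
    """
    if 'arch' not in checkpoint:
        # Listing the available keys makes it obvious whether this is a
        # full checkpoint (state_dict, optimizer, arch, ...) or just weights.
        raise KeyError(
            "checkpoint has no 'arch' key; available keys: {}. "
            "This usually means the path points at a raw state_dict "
            "rather than a full training checkpoint.".format(
                sorted(checkpoint.keys())))
    return checkpoint['arch']
```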
-
Hey @YuvalNirkin
Thanks for providing such a wonderful algorithm!
I tried to use `swap.py` and the face swapping did happen, but the result wasn't satisfactory. I believe the facial attributes like the eyes, nos…
-
Hi,
Thanks for your amazing work!
Just wanted to know when you will release the code for the One-shot portrait reenactment and stylization.
Cannot wait to try it!
-
Thanks for sharing your work. Could you tell me how you train the non-id space and what kind of loss you use? I ask because it seems no loss for non-id space training has been mentio…
-
Hi,
I'm running the cross-id reenactment scripts with the provided demo images (pic 1), but the results I'm getting (pic 2) look suspiciously stylized. Are these simply the expected results from StyleGAN, o…
-
Thank you for providing your code.
In the `train.py` file, both `driving.png` and `driving_r.png` are used.
Could you clarify the difference between these two? And is this model limited to tr…