Open oijoijcoiejoijce opened 1 year ago
I have similar questions. I tried training on one identity and then running cross-reenactment on another identity via RGB image signals, but the result is quite poor:
Above are the source, driving, and result images, respectively.
May I ask if this is expected? That is, does cross-reenactment via RGB signals simply give poor quality? I noticed that the paper only shows audio-driven and 3DMM-coefficient-driven reenactment.
Many thanks.
Great work! How would you suggest doing cross-person reenactment (i.e., transferring the face, expression, and pose of person A into a video of person B)?