MooreThreads / Moore-AnimateAnyone

Character Animation (AnimateAnyone, Face Reenactment)
Apache License 2.0
3.1k stars 241 forks

(IDEA/REQUEST) Possible consistency improvements for Face Reenactment and better animation #132

Open A-2-H opened 4 months ago

A-2-H commented 4 months ago

I was searching through GitHub to find solutions for face animation and the consistency of face resemblance, and I found that the DreamTalk project was made with Alibaba contributions, so maybe it is the technology used in their EMO project: DreamTalk


It is based on PIRender and uses Deep3DFaceRecon. PIRender also used this script for audio-driven movements: StyleGestures.

In their samples we can observe face consistency and random head movements (maybe we could have an option for random movements or video-driven movements?).

So it is audio-driven, generates random body movements, and also makes "realistic" face movements. I think that's what we need. Maybe we can learn from it or implement it to achieve similar face animations?

When I tried face reenactment with Moore-AnimateAnyone I saw many deformations of the face and also flickering, so maybe those solutions could help reduce them and make the results more consistent?
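As a side note, the flickering described above can be quantified before and after any fix. This is just a hypothetical sketch (not part of Moore-AnimateAnyone): it scores temporal flicker as the mean absolute difference between consecutive frames, so a perfectly stable clip scores 0 and jittery output scores higher.

```python
import numpy as np

def flicker_score(frames: np.ndarray) -> float:
    """Mean absolute change between consecutive frames.

    frames: array of shape (T, H, W, C); lower is temporally smoother.
    """
    frames = frames.astype(np.float32)
    diffs = np.abs(np.diff(frames, axis=0))  # (T-1, H, W, C) frame-to-frame deltas
    return float(diffs.mean())

# A static clip scores 0.0; random noise scores clearly higher.
static = np.ones((10, 4, 4, 3))
noisy = np.random.default_rng(0).random((10, 4, 4, 3))
print(flicker_score(static))  # 0.0
print(flicker_score(noisy) > flicker_score(static))  # True
```

Comparing this score on generated clips before and after a change would give a rough, objective handle on whether a proposed fix actually reduces flicker.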

liangyang-mt commented 4 months ago

Thank you very much for your suggestion. The stability and consistency of generated videos is indeed a difficult research problem. I personally feel that incorporating 3D modeling ideas would benefit the stability of facial and head movements. At the same time, conditioning on previously generated frames should also help inter-frame consistency. We are currently trying more optimization approaches, which may yield better results.
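To illustrate the "combine prior generated frames" idea in its simplest form, here is a hedged sketch (my own assumption, not the project's actual method, which would operate on latents inside the diffusion pipeline rather than on final pixels): each output frame is exponentially blended with the previous output, which damps high-frequency flicker at the cost of some motion lag.

```python
import numpy as np

def temporal_smooth(frames: np.ndarray, alpha: float = 0.8) -> np.ndarray:
    """Exponentially blend each frame with the running output.

    out[t] = alpha * frames[t] + (1 - alpha) * out[t-1]
    frames: (T, H, W, C); alpha in (0, 1], higher = less smoothing.
    """
    frames = frames.astype(np.float32)
    out = np.empty_like(frames)
    out[0] = frames[0]
    for t in range(1, len(frames)):
        out[t] = alpha * frames[t] + (1 - alpha) * out[t - 1]
    return out

# Smoothing a noisy clip reduces frame-to-frame change.
rng = np.random.default_rng(0)
clip = rng.random((16, 8, 8, 3)).astype(np.float32)
smoothed = temporal_smooth(clip, alpha=0.7)
raw_flicker = np.abs(np.diff(clip, axis=0)).mean()
new_flicker = np.abs(np.diff(smoothed, axis=0)).mean()
print(new_flicker < raw_flicker)  # True
```

In a real pipeline the same recurrence would more likely be applied to denoised latents or used as cross-frame attention conditioning, but the post-hoc pixel version above is a cheap baseline to sanity-check whether temporal blending helps at all.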