HumanAIGC / EMO

Emote Portrait Alive: Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions

FYI - ReferenceNet project (with training code) based on the previous paper HumanAIGC/AnimateAnyone - https://github.com/MooreThreads/Moore-AnimateAnyone #132

Open johndpope opened 9 months ago

johndpope commented 9 months ago

https://github.com/MooreThreads/Moore-AnimateAnyone

Background - MooreThreads is a Chinese Nvidia competitor whose GPUs can run converted CUDA code: https://www.theregister.com/2023/12/20/moore_threads_mtt_s4000_gpu/

This code seems to be about 70% of the way there.

Please upvote the request for the developers to implement the EMO paper - https://github.com/MooreThreads/Moore-AnimateAnyone/issues/98

Obviously the pose guider is unrelated, but it may not be so hard to swap it out for the wav2vec audio features + speed buckets (see the sketch below).

[image: animate_anyone_architecture diagram]
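Rough sketch of what I mean by swapping the pose guider out for audio conditioning. This assumes a frozen wav2vec2 encoder from HuggingFace Transformers; the module name, dimensions, number of buckets, and the way the speed embedding is added are hypothetical, not from either repo or the paper.

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class AudioSpeedConditioner(nn.Module):
    """Hypothetical replacement for the pose guider: audio tokens + speed bucket."""

    def __init__(self, wav2vec_dim=768, cond_dim=320, num_speed_buckets=9):
        super().__init__()
        # Frozen wav2vec2 encoder extracts per-frame audio features.
        self.wav2vec = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")
        self.wav2vec.requires_grad_(False)
        # Project audio features to whatever dim the denoising UNet's
        # cross-attention expects (320 is a placeholder).
        self.audio_proj = nn.Linear(wav2vec_dim, cond_dim)
        # Discrete head-motion speed buckets, embedded and added to the audio tokens.
        self.speed_embed = nn.Embedding(num_speed_buckets, cond_dim)

    def forward(self, waveform, speed_bucket_ids):
        # waveform: (B, num_samples) raw 16 kHz audio
        # speed_bucket_ids: (B,) integer bucket index per clip
        with torch.no_grad():
            audio_feats = self.wav2vec(waveform).last_hidden_state  # (B, T, 768)
        tokens = self.audio_proj(audio_feats)                        # (B, T, cond_dim)
        speed = self.speed_embed(speed_bucket_ids).unsqueeze(1)      # (B, 1, cond_dim)
        # Feed this as cross-attention context in place of the pose maps.
        return tokens + speed
```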

We can use this for training - https://github.com/HumanAIGC/EMO/issues/131

UPDATE: the paper mentions using MediaPipe to calculate the 6 degrees of freedom of the head pose.

[screenshot attachment]
https://github.com/google/mediapipe/
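For reference, a minimal sketch of pulling the 6 DoF head pose out of MediaPipe, assuming the Face Landmarker task API and a downloaded face_landmarker.task model file; the Euler-angle decomposition via SciPy is just one way to split the matrix into rotation + translation.

```python
import mediapipe as mp
from mediapipe.tasks import python as mp_tasks
from mediapipe.tasks.python import vision
from scipy.spatial.transform import Rotation as R

options = vision.FaceLandmarkerOptions(
    base_options=mp_tasks.BaseOptions(model_asset_path="face_landmarker.task"),
    output_facial_transformation_matrixes=True,
    num_faces=1,
)
landmarker = vision.FaceLandmarker.create_from_options(options)

# "frame_0001.png" is a placeholder path to a single video frame.
image = mp.Image.create_from_file("frame_0001.png")
result = landmarker.detect(image)

# 4x4 matrix mapping the canonical face model into camera space.
matrix = result.facial_transformation_matrixes[0]
pitch, yaw, roll = R.from_matrix(matrix[:3, :3]).as_euler("xyz", degrees=True)
tx, ty, tz = matrix[:3, 3]
print(f"rotation (deg): {pitch:.1f} {yaw:.1f} {roll:.1f}  translation: {tx:.2f} {ty:.2f} {tz:.2f}")
```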

anishmenon commented 9 months ago

So a GitHub public repository can apparently be used as a marketplace.

Don't worry guys, you will get funded soon. Poor beggars have more standards than this repo's authors.

I really recommend GitHub bring in moderation for repos like this to save other people's time.

johndpope commented 9 months ago

I believe I have the HeadRotation calculation working correctly (it took me half a dozen attempts) - https://github.com/johndpope/Emote-hack/blob/main/Net.py https://github.com/HumanAIGC/EMO/issues/166
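For anyone following along, here is a rough sketch of how per-frame head rotation angles could be turned into the discrete speed buckets mentioned above. The bucket boundaries, fps default, and function name are made up for illustration; they are not from the paper or from Net.py.

```python
import numpy as np

def rotation_speed_buckets(euler_angles, fps=30, boundaries=(2, 5, 10, 20)):
    """Map per-frame head Euler angles (N, 3), in degrees, to discrete speed buckets.

    boundaries are in degrees/second and purely illustrative.
    Returns an array of length N-1 with bucket indices 0..len(boundaries).
    """
    # Frame-to-frame angular change, converted to degrees per second.
    deltas = np.abs(np.diff(euler_angles, axis=0)) * fps
    # Use the fastest-moving axis as the per-frame head speed.
    speed = deltas.max(axis=1)
    return np.digitize(speed, boundaries)
```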