mayuelala / FollowYourEmoji

[SIGGRAPH Asia 2024] Follow-Your-Emoji: This repo is the official implementation of "Follow-Your-Emoji: Fine-Controllable and Expressive Freestyle Portrait Animation"

Impressive #1

Inferencer opened this issue 4 months ago (Open)

Inferencer commented 4 months ago

Great work! A couple of questions which might save a bunch of issues from being opened:

  1. ETA on the code release?

  2. Will the released model be 256 or 512?

  3. Inference time on GPU? (please state the tested GPU, driving video frame rate, and duration)

  4. Lowest VRAM required? (if tested; if not, please ignore and await user input — see the measurement sketch at the end of this comment)
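
For anyone who wants to report numbers once the code is out, here is a minimal sketch of how inference time and peak VRAM could be measured with standard PyTorch utilities. `run_pipeline` is a purely hypothetical placeholder for whatever inference entry point the repo ends up shipping:

```python
import time
import torch

def benchmark(run_pipeline, *args, **kwargs):
    """Report wall-clock time and peak VRAM for one inference call.

    `run_pipeline` is a hypothetical stand-in for the repo's actual
    inference function; swap in the real entry point once released.
    """
    torch.cuda.reset_peak_memory_stats()
    torch.cuda.synchronize()
    start = time.time()

    result = run_pipeline(*args, **kwargs)

    torch.cuda.synchronize()
    elapsed = time.time() - start
    peak_gb = torch.cuda.max_memory_allocated() / 1024**3
    print(f"GPU: {torch.cuda.get_device_name(0)}")
    print(f"Inference time: {elapsed:.1f} s, peak VRAM: {peak_gb:.2f} GB")
    return result
```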

zhanghongyong123456 commented 4 months ago

> Great work! A couple of questions which might save a bunch of issues from being opened:
>
>   1. ETA on the code release?
>   2. Will the released model be 256 or 512?
>   3. Inference time on GPU? (please state the tested GPU, driving video frame rate, and duration)
>   4. Lowest VRAM required? (if tested; if not, please ignore and await user input)

+1

lymhust commented 4 months ago

An alternative project, EchoMimic, has been open-sourced. EchoMimic can generate portrait videos driven not only by audio or facial landmarks individually, but also by a combination of audio and selected facial landmarks. Project page: https://badtobest.github.io/echomimic.html GitHub: https://github.com/BadToBest/EchoMimic

Inferencer commented 4 months ago

> An alternative project, EchoMimic, has been open-sourced. EchoMimic can generate portrait videos driven not only by audio or facial landmarks individually, but also by a combination of audio and selected facial landmarks. Project page: https://badtobest.github.io/echomimic.html GitHub: https://github.com/BadToBest/EchoMimic

To add to that, we now also have a SOTA option at 512px: https://github.com/KwaiVGI/LivePortrait

mayuelala commented 4 months ago

We are going to release the code... don't worry!

mayuelala commented 4 months ago

We have now released the inference code!