lixunsong opened 10 months ago
Thank you for releasing it and making it open source. We appreciate the efforts from the community. Also, did you check this other open source project, which aims to replicate the results from this paper? I hope you both can collaborate to make faster progress.
Check this project repo if you haven't already: https://github.com/guoqincode/Open-AnimateAnyone
you can reach out to @guoqincode
Also, when can we expect the training code to be released?
Note: The training code involves private data and packages. We will organize this portion of the code and release it as soon as possible.
As mentioned in the readme of https://github.com/MooreThreads/Moore-AnimateAnyone @rohit901
Thank you @yhyu13, looking forward to the training code as it can help the community immensely.
Looks like the training code has dropped: https://github.com/MooreThreads/Moore-AnimateAnyone/commit/d31bf2a1819060723f1fe220bda9f5c5ccbdf251
wow these people are the best!! more power to you guys @lixunsong!! Thank you for truly open sourcing the knowledge.
BTW, a different paper for animating people, called CHAMP, was just released, and they credit MooreThreads/Moore-AnimateAnyone as the base they built on top of. So this project has even helped other researchers! That code is here: https://github.com/fudan-generative-vision/champ
Hello everyone, we have been working to reproduce this work and are happy to release our code and pretrained weights now. Our reproduction approximates the performance demonstrated by the original paper, for example:
https://github.com/HumanAIGC/AnimateAnyone/assets/138439222/0e45be5b-4e43-4a9c-8c6d-ad0e96e55da5
The repo is available at: https://github.com/MooreThreads/Moore-AnimateAnyone. We look forward to your feedback and ideas!