Tortoise17 closed this issue 6 months ago
The avatar used in this project comes from https://github.com/sign-language-processing/pose-to-video. That repository provides four methods, including training code.
- If you need it to run fast and are able to add some code, use stylegan.
- If you aren't able to change code, use pix2pix.
- If slow inference is acceptable, fine-tune the ControlNet model and use it with the animatediff option.
You can see outputs here: https://github.com/sign-language-processing/pose-to-video/tree/main/assets/outputs
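The decision guide above can be sketched as a tiny helper. This is purely illustrative: the function name, parameters, and return strings are mine, not part of the pose-to-video repository.

```python
def choose_method(need_fast: bool, can_modify_code: bool) -> str:
    """Pick a pose-to-video method per the advice above (illustrative only)."""
    if need_fast and can_modify_code:
        # Fast, but requires adding some code yourself.
        return "stylegan"
    if not can_modify_code:
        # Works without touching the code.
        return "pix2pix"
    # Slow inference is acceptable: fine-tune ControlNet, run with animatediff.
    return "controlnet+animatediff"

print(choose_method(need_fast=True, can_modify_code=True))  # stylegan
```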
Dear friends, this is a really great implementation. I want to ask: is it possible to deploy a different avatar? And is any training code available as open source for fine-tuning the engine?