Update: We have released our training code! Now you can train your own AnimateAnyone models. See here for more details. Have fun!
Update: We have launched a HuggingFace Spaces demo of Moore-AnimateAnyone here!
This repository reproduces AnimateAnyone. To align with the results demonstrated in the original paper, we adopt various approaches and tricks, which may differ somewhat from the paper and from other implementations.
It's worth noting that this is a very preliminary version, aiming to approximate the performance (roughly 80% in our tests) shown in AnimateAnyone.
We will continue to develop it, and we welcome feedback and ideas from the community. The enhanced version will also be launched on our MoBi MaLiang AIGC platform, running on our own full-featured GPU S4000 cloud computing platform.
Here are some results we generated, at a resolution of 512x768.
Limitations: We observe the following shortcomings in the current version:
These issues will be addressed and improved in the near future. We appreciate your patience!
Prerequisites: 3.8 <= python <= 3.11, CUDA >= 11.3, ffmpeg, and git.
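As a quick sanity check before installing anything else, the small Python sketch below (not part of this repository; the filename env_check.py is just an example) verifies that a supported Python version is active and that ffmpeg and git are on the PATH. The nvcc check is only a heuristic for the CUDA toolkit and may not reflect the CUDA runtime PyTorch will use.

```python
# env_check.py -- minimal, illustrative sanity check for the prerequisites above
import shutil
import sys

# This project expects Python between 3.8 and 3.11.
ok = (3, 8) <= sys.version_info[:2] <= (3, 11)
print(f"python {sys.version.split()[0]}: {'OK' if ok else 'unsupported'}")

# ffmpeg and git must be discoverable on PATH; nvcc is only a rough CUDA hint.
for tool in ("ffmpeg", "git", "nvcc"):
    path = shutil.which(tool)
    print(f"{tool}: {path or 'NOT FOUND'}")
```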
Python and Git:
Python 3.10.11: https://www.python.org/ftp/python/3.10.11/python-3.10.11-amd64.exe
Install ffmpeg for your operating system (https://www.geeksforgeeks.org/how-to-install-ffmpeg-on-windows/)
Note: in step 4 of that guide, add ffmpeg to the Windows system environment Path.
Give unrestricted script access to PowerShell so the venv can work:
Set-ExecutionPolicy Unrestricted
and answer A.
Then clone the repository with its submodules:
git clone --recurse-submodules https://github.com/sdbds/Moore-AnimateAnyone-for-windows/
Install by running install.ps1 (or install-cn.ps1 for users in China) with PowerShell.
Loading local safetensors or ckpt files is supported: change the pretrained_weights entry in config/prompts/animation.yaml to point to your local SD1.5 model, such as "D:\\stablediffusion-webui\\models\\Stable-diffusion\\v1-5-pruned.ckpt".
Download weights
Download our trained weights, which include four parts: denoising_unet.pth, reference_unet.pth, pose_guider.pth, and motion_module.pth.
Download the pretrained weights of the base models and other components:
- StableDiffusion V1.5
- sd-vae-ft-mse
- image_encoder
Download DWPose weights (dw-ll_ucoco_384.onnx, yolox_l.onnx) following this.
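If you prefer to script these downloads, here is a minimal sketch using huggingface_hub. The repo IDs (runwayml/stable-diffusion-v1-5, stabilityai/sd-vae-ft-mse, lambdalabs/sd-image-variations-diffusers for the image encoder, and yzd-v/DWPose for the ONNX files) are assumptions based on commonly used sources, not an official list; adjust them and the target directories to whatever you actually use.

```python
# download_weights.py -- illustrative sketch; repo IDs below are assumptions, not an official list
from huggingface_hub import hf_hub_download, snapshot_download

# Base Stable Diffusion 1.5 and the fine-tuned VAE.
snapshot_download("runwayml/stable-diffusion-v1-5",
                  local_dir="./pretrained_weights/stable-diffusion-v1-5")
snapshot_download("stabilityai/sd-vae-ft-mse",
                  local_dir="./pretrained_weights/sd-vae-ft-mse")

# CLIP image encoder (taken here from the sd-image-variations repo).
snapshot_download("lambdalabs/sd-image-variations-diffusers",
                  allow_patterns=["image_encoder/*"],
                  local_dir="./pretrained_weights")

# DWPose ONNX models.
for name in ("dw-ll_ucoco_384.onnx", "yolox_l.onnx"):
    hf_hub_download("yzd-v/DWPose", filename=name,
                    local_dir="./pretrained_weights/DWPose")
```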
Put these weights under a directory, like ./pretrained_weights, and organize them as follows:
./pretrained_weights/
|-- DWPose
| |-- dw-ll_ucoco_384.onnx
| `-- yolox_l.onnx
|-- image_encoder
| |-- config.json
| `-- pytorch_model.bin
|-- denoising_unet.pth
|-- motion_module.pth
|-- pose_guider.pth
|-- reference_unet.pth
`-- stable-diffusion-v1-5
|-- feature_extractor
| `-- preprocessor_config.json
|-- model_index.json
|-- unet
| |-- config.json
| `-- diffusion_pytorch_model.bin
`-- v1-inference.yaml
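The short sketch below checks that the expected files from the layout above are in place before you run inference; the file list simply mirrors the tree shown here, and the filename check_weights.py is just an example.

```python
# check_weights.py -- verify the ./pretrained_weights layout shown above (illustrative sketch)
from pathlib import Path

ROOT = Path("./pretrained_weights")
EXPECTED = [
    "DWPose/dw-ll_ucoco_384.onnx",
    "DWPose/yolox_l.onnx",
    "image_encoder/config.json",
    "image_encoder/pytorch_model.bin",
    "denoising_unet.pth",
    "motion_module.pth",
    "pose_guider.pth",
    "reference_unet.pth",
    "stable-diffusion-v1-5/feature_extractor/preprocessor_config.json",
    "stable-diffusion-v1-5/model_index.json",
    "stable-diffusion-v1-5/unet/config.json",
    "stable-diffusion-v1-5/unet/diffusion_pytorch_model.bin",
    "stable-diffusion-v1-5/v1-inference.yaml",
]

# Report anything that is missing from the expected layout.
missing = [p for p in EXPECTED if not (ROOT / p).exists()]
print("all weights found" if not missing else f"missing: {missing}")
```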
Note: If you have already installed some of the pretrained models, such as StableDiffusion V1.5, you can specify their paths in the config file (e.g. ./config/prompts/animation.yaml).
Here is the CLI command for running the inference script:
python -m scripts.pose2vid --config ./configs/prompts/animation.yaml -W 512 -H 784 -L 64
You can refer to the format of animation.yaml to add your own reference images or pose videos. To convert a raw video into a pose video (keypoint sequence), run the following command:
python tools/vid2pose.py --video_path /path/to/your/video.mp4
Launch the local Gradio demo on GPU:
Run run_gui.ps1 with PowerShell.
Then open the Gradio demo in your local browser.
Note: package dependencies have been updated; you may need to upgrade your environment via pip install -r requirements.txt before training.
Extract keypoints from raw videos:
python tools/extract_dwpose_from_vid.py --video_root /path/to/your/video_dir
Extract the meta info of the dataset:
python tools/extract_meta_info.py --root_path /path/to/your/video_dir --dataset_name anyone
Update lines in the training config file:
data:
meta_paths:
- "./data/anyone_meta.json"
Put the OpenPose ControlNet weights under ./pretrained_weights; they are used to initialize the pose_guider.
Put the sd-image-variations weights under ./pretrained_weights; they are used to initialize the unet weights.
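As with the inference weights, these two initializers can be fetched with huggingface_hub. The repo IDs below (lllyasviel/control_v11p_sd15_openpose and lambdalabs/sd-image-variations-diffusers) and the target directory names are assumptions; swap in whichever OpenPose ControlNet and sd-image-variations checkpoints you intend to use.

```python
# fetch_stage1_init_weights.py -- illustrative sketch; repo IDs and paths are assumptions
from huggingface_hub import snapshot_download

# OpenPose ControlNet, used to initialize the pose_guider.
snapshot_download("lllyasviel/control_v11p_sd15_openpose",
                  local_dir="./pretrained_weights/control_v11p_sd15_openpose")

# sd-image-variations, used to initialize the unet weights.
snapshot_download("lambdalabs/sd-image-variations-diffusers",
                  local_dir="./pretrained_weights/sd-image-variations-diffusers")
```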
Run command:
accelerate launch train_stage_1.py --config configs/train/stage1.yaml
Put the pretrained motion module weights mm_sd_v15_v2.ckpt (download link) under ./pretrained_weights.
Specify the stage 1 training weights in the config file stage2.yaml, for example:
stage1_ckpt_dir: './exp_output/stage1'
stage1_ckpt_step: 30000
Run command:
accelerate launch train_stage_2.py --config configs/train/stage2.yaml
HuggingFace Demo: We have launched a quick preview demo of Moore-AnimateAnyone on HuggingFace Spaces!
We appreciate the assistance provided by the HuggingFace team in setting up this demo.
To reduce waiting time, we limit the size (width, height, and length) and inference steps when generating videos.
If you have your own GPU resources (>= 16 GB VRAM), you can run a local Gradio app via the following command:
python app.py
We will launch this model on our MoBi MaLiang AIGC platform, running on our own full-featured GPU S4000 cloud computing platform. MoBi MaLiang has now integrated various AIGC applications and functionalities (e.g. text-to-image, controllable generation, ...). You can experience it by clicking this link or by scanning the QR code below via WeChat!
This project is intended for academic research, and we explicitly disclaim any responsibility for user-generated content. Users are solely liable for their actions while using the generative model. The project contributors have no legal affiliation with, nor accountability for, users' behaviors. It is imperative to use the generative model responsibly, adhering to both ethical and legal standards.
We first thank the authors of [AnimateAnyone](). Additionally, we would like to thank the contributors to the magic-animate, animatediff, and Open-AnimateAnyone repositories for their open research and exploration. Furthermore, our repo incorporates some code from dwpose and animatediff-cli-prompt-travel, and we extend our thanks to them as well.