MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation.
Zhengyan Tong, Chao Li, Zhaokang Chen, Bin Wu†, Wenjiang Zhou (†Corresponding Author, benbinwu@tencent.com)
Lyra Lab, Tencent Music Entertainment
GitHub | Hugging Face Space (coming soon) | Project (coming soon) | Technical report (coming soon)
MusePose is an image-to-video generation framework for virtual humans under control signals such as pose. The currently released model is an implementation of AnimateAnyone, built by optimizing Moore-AnimateAnyone.
MusePose is the last building block of the Muse open-source series. Together with MuseV and MuseTalk, we hope the community can join us and march towards the vision where a virtual human can be generated end-to-end with native abilities of full-body movement and interaction. Please stay tuned for our next milestone!
We really appreciate AnimateAnyone for their academic paper and Moore-AnimateAnyone for their code base, which have significantly expedited the development of the AIGC community and MusePose.
Update:
Join Lyra Lab, Tencent Music Entertainment!
We are currently seeking AIGC researchers, including interns, new graduates, and experienced hires.
Please find details in the following links or contact zkangchen@tencent.com.
MusePose is a diffusion-based and pose-guided virtual human video generation framework.
Our main contributions could be summarized as follows:
- a pose align algorithm, so that users can align arbitrary dance videos to arbitrary reference images, which SIGNIFICANTLY improves inference performance and enhances model usability;
- the MusePose model and pretrained weights, together with the inference code and config inference_v2.yaml.

We provide a detailed tutorial about the installation and the basic usage of MusePose for new users:
To prepare the Python environment and install additional packages such as opencv, diffusers, mmcv, etc., please follow the steps below:
We recommend Python >= 3.10 and CUDA 11.7. Then build the environment as follows:
pip install -r requirements.txt
pip install --no-cache-dir -U openmim
mim install mmengine
mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0"
You can download weights manually as follows:
Download our trained weights.
Download the weights of the other components: dwpose (dw-ll_ucoco_384.pth and yolox_l_8x8_300e_coco.pth), sd-image-variations-diffusers, the image encoder, and sd-vae-ft-mse.
Finally, these weights should be organized in pretrained_weights as follows:
./pretrained_weights/
|-- MusePose
|   |-- denoising_unet.pth
|   |-- motion_module.pth
|   |-- pose_guider.pth
|   └── reference_unet.pth
|-- dwpose
|   |-- dw-ll_ucoco_384.pth
|   └── yolox_l_8x8_300e_coco.pth
|-- sd-image-variations-diffusers
|   └── unet
|       |-- config.json
|       └── diffusion_pytorch_model.bin
|-- image_encoder
|   |-- config.json
|   └── pytorch_model.bin
└── sd-vae-ft-mse
    |-- config.json
    └── diffusion_pytorch_model.bin
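If you prefer a scripted download over manual steps, the sketch below shows one way to fetch the MusePose checkpoints with huggingface_hub. The repository id TMElyralab/MusePose and the Hub as hosting location are assumptions here, so adjust them to wherever the weights are actually published, and place the remaining components (dwpose, sd-image-variations-diffusers, image_encoder, sd-vae-ft-mse) into the layout above from their respective sources.

# Sketch: fetch the MusePose checkpoints into ./pretrained_weights/MusePose.
# Assumption: the weights are hosted on the Hugging Face Hub under
# "TMElyralab/MusePose"; change repo_id / local_dir if they are published elsewhere.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TMElyralab/MusePose",              # assumed repo id
    local_dir="./pretrained_weights/MusePose",  # matches the layout above
    allow_patterns=["*.pth"],                   # only the .pth checkpoints
)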
Prepare your reference images and dance videos in the folder ./assets and organize them as in the example:
./assets/
|-- images
| └── ref.png
└── videos
└── dance.mp4
Get the aligned dwpose of the reference image:
python pose_align.py --imgfn_refer ./assets/images/ref.png --vidfn ./assets/videos/dance.mp4
After this, you can see the pose alignment results in ./assets/poses, where ./assets/poses/align/img_ref_video_dance.mp4 is the aligned dwpose video and ./assets/poses/align_demo/img_ref_video_dance.mp4 is for debugging.
Add the path of the reference image and the aligned dwpose video to the test config file ./configs/test_stage_2.yaml, as in the example:
test_cases:
  "./assets/images/ref.png":
    - "./assets/poses/align/img_ref_video_dance.mp4"
Then, simply run
python test_stage_2.py --config ./configs/test_stage_2.yaml
Here, ./configs/test_stage_2.yaml is the path to the inference configuration file.
Finally, you can see the output results in ./output/
If you want to reduce the VRAM cost, you could set the width and height for inference. For example,
python test_stage_2.py --config ./configs/test_stage_2.yaml -W 512 -H 512
It will generate the video at 512 x 512 first, and then resize it back to the original size of the pose video.
Currently, it takes 16 GB of VRAM to run at 512 x 512 x 48 and 28 GB of VRAM to run at 768 x 768 x 48. Note, however, that the inference resolution affects the final results (especially the face region).
If you want to enhance the face region for better facial consistency, you could use FaceFusion: its face-swap function can swap the face from the reference image into the generated video.
Thanks for open-sourcing!
@article{musepose,
title={MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation},
author={Tong, Zhengyan and Li, Chao and Chen, Zhaokang and Wu, Bin and Zhou, Wenjiang},
journal={arxiv},
year={2024}
}
code: The code of MusePose is released under the MIT License. There is no limitation for either academic or commercial usage.
model: The trained models are available for non-commercial research purposes only.
other open-source models: Other open-source models used in this project must comply with their own licenses, such as ft-mse-vae, dwpose, etc.
AIGC: This project strives to impact the domain of AI-driven video generation positively. Users are granted the freedom to create videos using this tool, but they are expected to comply with local laws and utilize it responsibly. The developers do not assume any responsibility for potential misuse by users.