
Bring portraits to life!
https://liveportrait.github.io

LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control

Jianzhu Guo 1†, Dingyun Zhang 1,2, Xiaoqiang Liu 1, Zhizhou Zhong 1,3, Yuan Zhang 1, Pengfei Wan 1, Di Zhang 1

1 Kuaishou Technology   2 University of Science and Technology of China   3 Fudan University



πŸ”₯ Thanks to the original authors for the wonderful work! Visit the official code and results on the project homepage πŸ”₯

πŸ”₯ Personal Updates

Introduction

This repo is a fork of LivePortrait, the official PyTorch implementation of the paper LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control, available at https://github.com/KwaiVGI/LivePortrait.

To support both human and animal faces, I unified the keypoint detection module using X-Pose (https://github.com/IDEA-Research/X-Pose).
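
Conceptually, the unified module just dispatches to a different detector depending on the configured source type. The sketch below only illustrates that idea; the function names detect_human_face_keypoints and detect_animal_keypoints_xpose are hypothetical placeholders, not the actual code of this fork.

import numpy as np

def detect_human_face_keypoints(image: np.ndarray) -> np.ndarray:
    """Placeholder for the InsightFace-based human landmark detector."""
    raise NotImplementedError

def detect_animal_keypoints_xpose(image: np.ndarray) -> np.ndarray:
    """Placeholder for the X-Pose (UniPose) keypoint detector."""
    raise NotImplementedError

def detect_keypoints(image: np.ndarray, source_image_type: str) -> np.ndarray:
    """Dispatch to the appropriate detector based on the configured source type."""
    if source_image_type == 'human':
        return detect_human_face_keypoints(image)
    if source_image_type == 'animal':
        return detect_animal_keypoints_xpose(image)
    raise ValueError(f"unknown source_image_type: {source_image_type!r}")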

πŸ”₯ Getting Started

1. Clone the code and prepare the environment

git clone https://github.com/ShiJiaying/LivePortrait.git
cd LivePortrait

# create env using conda
conda create -n LivePortrait python==3.10
conda activate LivePortrait
# install dependencies with pip
pip install -r requirements.txt

# Compiling CUDA operators for X-Pose
cd XPose/models/UniPose/ops
python setup.py build install
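
After building, a quick sanity check is to confirm that PyTorch sees the GPU and that the compiled extension imports. The module name MultiScaleDeformableAttention below is an assumption based on Deformable-DETR-style ops; adjust it if your build produces a different module name.

import torch

print("CUDA available:", torch.cuda.is_available())
try:
    # assumed extension name; may differ depending on the ops' setup.py
    import MultiScaleDeformableAttention  # noqa: F401
    print("X-Pose CUDA ops imported successfully")
except ImportError as err:
    print("X-Pose CUDA ops not found:", err)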

2. Download pretrained weights

Download the pretrained weights from HuggingFace:

# you may need to run `git lfs install` first
git clone https://huggingface.co/KwaiVGI/liveportrait pretrained_weights

Or, download all pretrained weights from Google Drive or Baidu Yun. We have packed all weights into one directory 😊. Unzip and place them in ./pretrained_weights, ensuring the directory structure is as follows:

pretrained_weights
β”œβ”€β”€ insightface
β”‚   └── models
β”‚       └── buffalo_l
β”‚           β”œβ”€β”€ 2d106det.onnx
β”‚           └── det_10g.onnx
└── liveportrait
    β”œβ”€β”€ base_models
    β”‚   β”œβ”€β”€ appearance_feature_extractor.pth
    β”‚   β”œβ”€β”€ motion_extractor.pth
    β”‚   β”œβ”€β”€ spade_generator.pth
    β”‚   └── warping_module.pth
    β”œβ”€β”€ landmark.onnx
    └── retargeting_models
        └── stitching_retargeting_module.pth

Then download the X-Pose weights from Google Drive and place them in ./XPose:

XPose
└── unipose_swint.pth
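
A quick way to confirm that everything landed in the right place is to check for the expected files; the snippet below is just a convenience sketch that mirrors the trees above.

from pathlib import Path

expected_files = [
    'pretrained_weights/insightface/models/buffalo_l/2d106det.onnx',
    'pretrained_weights/insightface/models/buffalo_l/det_10g.onnx',
    'pretrained_weights/liveportrait/base_models/appearance_feature_extractor.pth',
    'pretrained_weights/liveportrait/base_models/motion_extractor.pth',
    'pretrained_weights/liveportrait/base_models/spade_generator.pth',
    'pretrained_weights/liveportrait/base_models/warping_module.pth',
    'pretrained_weights/liveportrait/landmark.onnx',
    'pretrained_weights/liveportrait/retargeting_models/stitching_retargeting_module.pth',
    'XPose/unipose_swint.pth',
]

missing = [p for p in expected_files if not Path(p).is_file()]
print('All weights found!' if not missing else f'Missing files: {missing}')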

3. Inference πŸš€

Fast hands-on

python inference.py

πŸ“• Change the source image type in src/config/argument_config.py:

source_image_type: str = 'animal'  # animal, human

If the script runs successfully, you will get an output mp4 file named animations/s6--d0_concat.mp4, which contains the driving video, the input image, and the generated result.


Or, you can change the input by specifying the -s and -d arguments:

python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4

# disable pasting back to run faster
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4 --no_flag_pasteback

# see more options
python inference.py -h
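
If you prefer to drive the pipeline from Python rather than the command line, the sketch below mirrors the structure of inference.py in the upstream KwaiVGI repo. The imports and field names (ArgumentConfig, InferenceConfig, CropConfig, LivePortraitPipeline, source_image, driving_info) are taken from the upstream code and may differ slightly in this fork.

from src.config.argument_config import ArgumentConfig
from src.config.crop_config import CropConfig
from src.config.inference_config import InferenceConfig
from src.live_portrait_pipeline import LivePortraitPipeline

def partial_fields(target_class, kwargs):
    # keep only the fields that the target config class actually declares
    return target_class(**{k: v for k, v in kwargs.items() if hasattr(target_class, k)})

args = ArgumentConfig(
    source_image='assets/examples/source/s9.jpg',
    driving_info='assets/examples/driving/d0.mp4',
)
pipeline = LivePortraitPipeline(
    inference_cfg=partial_fields(InferenceConfig, args.__dict__),
    crop_cfg=partial_fields(CropConfig, args.__dict__),
)
pipeline.execute(args)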

Driving video auto-cropping

πŸ“• To use your own driving video, we recommend cropping it to a 1:1 aspect ratio focused on the head, or enabling auto-cropping with --flag_crop_driving_video.

Below is an auto-cropping example using --flag_crop_driving_video:

python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d13.mp4 --flag_crop_driving_video

If the auto-cropping results are not satisfactory, you can adjust the scale and vertical offset with the --scale_crop_video and --vy_ratio_crop_video options, or crop the video manually.
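
For example (the values below are only a starting point to tune from, not recommended defaults):

python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d13.mp4 --flag_crop_driving_video --scale_crop_video 2.3 --vy_ratio_crop_video -0.1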

Template making

You can also use the auto-generated .pkl motion template to speed up inference and to protect the privacy of the driving video, for example:

python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d5.pkl
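
Assuming the template is a standard pickle file (the .pkl extension suggests so), you can peek inside it to see what gets cached; the snippet below only lists the stored keys and shapes rather than assuming a particular layout.

import pickle

with open('assets/examples/driving/d5.pkl', 'rb') as f:
    template = pickle.load(f)

print(type(template).__name__)
if isinstance(template, dict):
    for key, value in template.items():
        shape = getattr(value, 'shape', None)
        print(key, type(value).__name__, shape if shape is not None else '')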

Discover more interesting results on our Homepage 😊

4. Gradio interface πŸ€—

We also provide a Gradio interface for a better experience; just run:

python app.py

You can specify the --server_port, --share, and --server_name arguments to suit your needs!
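
For example (the port and host values below are just illustrative):

python app.py --server_port 8890 --server_name "0.0.0.0" --share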

Or, try it out effortlessly on HuggingFace πŸ€—

5. Inference speed evaluation πŸš€πŸš€πŸš€

We have also provided a script to evaluate the inference speed of each module:

python speed.py

Below are the results of inferring one frame on an RTX 4090 GPU using the native PyTorch framework with torch.compile:

| Model | Parameters (M) | Model Size (MB) | Inference (ms) |
|---|---|---|---|
| Appearance Feature Extractor | 0.84 | 3.3 | 0.82 |
| Motion Extractor | 28.12 | 108 | 0.84 |
| Spade Generator | 55.37 | 212 | 7.59 |
| Warping Module | 45.53 | 174 | 5.21 |
| Stitching and Retargeting Modules | 0.23 | 2.3 | 0.31 |

Note: The values for the Stitching and Retargeting Modules represent the combined parameter counts and total inference time of three sequential MLP networks.
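
The script speed.py performs this measurement for the actual modules; the snippet below is only a minimal sketch of how one frame can be timed with torch.compile and CUDA events, using a placeholder module and input shape rather than the real LivePortrait networks.

import torch

def benchmark(module: torch.nn.Module, example_input: torch.Tensor,
              warmup: int = 10, iters: int = 100) -> float:
    """Return the average forward-pass latency in milliseconds."""
    module = torch.compile(module.eval().cuda())
    example_input = example_input.cuda()
    with torch.no_grad():
        for _ in range(warmup):  # warm-up runs exclude compilation overhead
            module(example_input)
        torch.cuda.synchronize()
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(iters):
            module(example_input)
        end.record()
        torch.cuda.synchronize()
    return start.elapsed_time(end) / iters

if __name__ == '__main__':
    toy = torch.nn.Conv2d(3, 64, kernel_size=3, padding=1)  # placeholder module
    print(f'{benchmark(toy, torch.randn(1, 3, 256, 256)):.2f} ms per frame')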

Community Resources πŸ€—

Discover the invaluable resources contributed by our community to enhance your LivePortrait experience:

And many more amazing contributions from our community!

Acknowledgements

We would like to thank the contributors of the FOMM, Open Facevid2vid, SPADE, and InsightFace repositories for their open research and contributions.

Citation πŸ’–

If you find LivePortrait useful for your research, please 🌟 this repo and cite our work using the following BibTeX:

@article{guo2024liveportrait,
  title   = {LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control},
  author  = {Guo, Jianzhu and Zhang, Dingyun and Liu, Xiaoqiang and Zhong, Zhizhou and Zhang, Yuan and Wan, Pengfei and Zhang, Di},
  journal = {arXiv preprint arXiv:2407.03168},
  year    = {2024}
}