https://github.com/user-attachments/assets/5ca959f9-8fcc-4233-8a1d-9b201bd042c9
https://github.com/user-attachments/assets/24086eea-7075-45ec-8eef-2a9344185746
https://github.com/user-attachments/assets/de259719-d174-4c83-9287-2fa77c3b8fad
https://github.com/user-attachments/assets/ac0e92d7-34e1-4402-a202-d06a2e806abe
Single face image, driven through the Face-ID adapter:
https://github.com/user-attachments/assets/197a8d75-3c56-43f5-ac71-e7110d9e53d1
This repo optimizes LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control by converting its models to ONNX and TensorRT. We are actively updating and improving this repository. If you find any bugs or have suggestions, feel free to open an issue or submit a pull request (PR) 💖.
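Since inference runs through ONNX Runtime or TensorRT, the exported models can be probed directly. A minimal sketch of ours (not a repo script; the 256x256 input size is an assumption based on LivePortrait's standard face crop):

```python
# Probe one of the exported ONNX models with onnxruntime.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "pretrained_weights/live_portrait/motion_extractor.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)  # inspect the expected input tensor
dummy = np.random.rand(1, 3, 256, 256).astype(np.float32)  # assumed RGB crop size
outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```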
We are also adding new features:
[✅] 20/07/2024: TensorRT engine code and demo
[✅] 22/07/2024: Support multiple faces
[✅] 22/07/2024: Face-ID adapter for controlling which face is animated (see the sketch after this list)
[✅] 24/07/2024: Multiple-face motion in the driving video for animating multiple faces in one image
[✅] 28/07/2024: Support Video2Video Live Portrait (single face only)
[✅] 30/07/2024: Support SDXL-Lightning + ControlNet-OpenPose (1 to 8 steps) for turning the source image into an art image
[ ] Integrate AnimateDiff-Lightning motion module
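The Face-ID adapter selects a face by identity embedding. As a rough illustration (our own sketch, not the repo's implementation), here is how a condition image could be matched against the faces detected in the source image using insightface's buffalo_l models, which ship in the weights tree below:

```python
# Hypothetical sketch of Face-ID matching with insightface (buffalo_l).
import cv2
import numpy as np
from insightface.app import FaceAnalysis

app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))

cond = app.get(cv2.imread("condition_face.jpg"))[0]  # single reference face
faces = app.get(cv2.imread("source_image.jpg"))      # all faces in the source

# normed_embedding is L2-normalized, so cosine similarity is a dot product.
sims = [float(np.dot(cond.normed_embedding, f.normed_embedding)) for f in faces]
target = faces[int(np.argmax(sims))]                 # the face to animate
print("best match bbox:", target.bbox, "similarity:", max(sims))
```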
```bash
git clone https://github.com/aihacker111/Efficient-Live-Portrait
# create the environment with conda
conda create -n ELivePortrait python=3.10.14
conda activate ELivePortrait
# install dependencies for CPU
pip install -r requirements-cpu.txt
# install dependencies for GPU
pip install -r requirements-gpu.txt
# install dependencies for MPS (Apple Silicon)
pip install -r requirements-mps.txt
```
Note: make sure your system has FFmpeg installed!
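To sanity-check which ONNX Runtime backend your install can actually use (an optional check of ours, not a repo script):

```python
# Confirm the execution providers onnxruntime can see after installing.
import onnxruntime as ort

print(ort.get_available_providers())
# Expect e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider'] for the GPU
# install, or ['CoreMLExecutionProvider', 'CPUExecutionProvider'] on Apple Silicon.
```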
The pretrained weights are downloaded automatically; you don't need to download the models and place them in the source tree yourself. The expected layout:
```
pretrained_weights
├── landmarks
│   └── models
│       ├── buffalo_l
│       │   ├── 2d106det.onnx
│       │   └── det_10g.onnx
│       └── landmark.onnx
└── live_portrait
    ├── appearance_feature_extractor.onnx
    ├── motion_extractor.onnx
    ├── generator_warping.onnx
    ├── stitching_retargeting.onnx
    ├── stitching_retargeting_eye.onnx
    ├── stitching_retargeting_lip.onnx
    ├── appearance_feature_extractor_fp32.engine
    ├── motion_extractor_fp32.engine
    ├── generator_fp32.engine
    ├── stitching_fp32.engine
    ├── stitching_eye_fp32.engine
    ├── stitching_lip_fp32.engine
    ├── appearance_feature_extractor_fp16.engine
    ├── motion_extractor_fp16.engine
    ├── generator_fp16.engine
    ├── stitching_fp16.engine
    ├── stitching_eye_fp16.engine
    └── stitching_lip_fp16.engine
```
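Note that `.engine` files are specific to the GPU and TensorRT version they were built for. A minimal sketch of ours (assuming the `tensorrt` Python package) to check that a prebuilt engine deserializes on your machine:

```python
# Sanity-check that a prebuilt engine matches your GPU / TensorRT version.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open("pretrained_weights/live_portrait/motion_extractor_fp16.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
# None means the engine is incompatible and must be rebuilt for this machine.
print("engine ok:", engine is not None)
```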
For Face-ID Adapter mode:
```bash
python run_live_portrait.py --driving_video 'path/to/your/driving/video/or/webcam/id' --source_image 'path/to/the/image/you/want/to/animate' --condition_image 'path/to/the/single/face/image/used/to/compute/the/face-id' --task ['image', 'video', 'webcam'] --run_time --half_precision --use_face_id
```
To run Multiple Face Motion mode:
```bash
python run_live_portrait.py --driving_video 'path/to/your/driving/video/or/webcam/id' --source_image 'path/to/the/image/you/want/to/animate' --task ['image', 'video', 'webcam'] --run_time --half_precision
```
For Vid2Vid Live Portrait:
```bash
python run_live_portrait.py --driving_video 'path/to/your/driving/video/or/webcam/id' --source_video 'path/to/the/video/you/want/to/animate' --task ['image', 'video', 'webcam'] --run_time --half_precision
```
For SDXL-Lightning + OpenPose ControlNet + Live Portrait:
```bash
python run_live_portrait.py --driving_video 'path/to/your/driving/video' --source_image 'path/to/the/image/you/want/to/animate' --run_time --task image --use_diffusion --lcm_steps [1, 2, 4, 8] --prompt '1girl, offshoulder, light smile, shiny skin, best quality, masterpiece, photorealistic'
```
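For reference, the SDXL-Lightning + OpenPose step on its own looks roughly like the following diffusers sketch. This is our illustration, not the repo's internals: the ControlNet checkpoint name and the pre-computed pose map file are assumptions.

```python
# Hedged sketch: 4-step SDXL-Lightning with an OpenPose ControlNet via diffusers.
import torch
from diffusers import ControlNetModel, EulerDiscreteScheduler, StableDiffusionXLControlNetPipeline
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16  # assumed checkpoint
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Swap in the distilled 4-step Lightning UNet weights.
ckpt = hf_hub_download("ByteDance/SDXL-Lightning", "sdxl_lightning_4step_unet.safetensors")
pipe.unet.load_state_dict(load_file(ckpt, device="cuda"))
# Lightning expects "trailing" timestep spacing and no classifier-free guidance.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

pose_map = Image.open("pose.png")  # assumed: pre-computed OpenPose map of the source image
art = pipe(
    prompt="1girl, offshoulder, light smile, shiny skin, best quality, masterpiece, photorealistic",
    image=pose_map,
    num_inference_steps=4,
    guidance_scale=0,
).images[0]
art.save("art_source.png")
```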
Follow the notebook in the colab folder.
We'll release it soon.