This repository contains code for achieving high-fidelity lip-syncing in videos, using the Wav2Lip algorithm for lip-syncing and the Real-ESRGAN algorithm for super-resolution. The combination of these two algorithms allows for the creation of lip-synced videos that are both highly accurate and visually stunning.
The algorithm for achieving high-fidelity lip-syncing with Wav2Lip and Real-ESRGAN can be summarized as follows:

1. The input video and audio are given to the Wav2Lip algorithm, which generates a lip-synced video.
2. Frames are extracted from the video generated by Wav2Lip.
3. The extracted frames are super-resolved using the Real-ESRGAN algorithm.
4. The enhanced frames are assembled into the final high-quality video.
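For illustration only, the sketch below shows what the frame-extraction step might look like using ffmpeg. The use of ffmpeg, the example file name, and the frame naming pattern are assumptions; `run_final.sh` performs the actual pipeline for you.

```
# Illustrative frame extraction from the Wav2Lip output video
# (file name and frame naming pattern are assumptions)
mkdir -p frames_wav2lip
ffmpeg -i output_videos_wav2lip/kennedy.mp4 frames_wav2lip/%05d.png
```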
To test the Wav2Lip-HD model, follow these steps:
Clone this repository and install the requirements using the following commands (make sure Python and CUDA are already installed):

```
git clone https://github.com/saifhassan/Wav2Lip-HD.git
cd Wav2Lip-HD
pip install -r requirements.txt
```
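Optionally, you can confirm that PyTorch can see your GPU before going further; this check is not part of the repository's instructions, just a common sanity check, and it assumes a CUDA-enabled PyTorch build is installed by the requirements.

```
# Optional sanity check: should print True if CUDA is visible to PyTorch
python -c "import torch; print(torch.cuda.is_available())"
```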
#### Downloading weights
Model | Directory | Download Link |
---|---|---|
Wav2Lip | checkpoints/ | Link |
ESRGAN | experiments/001_ESRGAN_x4_f64b23_custom16k_500k_B16G1_wandb/models/ | Link |
Face_Detection | face_detection/detection/sfd/ | Link |
Real-ESRGAN | Real-ESRGAN/gfpgan/weights/ | Link |
Real-ESRGAN | Real-ESRGAN/weights/ | Link |
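Some of the directories listed above may not exist after cloning. A minimal sketch for creating them before placing the downloaded files is shown below; the weight file names are not listed in the table, so move whatever files the download links provide into the matching directory.

```
# Create the target directories from the table above, then move the downloaded
# weight files into them (file names depend on what the links provide)
mkdir -p checkpoints
mkdir -p experiments/001_ESRGAN_x4_f64b23_custom16k_500k_B16G1_wandb/models
mkdir -p face_detection/detection/sfd
mkdir -p Real-ESRGAN/gfpgan/weights
mkdir -p Real-ESRGAN/weights
```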
Put the input video in the `input_videos` directory and the input audio in the `input_audios` directory.
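For example, using the sample names that appear in `run_final.sh` below (the `.mp4` extension and source paths are assumptions; use whatever your actual files are called):

```
# Copy the input video and audio into the expected directories
cp /path/to/kennedy.mp4 input_videos/
cp /path/to/ai.wav input_audios/
```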
Open the `run_final.sh` file and modify the following parameters:

- `filename=kennedy` (the video file name only, without extension)
- `input_audio=input_audios/ai.wav` (the audio file name with extension)
Execute `run_final.sh` using the following command:

```
bash run_final.sh
```
#### Outputs
- `output_videos_wav2lip` directory contains the video output generated by the Wav2Lip algorithm.
- `frames_wav2lip` directory contains frames extracted from the video generated by the Wav2Lip algorithm.
- `frames_hd` directory contains frames after performing super-resolution using the Real-ESRGAN algorithm.
- `output_videos_hd` directory contains the final high-quality video output generated by Wav2Lip-HD.
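If you want to rebuild a video from the super-resolved frames yourself, a minimal sketch using ffmpeg is shown below. The frame rate, frame naming pattern, and output file name are assumptions; `run_final.sh` already produces the final video in `output_videos_hd`.

```
# Illustrative reassembly of super-resolved frames into a video with the input audio
# (frame rate, naming pattern, and output name are assumptions)
ffmpeg -framerate 25 -i frames_hd/%05d.png -i input_audios/ai.wav \
    -c:v libx264 -pix_fmt yuv420p -c:a aac -shortest output_videos_hd/kennedy_hd.mp4
```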
The results produced by Wav2Lip-HD are in two forms, frames and videos. Both are shared below:
Frame by Wav2Lip | Optimized Frame |
---|---|
![]() | ![]() |
![]() | ![]() |
![]() | ![]() |
Video by Wav2Lip | Optimized Video |
---|---|
We would like to thank the following repositories and libraries for their contributions to our work: