SadTalker

MIT License
    [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Winfredy/SadTalker/blob/main/quick_demo.ipynb)   [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/vinthony/SadTalker)   [![sd webui-colab](https://img.shields.io/badge/Automatic1111-Colab-green)](https://colab.research.google.com/github/camenduru/stable-diffusion-webui-colab/blob/main/video/stable/stable_diffusion_1_5_video_webui_colab.ipynb)   [![Replicate](https://replicate.com/cjwbw/sadtalker/badge)](https://replicate.com/cjwbw/sadtalker)
Wenxuan Zhang *,1,2   Xiaodong Cun *,2   Xuan Wang 3   Yong Zhang 2   Xi Shen 2   Yu Guo 1   Ying Shan 2   Fei Wang 1

1 Xi'an Jiaotong University   2 Tencent AI Lab   3 Ant Group  

CVPR 2023

![sadtalker](https://user-images.githubusercontent.com/4397546/222490039-b1f6156b-bf00-405b-9fda-0c9a9156f991.gif) TL;DR:       single portrait image 🙎‍♂️      +       audio 🎤       =       talking head video 🎞.

🔥 Highlight

https://user-images.githubusercontent.com/4397546/231495639-5d4bb925-ea64-4a36-a519-6389917dac29.mp4

Comparison: still + enhancer in v0.0.1 vs. still + enhancer in v0.0.2 (input image: @bagbag1815).

📋 Changelog (the previous changelog can be found here)

🚧 TODO: See the Discussion https://github.com/OpenTalker/SadTalker/issues/280

If you run into any problems, please check our FAQ before opening an issue.

⚙️ 1. Installation.

Community tutorials: Chinese Windows tutorial | Japanese tutorial

Linux:

  1. Install Anaconda, Python, and Git.

  2. Create the environment and install the requirements:

    git clone https://github.com/Winfredy/SadTalker.git
    
    cd SadTalker 
    
    conda create -n sadtalker python=3.8
    
    conda activate sadtalker
    
    pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
    
    conda install ffmpeg
    
    pip install -r requirements.txt
    
    ### tts is optional for gradio demo. 
    ### pip install TTS
    

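Optional sanity check (not part of the official setup): from inside the activated sadtalker environment, you can confirm that the CUDA-enabled PyTorch build from the command above installed correctly before moving on.

```python
# Minimal check of the PyTorch install created above (assumes the cu113 wheel was used).
import torch

print(torch.__version__)           # expected: 1.12.1+cu113
print(torch.cuda.is_available())   # should be True on a machine with a CUDA-capable GPU and driver
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```
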
Windows (Chinese Windows tutorial):

  1. Install Python 3.10.6 and check "Add Python to PATH" during installation.

  2. Install Git manually (or run scoop install git via scoop).

  3. Install ffmpeg, following this instruction (or run scoop install ffmpeg via scoop).

  4. Download the SadTalker repository, for example by running git clone https://github.com/Winfredy/SadTalker.git.

  5. Download the checkpoints and gfpgan models as described below.

  6. Run start.bat from Windows Explorer as a normal, non-administrator user; a gradio WebUI demo will start.

macOS:

More tips about installation on macOS and the Dockerfile can be found here.

📥 2. Download Trained Models.

You can run the following script to put all the models in the right place.

bash scripts/download_models.sh

Alternatives:

We also provide an offline patch (gfpgan/), so no model will be downloaded at generation time.

Google Drive: download our pre-trained models from this link (main checkpoints) and gfpgan (offline patch).

GitHub Release Page: download all the files from the latest GitHub release page, and then put them in ./checkpoints.

Baidu Netdisk (百度云盘): we provide the models in checkpoints (extraction code: sadt) and gfpgan (extraction code: sadt).

Model Details

New version:

| Model | Description |
| :--- | :---------- |
| checkpoints/mapping_00229-model.pth.tar | Pre-trained MappingNet in SadTalker. |
| checkpoints/mapping_00109-model.pth.tar | Pre-trained MappingNet in SadTalker. |
| checkpoints/SadTalker_V0.0.2_256.safetensors | Packaged SadTalker checkpoints of the old version (256 face render). |
| checkpoints/SadTalker_V0.0.2_512.safetensors | Packaged SadTalker checkpoints of the old version (512 face render). |
| gfpgan/weights | Face detection and enhancement models used in `facexlib` and `gfpgan`. |

Old version:

| Model | Description |
| :--- | :---------- |
| checkpoints/auido2exp_00300-model.pth | Pre-trained ExpNet in SadTalker. |
| checkpoints/auido2pose_00140-model.pth | Pre-trained PoseVAE in SadTalker. |
| checkpoints/mapping_00229-model.pth.tar | Pre-trained MappingNet in SadTalker. |
| checkpoints/mapping_00109-model.pth.tar | Pre-trained MappingNet in SadTalker. |
| checkpoints/facevid2vid_00189-model.pth.tar | Pre-trained face-vid2vid model from [the reappearance of face-vid2vid](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis). |
| checkpoints/epoch_20.pth | Pre-trained 3DMM extractor in [Deep3DFaceReconstruction](https://github.com/microsoft/Deep3DFaceReconstruction). |
| checkpoints/wav2lip.pth | Highly accurate lip-sync model from [Wav2Lip](https://github.com/Rudrabha/Wav2Lip). |
| checkpoints/shape_predictor_68_face_landmarks.dat | Face landmark model used in [dlib](http://dlib.net/). |
| checkpoints/BFM | 3DMM library files. |
| checkpoints/hub | Face detection models used in [face alignment](https://github.com/1adrianb/face-alignment). |
| gfpgan/weights | Face detection and enhancement models used in `facexlib` and `gfpgan`. |

The final folder layout should contain the files listed above.
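
A quick way to confirm the main models are in place before generating (a small sketch added here for convenience; the file names come from the table above):

```python
# Check that the packaged SadTalker checkpoints and GFPGAN weights are where
# inference.py expects them, relative to the repository root.
from pathlib import Path

expected = [
    "checkpoints/SadTalker_V0.0.2_256.safetensors",
    "checkpoints/SadTalker_V0.0.2_512.safetensors",
    "checkpoints/mapping_00109-model.pth.tar",
    "checkpoints/mapping_00229-model.pth.tar",
    "gfpgan/weights",
]

missing = [p for p in expected if not Path(p).exists()]
print("All model files found." if not missing else f"Missing: {missing}")
```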

🔮 3. Quick Start (Best Practice).

WebUI Demos:

Online: Hugging Face | SDWebUI-Colab | Colab

Local Automatic1111 stable-diffusion webui extension: please refer to the Automatic1111 stable-diffusion webui docs.

Local gradio demo (highly recommended!): a demo similar to our Hugging Face demo can be run by:

## You need to manually install TTS (https://github.com/coqui-ai/TTS) via `pip install TTS` in advance.
python app.py

Manual usage:

Animating a portrait image with the default config:
python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --enhancer gfpgan 

The results will be saved in results/$SOME_TIMESTAMP/*.mp4.
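
If you prefer to drive the CLI from your own Python code, a thin subprocess wrapper is one option (a sketch only; speech.wav and face.png are placeholder inputs, not files shipped with the repository):

```python
# Sketch: call inference.py from Python via subprocess, mirroring the command above.
# "speech.wav" and "face.png" are placeholder inputs; use your own files.
import subprocess

subprocess.run(
    [
        "python", "inference.py",
        "--driven_audio", "speech.wav",
        "--source_image", "face.png",
        "--enhancer", "gfpgan",
    ],
    check=True,
)
# Results are written to results/<timestamp>/*.mp4, as noted above.
```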

Full body/image Generation:

Use --still to generate a natural full-body video. You can add --enhancer to improve the quality of the generated video. A batch sketch for these flags follows the command below.

python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --result_dir <a folder to store results> \
                    --still \
                    --preprocess full \
                    --enhancer gfpgan 
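
To apply the same full-body settings to several portraits, one option is to loop over the CLI (again only a sketch; the portraits/ directory and speech.wav are placeholder paths):

```python
# Sketch: batch full-body generation using the flags shown above.
# "portraits/" and "speech.wav" are placeholder paths.
import subprocess
from pathlib import Path

for image in sorted(Path("portraits").glob("*.png")):
    subprocess.run([
        "python", "inference.py",
        "--driven_audio", "speech.wav",
        "--source_image", str(image),
        "--result_dir", "results_full_body",
        "--still",
        "--preprocess", "full",
        "--enhancer", "gfpgan",
    ], check=True)
```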

More examples, configurations, and tips can be found in the >>> best practice documents <<<.

🛎 Citation

If you find our work useful in your research, please consider citing:

@article{zhang2022sadtalker,
  title={SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation},
  author={Zhang, Wenxuan and Cun, Xiaodong and Wang, Xuan and Zhang, Yong and Shen, Xi and Guo, Yu and Shan, Ying and Wang, Fei},
  journal={arXiv preprint arXiv:2211.12194},
  year={2022}
}

💗 Acknowledgements

The Facerender code borrows heavily from zhanglonghao's reproduction of face-vid2vid and from PIRender. We thank the authors for sharing their wonderful code. During training, we also use models from Deep3DFaceReconstruction and Wav2Lip. We thank them for their wonderful work.

See also the wonderful third-party libraries we use:

🥂 Extensions:

🥂 Related Works

📢 Disclaimer

This is not an official product of Tencent. This repository can only be used for personal/research/non-commercial purposes.

LOGO: color and font suggestions by ChatGPT; logo font: Montserrat Alternates.

The copyrights of the demo images and audio belong to community users or come from generations by Stable Diffusion. Feel free to contact us if you have any concerns.