This repository contains a PyTorch implementation of "Breathing Life into Faces: Speech-driven 3D Facial Animation with Natural Head Pose and Detailed Shape".
This codebase provides the environment setup, training, and testing pipeline.
Environment: python 3.7.11

conda create -n vividtalker python=3.7
conda activate vividtalker
cd Vividtalker/src/
pip install -r requirements.txt

Other necessary packages are listed in requirements.txt and installed by the last command.
3D-VTFSET: https://drive.google.com/file/d/120EXgiqPrYP2ZjNq59ETwhbQdy_7wPmQ/view?usp=drive_link
Training a model from scratch follows a two-step process.

Step 1: Train VQ-VAEs for mouth movement and head pose:
cd src/vqgan/
sh train_pose.sh
sh train_exp.sh
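The core of stage 1 is the VQ-VAE's discrete bottleneck: each encoder output vector is snapped to its nearest codebook entry, turning continuous motion features into a token sequence. A minimal sketch of that quantization step, using numpy with hypothetical codebook and feature sizes (this is illustrative, not the repository's code):

```python
import numpy as np

# Illustrative VQ-VAE quantization step (hypothetical sizes, not the repo's code):
# encoder outputs are mapped to their nearest codebook entry, producing discrete
# tokens that a stage-2 decoder can later predict from speech.

rng = np.random.default_rng(0)

codebook = rng.normal(size=(256, 64))    # 256 learnable codes, 64-dim each (assumed sizes)
encoder_out = rng.normal(size=(30, 64))  # 30 frames of encoded motion features

# Pairwise squared distances between every frame and every code: shape (30, 256)
d2 = ((encoder_out[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)

tokens = d2.argmin(axis=1)               # one discrete index per frame
quantized = codebook[tokens]             # quantized features passed to the VQ decoder

print(tokens.shape, quantized.shape)     # (30,) (30, 64)
```

In training, the codebook itself is learned (e.g. via commitment and codebook losses, with a straight-through gradient past the non-differentiable argmin); the lookup above is only the forward pass.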
Step 2: Train decoders for mouth movement and head pose synthesis:
cd src
sh train_decoder.sh
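In stage 2, a decoder maps speech features to tokens over the frozen stage-1 codebook; looking the predicted tokens up in the codebook recovers continuous motion. A hedged sketch of that decode path, with a random matrix standing in for the trained network and assumed feature sizes (illustrative only):

```python
import numpy as np

# Hypothetical stage-2 sketch: per-frame audio features -> logits over the
# stage-1 codebook -> greedy token choice -> codebook lookup. All shapes and
# the linear "decoder" W are assumptions for illustration, not the repo's model.

rng = np.random.default_rng(1)

codebook = rng.normal(size=(256, 64))     # frozen VQ-VAE codebook from stage 1
audio_feats = rng.normal(size=(30, 128))  # 30 frames of 128-dim audio features (assumed)
W = rng.normal(size=(128, 256)) * 0.01    # stand-in for a trained decoder network

logits = audio_feats @ W                  # (30, 256) scores over codebook entries
tokens = logits.argmax(axis=1)            # greedy token per frame
motion = codebook[tokens]                 # (30, 64) decoded motion features
```

A real decoder would be an autoregressive or transformer network trained with a token-level loss, but the codebook-lookup output stage is the same.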
Testing a trained model:

sh test.sh
Citation:

@article{zhao2023breathing,
title={Breathing Life into Faces: Speech-driven 3D Facial Animation with Natural Head Pose and Detailed Shape},
author={Zhao, Wei and Wang, Yijun and He, Tianyu and Yin, Lianying and Lin, Jianxin and Jin, Xin},
journal={arXiv preprint arXiv:2310.20240},
year={2023}
}