This is the official implementation of the paper 'Neural Face Rigging for Animating and Retargeting Facial Meshes in the Wild'
Why try NFR?
NFR can transfer facial animations to any customized face mesh, even one with a different topology, without any manual rigging or data capture. For a facial mesh obtained from any source, you can quickly retarget existing animations onto it and see the results in real time.
This release is tested under Ubuntu 20.04 with an RTX 4090 GPU. Other CUDA-capable GPU models should work as well.
The testing module uses vedo for interactive visualization, so a display is required.
Windows is currently not supported unless you manually install the pytorch3d package following their official guide.
Create an environment called NFR
conda create -n NFR python=3.9
conda activate NFR
We recommend mamba to accelerate the installation process
conda install mamba -c conda-forge
Install necessary packages via mamba
mamba install pytorch=1.12.1 cudatoolkit=11.3 pytorch-sparse=0.6.15 pytorch3d=0.7.1 cupy=11.3 numpy=1.23.5 -c pytorch -c conda-forge -c pyg -c pytorch3d
Install necessary packages via pip
pip install potpourri3d trimesh open3d transforms3d libigl robust_laplacian vedo
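After installing, a quick way to confirm the environment is complete is to check that each dependency can be located. The helper below is a sketch, not part of the repo; the module names mirror the install commands above (note, as an assumption, that the pip package libigl imports as igl and pytorch-sparse imports as torch_sparse).

```python
# Quick environment check: locates each dependency without importing it,
# so a partially broken install reports cleanly instead of crashing.
from importlib.util import find_spec

# Import names for the packages from the install commands above
# (libigl -> igl and pytorch-sparse -> torch_sparse are assumptions).
REQUIRED = [
    "torch", "torch_sparse", "pytorch3d", "cupy",
    "potpourri3d", "trimesh", "open3d", "transforms3d",
    "igl", "robust_laplacian", "vedo",
]

def missing_packages(names):
    """Return the subset of module names that cannot be found."""
    return [n for n in names if find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_packages(REQUIRED)
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("All NFR dependencies found.")
```

If anything is listed as missing, re-run the corresponding mamba or pip command before proceeding.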
Download the preprocessed data and the pretrained model here: Google Drive. Place them in the root directory of this repo.
Run!
python test_user.py -c config/test.yml
If the script runs successfully, an interactive window appears. You can use the sliders and buttons to change the expression of the source mesh, and manually adjust the expression via FACS-like codes.
Currently we provide two pre-processed facial animation sequences, one from ICT and one from Multiface. You can switch between them by changing the dataset and data_head variables in the config/test.yml file.
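For orientation, the relevant part of config/test.yml looks roughly like the excerpt below. Only the dataset and data_head keys are named in this README; the example values are illustrative placeholders, not the repository's actual contents.

```yaml
# config/test.yml (excerpt) -- switch the source animation here.
# Example values are illustrative; check the shipped file for the real ones.
dataset: ICT          # or: Multiface
data_head: <id-of-the-preprocessed-sequence>
```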
You can test with your own mesh as the target; the meshes in the test-mesh folder serve as examples of the expected format. The training module will be released later.
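The exact requirements for custom meshes are not restated here, but a common pitfall with geometry pipelines is a non-triangulated mesh. As a rough, hypothetical sanity check (not part of NFR), you can verify that an OBJ file contains only triangle faces before using it as a target:

```python
def obj_is_triangulated(path):
    """Return True if every face ('f' line) in an OBJ file has exactly
    three vertices. Minimal check: ignores materials, groups, and lines."""
    with open(path) as f:
        for line in f:
            if line.startswith("f "):
                # Each face entry looks like 'v', 'v/vt', or 'v/vt/vn'.
                if len(line.split()[1:]) != 3:
                    return False
    return True
```

Call it on your mesh file (e.g. one placed next to the examples in the test-mesh folder) before pointing the config at it.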
@inproceedings{qin2023NFR,
author = {Qin, Dafei and Saito, Jun and Aigerman, Noam and Groueix, Thibault and Komura, Taku},
title = {Neural Face Rigging for Animating and Retargeting Facial Meshes in the Wild},
year = {2023},
booktitle = {SIGGRAPH 2023 Conference Papers},
}
This project uses code from ICT, Multiface, and Diffusion-Net; data from ICT and Multiface; and testing mesh templates from ICT, Multiface, COMA, FLAME, and MeshTalk. Thank you!