# SplattingAvatar

Official repository for the CVPR 2024 paper *SplattingAvatar: Realistic Real-Time Human Avatars with Mesh-Embedded Gaussian Splatting*.
The embedding points of the 3D Gaussians on the triangle mesh are updated by the *walking on triangles* scheme. See the `phongsurface` module, implemented in C++ with pybind11.
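To make the scheme concrete, here is a minimal NumPy sketch of one walking step: each Gaussian is embedded as a `(face_id, barycentric)` pair, and when a gradient update drives a barycentric coordinate negative, the embedding hops across the crossed edge to the neighboring face. All names here are illustrative, not the `phongsurface` API, and a full implementation would also carry the leftover displacement into the new face.

```python
import numpy as np

def build_edge_to_faces(faces):
    """Map each undirected edge (i, j) to the faces that contain it."""
    e2f = {}
    for f, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            e2f.setdefault(tuple(sorted(e)), []).append(f)
    return e2f

def barycentric(p, tri):
    """Barycentric coordinates of 3D point p w.r.t. triangle tri (3x3)."""
    v0, v1, v2 = tri[1] - tri[0], tri[2] - tri[0], p - tri[0]
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def walk_step(verts, faces, e2f, face_id, bary):
    """One walking step; returns the (possibly new) face and barycentrics."""
    k = int(np.argmin(bary))
    if bary[k] >= 0.0:
        return face_id, bary / bary.sum()    # still inside: just renormalize
    tri = faces[face_id]
    edge = tuple(sorted((tri[(k + 1) % 3], tri[(k + 2) % 3])))
    neighbors = [f for f in e2f[edge] if f != face_id]
    if not neighbors:                        # boundary edge: clamp and stay
        clamped = np.clip(bary, 0.0, None)
        return face_id, clamped / clamped.sum()
    # project onto the crossed edge, then re-express in the neighbor's frame
    b = np.clip(bary, 0.0, None)
    p = (b / b.sum()) @ verts[tri]
    new_face = neighbors[0]
    return new_face, barycentric(p, verts[faces[new_face]])
```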
## Installation

- Create a conda environment and install the dependencies.

```bash
conda create -n splatting python=3.9
conda activate splatting
pip install torch==1.13.1 torchvision torchaudio functorch --extra-index-url https://download.pytorch.org/whl/cu117

# install pytorch3d from source
git clone https://github.com/facebookresearch/pytorch3d.git
cd pytorch3d
pip install -e .

pip install tqdm omegaconf opencv-python libigl
pip install trimesh plyfile imageio chumpy lpips
pip install packaging pybind11
pip install numpy==1.23.1
```
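Optionally, a quick sanity check (generic PyTorch, not a repo script) that the install sees the CUDA 11.7 runtime:

```python
# verify the PyTorch build and GPU visibility
import torch
print(torch.__version__)          # expect 1.13.1+cu117
print(torch.cuda.is_available())  # should print True on a CUDA machine
```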
- Clone this repo *recursively* and install Gaussian Splatting's submodules.

```bash
git clone --recursive https://github.com/initialneil/SplattingAvatar.git
cd SplattingAvatar
cd submodules/diff-gaussian-rasterization
pip install .
cd ../simple-knn
pip install .
cd ../..
```
- Install `simple_phongsurf` for *walking on triangles*.

```bash
cd model/simple_phongsurf
pip install -e .
```
- Download the [FLAME model](https://flame.is.tue.mpg.de/download.php), choose **FLAME 2020**, unzip it, and copy `generic_model.pkl` into `./model/imavatar/FLAME2020`.
- Download the [SMPL model](https://smpl.is.tue.mpg.de/download.php) (version 1.0.0 for Python 2.7, with 10 shape PCs) and move the files to the corresponding locations:

```bash
mv /path/to/smpl/models/basicModel_f_lbs_10_207_0_v1.0.0.pkl model/smplx_utils/smplx_models/smpl/SMPL_FEMALE.pkl
mv /path/to/smpl/models/basicmodel_m_lbs_10_207_0_v1.0.0.pkl model/smplx_utils/smplx_models/smpl/SMPL_MALE.pkl
```
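If needed, the renamed files can be sanity-checked with a small script; a sketch assuming the paths above (the SMPL pickles are Python-2 era, so they need `encoding="latin1"` and the `chumpy` package installed earlier):

```python
# load one SMPL model file and inspect its contents
import pickle

with open("model/smplx_utils/smplx_models/smpl/SMPL_FEMALE.pkl", "rb") as f:
    smpl = pickle.load(f, encoding="latin1")

print(sorted(smpl.keys()))       # expect entries such as 'v_template', 'shapedirs'
print(smpl["v_template"].shape)  # (6890, 3) template vertices for SMPL
```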
## Preparing the dataset
We provide the preprocessed data of the 10 subjects used in the paper.
- Our preprocessing followed [IMavatar](https://github.com/zhengyuf/IMavatar/tree/main/preprocess#preprocess) and replaced the *Segmentation* with [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting).
- ~~Pre-trained checkpoints are provided together with the data.~~
- Please find the data at https://github.com/Zielon/INSTA. We will update the checkpoint link soon.
<img src="https://github.com/initialneil/SplattingAvatar/raw/master/assets/SplattingAvatar-dataset.jpg" width="800"/>
## Training

```bash
python train_splatting_avatar.py --config configs/splatting_avatar.yaml --dat_dir <path/to/subject>

# for example
python train_splatting_avatar.py --config configs/splatting_avatar.yaml --dat_dir /path-to/bala
```

- Select the GPU with `CUDA_VISIBLE_DEVICES`:

```bash
CUDA_VISIBLE_DEVICES=0 python train_splatting_avatar.py ...
```

- To disable the remote-viewer network connection, pass `--ip none`:

```bash
CUDA_VISIBLE_DEVICES=0 python train_splatting_avatar.py ... --ip none
```

- Training can be monitored with Gaussian Splatting's SIBR remote viewer:

```bash
SIBR_remoteGaussian_app.exe --path <path/to/output_of_any_standard_3dgs>
```
## Evaluation

```bash
python eval_splatting_avatar.py --config configs/splatting_avatar.yaml --dat_dir <path/to/model_path>

# for example
python eval_splatting_avatar.py --config configs/splatting_avatar.yaml --dat_dir /path-to/bala/output-splatting/last_checkpoint
```
## Full-body Avatar
We conducted experiments on [PeopleSnapshot](https://graphics.tu-bs.de/people-snapshot).
- Please download the parameter files (the same as those used by InstantAvatar) from [Baidu Disk](https://pan.baidu.com/s/1g4lSPAYfwbOadnnEDoWjzg?pwd=5gy5) or [Google Drive](https://drive.google.com/drive/folders/1r-fHq5Q_szFYD_Wz394Dnc5G79nG2WHw?usp=sharing).
- Download the 4 sequences from PeopleSnapshot (male/female-3/4-casual) and unzip `images` and `masks` into the corresponding folders from above.
- Use `scripts/preprocess_PeopleSnapshot.py` to preprocess the data.
- Training:
```bash
python train_splatting_avatar.py --config "configs/splatting_avatar.yaml;configs/instant_avatar.yaml" --dat_dir <path/to/subject>

# for example
python train_splatting_avatar.py --config "configs/splatting_avatar.yaml;configs/instant_avatar.yaml" --dat_dir /path-to/female-3-casual
```
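The `;`-joined `--config` value presumably loads each YAML and merges them left to right, with later files and command-line overrides taking precedence. A minimal OmegaConf sketch of that pattern (an assumption about the loader, not the repo's actual code):

```python
from omegaconf import OmegaConf

def load_configs(config_arg, overrides=()):
    """Merge ';'-joined YAML files in order, then apply dotlist overrides."""
    cfgs = [OmegaConf.load(path) for path in config_arg.split(";")]
    return OmegaConf.merge(*cfgs, OmegaConf.from_dotlist(list(overrides)))

cfg = load_configs(
    "configs/splatting_avatar.yaml;configs/instant_avatar.yaml",
    overrides=["model.max_n_gauss=300000"],
)
print(OmegaConf.to_yaml(cfg))
```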
- The resulting `output-splatting/last_checkpoint` can be evaluated with `eval_splatting_avatar.py`:

```bash
python eval_splatting_avatar.py --config "configs/splatting_avatar.yaml;configs/instant_avatar.yaml" --dat_dir /path-to/female-3-casual --pc_dir /path-to/female-3-casual/output-splatting/last_checkpoint/point_cloud/iteration_30000
```
- Animation with a motion sequence such as `aist_demo.npz`:

```bash
python eval_animate.py --config "configs/splatting_avatar.yaml;configs/instant_avatar.yaml" --dat_dir /path-to/female-3-casual --pc_dir /path-to/female-3-casual/output-splatting/last_checkpoint/point_cloud/iteration_30000 --anim_fn /path-to/aist_demo.npz
```
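To see what arrays an animation file carries before running, a generic NumPy inspection (no assumption about the stored keys):

```python
# list the arrays stored in the .npz animation archive
import numpy as np

anim = np.load("/path-to/aist_demo.npz")
for name in anim.files:
    print(name, anim[name].shape)
```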
## GPU requirement
We conducted our experiments on a single NVIDIA RTX 3090 with 24 GB of memory.
Training with less GPU memory can be achieved by setting a maximum number of Gaussians in the config:

```yaml
model:
  max_n_gauss: 300000  # or fewer, as needed
```

or from the command line:

```bash
python train_splatting_avatar.py --config configs/splatting_avatar.yaml --dat_dir <path/to/subject> model.max_n_gauss=300000
```
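One plausible way such a cap keeps memory bounded is to prune the least-opaque Gaussians whenever densification would exceed the limit; a hypothetical PyTorch sketch, not the repo's actual densification code:

```python
import torch

def enforce_max_gaussians(opacity, max_n_gauss):
    """Boolean keep-mask retaining the max_n_gauss most opaque Gaussians."""
    n = opacity.shape[0]
    keep = torch.ones(n, dtype=torch.bool, device=opacity.device)
    if n > max_n_gauss:
        # drop the (n - max_n_gauss) lowest-opacity points
        drop = torch.argsort(opacity.flatten())[: n - max_n_gauss]
        keep[drop] = False
    return keep
```

The mask would then be applied to every per-Gaussian tensor (positions, features, opacities, scales, rotations) after each densification step.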
## Citation
If you find our code or paper useful, please cite as:
```bibtex
@inproceedings{shao2024splattingavatar,
  title     = {{SplattingAvatar: Realistic Real-Time Human Avatars with Mesh-Embedded Gaussian Splatting}},
  author    = {Shao, Zhijing and Wang, Zhaolong and Li, Zhuang and Wang, Duotun and Lin, Xiangru and Zhang, Yu and Fan, Mingming and Wang, Zeyu},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024}
}
```
## Acknowledgement
We thank the authors of the following excellent works:
- [instant-nsr-pl](https://github.com/bennyguo/instant-nsr-pl)
- [Gaussian Splatting](https://github.com/graphdeco-inria/gaussian-splatting)
- [IMavatar](https://github.com/zhengyuf/IMavatar)
- [INSTA](https://github.com/Zielon/INSTA)
## License
**SplattingAvatar**
<br>
The code is released under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode) for noncommercial use only. Any commercial use requires formal permission in advance.
**[Gaussian Splatting](https://github.com/graphdeco-inria/gaussian-splatting/blob/main/LICENSE.md)**
<br>
**Inria** and **the Max Planck Institut for Informatik (MPII)** hold all the ownership rights on the *Software* named **gaussian-splatting**. The *Software* is in the process of being registered with the Agence pour la Protection des Programmes (APP).