# GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians

[Liangxiao Hu](https://huliangxiao.github.io/)<sup>1,†</sup>, [Hongwen Zhang](https://zhanghongwen.cn/)<sup>2</sup>, [Yuxiang Zhang](https://zhangyux15.github.io/)<sup>3</sup>, [Boyao Zhou](https://morpheo.inrialpes.fr/people/zhou/)<sup>3</sup>, [Boning Liu](https://liuboning2.github.io/)<sup>3</sup>, [Shengping Zhang](http://homepage.hit.edu.cn/zhangshengping)<sup>1,*</sup>, [Liqiang Nie](https://liqiangnie.github.io/)<sup>1</sup>

<sup>1</sup>Harbin Institute of Technology &nbsp; <sup>2</sup>Beijing Normal University &nbsp; <sup>3</sup>Tsinghua University

<sup>*</sup>Corresponding author &nbsp; <sup>†</sup>Work done during an internship at Tsinghua University

### [Project page](https://huliangxiao.github.io/GaussianAvatar) · [Paper](https://arxiv.org/abs/2312.02134) · [Video](https://www.youtube.com/watch?v=a4g8Z9nCF-k)

## :mega: Updates

[4/3/2024] Pretrained models for the other three People Snapshot subjects are released on OneDrive.

[7/2/2024] The scripts for running on your own video are released.

[23/1/2024] The training and inference code for People Snapshot is released.

## Introduction

We present GaussianAvatar, an efficient approach to creating realistic human avatars with dynamic 3D appearances from a single video.

## Installation

To set up the environment for GaussianAvatar, run the following commands:

```bash
conda env create --file environment.yml
conda activate gs-avatar
```

Then, compile `diff-gaussian-rasterization` and `simple-knn` as in the [3DGS repository](https://github.com/graphdeco-inria/gaussian-splatting).
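If you have the 3DGS repository cloned with its submodules, both extensions build as pip packages; a minimal sketch, assuming the standard 3DGS `submodules/` layout:

```bash
# Assumes the 3DGS checkout layout, where both CUDA extensions live under submodules/.
pip install submodules/diff-gaussian-rasterization
pip install submodules/simple-knn
```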

## Download models and data

## Run on People Snapshot dataset

We take the subject `m4c_processed` as an example.
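In the commands below, `$gs_data_path` stands for the directory containing the downloaded data; the variable name and location are placeholders of your choosing, for example:

```bash
# Hypothetical location: point gs_data_path at wherever the processed People Snapshot data lives.
export gs_data_path=/path/to/gs_data
```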

### Training

```bash
python train.py -s $gs_data_path/m4c_processed -m output/m4c_processed --train_stage 1
```

### Evaluation

```bash
python eval.py -s $gs_data_path/m4c_processed -m output/m4c_processed --epoch 200
```

### Rendering novel pose

```bash
python render_novel_pose.py -s $gs_data_path/m4c_processed -m output/m4c_processed --epoch 200
```

## Run on Your Own Video

### Preprocessing

### Training for Stage 1

```bash
cd .. && python train.py -s $path_to_data/$subject -m output/${subject}_stage1 --train_stage 1 --pose_op_start_iter 10
```

### Training for Stage 2

Todo

## Citation

If you find this code useful for your research, please consider citing:

```bibtex
@inproceedings{hu2024gaussianavatar,
  title={GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians},
  author={Hu, Liangxiao and Zhang, Hongwen and Zhang, Yuxiang and Zhou, Boyao and Liu, Boning and Zhang, Shengping and Nie, Liqiang},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}
```

## Acknowledgements

This project is built on source code shared by Gaussian-Splatting, POP, HumanNeRF, and InstantAvatar.