Paper | Video | Project Page
Tobias Kirschstein, Simon Giebenhain, Jiapeng Tang, Markos Georgopoulos, and Matthias Nießner
SIGGRAPH Asia 2024
Create a conda environment `gghead` with newest PyTorch and CUDA 11.8:
```shell
conda env create -f environment.yml
```
Ensure that `nvcc` is taken from the conda environment and that its include files can be found:
Linux:
```shell
conda activate gghead
conda env config vars set CUDA_HOME=$CONDA_PREFIX
conda activate base
conda activate gghead
```
Windows (PowerShell):
```shell
conda activate gghead
conda env config vars set CUDA_HOME=$Env:CONDA_PREFIX
conda env config vars set NVCC_PREPEND_FLAGS="-I$Env:CONDA_PREFIX\Library\include"
conda activate base
conda activate gghead
```
Verify that `nvcc` can be found on the path via:
```shell
nvcc --version
```
which should say something like `release 11.8`.
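As an optional extra check (not part of the original setup steps), you can also confirm from within the `gghead` environment that the installed PyTorch build matches CUDA 11.8 and can see your GPU:
```python
# Optional sanity check: verify that the PyTorch build matches CUDA 11.8
# and that a GPU is visible inside the `gghead` environment.
import torch

print(torch.__version__)          # PyTorch version installed from environment.yml
print(torch.version.cuda)         # should print "11.8"
print(torch.cuda.is_available())  # should print True on a machine with an NVIDIA GPU
```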
Then install the Gaussian Splatting dependency (requires `nvcc`):
```shell
pip install gaussian_splatting@git+https://github.com/tobias-kirschstein/gaussian-splatting.git
```
If the installation fails, explicitly set the `TORCH_CUDA_ARCH_LIST` environment variable:
```shell
TORCH_CUDA_ARCH_LIST="8.0" pip install gaussian_splatting@git+https://github.com/tobias-kirschstein/gaussian-splatting.git
```
Choose the correct compute architecture(s) that match your setup. Consult NVIDIA's CUDA GPUs page if unsure about the compute architecture of your graphics card.
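Alternatively, the compute capability can be queried directly through PyTorch (an optional helper, not part of the original instructions); for example, a result of `8.6` corresponds to `TORCH_CUDA_ARCH_LIST="8.6"`:
```python
# Optional: print the compute capability of the local GPU via PyTorch.
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"{major}.{minor}")  # e.g. "8.6" for an RTX 30-series card
```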
All paths to data / models / renderings are defined by environment variables.
Please create a file at `~/.config/diffusion-avatars/.env` in your home directory with the following content:
```shell
GGHEAD_DATA_PATH="..."
GGHEAD_MODELS_PATH="..."
GGHEAD_RENDERS_PATH="..."
```
Replace the `...` with the locations where data / models / renderings should be located on your machine.
- `GGHEAD_DATA_PATH`: Location of the FFHQ dataset and foreground masks. Only needed for training. See Section 2 for how to obtain the datasets.
- `GGHEAD_MODELS_PATH`: During training, model checkpoints and configs will be saved here. See Section 4 for downloading pre-trained models.
- `GGHEAD_RENDERS_PATH`: Video renderings of trained models will be stored here.
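For reference, the following is a minimal sketch of how these variables could be loaded from the `.env` file in Python. It assumes the `python-dotenv` package and is only an illustration of the mechanism; the repository's own `env.py` may resolve the paths differently:
```python
# Illustrative sketch only (assumes python-dotenv; not necessarily what env.py does):
# read the GGHead path variables from the .env file in the home directory.
import os
from pathlib import Path

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv(Path.home() / ".config" / "diffusion-avatars" / ".env")

GGHEAD_DATA_PATH = Path(os.environ["GGHEAD_DATA_PATH"])
GGHEAD_MODELS_PATH = Path(os.environ["GGHEAD_MODELS_PATH"])
GGHEAD_RENDERS_PATH = Path(os.environ["GGHEAD_RENDERS_PATH"])
```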
If you do not like creating a config file in your home directory, you can instead hard-code the paths in env.py.

TODO
TODO
From a trained model `GGHEAD-xxx`, render short videos of randomly sampled 3D heads via:
```shell
python scripts/sample_heads.py GGHEAD-xxx
```
Replace `xxx` with the actual ID of the model.
The generated videos will be placed into `${GGHEAD_RENDERS_PATH}/sampled_heads/`.
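For example, using the pre-trained FFHQ-512 checkpoint listed in Section 4, the call would be `python scripts/sample_heads.py GGHEAD-1_ffhq512`.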
From a trained model `GGHEAD-xxx`, render interpolation videos that morph between randomly sampled 3D heads via:
```shell
python scripts/render_interpolation.py GGHEAD-xxx
```
Replace `xxx` with the actual ID of the model.
The generated videos will be placed into `${GGHEAD_RENDERS_PATH}/interpolations/`.
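For background, such interpolations are typically produced by blending two sampled latent codes and rendering the generator output for each blend weight. The sketch below only illustrates this generic idea with spherical linear interpolation (slerp); it is not the actual implementation of `render_interpolation.py`:
```python
# Illustrative sketch of latent-code interpolation (not the script's actual code):
# blend two randomly sampled latent vectors with spherical linear interpolation.
import torch

def slerp(z0: torch.Tensor, z1: torch.Tensor, t: float) -> torch.Tensor:
    """Spherical linear interpolation between two latent vectors."""
    z0_norm = z0 / z0.norm()
    z1_norm = z1 / z1.norm()
    omega = torch.acos((z0_norm * z1_norm).sum().clamp(-1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < 1e-6:  # nearly parallel -> fall back to linear interpolation
        return (1.0 - t) * z0 + t * z1
    return (torch.sin((1.0 - t) * omega) / so) * z0 + (torch.sin(t * omega) / so) * z1

# Example: 10 intermediate latents between two random 512-dimensional codes.
z_a, z_b = torch.randn(512), torch.randn(512)
frames = [slerp(z_a, z_b, t) for t in torch.linspace(0.0, 1.0, 10)]
```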
TODO
The `notebooks` folder contains minimal examples on how to:
You can start the excellent GUI from EG3D and StyleGAN by running:
```shell
python visualizer.py
```
In the visualizer, you can select all checkpoints found in `${GGHEAD_MODELS_PATH}/gghead` and freely explore the generated heads in 3D.
Put pre-trained models into `${GGHEAD_MODELS_PATH}/gghead`.
| Dataset   | GGHead model      |
|-----------|-------------------|
| FFHQ-512  | GGHEAD-1_ffhq512  |
| FFHQ-1024 | GGHEAD-2_ffhq1024 |
| AFHQ-512  | GGHEAD-3-afhq512  |
```bibtex
@article{kirschstein2024gghead,
  title={GGHead: Fast and Generalizable 3D Gaussian Heads},
  author={Kirschstein, Tobias and Giebenhain, Simon and Tang, Jiapeng and Georgopoulos, Markos and Nie{\ss}ner, Matthias},
  journal={arXiv preprint arXiv:2406.09377},
  year={2024}
}
```
Contact Tobias Kirschstein with questions, comments, or bug reports, or open a GitHub issue.