This is the official implementation for the paper GIF: Generative Interpretable Faces. GIF is a photorealistic generative face model with explicit control over 3D geometry (parametrized like FLAME), appearance, and lighting.
If you find our work useful in your project, please cite us as:
@inproceedings{GIF2020,
title = {{GIF}: Generative Interpretable Faces},
author = {Ghosh, Partha and Gupta, Pravir Singh and Uziel, Roy and Ranjan, Anurag and Black, Michael J. and Bolkart, Timo},
booktitle = {International Conference on 3D Vision (3DV)},
year = {2020},
url = {http://gif.is.tue.mpg.de/}
}
python3 -m venv ~/.venv/gif
source ~/.venv/gif/bin/activate
pip install -r requirements.txt
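A quick sanity check of the freshly created environment can save debugging later; the sketch below assumes requirements.txt installs PyTorch (the training section below relies on CUDA GPUs):

```python
# Sanity check (sketch, assuming requirements.txt installs PyTorch):
# confirm the interpreter comes from the venv and that CUDA GPUs are visible.
import sys
import torch

print('python:', sys.executable)
print('cuda available:', torch.cuda.is_available())
print('gpu count:', torch.cuda.device_count())
```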
Before running any program, you will need to download a few resource files and create a suitable placeholder for the training artifacts to be stored. Create a directory called GIF_resources and unzip the input zip, the checkpoint zip, or both in this directory. GIF_resources should then have input_files and output_files as sub-directories. Finally, open the constants.py script and make changes if necessary, for example if you wish to change the names of the subdirectories; at minimum, set
resources_root = '/path/to/the/unzipped/location/of/GIF_resources'
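For reference, the relevant part of constants.py might look like the sketch below; only resources_root is taken from the instructions above, and the derived paths are assumptions about the directory layout:

```python
# constants.py (sketch): point resources_root at the unzipped GIF_resources
# directory. The derived paths are illustrative assumptions; the real file
# defines more settings than shown here.
resources_root = '/path/to/the/unzipped/location/of/GIF_resources'

input_root = resources_root + '/input_files/'
output_root = resources_root + '/output_files/'  # later used as f'{cnst.output_root}sample/'
```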
Download FLAME 2020 and place the texture_data_256.npy file in the flame resources directory and the generic_model.pkl file in GIF_resources/input_files/flame_resource. This directory needs to contain generic_model.pkl, head_template_mesh.obj, and FLAME_texture.npz in addition to the files already provided in the zip you just downloaded from the link given above. You can find these files on the official FLAME website (link given in point 9).
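Since these files come from several separate downloads, a small check that everything landed in flame_resource can be handy; a minimal sketch (the path is a placeholder to adjust):

```python
# Sketch: verify the FLAME resource files listed above are all in place.
import os

flame_dir = '/path/to/GIF_resources/input_files/flame_resource'  # placeholder
required = ['generic_model.pkl', 'head_template_mesh.obj',
            'FLAME_texture.npz', 'texture_data_256.npy']
missing = [f for f in required if not os.path.isfile(os.path.join(flame_dir, f))]
print('all FLAME resources found' if not missing else f'missing: {missing}')
```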
To train GIF you will need to prepare two lmdb datasets. The first one holds the FFHQ images:
cd prepare_lmdb
python prepare_ffhq_multiscale_dataset.py --n_worker N_WORKER DATASET_PATH
Here DATASET_PATH is the path to the directory that contains the FFHQ images. Place the resulting lmdb file in the GIF_resources/input_files/FFHQ directory, alongside ffhq_fid_stats.
The second one, deca_rendered_with_public_texture.lmdb, ships with the input_file zip; it is located in GIF_resources_to_upload/input_files/DECA_inferred. To generate it yourself, run
python create_deca_rendered_lmdb.py
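Either way, you can verify that an lmdb database opens and see how many entries it holds; a minimal sketch using the lmdb Python package (the path is a placeholder):

```python
# Sketch: open an lmdb database read-only and print its entry count.
# Pass subdir=False if your lmdb is a single file rather than a directory.
import lmdb

path = '/path/to/GIF_resources/input_files/DECA_inferred/deca_rendered_with_public_texture.lmdb'
env = lmdb.open(path, readonly=True, lock=False)
with env.begin() as txn:
    print('entries:', txn.stat()['entries'])
```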
To resume training from a checkpoint run
python train.py --run_id <runid> --ckpt /path/to/saved.mdl/file/<runid>/model_checkpoint_name.model
Note that here you point to the .model file, not the .npz one.
To start training from scratch run
python train.py --run_id <runid>
Note that the training code will take all available GPUs in the system and perform data parallelization. You can set the visible GPUs by setting the CUDA_VISIBLE_DEVICES environment variable. Run
CUDA_VISIBLE_DEVICES=0,1 python train.py --run_id <runid>
to run on GPUs 0 and 1.
To generate random samples, run
cd plots
python generate_random_samples.py
To generate figure 3 of the paper, run
cd plots
python role_of_different_parameters.py
This will generate batch_size number of directories in f'{cnst.output_root}sample/', named gen_iamges<batch_idx>. Each of these directories contains a column of images as shown in figure 3 in the paper.
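If you prefer one combined image per batch directory rather than loose files, a hypothetical post-processing sketch using Pillow may help (the helper, the sample path, and the PNG pattern are assumptions, not part of the repo):

```python
# Hypothetical helper (not part of the repo): paste the images of one
# gen_iamges<batch_idx> directory into a single vertical strip.
from pathlib import Path
from PIL import Image

def stitch_column(directory):
    images = [Image.open(p) for p in sorted(Path(directory).glob('*.png'))]
    strip = Image.new('RGB', (max(im.width for im in images),
                              sum(im.height for im in images)))
    y = 0
    for im in images:
        strip.paste(im, (0, y))
        y += im.height
    return strip

stitch_column('/path/to/output_files/sample/gen_iamges0').save('column0.png')
```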
Disclaimer: This section may be outdated and/or have changed since the time of writing this document. It is neither intended to advertise nor to recommend any particular third-party product. This guide is included solely for quick reference and is provided without any liability.

To run the Amazon Mechanical Turk (AMT) evaluation as done in the paper, you will need 3 accounts. Once those are set up, you may follow these steps: use the provided mturk/mturk_layout.html or write your own layout; generate the input CSV files using the generate_csv.py or the create_csv.py scripts; and use the plot_results.py or the plot_histogram_results.py script to visualize the AMT results.

GIF uses DECA to get FLAME geometry, appearance, and lighting parameters for the FFHQ training data. We thank H. Feng for preparing the training data, Y. Feng and S. Sanyal for support with the rendering and projection pipeline, and C. Köhler, A. Chandrasekaran, M. Keller, M. Landry, C. Huang, A. Osman and D. Tzionas for fruitful discussions, advice, and proofreading. We especially thank Taylor McConnell for voicing over our video. This work was partially supported by the International Max Planck Research School for Intelligent Systems (IMPRS-IS) and by Amazon Web Services.