This repository contains the code for X2Face, presented at ECCV 2018.
The demo notebooks demonstrate the following:
./UnwrapMosaic/Face2Face_UnwrapMosaic.ipynb
./UnwrapMosaic/Pose2Face.ipynb
./UnwrapMosaic/Audio2Face.ipynb
Update: We have added updated code and installation instructions for running the demo notebooks with pytorch 0.4.1: the branch 'pytorch_0.4.1' targets python 2.7, and the branch 'py37_pytorch_0.4.1' targets python 3.7.
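For example, to switch to the python 3.7 version of the code:

git checkout py37_pytorch_0.4.1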
To run the notebooks, it is important to use the right version of pytorch (see the branches above for supported pytorch/python combinations): the defaults for sampling and some other operations have changed in more recent versions of pytorch, and with those versions the pretrained models will not work properly.
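As a concrete illustration (a sketch written against current pytorch, not code from this repository), one such changed default is align_corners in grid_sample, which warping-based models like this one depend on:

```python
import torch
import torch.nn.functional as F

# Older pytorch behaved as if align_corners=True; since pytorch 1.3 the
# default is False, so a checkpoint trained under the old behaviour will
# sample slightly differently unless the flag is set explicitly.
image = torch.randn(1, 3, 256, 256)        # B x C x H x W source frame
grid = torch.rand(1, 256, 256, 2) * 2 - 1  # sampling grid in [-1, 1]

old_style = F.grid_sample(image, grid, align_corners=True)
new_default = F.grid_sample(image, grid, align_corners=False)
print((old_style - new_default).abs().max())  # non-zero: the outputs differ
```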
Once the environment is set up, the pre-trained models can be downloaded from the project page and the model paths in the notebooks updated appropriately (this should simply require setting the BASE_MODEL_PATH in the notebook to the correct location).
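A minimal sketch of that step (the directory and checkpoint filename below are placeholders, not the actual names from the release):

```python
import torch

# Point this at wherever the downloaded models were unpacked.
BASE_MODEL_PATH = '/path/to/release_models/'

# Hypothetical filename: use the checkpoint names that ship with the release.
checkpoint = torch.load(BASE_MODEL_PATH + 'x2face_model.pth', map_location='cpu')
```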
If you find this useful in your work, please cite the paper appropriately.
To train a model yourself, we have provided an example training script that uses only the photometric loss (sketched below, after the version note). In addition to pytorch, the training code requires a tensorboard logging package, since the script writes tensorboard files. To run it:
python train_model.py --results_folder $WHERE_TO_SAVE_TENSORBOARD_FILES --model_epoch_path $WHERE_TO_SAVE_MODELS
(Note that the training script can be run with any version of pytorch; it is only important that you train and test with the same version.)
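For reference, here is a minimal sketch of a photometric loss of the kind the example training file uses (the actual train_model.py may differ in details such as masking or loss weighting; this is illustrative only):

```python
import torch
import torch.nn as nn

# Photometric loss: a pixelwise L1 penalty between the generated frame
# and the ground-truth target frame. Shapes are B x C x H x W.
photometric_loss = nn.L1Loss()

generated = torch.rand(1, 3, 256, 256, requires_grad=True)  # network output
target = torch.rand(1, 3, 256, 256)                         # ground-truth frame
loss = photometric_loss(generated, target)
loss.backward()  # gradients flow back toward the generator parameters
```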