Official PyTorch implementation of the paper "NeLF: Neural Light-transport Field for Single Portrait View Synthesis and Relighting", EGSR 2021.
Tiancheng Sun¹*, Kai-En Lin¹*, Sai Bi², Zexiang Xu², Ravi Ramamoorthi¹

¹University of California, San Diego, ²Adobe Research

*Equal contribution
Project Page | Paper | Pretrained models | Validation data | Rendering script
Make sure you have up-to-date NVIDIA drivers supporting CUDA 11.1 (CUDA 10.2 may also work, but you will need to change the cudatoolkit package accordingly).
Run

```
conda env create -f environment.yml
conda activate nelf
```
The following packages are used:
PyTorch (tested with 1.7 and 1.9.0)
OpenCV-Python
matplotlib
numpy
tqdm
OS: Ubuntu 20.04
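Before running anything, you can sanity-check that the listed dependencies resolve in the active conda environment. This helper is not part of the repository, just a quick sketch:

```python
import importlib.util

# Hypothetical helper (not part of the repo): report which of the listed
# dependencies are missing from the current Python environment.
def missing_deps(names=("torch", "cv2", "matplotlib", "numpy", "tqdm")):
    return [n for n in names if importlib.util.find_spec(n) is None]
```

An empty return value means all required packages are importable.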
Download the dataset
Remove the background using the masks provided in the dataset
Downsample the images to 512x512
Store the resulting data in [path_to_data_directory]/CelebAMask
following this data structure:

```
[path_to_data_directory] --- data --- CelebAMask --- 0.jpg
                                   |              |- 1.jpg
                                   |              |- 2.jpg
                                   |              ...
                                   |- blender_both --- sub001
                                                    |- sub002
                                                    ...
```
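The background-removal and downsampling steps above can be sketched as follows. This is a minimal NumPy version under stated assumptions (the mask is a single-channel array with non-zero foreground, and the images are square); it is not the repository's actual preprocessing code:

```python
import numpy as np

def remove_background(img, mask):
    # Keep pixels where the mask is non-zero, zero out the rest.
    # img: (H, W, 3), mask: (H, W) with non-zero = foreground (assumed format).
    return np.where(mask[..., None] > 0, img, 0).astype(img.dtype)

def downsample(img, size=512):
    # Nearest-neighbour downsample to size x size. Assumes a square image whose
    # side is an integer multiple of `size`; for general inputs use
    # cv2.resize(img, (size, size), interpolation=cv2.INTER_AREA) instead.
    step = img.shape[0] // size
    return img[::step, ::step]
```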
Due to FaceScape's license, we cannot release the full dataset. Instead, we will release our rendering script.
Download our pretrained checkpoint and testing data. Extract the content to [path_to_data_directory].
The data structure should look like this:

```
[path_to_data_directory] --- data --- CelebAMask
                          |        |- blender_both
                          |        |- blender_view
                          |        ...
                          |- data_results --- nelf_ft
                          |- data_test --- validate_0
                                        |- validate_1
                                        |- validate_2
```
In arg/__init__.py, set up the data path by changing base_path.
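For example, assuming base_path is a plain module-level variable in arg/__init__.py (the path below is only a placeholder):

```python
# arg/__init__.py (excerpt; hypothetical layout)
base_path = '/home/user/nelf_data'  # placeholder: set to your [path_to_data_directory]
```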
Run

```
python run_test.py nelf_ft [validation_data_name] [#iteration_for_the_model]
```

e.g. `python run_test.py nelf_ft validate_0 500000`
The results are stored in [path_to_data_directory]/data_test/[validation_data_name]/results
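Once the run finishes, something like the following can collect the rendered outputs. This is a hypothetical helper, not part of the repo; the exact file names and extensions depend on the renderer:

```python
from pathlib import Path

def list_results(base_path, validation_name):
    # Gather everything written to [base_path]/data_test/<validation_name>/results.
    results_dir = Path(base_path) / "data_test" / validation_name / "results"
    return sorted(results_dir.glob("*"))
```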
Due to FaceScape's license, we are not allowed to release the full dataset. We will use the validation data to run the following example.
Download our validation data. Extract the content to [path_to_data_directory].
The data structure should look like this:

```
[path_to_data_directory] --- data --- CelebAMask
                          |        |- blender_both
                          |        |- blender_view
                          |        ...
                          |- data_results --- nelf_ft
                          |- data_test --- validate_0
                                        |- validate_1
                                        |- validate_2
```
(Optional) Run the rendering script to render your own data. Remember to change lines 35-42 and lines 45-46 in arg/config_nelf_ft.py accordingly.
In arg/__init__.py, set up the data path by changing base_path.
Run `python run_train.py nelf_ft`
The intermediate results and model checkpoints are saved in [path_to_data_directory]/data_results/nelf_ft
The following config files can be found inside the arg folder:

- `nelf_ft` is our main model described in the paper
- `ibr` is our reimplementation of IBRNet
- `sipr` is our reimplementation of Single Image Portrait Relighting
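The run scripts presumably select one of these configs by its name on the command line. A hypothetical sketch of that lookup, assuming each config lives in a module arg/config_<name>.py (the real scripts may differ):

```python
import importlib

def load_config(name):
    # Hypothetical lookup: map a model name ("nelf_ft", "ibr", "sipr") to its
    # config module arg.config_<name>. Not the repo's actual mechanism.
    return importlib.import_module(f"arg.config_{name}")
```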
```
@inproceedings{sun2021nelf,
    booktitle = {Eurographics Symposium on Rendering},
    title = {NeLF: Neural Light-transport Field for Portrait View Synthesis and Relighting},
    author = {Sun, Tiancheng and Lin, Kai-En and Bi, Sai and Xu, Zexiang and Ramamoorthi, Ravi},
    year = {2021},
}
```