Codebase for the arXiv preprint "NeRF++: Analyzing and Improving Neural Radiance Fields".
This codebase assumes the OpenCV camera convention (x right, y down, z forward). A camera-to-world pose that follows the OpenGL convention (x right, y up, z backward) can be converted by flipping the sign of the y and z camera axes:

```python
import numpy as np

# Convert a 4x4 camera-to-world pose from the OpenGL convention to the
# OpenCV convention by negating the y and z camera axes.
def convert_pose(C2W):
    flip_yz = np.eye(4)
    flip_yz[1, 1] = -1
    flip_yz[2, 2] = -1
    C2W = np.matmul(C2W, flip_yz)
    return C2W
```
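For instance (an illustrative pose, not taken from the codebase), the conversion negates the second and third columns of the pose while leaving the camera center untouched:

```python
# Illustrative usage: the pose values below are made up.
C2W_opengl = np.array([[1.0, 0.0, 0.0, 0.5],
                       [0.0, 1.0, 0.0, 1.0],
                       [0.0, 0.0, 1.0, 2.0],
                       [0.0, 0.0, 0.0, 1.0]])
C2W_opencv = convert_pose(C2W_opengl)
# Right-multiplying by flip_yz negates the y- and z-axis columns;
# the camera center (last column) stays the same.
assert np.allclose(C2W_opencv[:3, 3], C2W_opengl[:3, 3])
```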
Create and activate the conda environment:

```bash
conda env create --file environment.yml
conda activate nerfplusplus
```

To train on the Tanks and Temples truck scene, for example:

```bash
python ddp_train_nerf.py --config configs/tanks_and_temples/tat_training_truck.txt
```
Note: in the paper, we trained NeRF++ on a node with 4 RTX 2080 Ti GPUs, which took ∼24 hours.
To render the test and camera-path splits after training:

```bash
python ddp_test_nerf.py --config configs/tanks_and_temples/tat_training_truck.txt \
                        --render_splits test,camera_path
```
Note: due to a restriction imposed by the `torch.distributed.gather` function, please make sure the number of pixels in each image is divisible by the number of GPUs when rendering images in parallel.
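If you want to guard against this before launching a render, a trivial check along these lines works (a sketch; `H`, `W`, and `num_gpus` are placeholders for your image resolution and GPU count):

```python
# Sketch: verify the per-image pixel count is divisible by the number of GPUs,
# as required by the torch.distributed.gather-based image assembly.
def check_render_divisibility(H, W, num_gpus):
    num_pixels = H * W
    if num_pixels % num_gpus != 0:
        raise ValueError(
            f"{num_pixels} pixels not divisible by {num_gpus} GPUs; "
            f"crop/resize the images or change the GPU count.")

check_render_divisibility(H=1080, W=1920, num_gpus=4)  # OK: 2073600 % 4 == 0
```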
I recently re-trained NeRF++ on the Tanks and Temples data for another project. Here are the checkpoints (Google Drive) in case you find them useful.
Please cite our work if you use the code.
```bibtex
@article{kaizhang2020,
    author  = {Kai Zhang and Gernot Riegler and Noah Snavely and Vladlen Koltun},
    title   = {NeRF++: Analyzing and Improving Neural Radiance Fields},
    journal = {arXiv:2010.07492},
    year    = {2020},
}
```
You can use the scripts inside `colmap_runner` to generate camera parameters from images with COLMAP SfM:

- Specify `img_dir` and `out_dir` in `colmap_runner/run_colmap.py`.
- Inside `colmap_runner/`, execute the command `python run_colmap.py`.
- After the program finishes, you will see the posed images in the folder `out_dir/posed_images`.
- The undistorted images are inside `out_dir/posed_images/images`.
- Raw COLMAP intrinsics and poses are stored in the json file `out_dir/posed_images/kai_cameras.json`.
- Normalized cameras are stored in `out_dir/posed_images/kai_cameras_normalized.json`; see the Scene normalization method in the Data section.
- Split the undistorted images and `kai_cameras_normalized.json` according to your needs. You might find the self-explanatory script `data_loader_split.py` helpful when converting the json file to a data format compatible with NeRF++ (a rough sketch of such a conversion follows this list).
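For reference, here is a rough sketch of such a conversion. The json schema assumed here (image name mapped to flattened 4x4 `K` and `W2C` matrices) and the `pose/`/`intrinsics/` output layout are my assumptions, not verified against the repository; consult `data_loader_split.py` for the format the loader actually expects.

```python
import json
import os
import numpy as np

# Sketch of converting the normalized-camera json to per-image text files.
# ASSUMPTIONS: the json maps image names to flattened 4x4 "K" and "W2C"
# matrices, and the loader reads flattened camera-to-world poses; check
# data_loader_split.py for the authoritative format.
def convert_json_to_split(json_path, split_dir):
    with open(json_path) as f:
        cameras = json.load(f)
    os.makedirs(os.path.join(split_dir, 'pose'), exist_ok=True)
    os.makedirs(os.path.join(split_dir, 'intrinsics'), exist_ok=True)
    for img_name, cam in cameras.items():
        stem = os.path.splitext(img_name)[0]
        K = np.array(cam['K']).reshape(4, 4)
        W2C = np.array(cam['W2C']).reshape(4, 4)
        C2W = np.linalg.inv(W2C)  # world-to-camera -> camera-to-world
        np.savetxt(os.path.join(split_dir, 'intrinsics', stem + '.txt'),
                   K.reshape(1, 16))
        np.savetxt(os.path.join(split_dir, 'pose', stem + '.txt'),
                   C2W.reshape(1, 16))

convert_json_to_split('out_dir/posed_images/kai_cameras_normalized.json',
                      'data/my_scene/train')
```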
Check `camera_visualizer/visualize_cameras.py` for visualizing cameras in 3D. It creates an interactive viewer that lets you inspect whether your cameras have been normalized to be compatible with this codebase. In the viewer, green cameras are used for training, blue ones are for testing, and yellow ones denote a novel camera path to be synthesized; the red sphere is the unit sphere.
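If you want a minimal stand-alone version of such a viewer, something along these lines works with Open3D. This is a sketch, not the actual `camera_visualizer` implementation; it assumes OpenCV-convention camera-to-world poses (x right, y down, z forward).

```python
import numpy as np
import open3d as o3d

# Sketch: draw camera frusta and the unit sphere with Open3D.
def frustum_lineset(C2W, color, size=0.1):
    # Five frustum vertices in camera coordinates: apex + four image-plane corners.
    pts_cam = np.array([[0, 0, 0],
                        [-size, -size, 2 * size], [size, -size, 2 * size],
                        [size, size, 2 * size], [-size, size, 2 * size]])
    pts_world = (C2W[:3, :3] @ pts_cam.T).T + C2W[:3, 3]
    lines = [[0, 1], [0, 2], [0, 3], [0, 4], [1, 2], [2, 3], [3, 4], [4, 1]]
    ls = o3d.geometry.LineSet(
        points=o3d.utility.Vector3dVector(pts_world),
        lines=o3d.utility.Vector2iVector(lines))
    ls.colors = o3d.utility.Vector3dVector([color] * len(lines))
    return ls

# Red wireframe unit sphere marking the inside/outside NeRF++ partition.
sphere = o3d.geometry.TriangleMesh.create_sphere(radius=1.0)
unit_sphere = o3d.geometry.LineSet.create_from_triangle_mesh(sphere)
unit_sphere.paint_uniform_color([1.0, 0.0, 0.0])

train_poses = [np.eye(4)]  # placeholder: your list of 4x4 camera-to-world matrices
geometries = [unit_sphere] + [frustum_lineset(p, [0.0, 1.0, 0.0]) for p in train_poses]
o3d.visualization.draw_geometries(geometries)
```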
You can use `camera_inspector/inspect_epipolar_geometry.py` to check that the camera parameters are correct and follow the OpenCV convention assumed by this codebase. The script creates a viewer for visually inspecting two-view epipolar geometry: for key points in the left image, it plots their corresponding epipolar lines in the right image. If the epipolar geometry does not look correct in this visualization, there are likely issues with the camera parameters.
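For a self-contained illustration of the underlying check, the fundamental matrix and epipolar lines can be computed from two posed cameras as follows. This is a sketch assuming 3x3 OpenCV-convention intrinsics `K1`, `K2` and 4x4 world-to-camera matrices `W2C1`, `W2C2`, not the `inspect_epipolar_geometry.py` code itself.

```python
import numpy as np

# Sketch: epipolar line in image 2 for a key point in image 1, from two
# OpenCV-convention cameras (assumed 3x3 intrinsics, 4x4 world-to-camera poses).
def fundamental_matrix(K1, W2C1, K2, W2C2):
    # Relative pose mapping camera-1 coordinates to camera-2 coordinates.
    rel = W2C2 @ np.linalg.inv(W2C1)
    R, t = rel[:3, :3], rel[:3, 3]
    t_skew = np.array([[0, -t[2], t[1]],
                       [t[2], 0, -t[0]],
                       [-t[1], t[0], 0]])
    E = t_skew @ R  # essential matrix
    return np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)

def epipolar_line(F, x1):
    # Line l = (a, b, c) with a*u + b*v + c = 0 in image 2 for pixel x1 in image 1.
    l = F @ np.array([x1[0], x1[1], 1.0])
    return l / np.linalg.norm(l[:2])  # normalize for stable drawing

# If the cameras are correct, the true match of x1 in image 2 should lie
# on (or very near) the line returned by epipolar_line(F, x1).
```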