# EventNeRF: Neural Radiance Fields from a Single Colour Event Camera

Viktor Rudnev, Mohamed Elgharib, Christian Theobalt, Vladislav Golyanik
Based on the NeRF-OSR codebase, which is in turn based on the NeRF++ codebase; it inherits the same training data preprocessing and format.
## Data

Download the datasets from here. Untar the downloaded archive into the `data/` subfolder of the code directory.
See the NeRF++ sections on data and COLMAP for how to adapt a new dataset for training.
Please contact us if you need to adapt your own event stream, as it might require updates to the code.
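Since the preprocessing and format follow NeRF++, the unpacked archive should roughly match the per-scene layout below. This is a sketch based on NeRF++ conventions; the event-stream files that EventNeRF adds on top are not shown and may be organized differently.

```
data/
└── <scene>/
    ├── train/
    │   ├── rgb/          # input images
    │   ├── pose/         # per-image 4x4 camera-to-world matrices (.txt)
    │   └── intrinsics/   # per-image 4x4 intrinsics matrices (.txt)
    ├── validation/       # same structure as train/
    └── test/             # same structure as train/
```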
## Setup

Create and activate the conda environment:

```sh
conda env create --file environment.yml
conda activate eventnerf
```
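As a quick sanity check after setup, you can confirm that the environment resolved correctly. The snippet below assumes the environment provides PyTorch with CUDA, as the `ddp_*` (DistributedDataParallel) scripts suggest:

```sh
conda activate eventnerf
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```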
## Usage

Use the scripts from the `scripts/` subfolder for training and testing.
Please replace `<absolute-path-to-code>` and `<path-to-conda-env>` in the `.sh` scripts and in the corresponding `.txt` config files. To do so automatically for all of the files, you can use `sed`:
```sh
sed -i 's/<absolute-path-to-code>/\/your\/path/' configs/**/*.txt scripts/*.sh
sed -i 's/<path-to-conda-env>/\/your\/path/' scripts/*.sh
```

(The `-i` flag makes `sed` edit the files in place; the `configs/**/*.txt` glob requires bash with `globstar` enabled.)
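To double-check that no placeholders were missed, you can grep for them afterwards (a simple verification step, not part of the original scripts):

```sh
grep -rl -e '<absolute-path-to-code>' -e '<path-to-conda-env>' configs/ scripts/ \
    || echo "all placeholders replaced"
```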
The config files are organized as follows:

- `configs/nerf/*`, `configs/lego1/*` -- synthetic data,
- `configs/nextgen/*`, `configs/nextnextgen/*` -- real data (from the revised paper),
- `configs/ablation/*` -- ablation studies,
- `configs/altbase.txt` -- constant window length baseline,
- `configs/angle/*` -- camera angle error robustness ablation,
- `configs/noise/*` -- noise events robustness ablation,
- `configs/deff/*` -- data efficiency ablation (varying the amount of data by varying the simulated event threshold),
- `configs/e2vid/*` -- synthetic data e2vid baseline,
- `configs/real/*` -- real data (from the old version of the paper).
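For reference, launching a single training run from one of these configs might look like the sketch below. The `ddp_train_nerf.py` entry point is an assumption carried over from the NeRF++ codebase this repository builds on; check the `.sh` files in `scripts/` for the exact invocation:

```sh
conda activate eventnerf
python ddp_train_nerf.py --config configs/nerf/chair.txt
```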
## Mesh Extraction

To extract the mesh from a trained model, run:
```sh
python ddp_mesh_nerf.py --config nerf/chair.txt
```

Replace `nerf/chair.txt` with the path to the config of your trained model.
## Evaluation

Please find the guide on evaluation, color correction, and computing the metrics in `metric/README.md`.
## Citation

Please cite our work if you use the code.
```bibtex
@InProceedings{rudnev2023eventnerf,
  title={EventNeRF: Neural Radiance Fields from a Single Colour Event Camera},
  author={Viktor Rudnev and Mohamed Elgharib and Christian Theobalt and Vladislav Golyanik},
  booktitle={Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}
```
## License

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.