This repository contains the code release of our paper accepted at ECCV2024:
The NeRFect Match: Exploring NeRF Features for Visual Localization. [Project Page | Paper | Poster]
## Installation

Clone this repository and create a conda environment with the following commands:
```bash
# Create conda env
conda env create -f configs/conda/nerfmatch_env.yml
conda activate nerfmatch
pip install -r configs/conda/requirements.txt

# Install this repo
pip install -e .
```
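To verify the installation, you can try importing the package. This is a minimal check that assumes the editable install exposes the package as `nerfmatch`; adjust the name if your setup differs:

```bash
# Minimal import check; the module name `nerfmatch` is an assumption
python -c "import nerfmatch; print('install OK')"
```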
## Data Preparation

Download the 7-Scenes dataset from this link and place the scenes under `data/7scenes`.
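If you fetch the per-scene archives manually, a layout sketch like the following works (hypothetical: it assumes one zip per scene named after the scene, e.g. `chess.zip`; some archives contain nested per-sequence zips that need a second extraction pass):

```bash
# Hypothetical layout step; assumes manually downloaded per-scene archives
mkdir -p data/7scenes
for scene in chess fire heads office pumpkin redkitchen stairs; do
    unzip ${scene}.zip -d data/7scenes/
done
```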
Download the Cambridge Landmarks scenes (Great Court, Kings College, Old Hospital, Shop Facade, St. Marys Church) and place them under `data/cambridge`.
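The same hypothetical layout sketch applies here, assuming one archive per scene named after the scene directory:

```bash
# Hypothetical layout step; assumes manually downloaded per-scene archives
mkdir -p data/cambridge
for scene in GreatCourt KingsCollege OldHospital ShopFacade StMarysChurch; do
    unzip ${scene}.zip -d data/cambridge/
done
```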
Execute the following command to download our preprocessed data annotations, image retrieval pairs, and SAM masks for NeRF training on Cambridge Landmarks. The Cambridge Landmarks annotations are converted from the original dataset's NVM file; the 7-Scenes SfM ground-truth JSON files are converted from `pgt/sfm/7scenes`.
```bash
cd data/
bash download_data.sh
cd ..
```
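As an optional sanity check that the script populated the expected folders (names follow the directory layout shown below):

```bash
# Optional check; folder names follow the layout shown below
ls data/annotations data/pairs data/mask_preprocessed
```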
## Pretrained Models

Execute the following command to download our pretrained NeRF and NeRFMatch models.
```bash
cd pretrained/
bash download_pretrained.sh
cd ..
```
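You can list what arrived; the exact file names depend on what `download_pretrained.sh` fetches:

```bash
# Optional check; exact checkpoint names depend on the download script
ls -R pretrained | head
```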
After those preparation steps, your `data/` directory should look like:
```
data
├── 7scenes
│   ├── chess
│   └── ...
├── annotations
│   └── 7scenes_jsons/sfm
│       ├── transforms_*_test.json
│       ├── transforms_*_train.json
│       └── ...
├── cambridge
│   ├── GreatCourt
│   └── ...
├── mask_preprocessed
│   └── cambridge
└── pairs
    ├── 7scenes
    └── cambridge
```
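To verify the layout, you can list a few of the expected paths (taken from the tree above; the glob is illustrative):

```bash
# Optional layout check; paths follow the tree above
ls data/7scenes data/cambridge data/pairs
ls data/annotations/7scenes_jsons/sfm/transforms_*_train.json | head
```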
## Training and Evaluation
We refer users to [model_train/README.md](model_train/README.md) and [model_eval/README.md](model_eval/README.md) for training and evaluation instructions.
## Licenses
The source code is released under [NVIDIA Source Code License v1](LICENSE.txt).
The pretrained models are released under [CC BY-NC-SA 4.0](pretrained/LICENSE.txt).
## Citation
If you are using our method, please cite:
```bibtex
@inproceedings{zhou2024nerfmatch,
  title={The NeRFect match: Exploring NeRF features for visual localization},
  author={Zhou, Qunjie and Maximov, Maxim and Litany, Or and Leal-Taix{\'e}, Laura},
  booktitle={European Conference on Computer Vision},
  year={2024}
}
```