This repository contains the implementation of our paper:
NeRF-LOAM: Neural Implicit Representation for Large-Scale Incremental LiDAR Odometry and Mapping (PDF)
Junyuan Deng, Qi Wu, Xieyuanli Chen, Songpengcheng Xia, Zhen Sun, Guoqing Liu, Wenxian Yu and Ling Pei
If you use our code in your work, please star our repo and cite our paper.
@inproceedings{deng2023nerfloam,
title={NeRF-LOAM: Neural Implicit Representation for Large-Scale Incremental LiDAR Odometry and Mapping},
author={Junyuan Deng and Qi Wu and Xieyuanli Chen and Songpengcheng Xia and Zhen Sun and Guoqing Liu and Wenxian Yu and Ling Pei},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
year={2023}
}
Overview of our method. Our method builds on a neural SDF and is composed of three main components:
The reconstructed maps
The qualitative results of our mapping on the KITTI dataset. From top left to bottom right, we show the results of sequences 00, 01, 03, 04, 05, 09, and 10.
The odometry results
The qualitative results of our odometry on the KITTI dataset. From left to right, we show the results of sequences 00, 01, 03, 04, 05, 07, 09, and 10. The dashed line corresponds to the ground truth and the blue line to our odometry method.
Newer College real-world LiDAR dataset: website.
MaiCity synthetic LiDAR dataset: website.
KITTI dataset: website.
To run the code, a GPU with large memory is preferred. We tested the code on an RTX 3090 and a GTX TITAN.
We use Conda to create a virtual environment and install dependencies:
Python environment: we tested our code with Python 3.8.13.
PyTorch: we tested version 1.10 with CUDA 10.2 (and CUDA 11.1).
Other dependencies are specified in requirements.txt. You can install them all using pip or conda:
pip3 install -r requirements.txt
After you have installed all third-party libraries, run the following script to build the extra PyTorch modules used in this project.
sh install.sh
Replace the filename in mapping.py with the path of the built library:
torch.classes.load_library("third_party/sparse_octree/build/lib.xxx/svo.xxx.so")
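Since the lib.xxx build directory name depends on your platform and Python version, a small glob-based helper (hypothetical, not part of this repo) can locate the built library instead of hardcoding the path:

```python
import glob
import os


def find_svo_library(root="third_party/sparse_octree/build"):
    """Return the path of the compiled svo extension, or None if no
    build exists. The lib.* directory name encodes the platform and
    Python version (e.g. lib.linux-x86_64-3.8)."""
    matches = sorted(glob.glob(os.path.join(root, "lib.*", "svo.*.so")))
    return matches[0] if matches else None
```

You could then pass the returned path to torch.classes.load_library instead of editing the hardcoded string.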
We use patchwork-plusplus to separate ground points from LiDAR scans.
Replace the path in src/dataset/*.py with the location of the built library:
patchwork_module_path ="/xxx/patchwork-plusplus/build/python_wrapper"
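Rather than editing the hardcoded string in every dataset file, a small helper (hypothetical, the wrapper directory location is an assumption) can put the build directory on sys.path before the import:

```python
import os
import sys


def add_patchwork_to_path(wrapper_dir):
    """Make the patchwork-plusplus Python wrapper importable by
    prepending its build directory to sys.path. Adjust wrapper_dir to
    wherever you cloned and built patchwork-plusplus."""
    wrapper_dir = os.path.abspath(wrapper_dir)
    if wrapper_dir not in sys.path:
        sys.path.insert(0, wrapper_dir)
    return wrapper_dir
```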
Edit configs/maicity/maicity_01.yaml so that the data_path entry points to the real dataset path. Now you are all set to run the code:
python demo/run.py configs/maicity/maicity_01.yaml
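For reference, the only entry you need to change is data_path; this sketch is not a complete config, and the directory layout shown is an assumption:

```yaml
# Hypothetical excerpt of configs/maicity/maicity_01.yaml (other keys omitted).
# Point data_path at your local copy of the dataset.
data_path: /path/to/mai_city/sequences/01
```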
To run on the KITTI dataset, check out the subscene branch:
git checkout subscene
python demo/run.py configs/kitti/kitti_00.yaml
crop_intersection.py
Some of our code is adapted from Vox-Fusion.
Any questions or suggestions are welcome!
Junyuan Deng: d.juney@sjtu.edu.cn and Xieyuanli Chen: xieyuanli.chen@nudt.edu.cn
This project is free software made available under the MIT License. For details see the LICENSE file.