LoopSplat: Loop Closure by Registering 3D Gaussian Splats

Liyuan Zhu<sup>1</sup> · Yue Li<sup>2</sup> · Erik Sandström<sup>3</sup> · Shengyu Huang<sup>3</sup> · Konrad Schindler<sup>3</sup> · Iro Armeni<sup>1</sup>

International Conference on 3D Vision (3DV) 2025

<sup>1</sup>Stanford University · <sup>2</sup>University of Amsterdam · <sup>3</sup>ETH Zurich

[![arXiv](https://img.shields.io/badge/arXiv-2408.10154-blue?logo=arxiv&color=%23B31B1B)](https://arxiv.org/abs/2408.10154) [![ProjectPage](https://img.shields.io/badge/Project_Page-LoopSplat-blue)](https://loopsplat.github.io/) [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)

## 📃 Description

**LoopSplat** is a coupled RGB-D SLAM system that uses Gaussian splats as a unified scene representation for tracking, mapping, and maintaining global consistency. The front-end continuously estimates the camera pose while reconstructing the scene as Gaussian splat submaps. Once the camera has traversed beyond a predefined threshold, the current submap is finalized and a new one is initiated. Concurrently, the back-end loop closure module monitors for revisited locations. Upon detecting a loop, the system builds a pose graph with loop edge constraints derived from our proposed 3DGS registration, and then runs pose graph optimization (PGO) to refine both camera poses and submaps, ensuring overall spatial coherence.

# 🛠️ Setup

The code has been tested on:

- Ubuntu 22.04 LTS, Python 3.10.14, CUDA 12.2, GeForce RTX 4090 / RTX 3090
- CentOS Linux 7, Python 3.12.1, CUDA 12.4, A100 / A6000

## 📦 Repository

Clone the repo with `--recursive` because we have submodules:

```
git clone --recursive git@github.com:GradientSpaces/LoopSplat.git
cd LoopSplat
```

## 💻 Installation

Make sure that the gcc and g++ paths on your system are exported:

```
export CC=
export CXX=
```

To find the gcc and g++ paths on your machine you can use `which gcc` and `which g++`. Then set up the environment from the provided conda environment file:

```
conda create -n loop_splat -c nvidia/label/cuda-12.1.0 cuda=12.1 cuda-toolkit=12.1 cuda-nvcc=12.1
conda env update --file environment.yml --prune
conda activate loop_splat
pip install -r requirements.txt
```

You will also need to install hloc for loop detection and 3DGS registration:

```
cd thirdparty/Hierarchical-Localization
python -m pip install -e .
cd ../..
```

We tested our code on RTX 4090 and RTX A6000 GPUs, on Ubuntu 22.04 and CentOS 7.5 respectively.
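To quickly verify that the environment was set up correctly, a check along the following lines can help. It is only a sketch and assumes that a CUDA-enabled PyTorch build is installed via `requirements.txt`:

```python
# Minimal environment sanity check (assumes PyTorch is installed via requirements.txt).
import torch

print("torch:", torch.__version__)
print("built for CUDA:", torch.version.cuda)
print("GPU visible:", torch.cuda.is_available())
```

If the last line prints `False`, mismatched CUDA toolkit and driver versions are the usual suspects.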
## 🚀 Usage

Here we elaborate on how to load the necessary data, configure LoopSplat for your use-case, debug it, and reproduce the results reported in the paper.

### Downloading the Datasets

We tested our code on the Replica, TUM_RGBD, ScanNet, and ScanNet++ datasets. We also provide scripts for downloading Replica and TUM_RGBD in the `scripts` folder. Install git lfs before using the scripts by running `git lfs install`.

For reconstruction evaluation on Replica, we follow the [Co-SLAM](https://github.com/JingwenWang95/neural_slam_eval?tab=readme-ov-file#datasets) mesh culling protocol; please use their code to process the mesh first.

For downloading ScanNet, follow the procedure described here. Pay attention: some frames in ScanNet have `inf` poses, which we filter out using the Jupyter notebook `scripts/scannet_preprocess.ipynb`. Please change the path to your ScanNet data and run the cells (a sketch of the filtering idea follows below).

For downloading ScanNet++, follow the procedure described here.
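For reference, the core of that `inf`-pose filtering step looks roughly like the following. This is a simplified sketch rather than the notebook itself; it assumes poses are exported as one 4×4 matrix per frame under a `pose/` folder (the layout produced by the common ScanNet export scripts), so adapt the paths to your data:

```python
# Sketch: keep only ScanNet frames whose camera pose is finite (no inf/nan entries).
# Assumes <scene_dir>/pose/<frame_id>.txt files, each holding a 4x4 camera-to-world matrix.
from pathlib import Path
import numpy as np

def valid_frame_ids(scene_dir: str) -> list[int]:
    valid = []
    for pose_file in sorted(Path(scene_dir, "pose").glob("*.txt")):
        pose = np.loadtxt(pose_file)      # 4x4 pose matrix
        if np.all(np.isfinite(pose)):     # drop frames with inf/nan poses
            valid.append(int(pose_file.stem))
    return valid

# Example (hypothetical path):
# print(len(valid_frame_ids("/data/scannet/scans/scene0000_00")))
```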
The config files are named after the sequences that we used for our method.

### Running the code

Start the system with the command:

```
python run_slam.py configs/<dataset_name>/<config_name> --input_path <path_to_the_scene> --output_path <output_path>
```

You can also configure the input and output paths in the config yaml file.

### Reproducing Results

You can reproduce the results for a single scene by running:

```
python run_slam.py configs/<dataset_name>/<config_name> --input_path <path_to_the_scene> --output_path <output_path>
```

If you are running on a SLURM cluster, you can reproduce the results for all scenes in a dataset by running the script:

```
./scripts/reproduce_sbatch.sh
```

Please note that evaluating the `depth_L1` metric requires reconstructing the mesh, which in turn requires a headless installation of open3d if you are running on a cluster.
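If you are not on a SLURM cluster, a small driver along the following lines can run all scenes of a dataset sequentially. This is only a sketch: the `configs/Replica` layout and the `output/` directory are assumptions to adapt, and it presumes that `input_path` is already set inside each config file:

```python
# Sketch: reproduce every scene of one dataset sequentially, without SLURM.
# Assumes configs/<dataset_name>/<scene>.yaml and input_path set inside each config.
import subprocess
from pathlib import Path

dataset = "Replica"                        # hypothetical dataset folder under configs/
out_root = Path("output") / dataset        # hypothetical output location

for cfg in sorted(Path("configs", dataset).glob("*.yaml")):
    out_dir = out_root / cfg.stem
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["python", "run_slam.py", str(cfg), "--output_path", str(out_dir)],
        check=True,                        # abort if a scene fails
    )
```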
## 📧 Contact

If you have any questions regarding this project, please contact Liyuan Zhu (liyzhu@stanford.edu). If you want to use our intermediate results for qualitative comparisons, please reach out to the same email.

# ✏️ Acknowledgement

Our implementation is heavily based on Gaussian-SLAM and MonoGS. We thank the authors for their open-source contributions. If you use code that builds on their contributions, please cite them as well. We thank [Jianhao Zheng](https://jianhao-zheng.github.io/) for the help with datasets and [Yue Pan](https://github.com/YuePanEdward) for the fruitful discussion.

# 🎓 Citation

If you find our paper and code useful, please cite us:

```bib
@misc{zhu2024_loopsplat,
      title={LoopSplat: Loop Closure by Registering 3D Gaussian Splats},
      author={Liyuan Zhu and Yue Li and Erik Sandström and Shengyu Huang and Konrad Schindler and Iro Armeni},
      year={2024},
      eprint={2408.10154},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```