neufieldrobotics / MultiCamSLAM

MIT License

MultiCam Visual Odometry

Design and Evaluation of a Generic Visual SLAM Framework for Multi Camera Systems

Version 0.2, April 10th, 2024

Authors: Pushyami Kaveti et al.
IEEE Robotics and Automation Letters (RA-L), 2023

Link: Paper | BibTeX


1. Getting started

A. Prerequisites

We have tested this library on Ubuntu 16.04 and 20.04. The following external libraries are required to build the Multicam Visual Odometry package; build/install instructions are given in the next section.

Dependencies:

[The full dependency set can also be installed in one shot by collecting the commands below into a build.sh script.]
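As a sketch, such a build.sh might gather the system packages from the apt-get command in the Boost step below (assumes Ubuntu with sudo available):

```shell
#!/usr/bin/env bash
# build.sh -- one-shot install of the system-level dependencies.
# Package names are taken from the Boost step in the build instructions.
set -e

sudo apt-get update
sudo apt-get install -y cmake build-essential libboost-all-dev \
    libgoogle-perftools-dev google-perftools \
    libatlas-base-dev libsuitesparse-dev libyaml-cpp-dev
```

ROS, OpenCV, GTSAM, OpenGV, DBoW2, and DLib still need the per-library steps below, since they are built from source or installed via their own instructions.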

B. Build Instructions

1. ROS

Instructions to install ROS can be found in the links below:

2. Clone the repo

3. OpenCV

4. Boost


    sudo apt-get install cmake build-essential libboost-all-dev libgoogle-perftools-dev google-perftools libatlas-base-dev libsuitesparse-dev libyaml-cpp-dev

5. Eigen3

6. GTSAM 4


    cd ~/catkin_ws/ThirdParty  
    wget https://github.com/borglab/gtsam/archive/refs/tags/4.1.1.zip 
    unzip 4.1.1.zip && rm 4.1.1.zip
    mv gtsam-4.1.1 gtsam

    cd gtsam
    mkdir build && cd build
    cmake .. -DCMAKE_INSTALL_PREFIX=../install
    make check
    make install
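Because GTSAM is installed to a local prefix above rather than system-wide, the catkin workspace has to be pointed at it explicitly. One way to do this (assuming CMake's standard `GTSAM_DIR` mechanism; the path below follows the layout created in the commands above) is:

```shell
# Tell catkin/CMake where to find the locally installed GTSAMConfig.cmake
cd ~/catkin_ws
catkin_make -DGTSAM_DIR=$HOME/catkin_ws/ThirdParty/gtsam/install/lib/cmake/GTSAM
```

If GTSAM is instead installed system-wide (e.g. to /usr/local), this flag is unnecessary.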

7. OpenGV

8. DBoW2

9. DLib

C. Compile the ROS package

    catkin_make
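Assuming the repository was cloned into ~/catkin_ws/src (the workspace path used in the GTSAM step above), the full compile-and-source sequence would look like:

```shell
cd ~/catkin_ws
catkin_make
# Make the newly built package visible to rosrun/roslaunch in this shell
source devel/setup.bash
```

Sourcing devel/setup.bash must be repeated in every new terminal (or added to ~/.bashrc).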

2. Running an example

a. Download the dataset

b. Setup the config files

c. Run

In Terminal 1:

In Terminal 2, edit the command below based on the path to your cfg file:
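As a sketch of the typical two-terminal ROS workflow (the package, node, and cfg names below are hypothetical placeholders; check the repository's launch files for the actual commands):

```shell
# Terminal 1: start the ROS master
roscore

# Terminal 2: run the VO node, pointing it at your config file
# (package/executable names here are illustrative only)
rosrun multicam_slam slam_node --config_file /path/to/your/config.cfg
```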

3. Additional Details from the paper

a. Setup


The custom-built multi-camera rig used to collect data for evaluating the SLAM pipeline.

b. Qualitative Results

i. Curry Center Dataset


Estimated trajectories of the Curry Center sequence, with outdoor data and dynamic content. Stars indicate the final positions of the trajectory estimates. Accuracy and robustness improve as the number of cameras in the OV configurations increases, as shown by the accumulated drift in final position. Red and blue boxes highlight tracking failures caused by occluding dynamic objects. The N-OV configuration exhibits scale issues compared to the OV configuration but is robust to dynamic content.

ii. ISEC_Ground1 Dataset


Estimated trajectories of the ISEC_Ground1 sequence. Here, the robot’s start and end positions are the same, facilitating performance evaluation. We achieve comparable results to ORBSLAM3 and SVO in stereo setup and demonstrate improved accuracy with increasing overlapping cameras.

iii. ISEC_Lab1 Dataset


Estimated trajectories of the ISEC_Lab1 sequence. Here, the ground truth is shown as a dashed line. We achieve comparable results to ORBSLAM3 and SVO in stereo setup and demonstrate improved accuracy with increasing overlapping cameras.

Citation

If you use this work in an academic context, please cite the following publication:

    @ARTICLE{10253964,
      author={Kaveti, Pushyami and Vaidyanathan, Shankara Narayanan and Chelvan, Arvind Thamil and Singh, Hanumant},
      journal={IEEE Robotics and Automation Letters},
      title={Design and Evaluation of a Generic Visual SLAM Framework for Multi Camera Systems},
      year={2023},
      volume={8},
      number={11},
      pages={7368-7375},
      doi={10.1109/LRA.2023.3316609}}