lturing / ORB_SLAM3_modified

ORB-SLAM3 adapted to Android phones, running monocular-inertial SLAM

Environment

The setup is a Xiaomi 9 SE with a monocular camera and IMU, recording images at 30 fps and IMU data at 300 Hz. The Android app provided by this repository captures the images and IMU measurements; the data is then converted to the ORB-SLAM3 format and run on Ubuntu (the workflow is illustrated by the figure in the original README).

On top of ORB-SLAM3, an AR module has been added that supports custom 3D objects. Using the keyframes and poses saved by ORB-SLAM3, together with a monocular depth estimation model and an image segmentation model, 3D semantic reconstruction is performed. See also the notes linked in the original README on ORB-SLAM3's tracking thread, local mapping thread, and loop-closing thread.

demo

update

# install pangolin (version: 0.6)
wget https://github.com/stevenlovegrove/Pangolin/archive/refs/tags/v0.6.zip
unzip v0.6.zip && cd Pangolin-0.6
mkdir build 
cd build
cmake ..
make -j4
sudo make install 

# install boost (version: 1.71)
wget https://boostorg.jfrog.io/artifactory/main/release/1.71.0/source/boost_1_71_0.tar.gz
tar -zxf boost_1_71_0.tar.gz
cd boost_1_71_0 
chmod +x ./bootstrap.sh && ./bootstrap.sh
./b2 && sudo ./b2 install

sudo ldconfig

Building the Android app (captures images, acceleration, and angular velocity on the phone)

# Install the JDK and add it to your environment variables
# https://www.oracle.com/java/technologies/downloads/#java11

# Install Gradle and add it to your environment variables
# https://gradle.org/releases/

# Install sdkmanager and add it to your environment variables
# https://developer.android.com/studio/command-line/sdkmanager

# Test the sdkmanager command
sdkmanager --list --sdk_root=/path/to/sdkmanager_root # list all available packages
sdkmanager "platforms;android-28" # install the Android SDK platform android-28
sdkmanager "ndk;21.0.6113669" # install NDK version 21.0.6113669

# Build the APK
cd $HOME
git clone https://github.com/lturing/ORB_SLAM3_modified.git
cd ORB_SLAM3_modified/android_app
./gradlew assembleDebug # assemble the debug APK

# The built APK ends up in app/build/outputs/apk/debug

IMU noise parameter estimation

Install the APK provided by this repository, place the phone on a desk and keep it completely still, open the app, and record more than 3 hours of data. The recording is stored on the phone under /Android/data/se.lth.math.videoimucapture/files.

Processing the data

cd ~/ORB_SLAM3_modified
# Install dependencies
unzip protoc-21.5-linux-x86_64.zip -d protoc-21.5
mkdir proto_python 
protoc-21.5/bin/protoc --python_out=proto_python android_app/protobuf/recording.proto
cp proto_python/protobuf/recording_pb2.py script 
pip3 install protobuf pyquaternion matplotlib numpy opencv_python py3rosmsgs PyYAML scipy   

# Copy the recorded data (imu_data) to the Ubuntu machine
data_dir=/path/to/imu_data

# Print statistics of the recording
python script/data2statistics.py $data_dir/video_meta.pb3

# Convert the data to a rosbag; generates kalibr.bag, target.yaml, and imu.yaml
python script/data2kalibr.py $data_dir --subsample 10 --save_image false

Estimating the parameters

# Install kalibr
cd ~
git clone https://github.com/ethz-asl/kalibr.git
cd kalibr
docker build -t kalibr -f Dockerfile_ros1_20_04 . # change this to whatever ubuntu version you want

FOLDER=$HOME # path to your data on host
xhost +local:root
docker run -it -e "DISPLAY" -e "QT_X11_NO_MITSHM=1" \
    -v "/tmp/.X11-unix:/tmp/.X11-unix:rw" \
    -v "$FOLDER:/data" kalibr

# Run inside the container
source devel/setup.bash

# Exit the container, then run the following in a terminal on Ubuntu
cd $HOME
# Install allan_variance_ros to estimate the IMU noise parameters
git clone https://github.com/ori-drs/allan_variance_ros.git

# Find the id of the docker container created in the previous step
docker ps -a 
docker start <container_id>
docker attach <container_id>
source devel/setup.bash

cd /data/allan_variance_ros
catkin build allan_variance_ros
source devel/setup.bash # source the allan_variance_ros workspace

# directory containing the imu_data recording
cd /data/path/to/imu_data 

# Run the estimation; the results are saved in /data/path/to/imu_data/allan_variance_ros
# First, in one terminal, run
roscore
# In another terminal, run
rosrun allan_variance_ros allan_variance /data/path/to/imu_data/kalibr.bag /data/path/to/imu_data_config_file
# for example
rosrun allan_variance_ros allan_variance /data/ros2_ws/imu_calib/rosbag2_2024_05_24-16_47_23 /data/code/ros1_ws/src/allan_variance_ros/config/orbbec.yaml 

# Generates imu.yaml with NoiseGyro, GyroWalk, NoiseAcc, AccWalk (continuous-time)
rosrun allan_variance_ros analysis.py --data /data/path/to/imu_data/allan_variance_ros/allan_variance.csv

In our experience, for the lowest-cost sensors the noise model parameters may need to be inflated by a factor of 10 or more (see the allan_variance_ros documentation).
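For intuition only, here is a rough Python sketch of what allan_variance_ros estimates: the gyroscope white-noise density read from an Allan-deviation curve at tau = 1 s, plus the 10x inflation mentioned above. The synthetic white-noise signal and the tau grid are illustrative assumptions, not this repository's code.

import numpy as np

def allan_deviation(rate, fs, taus):
    # overlapping Allan deviation of rate samples, computed from the integrated signal
    theta = np.cumsum(rate) / fs
    n = len(theta)
    adev = []
    for tau in taus:
        m = max(1, int(round(tau * fs)))
        tau_m = m / fs
        d = theta[2 * m:] - 2 * theta[m:n - m] + theta[:n - 2 * m]
        adev.append(np.sqrt(np.sum(d ** 2) / (2 * tau_m ** 2 * (n - 2 * m))))
    return np.array(adev)

# synthetic 1-hour gyro trace: pure white noise with density 1e-3 rad/s/sqrt(Hz) at 300 Hz
fs = 300.0
gyro = 1e-3 * np.sqrt(fs) * np.random.default_rng(0).standard_normal(int(fs * 3600))
taus = np.logspace(-1, 2, 40)
sigma = allan_deviation(gyro, fs, taus)
noise_gyro = sigma[np.argmin(np.abs(taus - 1.0))]   # white-noise density ~= value at tau = 1 s
print(noise_gyro, 10.0 * noise_gyro)                # recovered density and the 10x-inflated value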

Camera intrinsics and camera-IMU calibration

# Find the id of the docker container created in the previous step
docker ps -a 
docker start <container_id>
docker attach <container_id>
source devel/setup.bash

# Generate an Aprilgrid target for calibrating the camera intrinsics and camera-IMU extrinsics

apt install python3
ln -sf /usr/bin/python3 /usr/bin/python
pip3 install pyx 
apt install texlive-binaries

# kalibr_create_target_pdf --type apriltag --nx [NUM_COLS] --ny [NUM_ROWS] --tsize [TAG_WIDTH_M] --tspace [TAG_SPACING_PERCENT]

# Almost exactly the size of a 13-inch Mac screen
rosrun kalibr kalibr_create_target_pdf --type apriltag --nx 6 --ny 4 --tsize 0.035 --tspace 0.2
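As a rough check of the printed size (assuming kalibr's convention that tspace is the ratio of tag spacing to tag size, and ignoring the PDF border):

# approximate extent of the Aprilgrid generated above
nx, ny, tsize, tspace = 6, 4, 0.035, 0.2
width = tsize * (nx + (nx - 1) * tspace)    # ~0.245 m
height = tsize * (ny + (ny - 1) * tspace)   # ~0.161 m
print(width, height)                        # roughly the area of a 13-inch laptop screen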

Open target.pdf and display it at actual size (100% scale). Open the APK provided by this repository and, following the video tutorial, record video and IMU data; the recording is stored on the phone under /Android/data/se.lth.math.videoimucapture/files. Copy the data to the Ubuntu machine.

# Directory where the recording is stored
data=/path/to/data
# Print statistics to check the rotation angles and motion amplitude
cd ~/ORB_SLAM3_modified
python script/data2statistics.py $data/video_meta.pb3 

# Convert to a rosbag so kalibr can process it
python script/data2kalibr.py $data --subsample 10 --save_image true

# Find the id of the docker container created in the previous step
docker ps -a 
docker start <container_id>
docker attach <container_id>
source devel/setup.bash

# Camera intrinsic calibration; generates kalibr-camchain.yaml
cd $data_dir/kalibr
rosrun kalibr kalibr_calibrate_cameras --bag kalibr.bag --target target.yaml --models pinhole-radtan --topics /cam0/image_raw

# Camera-to-IMU extrinsic calibration; generates kalibr-camchain-imucam.yaml
rosrun kalibr kalibr_calibrate_imu_camera --target target.yaml --imu imu.yaml --cams kalibr-camchain.yaml --bag kalibr.bag

Replace the corresponding parameters in Examples/Monocular-Inertial/EuRoC.yaml with the values from kalibr-camchain-imucam.yaml and save the result as mi_aprilgrid.yaml.
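A minimal sketch of that substitution, assuming the usual kalibr output keys (cam0, intrinsics, distortion_coeffs, T_cam_imu) and the older ORB-SLAM3 key names (Camera.fx ... Tbc); newer ORB-SLAM3 settings files use Camera1.* and IMU.T_b_c1 instead. Note that Tbc is the camera-to-body transform, i.e. the inverse of kalibr's T_cam_imu.

import numpy as np
import yaml

with open("kalibr-camchain-imucam.yaml") as f:
    cam0 = yaml.safe_load(f)["cam0"]

fx, fy, cx, cy = cam0["intrinsics"]           # pinhole intrinsics
k1, k2, p1, p2 = cam0["distortion_coeffs"]    # radtan distortion
T_cam_imu = np.array(cam0["T_cam_imu"])       # maps points from the IMU frame to the camera frame

T_b_c = np.linalg.inv(T_cam_imu)              # ORB-SLAM3's Tbc: camera frame -> body (IMU) frame

print(f"Camera.fx: {fx}\nCamera.fy: {fy}\nCamera.cx: {cx}\nCamera.cy: {cy}")
print(f"Camera.k1: {k1}\nCamera.k2: {k2}\nCamera.p1: {p1}\nCamera.p2: {p2}")
print("Tbc:\n", np.array2string(T_b_c, precision=6))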

Recording video and IMU data

Open the app and record video and IMU data. To properly excite the IMU, start the recording with some deliberate acceleration and tilting of the phone (see the link in the original README). Upload the recording to the Ubuntu machine and convert it into the format ORB-SLAM3 expects.


cd ~/ORB_SLAM3_modified 
# first time only
chmod +x ./build.sh 
./build.sh 

data=/path/to/data
python script/data2orbslam.py --data_dir $data

# Run ORB-SLAM3 (monocular)
./Examples/Monocular/mono_mi ./Vocabulary/ORBvoc.txt ./Examples/Monocular-Inertial/mi_aprilgrid.yaml $data/data/

# Run ORB-SLAM3 (monocular-inertial)
./Examples/Monocular-Inertial/mono_inertial_mi ./Vocabulary/ORBvoc.txt ./Examples/Monocular-Inertial/mi_aprilgrid.yaml $data/data/

3D semantic reconstruction

Using the keyframes and their poses saved by ORB-SLAM3, together with the monocular depth estimation model ZoeDepth and the image segmentation model OneFormer, perform monocular 3D semantic reconstruction (links to the monocular depth estimation and image segmentation code are in the original README).
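The core of the reconstruction step is back-projecting each keyframe's predicted depth map into the world frame with the saved pose and attaching a per-pixel semantic label. A minimal sketch with illustrative names and shapes (not this repository's code):

import numpy as np

def backproject(depth, labels, K, T_wc):
    # depth: HxW metres, labels: HxW class ids, K: 3x3 intrinsics, T_wc: 4x4 camera-to-world pose
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.reshape(-1)
    x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
    y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
    p_c = np.stack([x, y, z, np.ones_like(z)], axis=0)   # 4xN points in the camera frame
    p_w = (T_wc @ p_c)[:3].T                             # Nx3 points in the world frame
    return p_w, labels.reshape(-1)                       # labelled point cloud of this keyframe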

# Prerequisites
# Install cmake-gui
sudo apt install cmake-gui

# Install Qt5 (version 5.12 or newer)
# https://download.qt.io/archive/qt/5.13/5.13.2/
chmod +x ./qt-opensource-linux-x64-5.13.2.run 
sudo ./qt-opensource-linux-x64-5.13.2.run
# Add /path/to/Qt5.13.2/5.13.2/gcc_64/bin and /path/to/Qt5.13.2/Tools/QtCreator/bin to your environment variables

# Install VTK
# https://www.vtk.org/files/release/9.2/VTK-9.2.6.tar.gz

# Extract VTK and open cmake-gui. Set the source directory to VTK-9.2.6 and the build directory to a folder inside VTK-9.2.6, check Grouped and Advanced, then click Configure. Search for "qt", set all Qt options to yes, then Configure and Generate again.
# In a terminal, cd into the build directory
make -j4 
sudo make install 

# Install PCL
# https://github.com/PointCloudLibrary/pcl/releases/download/pcl-1.13.0/source.zip
# Extract and enter the pcl-1.13.0 directory
mkdir build
cd build
cmake .. 
sudo make install

# PCL is compiled with C++14, so ORB-SLAM3's CMakeLists.txt must be switched from C++11 to C++14
sed -i 's/++11/++14/g' CMakeLists.txt

# Process the keyframes saved by ORB-SLAM3 with the monocular depth estimation and image segmentation models

# 3D reconstruction (ORB-SLAM3 poses + triangulation)
export img_dir=/path/to/img  # directory containing the images
./Examples/Monocular/mono_mi_cloudpoint ./Vocabulary/ORBvoc.txt ./Examples/Monocular/mi.yaml $img_dir ./Examples/Monocular/KeyFrameTrajectory_mi.txt

# 3D reconstruction (ORB-SLAM3 poses + monocular depth prediction)
export img_dir=/path/to/img # directory containing the images
export depth_dir=/path/to/depth_prediction_dir # directory containing the depth predictions
./Examples/Monocular/mono_depth_seg ./Examples/Monocular/mi.yaml $img_dir $depth_dir ./Examples/Monocular-Inertial/KeyFrameTrajectory_mi.txt

# 3D semantic reconstruction (ORB-SLAM3 poses + monocular depth prediction + semantic segmentation)
export img_dir=/path/to/img # directory containing the images
export depth_dir=/path/to/depth_prediction_dir # directory containing the depth predictions
export seg_dir=/path/to/semantic_prediction_dir # directory containing the segmentation predictions

./Examples/Monocular/mono_semantic ./Examples/Monocular/mi.yaml $img_dir $depth_dir $seg_dir ./image_segmentation/color_map_cityescape.txt  ./Examples/Monocular/KeyFrameTrajectory_mi.txt

AR

# Prerequisites (Pangolin is built with C++14; the following package is also needed)
git clone https://github.com/mpark/variant.git
mkdir variant/build && cd variant/build
cmake ..
cmake --build . --target install

# Running with AR enabled
./Examples/Monocular/mono_mi ./Vocabulary/ORBvoc.txt ./Examples/Monocular-Inertial/mi_8_by_aprilgrid.yaml /home/spurs/dataset/30fps/2023_02_21_14_04_08/data/
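Conceptually, the AR overlay projects virtual geometry into the image with the tracked camera pose. A minimal sketch under assumed names (T_cw, K, center_w), as an illustration of the idea rather than the repository's AR module:

import numpy as np

def project_cube(T_cw, K, center_w, size=0.1):
    # return the pixel coordinates (8x2) of a virtual cube centred at center_w (world frame)
    s = size / 2.0
    corners = np.array([[x, y, z] for x in (-s, s) for y in (-s, s) for z in (-s, s)])
    p_w = np.hstack([corners + center_w, np.ones((8, 1))]).T   # 4x8 homogeneous world points
    p_c = (T_cw @ p_w)[:3]                                     # camera-frame points (assumes z > 0)
    uv = K @ p_c                                               # pinhole projection
    return (uv[:2] / uv[2]).T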

Code changes

  1. Added RGB display; the isColor flag in the settings file selects RGB or grayscale.
  2. The exp() of SE3Quat in g2o is incorrect (a numerical check is sketched after this list). Proof: the general branch of the implementation computes

    R = (Matrix3d::Identity()
              + sin(theta)/theta *Omega
              + (1-cos(theta))/(theta*theta)*Omega2);

    V = (Matrix3d::Identity()
              + (1-cos(theta))/(theta*theta)*Omega
              + (theta-sin(theta))/(theta*theta*theta)*Omega2);

    while the small-angle branch falls back to R = I + Omega + Omega2 with V = R, which does not match the Taylor expansions below. Using

sin(\theta) \approx \theta

1 - cos(\theta) \approx \frac{\theta^2}{2}

the exact R reduces, for small \theta, to

R \approx Matrix3d::Identity() + Omega + 0.5 * Omega2

Similarly, for V:

V = Matrix3d::Identity() + \frac{1-cos(\theta)}{\theta^2} * Omega + \frac{\theta-sin(\theta)}{\theta^3} * Omega2

By the Taylor expansion

sin(\theta) \approx \theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!}

\frac{\theta-sin(\theta)}{\theta^3} \approx \frac{1.}{6}

V \approx Matrix3d::Identity() + 0.5 * Omega + \frac{1.}{6} * Omega2

For an se(3) increment $\delta$,

\delta = \begin{pmatrix} \theta \\ t \end{pmatrix}

e^\delta = \begin{pmatrix}
  R & Vt \\
  0 & 1 
  \end{pmatrix}

Hence, when the increment is very small (to first order),

V \approx Matrix3d::Identity()
R \approx Matrix3d::Identity() + Omega
  3. Corrected the Jacobian of EdgePriorAcc in G2oTypes.

  4. Corrected the Jacobian of EdgePriorGyro in G2oTypes.

  5. Added point-cloud support.

  6. Added AR.
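A quick numerical check of the expansion in item 2 (a standalone sketch, not g2o code): the Taylor forms match the exact Rodrigues/V formulas for a small rotation vector, while the I + Omega + Omega2 fallback (with V = R) does not.

import numpy as np

def hat(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

w = np.array([1e-4, -2e-4, 3e-4])        # small rotation vector, theta = |w|
theta = np.linalg.norm(w)
Omega, Omega2, I = hat(w), hat(w) @ hat(w), np.eye(3)

R_exact = I + np.sin(theta) / theta * Omega + (1 - np.cos(theta)) / theta**2 * Omega2
V_exact = I + (1 - np.cos(theta)) / theta**2 * Omega + (theta - np.sin(theta)) / theta**3 * Omega2

R_taylor = I + Omega + 0.5 * Omega2      # expansion derived above
V_taylor = I + 0.5 * Omega + Omega2 / 6.0
R_smallangle = I + Omega + Omega2        # the questioned small-angle fallback (V = R)

print(np.abs(R_exact - R_taylor).max())      # ~1e-11: matches
print(np.abs(V_exact - V_taylor).max())      # ~1e-12: matches
print(np.abs(V_exact - R_smallangle).max())  # ~1e-4: does not match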

Derivation of some Jacobians in the code

Jacobian derivation for EdgeMono, EdgeMonoOnlyPose, EdgeStereo, and EdgeStereoOnlyPose in G2oTypes

t_{wb} = t_{wb} + R_{wb} * \delta t 
R_{wb} = R_{wb} * Exp(\delta \theta) 
err = obs - est 
p_c = R_{cw} * p_w + t_{cw} 
est = K(p_c) 
K\left(\begin{pmatrix} x \\ y \\ z \end{pmatrix}\right) = \begin{pmatrix} \frac{f_x x}{z} + c_x \\ \frac{f_y y}{z} + c_y \end{pmatrix}

Since the derivatives are taken with respect to $R_{wb}$ and $t_{wb}$, the following substitutions are needed:

R_{cw} = R_{cb} * R_{wb}^T 
t_{cw} = R_{cb} * t_{bw} + t_{cb} 
t_{bw} = -R_{wb}^T * t_{wb} 

Therefore,

est = K(R_{cw} * p_w + t_{cw}) = K(R_{cb} * R_{wb}^T * p_w - R_{cb} * R_{wb}^T * t_{wb} + t_{cb})

p_c = R_{cb} * R_{wb}^T * p_w - R_{cb} * R_{wb}^T * t_{wb} + t_{cb} = R_{cb} * R_{wb}^T * (p_w - t_{wb}) + t_{cb}

Derivative of the error with respect to R:

\frac{\partial err}{\partial \delta \theta} = \frac{\partial err}{\partial p_c} \frac{\partial p_c}{\partial \delta \theta} 
\frac{\partial err}{\partial p_c} =  \begin{pmatrix}
  \frac{f_x}{z} & 0 & -\frac{f_x x}{z^2}\\ 
  0 & \frac{f_y}{z} & -\frac{f_y y}{z^2}
  \end{pmatrix} 
p_c(\delta \theta) = R_{cb} * (R_{wb} * Exp(\delta \theta))^T * (p_w - t_{wb}) + t_{cb} 
= R_{cb} * (I - \delta \theta ^ \land)* R_{wb}^T * (p_w - t_{wb}) + t_{cb} 
= p_c - R_{cb} * \delta \theta ^ \land * R_{wb}^T * (p_w - t_{wb}) 
= p_c + R_{cb} * (R_{wb}^T * (p_w - t_{wb})) ^ \land *  \delta \theta 

Hence

\frac{\partial p_c}{\partial \delta \theta} = R_{cb} * (R_{wb}^T * (p_w - t_{wb})) ^ \land = R_{cb} * p_b ^ \land

where $p_b = R_{wb}^T * (p_w - t_{wb})$ is the point expressed in the body frame.

Similarly,

p_c(\delta t) = R_{cb} * R_{wb}^T * (p_w - (t_{wb} + R_{wb} * \delta t)) + t_{cb} 
= p_c - R_{cb} * R_{wb}^T * R_{wb} * \delta t  
= p_c - R_{cb} * \delta t

Hence

\frac{\partial p_c}{\partial \delta t} = - R_{cb}

Similarly,

\frac{\partial p_c}{\partial \delta p_w} = R_{cb} * R_{wb}^T
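A finite-difference check of the rotation and translation Jacobians derived above (illustrative code, not from the repository): perturb the pose with the same right-multiplied increments and compare against the analytic expressions.

import numpy as np
from scipy.spatial.transform import Rotation

def hat(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

rng = np.random.default_rng(0)
R_wb = Rotation.random(random_state=0).as_matrix()
R_cb = Rotation.random(random_state=1).as_matrix()
t_wb, t_cb, p_w = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

def p_c(dtheta, dt):
    # apply the updates R_wb <- R_wb * Exp(dtheta), t_wb <- t_wb + R_wb * dt
    R = R_wb @ Rotation.from_rotvec(dtheta).as_matrix()
    t = t_wb + R_wb @ dt
    return R_cb @ R.T @ (p_w - t) + t_cb

J_theta = R_cb @ hat(R_wb.T @ (p_w - t_wb))   # analytic d p_c / d(delta theta)
J_t = -R_cb                                   # analytic d p_c / d(delta t)

eps, zero = 1e-6, np.zeros(3)
num_theta = np.column_stack([(p_c(eps * e, zero) - p_c(zero, zero)) / eps for e in np.eye(3)])
num_t = np.column_stack([(p_c(zero, eps * e) - p_c(zero, zero)) / eps for e in np.eye(3)])
print(np.abs(J_theta - num_theta).max(), np.abs(J_t - num_t).max())   # both ~1e-6 or smaller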

References


Official README (collapsed in the original): the upstream ORB-SLAM3 V1.0 README by Carlos Campos, Richard Elvira, Juan J. Gómez Rodríguez, José M. M. Montiel and Juan D. Tardós is reproduced verbatim in the repository. It covers the related publications, the GPLv3 license, prerequisites (C++11, Pangolin, OpenCV, Eigen3, DBoW2 and g2o, Python, optional ROS), building the library, running on EuRoC and TUM-VI, the ROS nodes, running-time analysis, and the calibration tutorial. See https://github.com/UZ-SLAMLab/ORB_SLAM3 for the original.