ObVi-SLAM is a joint object-visual SLAM approach aimed at long-term multi-session robot deployments.
Update: Published in IEEE RA-L in February 2024!
[Paper with added appendix] [Video]
ROS implementation coming spring 2024.
Please email amanda.adkins4242@gmail.com with any questions!
For information on how to set up and run the comparison algorithms, see our evaluation repo.
This is a quick-start guide for running ObVi-SLAM from ROS bagfiles inside the Docker container. First, refer to this page for Docker setup.
By default, we assume the following data root directory structure:
root_data_dir
|- calibration (stores calibration files)
|-- base_link_to_os1.txt
|-- base_link_to_zed_left_camera_optical_frame.txt
|-- base_link_to_zed_right_camera_optical_frame.txt
|-- camera_matrix.txt
|-- extrinsics.txt
|-- lego_loam_bl.txt
|-- ov_slam_bl.txt
|-- odom_bl.txt
|- original_data (stores raw bagfiles)
|-- <bagname1.bag>
|-- <bagname2.bag>
|-- <...>
|- orb_out (stores feature and pose initialization from ORB-SLAM2 Frontend)
|- orb_post_process (processed ORB-SLAM2 Frontend results)
|- ut_vslam_results (ObVi-SLAM results)
|- lego_loam_out (Optional; stores pose estimates from LeGO-LOAM)
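If you are starting from scratch, this layout can be created in one step (a minimal sketch; `~/obvi_slam_data` is a hypothetical location for your root data directory):

```bash
# Hypothetical root location; substitute your own path.
root_data_dir=~/obvi_slam_data
mkdir -p "$root_data_dir"/{calibration,original_data,orb_out,orb_post_process,ut_vslam_results,lego_loam_out}
```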
To start, we require the calibration files listed above to be present in the "calibration" directory under the root data directory.
If your files are named differently, you can either symlink them to the expected names or change the calibration filenames inside the evaluation scripts later.
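For example, to map a differently named file to an expected name (hypothetical filename `my_camera_extrinsics.txt`):

```bash
# Symlink a differently named calibration file to the expected name,
# leaving the original file untouched.
ln -s "$root_data_dir"/calibration/my_camera_extrinsics.txt \
      "$root_data_dir"/calibration/extrinsics.txt
```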
Open two terminals and run one of the following commands in each to decompress the image topics:
rosrun image_transport republish compressed in:=/zed/zed_node/left/image_rect_color raw out:=/camera/left/image_raw
rosrun image_transport republish compressed in:=/zed/zed_node/right/image_rect_color raw out:=/camera/right/image_raw
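To verify the republishers are producing data (assuming a bag is currently playing), you can check an output topic in another terminal:

```bash
# Should report a nonzero rate once the bag is playing;
# repeat with /camera/right/image_raw for the right camera.
rostopic hz /camera/left/image_raw
```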
In a third terminal, run:
bash ./convenience_scripts/docker/high_res_orbslam2_multibags_executor.sh
This script preprocesses all the bagfiles in the sequence files specified by `sequence_file_base_names`. It reads camera calibrations from the file indicated by `orb_slam_configuration_file`; you may want to change it to point to your own camera calibration file. To run on your own bagfiles, create your own sequence file and replace `sequence_file_base_names` with the one you created.
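As a hypothetical example, if you created `sequences/my_robot_tour.json` following the format of the provided sequence files, the variable in the script might be updated to something like:

```bash
# In high_res_orbslam2_multibags_executor.sh (hypothetical sequence name);
# base names are given without the ".json" suffix.
sequence_file_base_names=("my_robot_tour")
```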
We used YOLOv5 for object detection. To run the object detector, in a new terminal run:
cd <path to yolov5 project root directory>
python3 detect_ros.py --weights <path to model weight> --img <image width in pixels> --conf <detection confidence threshold>
This script starts a ROS service named `yolov5_detect_objs` that allows users to query bounding boxes for image observations.
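Once the detector is up, you can confirm the service is advertised:

```bash
# Lists the detection service started by detect_ros.py.
rosservice list | grep yolov5_detect_objs
```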
Run:
bash ./convenience_scripts/docker/high_res_ut_vslam_sequence_executor.sh
In the script, `sequence_file_base_name` is the filename of the sequence file (without the ".json" suffix), and `config_file_base_name` is the filename of the configuration file (without the ".yaml" suffix). You can change them to match your sequence file and configuration file setup.
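For example, with hypothetical files `sequences/my_robot_tour.json` and `config/my_config.yaml`, the variables would be set as:

```bash
# In high_res_ut_vslam_sequence_executor.sh (hypothetical file names):
sequence_file_base_name="my_robot_tour"
config_file_base_name="my_config"
```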
We rely on ORB-SLAM2 to extract visual features from image observations and perform motion tracking, YOLO to detect objects, and amrl_msgs to publish and subscribe to SLAM-related ROS messages. We provide our own customized versions of these libraries to facilitate their use: ORB-SLAM2, YOLO, and amrl_msgs. To install them inside the container:
# compile ORB-SLAM2
git clone https://github.com/ut-amrl/ORB_SLAM2 ORB_SLAM2
cd ORB_SLAM2
git checkout writeTimestamps
chmod +x build.sh
./build.sh
# install amrl_msgs
cd ..  # back to the workspace root (assumes one continuous shell session)
git clone https://github.com/ut-amrl/amrl_msgs.git
cd amrl_msgs
git checkout orbSlamSwitchTraj
echo "export ROS_PACKAGE_PATH="`pwd`":$ROS_PACKAGE_PATH" >> ~/.bashrc
source ~/.bashrc
make
# install yolo
cd ..  # back to the workspace root
git clone https://github.com/ut-amrl/yolov5
cd yolov5
git checkout ROS
If you want to install them outside the container, refer to their README pages for further instructions. Our YOLO package depends on amrl_msgs, so you will need to install amrl_msgs first to use our YOLO detector with ROS.
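After installing amrl_msgs and re-sourcing your shell, a quick way to confirm ROS can locate the package:

```bash
# Prints the package path if ROS_PACKAGE_PATH is set correctly.
rospack find amrl_msgs
```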