introlab / rtabmap_ros

RTAB-Map's ROS package.
http://wiki.ros.org/rtabmap_ros
BSD 3-Clause "New" or "Revised" License

robot localization global param fusing visual odometry and 2D LiDAR #927

Open bojan-muse opened 1 year ago

bojan-muse commented 1 year ago

Hi

I have wheel odometry: I am producing the odom TF and publishing an odom topic based on the encoders. Now, I also have visual odometry in place (ORB-SLAM) and can produce a PoseWithCovariance message whose covariance matrix consists only of 0s and 1s. The topic is /robot/data/vslam_localization/pose (output pose from ORB_SLAM). The robot navigates while doing online SLAM (not with a predefined map). The current VSLAM-based localization outputs the pose relative to the initial point, and by SLAM's nature it accumulates error after long travel. So I was thinking of using the 2D LiDAR, fused with the visual odometry in the global frame, to correct my robot pose (localization). I also have a 9-axis IMU unit.

So I have the following topics that can help with the localization:

  1. /robot/data/vehicle_state : Current speed and yaw rate calculated from the wheel speeds.
  2. /robot/data/twist : Twist data created from the vehicle speed and IMU data.
  3. /robot/data/vslam_localization/pose : Output pose from ORB_SLAM.
  4. /camera/left/image_raw and /camera/right/image_raw : Images from the camera.
  5. /scan : YDLidar scan.
  6. /imu/sensor_msgs/Imu : IMU data.

And when using the EKF:

/robot/localization/ekf_base/set_pose : external command to reset the pose of the EKF for base_link
/robot/localization/ekf_odom/set_pose : external command to reset the pose of the EKF for odom
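For reference, robot_localization's set_pose topics take a geometry_msgs/PoseWithCovarianceStamped. A minimal sketch of resetting the base_link EKF to the odom origin, assuming the topic names above (the node name ekf_pose_reset is just illustrative):

```python
#!/usr/bin/env python
# Minimal sketch: reset the base_link EKF of robot_localization to the origin.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

rospy.init_node('ekf_pose_reset')
pub = rospy.Publisher('/robot/localization/ekf_base/set_pose',
                      PoseWithCovarianceStamped, queue_size=1, latch=True)

msg = PoseWithCovarianceStamped()
msg.header.stamp = rospy.Time.now()
msg.header.frame_id = 'odom'        # frame the reset pose is expressed in
msg.pose.pose.orientation.w = 1.0   # identity orientation, zero translation
pub.publish(msg)
rospy.sleep(1.0)  # give the latched message time to reach the EKF
```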

And the following TF frames:

map : map frame
odom : odom frame
base_link : vehicle pose (center of the driving wheel axis)
slam_base : the frame in which ORB_SLAM is working

So, can I use rtabmap_ros to correct the robot localization and obtain an accurate global pose?

matlabbe commented 1 year ago

> Now, I have a visual odometry in place (orb slam) and I can produce a PoseWithCovariance type message where covariance matrix consists of 0 and 1s.

A visual odometry covariance with only 1s on the diagonal will produce very bad graph optimization. For convenience, you can override those covariances with the odom_tf_linear_variance and odom_tf_angular_variance parameters of the rtabmap node.
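As an illustration of what that override amounts to, here is a minimal sketch that republishes the ORB_SLAM pose as a nav_msgs/Odometry with realistic variances on the diagonal instead of 1s. The input type (PoseWithCovarianceStamped), the output topic /vslam/odom_fixed, and the variance values are assumptions for the example, not tuned numbers:

```python
#!/usr/bin/env python
# Sketch: republish the ORB_SLAM pose with hand-set variances instead of 1s.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped
from nav_msgs.msg import Odometry

LIN_VAR = 0.001    # m^2, illustrative, plays the role of odom_tf_linear_variance
ANG_VAR = 0.0005   # rad^2, illustrative, plays the role of odom_tf_angular_variance

def callback(msg):
    odom = Odometry()
    odom.header = msg.header
    odom.child_frame_id = 'base_link'
    odom.pose.pose = msg.pose.pose
    cov = [0.0] * 36
    for i in range(3):
        cov[i * 6 + i] = LIN_VAR              # x, y, z
        cov[(i + 3) * 6 + (i + 3)] = ANG_VAR  # roll, pitch, yaw
    odom.pose.covariance = cov
    pub.publish(odom)

rospy.init_node('vslam_covariance_fix')
pub = rospy.Publisher('/vslam/odom_fixed', Odometry, queue_size=10)  # hypothetical name
rospy.Subscriber('/robot/data/vslam_localization/pose',
                 PoseWithCovarianceStamped, callback)
rospy.spin()
```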

If you don't disable loop closure detection in ORB-SLAM, you cannot use it with rtabmap. You should make ORB-SLAM work like a visual odometry node only.

When combining multiple inputs with robot_localization, make sure that no other nodes are publishing odom -> base_link; only the combined odometry from robot_localization should be published on odom -> base_link. You can then feed the output odometry topic of robot_localization to the rtabmap node, which will provide the map -> odom TF.
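Once everything is wired up, a quick sketch to check that the expected chain resolves (frame names are the ones from this thread; the two-second warm-up and node name are arbitrary):

```python
#!/usr/bin/env python
# Sanity-check sketch: confirm map -> odom (rtabmap) and odom -> base_link
# (robot_localization) both resolve, and thus map -> base_link does too.
import rospy
import tf2_ros

rospy.init_node('tf_tree_check')
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)
rospy.sleep(2.0)  # let the buffer fill

for parent, child in [('map', 'odom'), ('odom', 'base_link'), ('map', 'base_link')]:
    try:
        t = buf.lookup_transform(parent, child, rospy.Time(0), rospy.Duration(1.0))
        p = t.transform.translation
        rospy.loginfo('%s -> %s: (%.2f, %.2f, %.2f)', parent, child, p.x, p.y, p.z)
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException) as e:
        rospy.logwarn('%s -> %s missing: %s', parent, child, e)
```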

bojan-muse commented 1 year ago

Well, in the end my goal is not ICP or visual SLAM. My goal is to use the LiDAR to detect the driving area (meaning the walls) and use that to correct the visual SLAM pose, i.e., without doing ICP. What I want is to detect walls (as clusters) and get their distance and angle relative to the robot, then use this information to correct the pose from visual SLAM. But I need more details on how to do that. I think I can use ROS TF for this, correct? Any help?

matlabbe commented 1 year ago

General TF would look like this:

map -> odom -> base_link

If you want to correct odom -> base_link based on the lidar's wall detection, then to avoid breaking the TF tree you may publish the correction of the visual odometry as lidar_correction -> odom. If you need a map, you can set odom_frame_id = lidar_correction for the rtabmap node. The final tree could look like this:

map -> lidar_correction -> odom -> base_link
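A minimal sketch of broadcasting that correction with tf2_ros, assuming the correction itself (x, y, yaw of odom expressed in lidar_correction) is computed elsewhere from the wall detections:

```python
#!/usr/bin/env python
# Sketch: broadcast the wall-based correction as lidar_correction -> odom so
# the tree stays map -> lidar_correction -> odom -> base_link. Computing the
# correction from the wall detections is left as a placeholder.
import math
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node('lidar_correction_broadcaster')
br = tf2_ros.TransformBroadcaster()
rate = rospy.Rate(20)

x, y, yaw = 0.0, 0.0, 0.0  # identity until a wall-based correction is estimated

while not rospy.is_shutdown():
    # TODO: update (x, y, yaw) from the wall distance/angle estimates.
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = 'lidar_correction'  # parent frame
    t.child_frame_id = 'odom'               # child frame
    t.transform.translation.x = x
    t.transform.translation.y = y
    t.transform.rotation.z = math.sin(yaw / 2.0)  # yaw-only quaternion
    t.transform.rotation.w = math.cos(yaw / 2.0)
    br.sendTransform(t)
    rate.sleep()
```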