introlab / rtabmap

RTAB-Map library and standalone application
https://introlab.github.io/rtabmap

RGBD and 2d lidar fusion #1261

Open Boris-Cui opened 2 months ago

Boris-Cui commented 2 months ago

Hello, I have a small question about the underlying principle; I hope you can help me. If I use an RGB-D camera and a 2D lidar as sensors with RTAB-Map, how does RTAB-Map fuse the two types of data? Does it perform fusion for localization, and if so, what method does it use? Is it an extended Kalman filter? If I want to further improve localization accuracy by combining RTAB-Map with other lidar SLAM systems through a UKF, is that feasible?

matlabbe commented 2 months ago

If you use that configuration, it is explained in this paper:

The transformation is computed using the same Motion Estimation approach used by visual odometry (Section 3.1.1), and if accepted, the new link is added to the graph. When a laser scan or a point cloud is available, the link's transformation is refined using the same ICP Registration approach as lidar odometry (described in Section 3.1.2).

So in your case it is a two-step process:

  1. Use the RGB-D camera to visually compute a localization transform against a node in the map.
  2. Refine that transform with ICP, using the lidar scans of the two nodes.
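To make step 2 concrete, here is a minimal sketch of refining an initial visual transform with a toy nearest-neighbour ICP on 2D scans. This is an illustration of the general technique, not RTAB-Map's actual implementation (RTAB-Map uses libpointmatcher internally); the function names `best_rigid_transform_2d` and `refine_with_icp` and all parameters are made up for this example.

```python
import numpy as np

def best_rigid_transform_2d(src, dst):
    """Closed-form 2D rigid alignment (Kabsch/SVD), the step ICP repeats
    after re-associating points. src, dst: (N, 2) arrays of corresponding
    points; returns (R, t) such that dst ~= src @ R.T + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def refine_with_icp(T_visual, scan_cur, scan_ref, iters=10):
    """Start from the visual (camera-based) estimate T_visual (a 3x3
    homogeneous transform mapping the current scan into the reference
    node's frame), then refine it against the reference scan with ICP."""
    T = T_visual.copy()
    for _ in range(iters):
        moved = scan_cur @ T[:2, :2].T + T[:2, 2]
        # Brute-force nearest neighbours (fine for a toy example; a real
        # implementation would use a k-d tree).
        d2 = ((moved[:, None, :] - scan_ref[None, :, :]) ** 2).sum(-1)
        matched = scan_ref[d2.argmin(axis=1)]
        R, t = best_rigid_transform_2d(moved, matched)
        step = np.eye(3)
        step[:2, :2] = R
        step[:2, 2] = t
        T = step @ T                   # compose the correction
    return T
```

The key point this illustrates is that ICP only works here because the visual estimate already puts the scans roughly in alignment; ICP then converges to a locally optimal, much more precise transform.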

The robot can also be localized using only the lidar, via proximity detection (described in that paper).

Fusing two global localization approaches (as opposed to odometry) is more complicated, as they would fight each other, each publishing corrections from its own map frame.