Boris-Cui opened 2 months ago
If you use that configuration, it is explained in this paper:
"The transformation is computed using the same Motion Estimation approach used by visual odometry (Section 3.1.1), and if accepted, the new link is added to the graph. When a laser scan or a point cloud is available, the link's transformation is refined using the same ICP Registration approach as with lidar odometry (described in Section 3.1.2)."
So in your case it is a two-step process: the transformation is first estimated from the RGBD camera's visual features, then refined against the 2D lidar scan with ICP.
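To make the two-step idea concrete, here is a minimal standalone sketch (not RTAB-Map's actual code): a coarse rigid transform, standing in for the visual motion estimate, is refined by point-to-point ICP against the lidar points. The function names and the brute-force nearest-neighbour search are illustrative assumptions.

```python
import numpy as np

def best_rigid_2d(src, dst):
    """Least-squares 2D rigid transform (R, t) mapping src -> dst (Kabsch/SVD)."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t

def icp_refine(scan, map_pts, R0, t0, iters=30):
    """Refine a coarse initial guess (R0, t0), e.g. from visual motion
    estimation, by point-to-point ICP against the map points."""
    R, t = R0, t0
    for _ in range(iters):
        moved = scan @ R.T + t
        # brute-force nearest neighbour in the map for every scan point
        d2 = ((moved[:, None, :] - map_pts[None, :, :]) ** 2).sum(-1)
        nn = map_pts[d2.argmin(1)]
        R, t = best_rigid_2d(scan, nn)
    return R, t
```

The key point the sketch illustrates: ICP only converges when the initial guess is already close, which is exactly why the visual estimate comes first and the scan matching second.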
The robot can also be localized by lidar alone using proximity detection (described in that paper).
Fusing two global localization approaches (not odometry) is more complicated, as they would fight each other from their respective map frames.
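If you do want EKF/UKF-style fusion, the usual pattern in ROS is to fuse *odometry* sources (which share a common frame) rather than two global poses. As an illustration only, the `robot_localization` package provides `ekf_node` and `ukf_node` for this; a sketch of such a config, where the topic names are assumptions for your setup:

```yaml
# Hypothetical robot_localization config fusing two odometry sources.
frequency: 30
two_d_mode: true
odom_frame: odom
base_link_frame: base_link
world_frame: odom

odom0: /rtabmap/odom          # visual odometry (assumed topic name)
#            x,     y,     z,  roll, pitch, yaw, vx, vy, vz, vr, vp, vyaw, ax, ay, az
odom0_config: [true, true, false, false, false, true,
               false, false, false, false, false, false,
               false, false, false]

odom1: /lidar_odom            # lidar odometry (assumed topic name)
odom1_config: [false, false, false, false, false, false,
               true,  true,  false, false, false, true,
               false, false, false]
```

The fused output would then feed a single map-to-odom correction (e.g. RTAB-Map's), rather than letting two global localizers each publish their own map frame.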
Hello, I have a small question about the underlying principle; I hope you can help me. If I use an RGBD camera and a 2D lidar as sensors with rtab-map, how does rtab-map fuse the two types of data? Does it perform fusion at the localization level? If so, what method does it use? Is it extended Kalman filtering? And if I want to further improve localization accuracy by combining RTAB-Map with other LiDAR SLAM systems through a UKF, is that feasible?