introlab / rtabmap_ros

RTAB-Map's ROS package.
http://wiki.ros.org/rtabmap_ros
BSD 3-Clause "New" or "Revised" License

rtabmap accuracy degradation in dynamic environment #852

Open alexk1976 opened 1 year ago

alexk1976 commented 1 year ago

Hi, we are using an RGB-D (RealSense camera) setup with IMU/wheel odometry fusion. So far we have had good accuracy when the environment doesn't change, but even small changes in the environment cause accuracy degradation. Any ideas on how to improve/handle changes in the environment? We considered running in continuous map-update mode, but in that case dynamic objects (people, robots, ...) might enter the map, which is also problematic.

matlabbe commented 1 year ago

If you are looking at the point cloud of the rtabmap_ros/MapCloud plugin in RViz (or the default point clouds in the 3D Map view of rtabmapviz), dynamic obstacles won't be removed. You should look at the occupancy grid (/rtabmap/grid_map), which will be updated to clear dynamic obstacles over time (make sure Grid/RayTracing=true). If you want a 3D point cloud with dynamic obstacles cleared, you will need to visualize the /rtabmap/octomap_occupied_space topic (with Grid/RayTracing=true and Grid/3D=true).
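A quick way to try these settings is to pass them through `rtabmap_args` of the stock rtabmap_ros launch file (a sketch; the launch file name and topic remappings depend on your setup):

```shell
# Enable ray tracing (clears freed grid cells over time) and the 3D
# OctoMap output, as described above.
roslaunch rtabmap_ros rtabmap.launch \
  rtabmap_args:="--Grid/RayTracing true --Grid/3D true"

# Then visualize in RViz:
#   2D: /rtabmap/grid_map               (nav_msgs/OccupancyGrid)
#   3D: /rtabmap/octomap_occupied_space (sensor_msgs/PointCloud2)
```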


alexk1976 commented 1 year ago

Hi @matlabbe, my concern here is not related to the OccupancyGrid, which you already answered earlier. Our issue is SLAM pose accuracy, which degrades because the rtabmap map (features) becomes inaccurate over time, so localization is poor after the environment has changed. If we always use Incremental=true mode to update the map continuously, dynamic objects (e.g. people/robots) might be captured as features by rtabmap and used by F2M odometry. Any way to improve this?

matlabbe commented 1 year ago

From a pose estimation perspective, if dynamic objects don't take the whole field of view of the camera, RANSAC should be able to filter them (see between 1:00 and 1:13 https://www.youtube.com/watch?v=xIGKaE_rZ_Q).

For localization, as we cannot use a constant velocity model (or external guess) like odometry does to get more accurate visual correspondences, you may have to use more complex approaches, like a depth mask on dynamic objects/persons. There are many approaches to do this, but as in the linked project, we can simply filter dynamic obstacles from the depth image and feed that depth image to rtabmap. rtabmap won't extract features in areas where there is no depth.
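The masking step itself is simple; here is a minimal sketch in plain NumPy. The detector that produces the bounding boxes (e.g. a person-detection node) is assumed and out of scope, and in a real node you would convert the ROS depth image with cv_bridge before and after:

```python
import numpy as np

def mask_dynamic_depth(depth, boxes):
    """Zero out depth pixels inside detected dynamic-object boxes.

    depth: HxW uint16 array in millimeters (0 = invalid, as RealSense reports).
    boxes: list of (x, y, w, h) rectangles from any person/robot detector
           (hypothetical upstream node, not part of rtabmap).
    rtabmap skips feature extraction where depth is invalid, so the
    masked regions contribute no features to the map or to F2M odometry.
    """
    masked = depth.copy()
    for (x, y, w, h) in boxes:
        masked[y:y + h, x:x + w] = 0
    return masked

# Example: a 480x640 depth frame with one detected person at (100, 50)
depth = np.full((480, 640), 1500, dtype=np.uint16)
clean = mask_dynamic_depth(depth, [(100, 50, 80, 200)])
```

The masked image is then published on the depth topic that rtabmap subscribes to, in place of the raw one.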

alexk1976 commented 1 year ago

Hi @matlabbe, one more question on the same topic: while running in localization mode, can we get some indicator from rtabmap that our environment has changed and we need to enter map-update mode?

matlabbe commented 1 year ago

Good question. I am currently working on some metrics right now for a company. It is a work in progress, so I cannot really share a lot, but in the rtabmap/info topic, it would be possible to check the Loop/ statistics for a trend of fewer and fewer visual correspondences or inliers being found as the robot traverses the same area over time. This could be an indication that the environment is changing. A decrease in the number of localizations in a particular area could also indicate that the environment has changed. These kinds of metrics can be useful in environments without strong illumination changes, unless you use visual features that are very robust to lighting variation (see this paper for example).
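A minimal sketch of such a trend check, kept in plain Python so it runs standalone. In a live system you would feed it from the rtabmap/info topic by pairing the message's parallel statsKeys/statsValues arrays to read an inlier statistic; the exact field and statistic names should be verified against your rtabmap_ros version:

```python
from collections import deque

class InlierTrendMonitor:
    """Flag a possible environment change from declining loop-closure inliers.

    A baseline is taken from the first full window of samples (assumed to be
    recorded while the map is still fresh); an alert fires when the moving
    average drops below ratio * baseline.
    """

    def __init__(self, window=20, ratio=0.5):
        self.values = deque(maxlen=window)
        self.baseline = None
        self.ratio = ratio

    def update(self, inliers):
        self.values.append(inliers)
        if self.baseline is None and len(self.values) == self.values.maxlen:
            # First full window becomes the reference level.
            self.baseline = sum(self.values) / len(self.values)

    def environment_changed(self):
        if self.baseline is None or not self.values:
            return False
        avg = sum(self.values) / len(self.values)
        return avg < self.ratio * self.baseline

# Usage: feed one sample per accepted localization/loop closure.
mon = InlierTrendMonitor(window=20)
for _ in range(20):
    mon.update(100)   # fresh map: ~100 inliers per localization
```

The thresholds (window size, ratio) are illustrative; tuning them per environment would be part of the metric work described above.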

alexk1976 commented 1 year ago

@matlabbe that sounds like exactly what we need! Please update when it's available. Are we missing anything regarding the map update? Will it be enough to switch to "Incremental" mode to update the map in this area, or is it better to first remove this area from the map and then switch to "Incremental"?