mizrex opened this issue 4 weeks ago
Is the IMU published by the camera? If not, there could be a small angle difference between IMU and camera frames.
For the grid, many obstacles are caused by the ceiling; setting Grid/MaxObstacleHeight to 1 meter filters them out.
The disparity estimation also gives points under the ground that get detected as obstacles; we can filter them by setting a minimum ground height with Grid/MinGroundHeight=-0.15.
To get a denser occupancy grid, assuming it is a 2D robot, you can disable Grid/3D and enable Grid/RayTracing.
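In a launch file, these settings could look like this (a minimal sketch showing only the grid parameters discussed above; RTAB-Map parameters are passed as strings through the ROS 2 node parameters):

```python
# Minimal sketch of the grid settings discussed above; RTAB-Map
# parameters are passed as strings through the ROS 2 node parameters.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='rtabmap_slam', executable='rtabmap', output='screen',
            parameters=[{
                'Grid/MaxObstacleHeight': '1.0',  # ignore ceiling points above 1 m
                'Grid/MinGroundHeight': '-0.15',  # drop noisy points under the ground
                'Grid/3D': 'False',               # 2D grid for a 2D robot
                'Grid/RayTracing': 'True',        # denser grid via ray tracing
            }]),
    ])
```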
@matlabbe, thank you so much for getting back and looking into this with such detail. I tried your suggestions and they worked like a charm. Thanks a lot for resolving this. Yes, the IMU is provided by the camera. However, I'm now running into a new issue with featureless walls (I can open a new issue for this, if you like?). In the map shown below, the red region is supposed to be a hallway, but you can see how it is perceived as open space. It's a featureless hallway. Is there a parameter I can tweak to make it see the walls? Here is the db file accompanying this map.

```python
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch_ros.actions import Node
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
def generate_launch_description():
    parameters = [{'frame_id': 'oak-d-base-frame',
                   'subscribe_rgbd': True,
                   'subscribe_odom_info': True,
                   'approx_sync': False,
                   'wait_imu_to_init': True,
                   'Grid/RayTracing': 'True',
                   'Grid/3D': 'False',
                   'Grid/MinGroundHeight': '-0.15',
                   # 'Grid/MaxGroundAngle': '10.0',
                   # 'Odom/Strategy': '1',
                   'Grid/MaxObstacleHeight': '2'}]

    remappings = [('imu', '/imu/data')]

    return LaunchDescription([
        # Launch camera driver
        IncludeLaunchDescription(
            PythonLaunchDescriptionSource([os.path.join(
                get_package_share_directory('depthai_examples'), 'launch'),
                '/stereo_inertial_node.launch.py']),
            launch_arguments={'depth_aligned': 'false',
                              'enableRviz': 'false',
                              'monoResolution': '400p'}.items(),
        ),

        # Sync right/depth/camera_info together
        Node(
            package='rtabmap_sync', executable='rgbd_sync', output='screen',
            parameters=parameters,
            remappings=[('rgb/image', '/right/image_rect'),
                        ('rgb/camera_info', '/right/camera_info'),
                        ('depth/image', '/stereo/depth')]),

        # Compute quaternion of the IMU
        Node(
            package='imu_filter_madgwick', executable='imu_filter_madgwick_node', output='screen',
            parameters=[{'use_mag': False,
                         'world_frame': 'enu',
                         'publish_tf': False}],
            remappings=[('imu/data_raw', '/imu')]),

        # Visual odometry
        Node(
            package='rtabmap_odom', executable='rgbd_odometry', output='screen',
            parameters=parameters,
            remappings=remappings),

        # VSLAM
        Node(
            package='rtabmap_slam', executable='rtabmap', output='screen',
            parameters=parameters,
            remappings=remappings,
            arguments=['-d']),

        # Visualization
        Node(
            package='rtabmap_viz', executable='rtabmap_viz', output='screen',
            parameters=parameters,
            remappings=remappings)
    ])
```
Further, I also have an on-board lidar that can provide laser scan or point cloud data, which could improve detection in such featureless hallways. But I don't know how to add the lidar as a backup, with the OAK-D being the primary sensor.
With pure stereo, that is an issue. If you can add texture somehow on the walls, that would at least help to see obstacles on them.
What kind of lidar do you have? In your parameters, you could add 'subscribe_scan': True, and in the remappings, add ('scan', '/your/scan/topic').
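For example, a sketch of those two changes against the launch file above (the scan topic name is a placeholder, and setting approx_sync to true is my assumption, since the scan and images would no longer come from the same device):

```python
# Sketch: subscribing rtabmap to the lidar scan as well.
# '/your/scan/topic' is a placeholder; approx_sync=True is an
# assumption since the scan and images come from different devices.
parameters = [{'frame_id': 'oak-d-base-frame',
               'subscribe_rgbd': True,
               'subscribe_scan': True,      # new: also use the lidar scan
               'subscribe_odom_info': True,
               'approx_sync': True,
               'wait_imu_to_init': True}]

remappings = [('imu', '/imu/data'),
              ('scan', '/your/scan/topic')]  # new: point to the actual lidar topic
```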
@matlabbe, thank you for the advice. It worked great. For some reason, it didn't work directly with the lidar's point cloud (it's a Helios-32 from Robosense), but when I passed it through the point_cloud_assembler node, it worked fine. I passed /assembled_cloud instead of /robosense_points.
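A minimal sketch of that assembler setup, assuming rtabmap_util's point_cloud_assembler interface (the parameter and topic names here are assumptions and should be verified against the installed version):

```python
# Sketch: assembling Robosense clouds before feeding them to rtabmap.
# Parameter and topic names are assumptions based on rtabmap_util's
# point_cloud_assembler; verify them against your installed version.
from launch_ros.actions import Node

Node(
    package='rtabmap_util', executable='point_cloud_assembler', output='screen',
    parameters=[{'assembling_time': 1.0,      # seconds of clouds merged together
                 'fixed_frame_id': 'odom'}],  # frame used while assembling
    remappings=[('cloud', '/robosense_points'),
                ('assembled_cloud', '/assembled_cloud')])
```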
However, I'm running into a new issue where the initial map that comes out is great, but as I move the robot, the map deteriorates badly. I'm attaching the db file and a screen recording of RViz for your review. I was hoping you might have a better idea as to what is causing it, and how I can get the quality of that first map to repeat in all the subsequent mapping.
You may want to use icp_odometry instead (example here). Either there is a lot of drift, or you didn't set the fixed_frame_id of point_cloud_assembler correctly, causing it to mis-assemble the point clouds.
I would not use point_cloud_assembler until the mapping is debugged.
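A hedged sketch of what an icp_odometry node could look like in the launch file above (the ICP parameter values are illustrative starting points, not tuned settings):

```python
# Sketch: lidar odometry with icp_odometry instead of rgbd_odometry.
# ICP values below are illustrative starting points, not tuned settings.
from launch_ros.actions import Node

Node(
    package='rtabmap_odom', executable='icp_odometry', output='screen',
    parameters=[{'frame_id': 'oak-d-base-frame',
                 'wait_imu_to_init': True,
                 'Icp/VoxelSize': '0.1',        # downsample clouds before ICP
                 'Icp/PointToPlane': 'true'}],  # point-to-plane registration
    remappings=[('scan_cloud', '/robosense_points'),  # raw lidar, not assembled
                ('imu', '/imu/data')])
```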
I recommend setting Grid/FlatObstacleDetected to false, because the data from stereo depth estimation is noisy. Even after decimation and filtering, it is still easy to generate small spurious clusters during ground segmentation unless you carefully tune the cluster-related parameters. In addition, I wrote a parameter tuning tool to improve the performance of the SGBM pipeline on the OAK. Our new configurations may be released in the future.
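For example (a sketch; the values are starting points to tune, not recommendations):

```python
# Sketch of the tuning above; values are illustrative, not tuned.
parameters = [{'Grid/FlatObstacleDetected': 'false',  # don't mark flat patches as obstacles
               'Grid/ClusterRadius': '0.1',           # cluster-related parameters to tune
               'Grid/MinClusterSize': '20'}]
```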
I realized I posted this issue in the wrong repo, so I'm moving it here. I've tried playing with various parameters within rtabmap (Grid/CellSize, Grid/ClusterRadius, Grid/MinClusterSize, Grid/MaxGroundAngle, Grid/MinGroundHeight, etc.), as suggested in other posts, but nothing seems to resolve it. I'm including the db file here along with the RViz screenshots for reference. You can see how the visual odometry thinks it's going into the ground. I'm not sure where to fix that: whether it's a camera issue or something in the TFs?