introlab / rtabmap_ros

RTAB-Map's ROS package.
http://wiki.ros.org/rtabmap_ros
BSD 3-Clause "New" or "Revised" License

Trying to run two ZED2 cameras for outdoor navigation [ WARN] [1600270209.527064552]: /rtabmap/rtabmap: Did not receive data since 5 seconds! #464

Closed jorgemiar closed 4 years ago

jorgemiar commented 4 years ago

Details:

Jetson Xavier NX, JetPack 4.4 (but OpenCV 3.2), ROS Melodic. I installed rtabmap from binaries simply using apt install.

Problem:

I'm trying to get 2 ZED2 cameras to work with rtabmap. They're placed on a robot opposite each other and the TFs are defined using a robot state publisher/URDF.

For some reason, when I try to launch rtabmap with the file below I get:

[FATAL] [1600270204.524891208]: Cannot synchronize more than 1 rgbd topic (rtabmap_ros has been built without RTABMAP_SYNC_MULTI_RGBD option)

and

[ WARN] [1600270209.527064552]: /rtabmap/rtabmap: Did not receive data since 5 seconds! Make sure the input topics are published ("$ rostopic hz my_topic") and the timestamps in their header are set. If topics are coming from different computers, make sure the clocks of the computers are synchronized ("ntpdate"). If topics are not published at the same rate, you could increase "queue_size" parameter (current=10).

I tried playing around with the queue_size parameter but it doesn't seem to change anything. Any ideas what might be wrong? I've checked all the camera topics and the odometry and they're all publishing correctly. Could the problem have to do with this remap? <remap from="rgbd_image0" to="/zedA/rgbd_image"/>

My launch file:

<?xml version="1.0"?>
<launch>

   <!-- Choose visualization -->
   <arg name="rviz"       default="false" />

   <!-- <arg name="wait_for_transform"  default="true" /> -->

   <!-- Localization-only mode -->
    <arg name="localization"            default="false"/>

    <arg     if="$(arg localization)" name="args" default=""/>
    <arg unless="$(arg localization)" name="args" default="--delete_db_on_start"/>

   <!-- sync rgb/depth images per camera -->
   <group ns="zedA">
    <node pkg="nodelet" type="nodelet" name="nodelet_manager1" args="manager"/>
    <node pkg="nodelet" type="nodelet" name="rgbd_sync1" args="load rtabmap_ros/rgbd_sync nodelet_manager1">
      <remap from="rgb/image"       to="zed_node_A/rgb/image_rect_color"/>
      <remap from="depth/image"     to="zed_node_A/depth/depth_registered"/>
      <remap from="rgb/camera_info" to="zed_node_A/rgb/camera_info"/>
      <param name="approx_sync" value="false"/>
    </node>
   </group>

   <group ns="zedB">
    <node pkg="nodelet" type="nodelet" name="nodelet_manager2" args="manager"/>
    <node pkg="nodelet" type="nodelet" name="rgbd_sync2" args="load rtabmap_ros/rgbd_sync nodelet_manager2">
      <remap from="rgb/image"       to="zed_node_B/rgb/image_rect_color"/>
      <remap from="depth/image"     to="zed_node_B/depth/depth_registered"/>
      <remap from="rgb/camera_info" to="zed_node_B/rgb/camera_info"/>
      <param name="approx_sync" value="false"/>
    </node>
   </group>

  <group ns="rtabmap">
    <!-- Visual SLAM (robot side) -->
    <!-- args: "delete_db_on_start" and "udebug" -->
    <node name="rtabmap" pkg="rtabmap_ros" type="rtabmap" output="screen" args="$(arg args)" >
      <!-- <rosparam command="load" file="$(find navvy_localization)/config/rtabmap.yaml" /> -->
      <param name="subscribe_depth"  type="bool"   value="false"/>
      <param name="subscribe_rgbd"   type="bool"   value="true"/>
      <param name="rgbd_cameras"     type="int"    value="2"/>
      <param name="frame_id"         type="string" value="base_link"/>
      <param name="gen_scan"         type="bool"   value="true"/>
      <param name="queue_size"       type="int"    value="100"/>
      <!--<param name="wait_for_transform" type="bool"   value="$(arg wait_for_transform)"/> --> <!-- Do I need this? -->
      <!--<param name="map_negative_poses_ignored" map_always_update type="bool"   value="false"/>        refresh grid map even if we are not moving-->
      <!--<param name="map_negative_scan_empty_ray_tracing"map_empty_ray_tracing type="bool" value="false"/> don't fill empty space between the generated scans-->

      <remap from="odom"              to="/odometry/filtered"/> 
      <remap from="rgbd_image0"       to="/zedA/rgbd_image"/>
      <remap from="rgbd_image1"       to="/zedB/rgbd_image"/>

      <param name="Grid/FromDepth"     type="string" value="false"/>
      <param name="Vis/EstimationType" type="string" value="0"/> <!-- should be 0 for multi-cameras -->
      <!-- <param name="Vis/MinInliers"     type="string" value="10"/>-->
      <!-- <param name="Vis/InlierDistance" type="string" value="0.02"/>-->

      <!-- localization mode -->
      <param     if="$(arg localization)" name="Mem/IncrementalMemory" type="string" value="false"/>
      <param unless="$(arg localization)" name="Mem/IncrementalMemory" type="string" value="true"/>
      <param name="Mem/InitWMWithAllNodes" type="string" value="$(arg localization)"/>
    </node>

  </group>

  <!-- Visualization RVIZ -->
  <node if="$(arg rviz)" pkg="rviz" type="rviz" name="rviz" args="-d $(find rtabmap_ros)/launch/config/rgbd.rviz"/>

</launch>
matlabbe commented 4 years ago

The ROS binaries don't contain synchronizers for more than 1 camera. You should rebuild rtabmap_ros from source with -DRTABMAP_SYNC_MULTI_RGBD=ON:

sudo apt remove ros-melodic-rtabmap-ros
cd ~/catkin_ws/src
git clone -b melodic-devel https://github.com/introlab/rtabmap_ros.git
cd ~/catkin_ws
catkin_make -DRTABMAP_SYNC_MULTI_RGBD=ON

For the "Did not receive data since 5 seconds!" warning, check if the input topics listed are published with rostopic hz. Are /zedA/rgbd_image and /zedB/rgbd_image published?

jorgemiar commented 4 years ago

Ok, should I follow the specific tutorial to build from source for the Jetsons with JetPack 4.4, or will what you posted above be enough?

For the "Did not receive data since 5 seconds!" warning, check if the input topics listed are published with rostopic hz. Are /zedA/rgbd_image and /zedB/rgbd_image published?

It seems they are publishing?

rostopic hz /zedA/rgbd_image
subscribed to [/zedA/rgbd_image]
average rate: 11.050
    min: 0.049s max: 0.160s std dev: 0.03238s window: 10
average rate: 12.027
    min: 0.034s max: 0.183s std dev: 0.03599s window: 23
average rate: 12.292
    min: 0.034s max: 0.183s std dev: 0.03388s window: 36

and

rostopic hz /zedB/rgbd_image
subscribed to [/zedB/rgbd_image]
average rate: 11.106
    min: 0.049s max: 0.134s std dev: 0.02939s window: 9
average rate: 12.956
    min: 0.029s max: 0.134s std dev: 0.02581s window: 24
average rate: 12.019
    min: 0.026s max: 0.197s std dev: 0.03859s window: 34
average rate: 11.358
    min: 0.026s max: 0.197s std dev: 0.04224s window: 44
matlabbe commented 4 years ago

Try to build only rtabmap_ros as above.

jorgemiar commented 4 years ago

Seems to work now thanks!

I still get these warnings though:

[ERROR] [1600275547.595411843]: PluginlibFactory: The plugin for class 'octomap_rviz_plugin/ColorOccupancyGrid' failed to load. Error: According to the loaded plugin descriptions the class octomap_rviz_plugin/ColorOccupancyGrid with base class type rviz::Display does not exist. Declared types are rtabmap_ros/Info rtabmap_ros/MapCloud rtabmap_ros/MapGraph rviz/Axes rviz/Camera rviz/DepthCloud rviz/Effort rviz/FluidPressure rviz/Grid rviz/GridCells rviz/Illuminance rviz/Image rviz/InteractiveMarkers rviz/LaserScan rviz/Map rviz/Marker rviz/MarkerArray rviz/Odometry rviz/Path rviz/PointCloud rviz/PointCloud2 rviz/PointStamped rviz/Polygon rviz/Pose rviz/PoseArray rviz/PoseWithCovariance rviz/Range rviz/RelativeHumidity rviz/RobotModel rviz/TF rviz/Temperature rviz/WrenchStamped rviz_plugin_tutorials/Imu

and

[ WARN] (2020-09-16 17:59:13.382) OccupancyGrid.cpp:357::createLocalMap() Cannot create local map, scan is empty (node=1).

I imagine the first one is not important and is just something whose name has probably changed? Does the second warning have to do with the laser scan?

jorgemiar commented 4 years ago

@matlabbe also, do I need to run the obstacle detection nodelet to be able to avoid obstacles with move_base, or will the costmap be enough? Can I run the obstacle detection nodelet with two cameras?

matlabbe commented 4 years ago

Is octomap_rviz_plugin library installed?

sudo apt install ros-melodic-octomap-rviz-plugins

The warning is because you have set gen_scan to true, which creates laser scans from the depth images; the local occupancy grid is then created from the laser scan. Set gen_scan to false, or explicitly set Grid/FromDepth to true, to make the local occupancy grids from the depth images instead of the laser scans. It has been a long time since I tested gen_scan with a multi-camera setup; it may not work as expected and return a null scan, causing the warning.
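For example, in the rtabmap node of your launch file above, these would become (a minimal sketch showing only the two relevant parameters, with the values changed):

<param name="gen_scan"       type="bool"   value="false"/>
<param name="Grid/FromDepth" type="string" value="true"/>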

You could feed downsampled point clouds directly to the local costmap, or use the obstacles_detection nodelet on each cloud separately before sending the resulting clouds to the local costmap. The ground_cloud can be useful to clear obstacles, see https://github.com/introlab/rtabmap_ros/blob/58dd3134be6f2ccc81f998df4e258354f0071e44/launch/azimut3/config/local_costmap_params_2d.yaml#L32-L50 for example. If you don't use the obstacles_detection nodelet, send the downsampled clouds of the cameras like the obstacles_cloud config above; 0.04 is the minimum height used to filter the ground in the depth image.
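As a rough sketch (the topic names are placeholders and assume the nodelet's standard cloud/ground/obstacles topics), one obstacles_detection instance per camera could look like:

<node pkg="nodelet" type="nodelet" name="obstacles_detection_A" args="load rtabmap_ros/obstacles_detection nodelet_manager1">
  <remap from="cloud"     to="/zedA/cloud"/>
  <remap from="ground"    to="/zedA/ground_cloud"/>
  <remap from="obstacles" to="/zedA/obstacles_cloud"/>
  <param name="frame_id"  type="string" value="base_link"/>
</node>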

jorgemiar commented 4 years ago

Is octomap_rviz_plugin library installed?

I installed it earlier today and no longer get the error!

Set gen_scan to false or set explicitly Grid/FromDepth to true to make the local occupancy grids from depth images instead of laser scans.

Nice, I set gen_scan to false but forgot to set Grid/FromDepth to true so it wasn't working. Fixed it now. 👍

You could feed downsampled point clouds directly to the local costmap.

So if I don't use the nodelet, which clouds do I feed to the local costmap, and how? The rtabmap/cloud_obstacles, or the individual point clouds from each camera?

or use the obstacles_detection nodelet on each cloud separately before sending the resulting clouds to the local costmap.

This might put too much processing load on the Xavier NX if it needs to process two cameras? This, however, is the approach you use in the StereoOutdoorNavigation tutorial, right? Is it better?

matlabbe commented 4 years ago

Yes, it will use more processing power to segment the ground on each cloud. The reason I did this for the stereo outdoor navigation tutorial is that the terrain is 3D; segmenting the ground just based on height didn't work (it was adding false obstacles when the robot was in front of a small slope). If you are indoors and the floor is flat, you can skip the obstacles_detection nodelet. You still need to downsample the point clouds of the cameras before sending them to the local costmap.

To save more processing power, don't use the point cloud generated by the zed wrapper; use the rtabmap_ros/point_cloud_xyz nodelet subscribed to the depth image (like this) and set the decimation and voxel_size parameters: https://github.com/introlab/rtabmap_ros/blob/58dd3134be6f2ccc81f998df4e258354f0071e44/src/nodelets/point_cloud_xyz.cpp#L110-L111 Note that in this launch file, I also throttle the camera images before converting them to point clouds (if you don't need to refresh the local costmap at 30 Hz, throttling the images saves more computation power on the conversion to point clouds).
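A rough sketch of one such nodelet per camera (the decimation and voxel_size values are only examples; the depth topics follow the zedA namespace used earlier, and since the ZED depth is registered to the RGB camera, rgb/camera_info is reused):

<node pkg="nodelet" type="nodelet" name="points_xyz_A" args="load rtabmap_ros/point_cloud_xyz nodelet_manager1">
  <remap from="depth/image"       to="/zedA/zed_node_A/depth/depth_registered"/>
  <remap from="depth/camera_info" to="/zedA/zed_node_A/rgb/camera_info"/>
  <remap from="cloud"             to="/zedA/cloud"/>
  <param name="decimation" type="int"    value="4"/>
  <param name="voxel_size" type="double" value="0.05"/>
</node>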

jorgemiar commented 4 years ago

@matlabbe thanks, the robot is going to be used outdoors so I will need to use the obstacles_detection nodelet.

I will need to run 2 instances of the point_cloud_xyz nodelet (one for each camera), right? When running two instances of obstacles_detection I get two /obstacle_cloud topics being published; is there a way to combine them, or do I just feed both of them in a similar way to this?

https://github.com/introlab/rtabmap_ros/blob/58dd3134be6f2ccc81f998df4e258354f0071e44/launch/azimut3/config/local_costmap_params_2d.yaml#L32-L50

And should I set all my frame_ids to base_footprint instead of base_link (and make base_footprint the parent of base_link)? This means I should establish a TF between odom and base_footprint instead of base_link? Because right now my odometry is in the base_link frame.

matlabbe commented 4 years ago

Yes, 2 instances of the point_cloud_xyz nodelet and 2 instances of the obstacles_detection nodelet. You will have 4 output clouds (2 obstacles and 2 ground); you can duplicate the point cloud part of the costmap config to have point_cloud_sensorC and point_cloud_sensorD, for example. Make sure to modify this line too to add your two new clouds: https://github.com/introlab/rtabmap_ros/blob/58dd3134be6f2ccc81f998df4e258354f0071e44/launch/azimut3/config/local_costmap_params_2d.yaml#L22 There is no need to merge the clouds before sending them to the costmap, as it can subscribe to N clouds directly. Set base_link directly in the costmap config instead of base_footprint if you are using base_link as the root of your TF tree.
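For illustration only (the source names and topics are placeholders following the linked config's convention), the obstacle layer part of the local costmap could end up looking like:

observation_sources: point_cloud_sensorA point_cloud_sensorB point_cloud_sensorC point_cloud_sensorD
point_cloud_sensorA: {
    sensor_frame: base_link,
    data_type: PointCloud2,
    topic: /zedA/obstacles_cloud,
    expected_update_rate: 0.5,
    marking: true,
    clearing: true,
    min_obstacle_height: -99999.0,
    max_obstacle_height: 99999.0}

with point_cloud_sensorB/C/D defined the same way for the other obstacle and ground clouds.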

cheers, Mathieu

jorgemiar commented 4 years ago

Makes sense, thanks!

Sorry that this is more of a TF question, but would it be better to set base_footprint as the root of my TF tree? base_link is attached to the bottom plate of my robot (not the ground), so I don't know how that is going to affect obstacle detection or moving on uneven ground.

matlabbe commented 4 years ago

If you use parameters like: https://github.com/introlab/rtabmap_ros/blob/58dd3134be6f2ccc81f998df4e258354f0071e44/launch/azimut3/az3_nav.launch#L148 the height will be relative to the frame_id set here: https://github.com/introlab/rtabmap_ros/blob/58dd3134be6f2ccc81f998df4e258354f0071e44/launch/azimut3/az3_nav.launch#L144

It is good practice to add a base_footprint frame below base_link, so that everything shown in RViz is aligned with the ground (e.g., the occupancy grid on the xy plane, the ground in the point clouds of the cameras aligned with the xy plane...). EDIT: Your odometry will indeed need to publish on base_footprint instead of base_link.

jorgemiar commented 4 years ago

So ideally it is map -> odom -> base_link -> base_footprint?

Or base_footprint first?

Edit: because my current tree is map -> odom -> base_link -> base_footprint,camera_center,wheels etc

matlabbe commented 4 years ago

No, ideally map -> odom -> base_footprint -> base_link; see also REP 120. The odometry would have to send the topic and TF with base frame base_footprint instead of base_link.
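If base_footprint sits at a fixed height below base_link, one simple way to add it is a static transform; the 0.1 m offset here is only a placeholder for your robot's actual base_link height above the ground (and since you already use a URDF, a fixed joint from base_footprint to base_link achieves the same thing):

<node pkg="tf2_ros" type="static_transform_publisher" name="base_footprint_to_base_link"
      args="0 0 0.1 0 0 0 base_footprint base_link"/>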

jorgemiar commented 4 years ago

Thanks that's what I thought. Got it all set up and working now :)

One final thing: in your Outdoor Stereo Navigation Tutorial, you only feed the point clouds to the local map config file and use the grid map for the global map, right? Any reason for doing this instead of feeding the point clouds in the costmap_common_params file to both the global and local maps like here:

https://github.com/introlab/rtabmap_ros/blob/58dd3134be6f2ccc81f998df4e258354f0071e44/launch/azimut3/config/costmap_common_params_2d.yaml#L23-L41

Does it perform differently?

matlabbe commented 4 years ago

You can, but I prefer not updating the global costmap with current obstacles. If rtabmap is in SLAM mode, it will add them to the map already. If in localization mode, I generally prefer to keep the global costmap static for global planning. This is because when you update the global costmap with current sensors, the obstacles (e.g. people) stay there even if the robot quits the area, which would affect global planning if we want to return to that area. In contrast, the local costmap is cleared when leaving the area, and obstacles will be re-added only on sight for local planning. But as you pointed out, I did sometimes use an obstacle layer for the global costmap, and some other times not; it mostly depends on how you expect your global planning to behave in dynamic environments (if new obstacles are assumed temporary, don't update the global costmap; if new obstacles can be permanent, keeping them helps the global planner choose a longer path because we know that the door is closed, for example).
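As a rough illustration using the layered costmap plugins (not the exact tutorial config), keeping the global costmap static while the local costmap carries the point-cloud obstacle layer would look something like:

global_costmap:
  plugins:
    - {name: static_layer,    type: "costmap_2d::StaticLayer"}
    - {name: inflation_layer, type: "costmap_2d::InflationLayer"}

local_costmap:
  plugins:
    - {name: obstacle_layer,  type: "costmap_2d::ObstacleLayer"}
    - {name: inflation_layer, type: "costmap_2d::InflationLayer"}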

jorgemiar commented 4 years ago

Ok I understand thanks. In my environment there is going to be people moving around constantly, so I probably don't want to update the global map and just update the local map when the robot encounters a person/obstacle.

In an outdoor environment, do we need to feed the local map with the ground point cloud or is the obstacles point cloud enough? What is the ground cloud used for, just helping clear obstacles?

Also, in the outdoor navigation tutorial you set the obstacle heights to:

point_cloud_sensor: {
    sensor_frame: base_footprint,
    data_type: PointCloud2, 
    topic: openni_points, 
    expected_update_rate: 0.5, 
    marking: true, 
    clearing: true,
    min_obstacle_height: -99999.0, <---------
    max_obstacle_height: 99999.0} <---------

when you feed the obstacles point cloud in the local_costmap_params.yaml. From what I understand, this is done because the robot can move up a hill (changing the z value in odom), so you want to make sure you get all the obstacles?

Are these values different from the max_obstacle_height parameter in the obstacles_detection nodelet and the Grid/MaxObstacleHeight parameter from the main rtabmap node?

Sorry for asking all these additional questions. After this I'll move any other questions to your forum. Really appreciate the help.

matlabbe commented 4 years ago

The ground cloud is used to clear obstacles. The min_obstacle_height and max_obstacle_height in the local costmap are disabled so they do not filter the input clouds (which should already be filtered).

Grid/MaxObstacleHeight for the obstacles_detection or rtabmap nodes is used to filter out obstacles above this height. The height is relative to the base frame of the robot (when Grid/MapFrameProjection is false, the default).
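For example (the 2.0 m value is only an illustration), on an obstacles_detection nodelet:

<param name="Grid/MaxObstacleHeight" type="string" value="2.0"/>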

cheers, Mathieu

jorgemiar commented 4 years ago

Any reason why you don't use the ground cloud in the Stereo Outdoor Navigation Tutorial? Is it possible/useful to use it? Does the filtering also have to be disabled for them in the local costmap if used? Thanks!

matlabbe commented 4 years ago

Yes, the filtering should also be disabled for the ground clouds. In that experiment, there were no dynamic objects, so using the ground cloud would just add overhead to the local costmap.
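A rough sketch of such a ground-cloud source (the topic name is a placeholder), used only for clearing, with the height filtering disabled as above:

point_cloud_ground_A: {
    sensor_frame: base_footprint,
    data_type: PointCloud2,
    topic: /zedA/ground_cloud,
    expected_update_rate: 0.5,
    marking: false,
    clearing: true,
    min_obstacle_height: -99999.0,
    max_obstacle_height: 99999.0}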

jorgemiar commented 4 years ago

Got it! Thanks a lot for the help!