introlab / rtabmap_ros

RTAB-Map's ROS package.
http://wiki.ros.org/rtabmap_ros
BSD 3-Clause "New" or "Revised" License

rtabmap looking for a frame that I did not define, source_frame firefly/firefly/laser_lidar_link12 #266

Open Amir-Ramezani opened 6 years ago

Amir-Ramezani commented 6 years ago

Thanks for your effort.

However, when I run the package, I get the following error:

[ WARN] [1533199508.052997875, 963.860000000]: Timed out waiting for transform from base_footprint to map to become available before running costmap, tf error: canTransform: source_frame base_footprint does not exist.. canTransform returned after 0.1 timeout was 0.1.
[ WARN] [1533199508.202823939, 964.010000000]: Could not get transform from firefly/base_link to firefly/firefly/laser_lidar_link12 after 0.200000 seconds (for stamp=963.750000)! Error="canTransform: source_frame firefly/firefly/laser_lidar_link12 does not exist.. canTransform returned after 0.2 timeout was 0.2.".
[ERROR] [1533199508.202867336, 964.010000000]: Could not convert laser scan msg! Aborting rtabmap update...

The error happens when I run: roslaunch rtabmap_ros demo_firefly_mapping.launch

demo_firefly_mapping.launch is a copy of demo_turtlebot_mapping.launch; I just changed some arguments.

It is strange because I have no source_frame called firefly/firefly/laser_lidar_link12; the frame should be firefly/laser_lidar_link12.

demo_firefly_mapping.launch


<launch> 
<!--

       Bringup Turtlebot:
       $ roslaunch turtlebot_bringup minimal.launch

       Mapping:
       $ roslaunch rtabmap_ros demo_turtlebot_mapping.launch

       Visualization:
       $ roslaunch rtabmap_ros demo_turtlebot_rviz.launch

       This launch file is a one to one replacement of the gmapping_demo.launch in the 
       "SLAM Map Building with TurtleBot" tutorial:
       http://wiki.ros.org/turtlebot_navigation/Tutorials/indigo/Build%20a%20map%20with%20SLAM

       For localization-only after a mapping session, add argument "localization:=true" to
       demo_turtlebot_mapping.launch line above. Move the robot around until it can relocalize in 
       the previous map, then the 2D map should re-appear again. You can then follow the same steps 
       from 3.3.2 of the "Autonomous Navigation of a Known Map with TurtleBot" tutorial:
       http://wiki.ros.org/turtlebot_navigation/Tutorials/Autonomously%20navigate%20in%20a%20known%20map

       For turtlebot in simulation (Gazebo):
         $ roslaunch turtlebot_gazebo turtlebot_world.launch
         $ roslaunch rtabmap_ros demo_turtlebot_mapping.launch simulation:=true
         $ roslaunch rtabmap_ros demo_turtlebot_rviz.launch

  <node pkg = "tf" type = "static_transform_publisher" name="laser_to_vi_sensor_base_sensor_r200cam_transform_publisher" args = "0 0 0 0 0 0 firefly/base_link firefly/firefly/laser_lidar_link12 100"/> 
 -->

  <arg name="database_path"     default="rtabmap.db"/>
  <arg name="rgbd_odometry"     default="false"/>
  <arg name="rtabmapviz"        default="false"/>
  <arg name="localization"      default="false"/>
  <arg name="simulation"        default="true"/>
  <arg     if="$(arg localization)" name="args"  default=""/>
  <arg unless="$(arg localization)" name="args"  default="--delete_db_on_start"/>

  <arg     if="$(arg simulation)" name="rgb_topic"   default="/firefly/vi_sensor/camera_depth/camera/image_raw"/>
  <arg unless="$(arg simulation)" name="rgb_topic"   default="/firefly/vi_sensor/camera_depth/camera/image_raw"/>
  <arg     if="$(arg simulation)" name="depth_topic" default="/firefly/vi_sensor/camera_depth/depth/disparity"/>
  <arg unless="$(arg simulation)" name="depth_topic" default="/firefly/vi_sensor/camera_depth/depth/disparity"/>
  <arg name="camera_info_topic" default="/firefly/vi_sensor/camera_depth/camera/camera_info"/>

  <arg name="wait_for_transform"  default="0.2"/> 
  <!-- 
      robot_state_publisher's publishing frequency in "turtlebot_bringup/launch/includes/robot.launch.xml" 
      can be increased from 5 to 10 Hz to avoid some TF warnings.
  -->

  <!-- Navigation stuff (move_base) -->
  <include unless="$(arg simulation)" file="$(find turtlebot_bringup)/launch/3dsensor.launch"/>
  <include file="$(find turtlebot_navigation)/launch/includes/move_base.launch.xml"/>

  <!-- Mapping -->
  <group ns="rtabmap">

    <node name="rtabmap" pkg="rtabmap_ros" type="rtabmap" output="screen" args="$(arg args)">
      <param name="database_path"       type="string" value="$(arg database_path)"/>
      <param name="frame_id"            type="string" value="firefly/base_link"/>
      <param name="wait_for_transform_duration"  type="double"   value="$(arg wait_for_transform)"/>
      <param name="subscribe_depth"     type="bool"   value="true"/>
      <param name="subscribe_scan"      type="bool"   value="true"/>
      <param name="map_negative_poses_ignored" type="bool" value="true"/>

      <!-- inputs -->
      <remap from="scan"            to="/scan"/> 
      <remap from="rgb/image"       to="$(arg rgb_topic)"/>
      <remap from="depth/image"     to="$(arg depth_topic)"/>
      <remap from="rgb/camera_info" to="$(arg camera_info_topic)"/>
      <remap unless="$(arg rgbd_odometry)" from="odom" to="/odom"/>
      <param name="odom_frame_id" type="string" value="/firefly/odometry_sensor1"/>

      <!-- output 
      <remap from="grid_map" to="/map"/> -->

      <!-- RTAB-Map's parameters: do "rosrun rtabmap rtabmap (double-dash)params" to see the list of available parameters. -->
      <param name="RGBD/ProximityBySpace"        type="string" value="true"/>   <!-- Local loop closure detection (using estimated position) with locations in WM -->
      <param name="RGBD/OptimizeFromGraphEnd"    type="string" value="false"/>  <!-- Set to false to generate map correction between /map and /odom -->
      <param name="Kp/MaxDepth"                  type="string" value="4.0"/>
      <param name="Reg/Strategy"                 type="string" value="0"/>      <!-- Loop closure transformation: 0=Visual, 1=ICP, 2=Visual+ICP -->
      <param name="Icp/CorrespondenceRatio"      type="string" value="0.3"/>
      <param name="Vis/MinInliers"               type="string" value="15"/>      <!-- 3D visual words minimum inliers to accept loop closure -->
      <param name="Vis/InlierDistance"           type="string" value="0.1"/>    <!-- 3D visual words correspondence distance -->
      <param name="RGBD/AngularUpdate"           type="string" value="0.1"/>    <!-- Update map only if the robot is moving -->
      <param name="RGBD/LinearUpdate"            type="string" value="0.1"/>    <!-- Update map only if the robot is moving -->
      <param name="RGBD/ProximityPathMaxNeighbors" type="string" value="0"/> 
      <param name="Rtabmap/TimeThr"              type="string" value="700"/>
      <param name="Mem/RehearsalSimilarity"      type="string" value="0.30"/>
      <param name="Optimizer/Slam2D"             type="string" value="true"/>
      <param name="Reg/Force3DoF"                type="string" value="true"/>
      <param name="GridGlobal/MinSize"           type="string" value="20"/>
      <param name="RGBD/OptimizeMaxError"        type="string" value="0.1"/>

      <!-- localization mode -->
      <param     if="$(arg localization)" name="Mem/IncrementalMemory" type="string" value="false"/>
      <param unless="$(arg localization)" name="Mem/IncrementalMemory" type="string" value="true"/>
      <param name="Mem/InitWMWithAllNodes" type="string" value="$(arg localization)"/> 
    </node>

    <!-- Odometry : ONLY for testing without the actual robot! /odom TF should not be already published. -->
    <node if="$(arg rgbd_odometry)" pkg="rtabmap_ros" type="rgbd_odometry" name="rgbd_odometry" output="screen">
      <param name="frame_id"                    type="string" value="/firefly/base_link"/>
      <param name="wait_for_transform_duration" type="double" value="$(arg wait_for_transform)"/>
      <param name="Reg/Force3DoF"               type="string" value="true"/>
      <param name="Vis/InlierDistance"          type="string" value="0.05"/>

      <remap from="rgb/image"       to="$(arg rgb_topic)"/>
      <remap from="depth/image"     to="$(arg depth_topic)"/>
      <remap from="rgb/camera_info" to="$(arg camera_info_topic)"/>
    </node>

    <!-- visualization with rtabmapviz -->
    <node if="$(arg rtabmapviz)" pkg="rtabmap_ros" type="rtabmapviz" name="rtabmapviz" args="-d $(find rtabmap_ros)/launch/config/rgbd_gui.ini" output="screen">
      <param name="subscribe_depth"             type="bool" value="true"/>
      <param name="subscribe_scan"              type="bool" value="true"/>
      <param name="frame_id"                    type="string" value="/firefly/base_link"/>
      <param name="wait_for_transform_duration" type="double" value="$(arg wait_for_transform)"/>

      <remap from="rgb/image"       to="$(arg rgb_topic)"/>
      <remap from="depth/image"     to="$(arg depth_topic)"/>
      <remap from="rgb/camera_info" to="$(arg camera_info_topic)"/>
      <remap from="scan"            to="/firefly/scan"/>
    </node>

  </group>
</launch>

Please find the output of rosrun tf view_frames in the following link: https://drive.google.com/file/d/1lbeyL7nKQlukbo5ngEoBwpC7PpODPePb/view

and the xacro macro I used for generating the lidar sensor is:

 <!-- Hokuyo Lidar Range Sensor -->
    <xacro:macro name="hokuyo_utm30lx" params="*origin update_rate ray_count min_angle max_angle">
      <link name="firefly/laser_lidar_link12">
    <inertial>
      <mass value="0.001" />
      <origin xyz="0 0 0" rpy="0 0 0" />
      <inertia ixx="0.0001" ixy="0" ixz="0" iyy="0.0001" iyz="0" izz="0.0001" />
    </inertial>
    <visual>
          <geometry>
            <mesh filename="package://turtlebot_description/meshes/sensors/hokuyo_utm30lx/hokuyo_utm_30lx.dae"/>
          </geometry>
    </visual>
    <collision>
      <origin xyz="0 0 -0.0115" rpy="0 0 0" />
          <geometry>
            <box size="0.058 0.058 0.087" />
            <!--<mesh filename="package://hector_sensors_description/meshes/hokuyo_utm30lx/hokuyo_utm_30lx.stl"/>-->
          </geometry>
    </collision>
      </link>
      <joint name="laser_lidar_joint" type="fixed">
        <origin xyz="0.0 0.0 0.12" rpy="3.14159 0.0 0.0" />
    <parent link="firefly/base_link"/>
    <child link="firefly/laser_lidar_link12"/>
      </joint>
      <gazebo reference="firefly/laser_lidar_link12">
        <sensor type="ray" name="firefly_laser_lidar">
            <pose>0 0 0 0 0 0</pose>
            <visualize>false</visualize>
            <update_rate>40</update_rate>
        <ray>
          <scan>
            <horizontal>
              <samples>720</samples>
              <resolution>1</resolution>
              <min_angle>-2.350796</min_angle>
              <max_angle>2.350796</max_angle>
            </horizontal>
          </scan>
          <range>
            <min>0.10</min>
            <max>20.0</max>
            <resolution>0.01</resolution>
          </range>
          <noise>
            <type>Gaussian</type>
            <mean>0.0</mean>
            <stddev>0.01</stddev>
          </noise>
        </ray>
        <plugin name="gazebo_ros_lidar_controller" filename="libgazebo_ros_laser.so">
          <topicName>/scan</topicName>
          <frameName>firefly/laser_lidar_link12</frameName>
          <!-- <interface:laser name="gazebo_ros_${laser_suffix}_iface" /> -->
            </plugin>
        </sensor>
      </gazebo>
    </xacro:macro>
matlabbe commented 6 years ago

Hi,

can you show the header of /firefly/scan?

$ rostopic echo /firefly/scan/header

The frame used is taken from the topic, so if there is a frame name error, it may come from that topic. For the TF tree, the /world frame may not exist. For example, in the gazebo turtlebot example of this tutorial (see the attached TF tree screenshot), only /odom -> /base_footprint is published from gazebo (when the rgbd_odometry argument is true, /odom -> /base_footprint would be published by the rgbd_odometry node). /map -> /odom would be published by the rtabmap node.


Amir-Ramezani commented 6 years ago

Thanks for your reply.

The output of that command is:

stamp: 
  secs: 6850
  nsecs: 530000000
frame_id: firefly/firefly/laser_lidar_link12
---
seq: 721
stamp: 
  secs: 6850
  nsecs: 560000000
frame_id: firefly/firefly/laser_lidar_link12
---
seq: 722
stamp: 
  secs: 6850
  nsecs: 590000000
frame_id: firefly/firefly/laser_lidar_link12
---
^Cseq: 723
stamp: 
  secs: 6850
  nsecs: 620000000
frame_id: firefly/firefly/laser_lidar_link12

So it seems the issue is coming from the Gazebo xacro file, or from the Gazebo package?
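A likely cause (an assumption, not confirmed in this thread) is that libgazebo_ros_laser.so prepends the robot's tf_prefix/namespace ("firefly") to whatever is given in <frameName>, so a frameName that already contains the prefix ends up doubled. A minimal sketch of the plugin block with the prefix removed, letting the plugin add it:

        <plugin name="gazebo_ros_lidar_controller" filename="libgazebo_ros_laser.so">
          <topicName>/scan</topicName>
          <!-- assumption: the plugin prepends the "firefly" prefix itself, so the
               scan header frame becomes firefly/laser_lidar_link12 -->
          <frameName>laser_lidar_link12</frameName>
        </plugin>

Alternatively, the static_transform_publisher commented out in the launch file above would bridge the doubled frame name, but correcting the frame published by the plugin is cleaner.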

About the examples you explained: when I set the args to:

  <arg name="rgbd_odometry"     default="true"/>
  <arg name="rtabmapviz"        default="true"/>
  <arg name="localization"      default="false"/>
  <arg name="simulation"        default="true"/>

and

      <param name="subscribe_depth"     type="bool"   value="true"/>
      <param name="subscribe_scan"      type="bool"   value="false"/>

I think you are saying it should be /map -> /odom -> firefly/base_link, but in my case /odom is not part of the same tree as /map; it is separate. Check the following tf tree:

https://drive.google.com/open?id=1m34X_vkK4c79Bha3RsVtlykg4LK_0Lui

Besides that, I still receive the following warning about base_footprint. Were you saying that /world should not be there, and that /base_footprint should be, if I use rgbd_odometry?

[ INFO] [1533415540.923936134, 9508.850000000]: Odom: quality=916, std dev=0.000032m|0.000032rad, update time=0.063503s
[ WARN] [1533415540.930069975, 9508.860000000]: Timed out waiting for transform from base_footprint to map to become available before running costmap, tf error: canTransform: source_frame base_footprint does not exist.. canTransform returned after 0.1 timeout was 0.1.
[ INFO] [1533415540.997387276, 9508.920000000]: Odom: quality=964, std dev=0.000032m|0.000032rad, update time=0.064100s
[ INFO] [1533415541.091593434, 9509.020000000]: Odom: quality=960, std dev=0.000032m|0.000032rad, update time=0.068964s
matlabbe commented 6 years ago

What does the firefly/odometry_sensor1 frame represent? If it is odometry, it should be the parent of base_link. rtabmap seems to be subscribed to an odom topic whose fixed frame is world. That is why rtabmap is publishing map -> world (or did you set odom_frame_id to world?). What is the content of the odometry topic when you run without rgbd_odometry? Is there any odometry topic?

In your case, base_link is the root transform on the robot; you don't need base_footprint. base_footprint is generally used for ground robots (see REP 105 and REP 120).

The TF tree doesn't look right in general; all frames on the robot should have base_link as their parent (directly or indirectly). Launch the original turtlebot simulation with rtabmap to compare the tf trees.
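Since the launch file reuses the turtlebot move_base configuration, the "Timed out waiting for transform from base_footprint to map" warning most likely comes from the costmaps still using base_footprint as their robot_base_frame. A hedged sketch of overriding that for this robot (it assumes the standard costmap_2d robot_base_frame parameter and the turtlebot include already used in the launch file):

  <include file="$(find turtlebot_navigation)/launch/includes/move_base.launch.xml"/>
  <!-- assumption: override the turtlebot costmap base frame so move_base stops
       waiting for a base_footprint frame that this robot does not publish -->
  <param name="move_base/global_costmap/robot_base_frame" value="firefly/base_link"/>
  <param name="move_base/local_costmap/robot_base_frame"  value="firefly/base_link"/>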

Amir-Ramezani commented 6 years ago

Yes I set the odom_frame_id to world.

$ rostopic list /firefly/odometry_sensor1/
/firefly/odometry_sensor1/odometry
/firefly/odometry_sensor1/pose
/firefly/odometry_sensor1/pose_with_covariance
/firefly/odometry_sensor1/position
/firefly/odometry_sensor1/transform

$ rostopic echo /firefly/odometry_sensor1/odometry
header: 
  seq: 38868
  stamp: 
    secs: 388
    nsecs: 710000000
  frame_id: world
child_frame_id: firefly/odometry_sensor1
pose: 
  pose: 
    position: 
      x: 0.198848420937
      y: 4.95605517839e-06
      z: 0.80125563408
    orientation: 
      x: -7.55004112378e-09
      y: -9.97785682093e-09
      z: -3.07755847567e-08
      w: 1.0
  covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
twist: 
  twist: 
    linear: 
      x: 2.17748780975e-08
      y: -3.81054668865e-08
      z: 1.57936208871e-09
    angular: 
      x: -4.76373144711e-07
      y: 9.39479712359e-08
      z: -2.33323445622e-09
  covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

The content of the odometry topic (/rtabmap/odom?):

$ rostopic echo /rtabmap/odom 
WARNING: no messages received and simulated time is active.
Is /clock being published?
matlabbe commented 6 years ago

Well, ignore the /firefly/odometry_sensor1/odometry message (the covariance is null; use TF directly instead), and just launch without rgbd_odometry and with odom_frame_id set to world. You should have a TF tree like this: /map -> /world -> /firefly/base_link.

Note however that /world -> /firefly/base_link seems to be a drift-free TF, unlike general odometry poses. Ideally you would remove the /world frame and fix your /firefly/odometry_sensor1 frame to be the only parent of /firefly/base_link, then set odom_frame_id to /firefly/odometry_sensor1. The goal would be to have /map -> /firefly/odometry_sensor1 -> /firefly/base_link.
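A minimal sketch of that target configuration, assuming the simulator can be made to publish /firefly/odometry_sensor1 -> /firefly/base_link on TF, would be the following rtabmap node parameters:

      <param name="frame_id"      type="string" value="firefly/base_link"/>
      <!-- assumption: the simulator publishes firefly/odometry_sensor1 -> firefly/base_link;
           rtabmap then completes the tree by publishing map -> firefly/odometry_sensor1 -->
      <param name="odom_frame_id" type="string" value="firefly/odometry_sensor1"/>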

Amir-Ramezani commented 6 years ago

I tried with the following setting:

      <param name="frame_id"            type="string" value="firefly/base_link"/>
      <param name="subscribe_scan"      type="bool"   value="false"/>
      <param name="odom_frame_id" type="string" value="world"/>

But, as you can see in the attached images (one set with rgbd_odometry="false" and one set with rgbd_odometry="true"), the map is not drawn correctly.

matlabbe commented 6 years ago

Features for loop closure detection are taken from just 4 meters in front of the robot, so they are extracted only from the bottom of the image, on the repetitively textured ground. Remove this parameter or set it to 0 (inf):

<param name="Kp/MaxDepth"                  type="string" value="4.0"/>

As the ground is identical everywhere, loop closures will be triggered often. To reject loop closures based on bad graph deformations, you should have valid covariances in the graph's links. As your odometry doesn't have any covariance set, you can fix it manually with these parameters under the rtabmap node:

<param name="odom_tf_angular_variance" type="double" value="0.005"/>
<param name="odom_tf_linear_variance"  type="double" value="0.0001"/>

Those values are required so that parameter RGBD/OptimizeMaxError can be used to reject loop closures based on bad graph optimization:

$ rtabmap --params | grep RGBD/OptimizeMaxError
Param: RGBD/OptimizeMaxError = "1.0"                       
   [Reject loop closures if optimization error ratio is greater 
   than this value (0=disabled). Ratio is computed as absolute 
   error over standard deviation of each link. This will help to 
   detect when a wrong loop closure is added to the graph. 
   Not compatible with "Optimizer/Robust" if enabled.]
Amir-Ramezani commented 6 years ago

Thank you very much. Now it's really good. I set "RGBD/OptimizeMaxError" to 0.1 and 0.9, but didn't notice a big difference. I also changed "Rtabmap/TimeThr" to 0, because after some time parts of the map started to disappear.

Result: (see attached images)

Just curious to know if it's possible to map the trees better? And why is the background of the trees gray?

Besides that, what does "localization"=true do? (Even with a false value, there is still a blue line and points that do the localization.)

matlabbe commented 6 years ago

The simulator may use large polygons for the trees with transparent textures. The depth camera seems to ignore the transparency of the texture and "hit" the polygons, giving valid depth values. The color is taken from the background (which is gray).

When localization is true, the map is fixed and no new nodes are added to the map. If the robot explores new areas not in the map, the map will not be extended. The loop closure detector is still running to localize the robot in the map.

Note that you can drag and drop images (JPG format) in Github's comment box directly instead of linking them off-site.

Amir-Ramezani commented 6 years ago

Thanks for the explanation and the tip (I was looking for a button to add images here!).

Can we navigate in the map we created from the environment? Actually, it seems that we can, similar to the rtabmapviz visualization. Is it possible using the topics rtabmap publishes, so that we can treat the map as a model of the environment?

matlabbe commented 6 years ago

You can indeed do navigation with gazebo and rtabmap. See turtlebot simulation example here.

Amir-Ramezani commented 6 years ago

Are you saying I should get the images from rviz?

matlabbe commented 6 years ago

Not sure I understood your last question:

Can we navigate in the map we created from the environment?

Gazebo is the simulator. Rviz is just for visualization of the data seen by the robot. RTAB-Map can provide a 2D map or a 3D map. For the 2D case, the occupancy grid map can be used by move_base to send commands to the robot.

Images would be coming from gazebo (simulated camera), not rviz.
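For the 2D navigation case mentioned above, the occupancy grid that rtabmap publishes can be fed to move_base by re-enabling the remap that is commented out in the launch file earlier in this thread (a sketch; it assumes move_base's costmaps subscribe to /map, as in the turtlebot configuration):

      <!-- inside the rtabmap node: publish the occupancy grid on /map so that
           move_base's global costmap can use it -->
      <remap from="grid_map" to="/map"/>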

Amir-Ramezani commented 6 years ago

Can we navigate in the map we created from the environment? "Images would be coming from gazebo (simulated camera), not rviz."

Images come from gazebo, and then rtabmap creates a 3D point cloud and a 3D map using the raw image and depth image. Now, consider that I plan to save the map created by rtabmap and then close gazebo (rtabmap can still be open, but gazebo is closed). How can I view the created 3D map from a third application, similar to a virtual reality headset used to dive into a 3D environment (similar to what we do in rviz while the map is being generated)?

matlabbe commented 6 years ago

The best way to do this is to open the created database (e.g., ~/.ros/rtabmap.db) with the RTAB-Map standalone application: $ rtabmap ~/.ros/rtabmap.db

You can then do File->Export 3D clouds... (see here and here for usage examples); you will see a lot of options to export the assembled point cloud, generate a mesh, and even apply a texture to the mesh. The exported cloud/mesh will be in PLY or OBJ format, which are common formats for 3D file exchange. For VR, you may open the PLY or OBJ file directly with a visualization application. If you need to convert the file, you can use MeshLab (or other 3D software) to do so. Some exported examples are here.

cheers, Mathieu

Amir-Ramezani commented 6 years ago

Thanks Mathieu. I tested the export method, but it is somewhat offline (I need to save the map, open it, export it, and then open it again in another app), so it is still not what I am looking for; maybe my example about VR was misleading. I have this idea: imagine the drone moved around the environment and created a map using RTAB-Map, and the map is in memory. After that, the drone stops at a point, and then I want to order it to go near the fountain, for example (the fountain is currently not visible to the drone). So the drone should go and search for the fountain, but I want to do it virtually, using the map that was created, without moving the drone, and then, after trying different routes, I choose the best way to go there. Typically, I receive the images from the drone's camera topic (simulated in gazebo) (a depth image), but in this case, I want to know: is it possible to get the image (depth image) from the map generated by RTAB-Map, which is in memory, instead of from gazebo?

matlabbe commented 6 years ago

"the drone should go and search for the fountain" [...] "without moving the drone", it seems what you want to do is a planner with the map created by rtabmap. You can get online the map as a point cloud on topic /rtabmap/cloud_map. You can also get an OctoMap (3D occupancy grid) when subscribing to /rtabmap/octomap_full.

rtabmap keeps the depth images created during mapping, but it doesn't simulate depth images from an arbitrary point of view. You may generate your own depth images from any point of view you want, using the published 3D maps (see above).

I know you want to do it online, but another way is to generate the mesh offline from the map and put it in gazebo; then gazebo can generate the depth image from any point of view.

Amir-Ramezani commented 6 years ago

Great, thanks. /rtabmap/cloud_map and /rtabmap/octomap_full seem to be good options, and the Gazebo idea also seems practical.