Amir-Ramezani opened this issue 6 years ago
Hi,
can you show the header of /firefly/scan
?
$ rostopic echo /firefly/scan/header
The frame used is taken from the topic, so if there is a frame name error, this may come from that topic. For the TF tree, /world frame may not exist. For example, if I look at the gazebo turtlebot example of this tutorial: only /odom -> /base_footprint is published from gazebo (when rgbd_odometry argument is true, /odom -> /base_footprint would be published by rgbd_odometry node). /map -> /odom would be published by rtabmap node.
For example in your case,
Thanks for your reply.
The output of that command is:
stamp:
  secs: 6850
  nsecs: 530000000
frame_id: firefly/firefly/laser_lidar_link12
---
seq: 721
stamp:
  secs: 6850
  nsecs: 560000000
frame_id: firefly/firefly/laser_lidar_link12
---
seq: 722
stamp:
  secs: 6850
  nsecs: 590000000
frame_id: firefly/firefly/laser_lidar_link12
---
^Cseq: 723
stamp:
  secs: 6850
  nsecs: 620000000
frame_id: firefly/firefly/laser_lidar_link12
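A doubled firefly/firefly/ prefix like this usually appears when the Gazebo laser plugin's frameName already contains the namespace that the plugin (or a tf_prefix) prepends again. A minimal sketch of the relevant xacro block, assuming gazebo_plugins conventions (the link name, sensor name, and namespace value here are illustrative assumptions, not your actual file):

```xml
<gazebo reference="laser_lidar_link1">
  <sensor type="ray" name="laser_lidar">
    <plugin name="gazebo_ros_laser" filename="libgazebo_ros_laser.so">
      <topicName>scan</topicName>
      <!-- If the namespace "firefly" is already applied to this plugin,
           a frameName of "firefly/laser_lidar_link1" gets prefixed again,
           producing "firefly/firefly/laser_lidar_link1".
           Use the bare link name instead. -->
      <frameName>laser_lidar_link1</frameName>
      <robotNamespace>firefly</robotNamespace>
    </plugin>
  </sensor>
</gazebo>
```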
So it seems the issue comes from the Gazebo xacro file, or from the Gazebo package?
About the examples you explained: when I set the args to:
<arg name="rgbd_odometry" default="true"/>
<arg name="rtabmapviz" default="true"/>
<arg name="localization" default="false"/>
<arg name="simulation" default="true"/>
and
<param name="subscribe_depth" type="bool" value="true"/>
<param name="subscribe_scan" type="bool" value="false"/>
I think you are saying it should be /map -> /odom -> firefly/base_link, but in my case /odom is not part of the same tree as /map; it is separate. Check the following TF tree:
https://drive.google.com/open?id=1m34X_vkK4c79Bha3RsVtlykg4LK_0Lui
Besides that, I still receive the following warning about base_footprint. You were saying /world should not be there, and that /base_footprint should be there if I use rgbd_odometry?
[ INFO] [1533415540.923936134, 9508.850000000]: Odom: quality=916, std dev=0.000032m|0.000032rad, update time=0.063503s
[ WARN] [1533415540.930069975, 9508.860000000]: Timed out waiting for transform from base_footprint to map to become available before running costmap, tf error: canTransform: source_frame base_footprint does not exist.. canTransform returned after 0.1 timeout was 0.1.
[ INFO] [1533415540.997387276, 9508.920000000]: Odom: quality=964, std dev=0.000032m|0.000032rad, update time=0.064100s
[ INFO] [1533415541.091593434, 9509.020000000]: Odom: quality=960, std dev=0.000032m|0.000032rad, update time=0.068964s
What does the firefly/odometry_sensor1 frame represent? If it is odometry, it should be the parent of base_link. rtabmap seems subscribed to an odom topic whose fixed frame is world. That is why rtabmap is publishing map to world (or did you set odom_frame_id to world?). What is the content of the odometry topic when you run without rgbd_odometry? Is there any odometry topic?
In your case, base_link is the root transform on the robot, so you don't need base_footprint. base_footprint is generally used for ground robots (see REP 105 and REP 120).
The TF tree doesn't look right in general: all frames on the robot should have base_link as their parent (directly or indirectly). Launch the original turtlebot simulation with rtabmap to compare the TF trees.
Yes I set the odom_frame_id to world.
$ rostopic list /firefly/odometry_sensor1/
/firefly/odometry_sensor1/odometry
/firefly/odometry_sensor1/pose
/firefly/odometry_sensor1/pose_with_covariance
/firefly/odometry_sensor1/position
/firefly/odometry_sensor1/transform
$ rostopic echo /firefly/odometry_sensor1/odometry
header:
  seq: 38868
  stamp:
    secs: 388
    nsecs: 710000000
  frame_id: world
child_frame_id: firefly/odometry_sensor1
pose:
  pose:
    position:
      x: 0.198848420937
      y: 4.95605517839e-06
      z: 0.80125563408
    orientation:
      x: -7.55004112378e-09
      y: -9.97785682093e-09
      z: -3.07755847567e-08
      w: 1.0
  covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
twist:
  twist:
    linear:
      x: 2.17748780975e-08
      y: -3.81054668865e-08
      z: 1.57936208871e-09
    angular:
      x: -4.76373144711e-07
      y: 9.39479712359e-08
      z: -2.33323445622e-09
  covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
The content of the odom topic (/rtabmap/odom?):
$ rostopic echo /rtabmap/odom
WARNING: no messages received and simulated time is active.
Is /clock being published?
Well, ignore the /firefly/odometry_sensor1/odometry message (covariance is null; use TF directly instead), just launch without rgbd_odometry and with odom_frame_id set to world.
You should have a TF tree like this: /map -> /world -> /firefly/base_link.
Note however that /world -> /firefly/base_link seems to be a drift-free TF, unlike general odometry poses. You would have to remove the /world frame and fix your /firefly/odometry_sensor1 frame to be the only parent of /firefly/base_link, then set odom_frame_id to /firefly/odometry_sensor1. The goal would be to have /map -> /firefly/odometry_sensor1 -> /firefly/base_link.
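Under that recommended tree, the rtabmap node configuration would look roughly like this (a sketch assuming the same structure as demo_turtlebot_mapping.launch; the node name and surrounding group are assumptions):

```xml
<group ns="rtabmap">
  <node name="rtabmap" pkg="rtabmap_ros" type="rtabmap" output="screen">
    <!-- Root frame of the robot -->
    <param name="frame_id" type="string" value="firefly/base_link"/>
    <!-- Odometry frame published by the simulator; rtabmap would then
         publish /map -> /firefly/odometry_sensor1 -->
    <param name="odom_frame_id" type="string" value="firefly/odometry_sensor1"/>
  </node>
</group>
```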
I tried with the following setting:
<param name="frame_id" type="string" value="firefly/base_link"/>
<param name="subscribe_scan" type="bool" value="false"/>
<param name="odom_frame_id" type="string" value="world"/>
But, as you can see in the following images, the map is not drawn correctly:
with rgbd_odometry="false"
:
with rgbd_odometry="true"
:
Features for loop closure detection are taken just 4 meters in front of the robot, so they are taken only on the bottom of the image on the repetitive textured ground. Remove this parameter or set it to 0 (inf):
<param name="Kp/MaxDepth" type="string" value="4.0"/>
As the ground is identical everywhere, loop closures will be triggered often. To reject loop closures based on bad graph deformations, you should have valid covariances in the graph's links. As your odometry doesn't have any covariance set, you can fix it manually with those parameters under rtabmap
node:
<param name="odom_tf_angular_variance" type="double" value="0.005"/>
<param name="odom_tf_linear_variance" type="double" value="0.0001"/>
Those values are required so that parameter RGBD/OptimizeMaxError
can be used to reject loop closures based on bad graph optimization:
$ rtabmap --params | grep RGBD/OptimizeMaxError
Param: RGBD/OptimizeMaxError = "1.0"
[Reject loop closures if optimization error ratio is greater
than this value (0=disabled). Ratio is computed as absolute
error over standard deviation of each link. This will help to
detect when a wrong loop closure is added to the graph.
Not compatible with "Optimizer/Robust" if enabled.]
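Putting these suggestions together, the parameter block under the rtabmap node could look like this (a sketch; RGBD/OptimizeMaxError is shown at its default value and may need tuning):

```xml
<!-- Use features at any depth for loop closure detection (0 = no limit) -->
<param name="Kp/MaxDepth" type="string" value="0"/>
<!-- Manually set covariances for TF-based odometry links -->
<param name="odom_tf_angular_variance" type="double" value="0.005"/>
<param name="odom_tf_linear_variance" type="double" value="0.0001"/>
<!-- Reject loop closures that cause large graph deformations -->
<param name="RGBD/OptimizeMaxError" type="string" value="1.0"/>
```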
Thank you very much. Now it's really good. I set RGBD/OptimizeMaxError to 0.1 and 0.9, but didn't notice a big difference. I also changed Rtabmap/TimeThr to 0, because after some time parts of the map started to disappear.
Result:
Just curious to know: is it possible to map the trees better? And why is the background of the trees gray?
Besides that, what does localization=true do? (Even with a false value, there is still a blue line and points doing the localization.)
The simulator may use large polygons for the trees with transparent textures. The depth camera seems to ignore the transparency of the texture and "hit" the polygons, giving valid depth values. The color is taken from the background (which is gray).
When localization is true, the map is fixed: no new nodes are added to the map. If the robot explores new areas not in the map, the map will not be extended. The loop closure detector is still running to localize the robot in the map.
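In the rtabmap_ros demo launch files, the localization argument generally maps to these underlying parameters (a sketch following the demo_turtlebot_mapping.launch pattern; the exact if/unless wiring in your copy may differ):

```xml
<!-- Localization mode: stop adding new nodes to the map and
     load the whole map into working memory at start -->
<param if="$(arg localization)" name="Mem/IncrementalMemory" type="string" value="false"/>
<param unless="$(arg localization)" name="Mem/IncrementalMemory" type="string" value="true"/>
<param name="Mem/InitWMWithAllNodes" type="string" value="$(arg localization)"/>
```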
Note that you can drag and drop images (JPG format) in Github's comment box directly instead of linking them off-site.
Thanks for the explanation and the point (was looking for a button to add images here!).
Can we navigate in the map we created of the environment? Actually, it seems that we can, similar to the rtabmapviz visualization; is it possible using the topics rtabmap publishes? Then we could use the map as a model of the environment.
You can indeed do navigation with gazebo and rtabmap. See turtlebot simulation example here.
You are saying I should get the images from rviz?
Not sure I understood your last question:
Can we navigate in the map we created from the environment?
Gazebo is the simulator. Rviz is just for visualization of the data seen by the robot. RTAB-Map can provide a 2D map or a 3D map. For the 2D case, the occupancy grid map can be used by move_base to send commands to the robot.
Images would be coming from gazebo (simulated camera), not rviz.
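For the 2D navigation case, the wiring between move_base and rtabmap's occupancy grid could be sketched like this (the topic name /rtabmap/grid_map follows rtabmap_ros conventions, but the remap and costmap setup here are assumptions, not a complete configuration):

```xml
<node pkg="move_base" type="move_base" name="move_base" output="screen">
  <!-- Use rtabmap's occupancy grid as the map for the global costmap -->
  <remap from="map" to="/rtabmap/grid_map"/>
</node>
```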
Can we navigate in the map we created from the environment? "Images would be coming from gazebo (simulated camera), not rviz."
Images come from gazebo, and then rtabmap creates a 3D point cloud and a 3D map using the raw image and depth image. Now, suppose I plan to save the map created by rtabmap, and I close gazebo (rtabmap can still be open). How can I start viewing the created 3D map from a third application, similar to a virtual reality headset used to dive into a 3D environment (similar to what we do in rviz while the map is being generated)?
The best way to do this is to open the created database (e.g., ~/.ros/rtabmap.db) with RTAB-Map standalone application:
$ rtabmap ~/.ros/rtabmap.db
You can then do File->Export 3D clouds... (see here and here for examples of usage); you will see a lot of options to export the assembled point cloud, generate a mesh, and even apply a texture on the mesh. The exported cloud/mesh will be in PLY or OBJ format, which are quite common formats for 3D file exchange. For VR, you may open the PLY or OBJ file directly with a visualization application. If you need to convert the file, you can use MeshLab (or other 3D software) to do so. Some exported examples here.
cheers, Mathieu
Thanks Mathieu, I tested the export method, but it is kind of offline (I need to save the map, open it, export it, and open it again in another app; still not what I am looking for). Maybe my example about VR was misleading. I have this idea: imagine the drone moved around the environment and created a map using RTAB-Map, and the map is in memory. The drone then stops at a point, and I want to order it to go near the fountain, for example (the fountain is currently not visible to the drone). The drone should search for the fountain virtually, using the map that was created, without actually moving; after trying different routes, I choose the best way to go there. Typically, I receive the images from the drone camera topic simulated in gazebo (a depth image), but in this case, is it possible to get the (depth) image from the map generated by RTAB-Map, which is in memory, instead of from gazebo?
"the drone should go and search for the fountain" [...] "without moving the drone", it seems what you want to do is a planner with the map created by rtabmap. You can get online the map as a point cloud on topic /rtabmap/cloud_map
. You can also get an OctoMap (3D occupancy grid) when subscribing to /rtabmap/octomap_full
.
rtabmap keeps the depth images created during mapping, but it doesn't simulate depth images from an arbitrary point of view. You may generate your own depth images from the published 3D maps (see above), from any point of view you want.
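rtabmap itself doesn't offer this, but rendering such a depth image from the /rtabmap/cloud_map point cloud amounts to a pinhole z-buffer projection. A minimal sketch with numpy (the function name, the camera convention, and all parameters here are assumptions for illustration, not rtabmap API):

```python
import numpy as np

def depth_image_from_cloud(points, T_cam_world, K, width, height):
    """Render a depth image (z-buffer) from a point cloud.

    points: (N, 3) array of XYZ points in the world frame.
    T_cam_world: 4x4 transform taking world coordinates into the camera
                 frame (camera looks along +Z, x right, y down).
    K: 3x3 pinhole intrinsics matrix.
    Returns a (height, width) float array, 0 where no point projects.
    """
    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    pts_cam = (T_cam_world @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Pinhole projection to pixel coordinates.
    uv = (K @ pts_cam.T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    z = pts_cam[:, 2]

    # Keep projections that fall inside the image.
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[ok], v[ok], z[ok]

    # Z-buffer: write far points first so near points overwrite them.
    depth = np.zeros((height, width))
    order = np.argsort(-z)
    depth[v[order], u[order]] = z[order]
    return depth
```

For example, a point at (0, 0, 2) in front of a camera at the origin with principal point (32, 24) lands at pixel (32, 24) with depth 2.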
I know you want to do it online, but another way is to generate the mesh offline from the map and put it in gazebo; gazebo will then generate the depth image from any point of view.
Great, thanks. /rtabmap/cloud_map and /rtabmap/octomap_full seem to be good options; the Gazebo idea also seems practicable.
Thanks for your effort.
However, when I run the package I face the following error.
The error happens when I am running:
roslaunch rtabmap_ros demo_firefly_mapping.launch
demo_firefly_mapping.launch is a copy of demo_turtlebot_mapping.launch, I just changed some arguments.
It is strange because I have no source_frame called firefly/firefly/laser_lidar_link12; the frame should be firefly/laser_lidar_link12.
demo_firefly_mapping.launch
Please find the output of rosrun tf view_frames at the following link: https://drive.google.com/file/d/1lbeyL7nKQlukbo5ngEoBwpC7PpODPePb/view
The xacro macro I used for generating the lidar sensor is: