Closed romainreignier closed 7 years ago
So, could you give some documentation or tips to help tuning the parameters?
The turtlebot repo is definitely a good start.
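For concreteness, these are the 2D knobs that usually matter for a setup like this, sketched in Lua. All values are starting points to tune, not a verified configuration for your robot, and option names have shifted between cartographer_ros releases, so check them against your version:

```lua
-- Fragment of a cartographer_ros Lua config, based on the turtlebot example.
-- Starting values to tune, not a verified setup.
options = {
  -- ...the usual boilerplate (map_builder, frames, etc.)...
  tracking_frame = "imu_link",   -- must be the IMU frame when IMU data is used
  published_frame = "odom",
  use_odometry = true,           -- wheel odometry helps between scan matches
  num_laser_scans = 1,           -- the single merged scan topic
}

TRAJECTORY_BUILDER_2D.use_imu_data = true
-- With narrow-FOV sensors, online correlative scan matching makes the local
-- matcher much more robust during turns.
TRAJECTORY_BUILDER_2D.use_online_correlative_scan_matching = true
```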
But first of all, do you think that it is possible to use SLAM only with depth cameras?
Yes, there is no theoretical reason why that should not work. And it works reasonably well for the turtlebot.
Or maybe should I add even more cameras to get a wider field of view? One looking backward?
That will help for sure. One problem we saw is that if you make a turn close to a wall, there is a window of time where you only see a plane: no floor, no ceiling. Such a point cloud constrains the pose in only one dimension instead of the three required for stable matching, and Cartographer deals poorly with this. By adding a camera facing the opposite direction you make situations like these less likely; of course, you then have to figure out the cameras' extrinsics with respect to each other.
I started looking into your data in this branch, but got stuck. Take a look at the last few commits to trace my steps.
I managed to make your new code compile and also run tim.launch, which performs well. But when running astras.launch, I get the following error:
[ERROR] [1483972947.152885997, 1482510927.417480942]: tf: Lookup would require extrapolation into the past. Requested time 1482510927.292840718 but the earliest data is at time 1482510927.436074258, when looking up transform from frame [astra_left_depth_frame] to frame [astras_center]. canTransform returned after 0.0526132 timeout was 0.05.
[ WARN] [1483972947.159028885, 1482510927.417480942]: W0109 15:42:27.000000 81173 tf_bridge.cc:51] Lookup would require extrapolation into the past. Requested time 1482510927.424206600 but the earliest data is at time 1482510927.436074258, when looking up transform from frame [base_link] to frame [imu_link]
[ERROR] [1483972947.203281902, 1482510927.467929866]: tf: Lookup would require extrapolation into the past. Requested time 1482510927.295216321 but the earliest data is at time 1482510927.436074258, when looking up transform from frame [astra_right_depth_frame] to frame [astras_center]. canTransform returned after 0.0504489 timeout was 0.05.
terminate called after throwing an instance of 'std::runtime_error'
what(): Field x does not exist
[pointcloud_to_laserscan-6] process has died [pid 81147, exit code -6, cmd /opt/ros/indigo/lib/pointcloud_to_laserscan/pointcloud_to_laserscan_node cloud_in:=/astras_cloud scan:=/astras_scan __name:=pointcloud_to_laserscan __log:=/usr/local/google/home/hrapp/.ros/log/d6207790-d679-11e6-829b-480fcf3c9b93/pointcloud_to_laserscan-6.log].
log file: /usr/local/google/home/hrapp/.ros/log/d6207790-d679-11e6-829b-480fcf3c9b93/pointcloud_to_laserscan-6*.log
There are no log files for this node in ~/.ros. I am not familiar with pointcloud_to_laserscan, but maybe you have an inkling of what the problem could be?
Hello, thanks for looking into my problem. I will try to track down the issues here. But first, what version of ROS are you using? I use Kinetic; maybe that is the cause of some of the errors.
On first thought, maybe change the duration from 0.05 to 0.2 on line 54 of LaserScanMerger.cpp: https://github.com/SirVer/cartographer_ros/blob/user/romainreignier/cartographer_ros/laser_scan_merger/LaserScanMerger.cpp#L54
Sorry for this issue, it was working on my computer.
I have also replaced the make_unique with a std::make_shared and a std::shared_ptr in the header file, because I had build issues with the version of make_unique in your repo.
With these changes, I could run the bag with astras.launch.
Let me know if the issue persists.
Anyway, here are my changes to your branch: https://github.com/romainreignier/cartographer_ros/commit/470c01ac049726bcea15662d6ace6fd6a9041abb
That didn't change anything. But I figured out that if I comment out cartographer_node in the launch file and start it from the command line at a later point in time, everything seems to work. Maybe I have to filter the bag to remove some topics. I am out of time for today; I will try to look into this again later this week.
I did not figure out why it crashes for me, but I have worked around the issue for now by manually launching Cartographer using rosrun cartographer_ros cartographer_node scan:=/astras_scan odom:=/odomcomputer/odometry -configuration_directory src/cartographer_ros/cartographer_ros/configuration_files --configuration_basename=astras_scan.lua.
I did not manage to significantly improve the quality of your map, but I found a few things that are odd and lead me to believe that you either have an error in how you transform your data or a timing issue. In the following images, the TiM laser is green and your point cloud from depth data is red and yellow.
Here, the TiM sees data closer than the depth camera (bottom):
Here, the TiM sees it farther away (right):
Here, there is no agreement about the corner.
Here, no agreement about the angle of the wall:
Thanks a lot for your valuable feedback.
In the first image, the offset is caused by a stairway at the end of the corridor. The TiM does not see the first steps because it is mounted too high. The cameras are able to see in 3D, so the steps appear (which is why we want to use 3D cameras).
For the two last images, there is a difference because the robot is rotating and there is a small lag with the depth sensor data. Did you face the same lag on the Turtlebot?
For the second image, maybe the depthimage_to_laserscan height parameter is too large and the camera sees the floor before the wall.
So, thanks a lot for your feedback. The main issue seems to be the lag in the depth data when the robot turns. If you did not face the same issue on the turtlebot, maybe merging the two sensors causes this lag.
The cameras are able to see in 3D so the steps appear (that is why we want to use 3D cameras).
Why not use 3D SLAM then? I think it would make it easier for the system to give you a reasonable map (more features).
Did you face the same lag on the Turtlebot?
@damonkohler did most work with the turtlebot. Do you remember seeing this Damon?
I am closing this for now, I do not think I can provide more input right now.
Ok, thanks for your help anyway!
This sounds like a sensor data timing issue. If your sensors are not synchronized, there's not much we can do. That's outside the scope of Cartographer and it will harm quality.
Oh, interesting. How did you notice that timing issue? I did not detect it myself.
This is based on the screenshots I posted above: especially the last one shows disagreement between the laser and the camera, which strongly indicates timing problems.
Hello,
First of all, thank you for sharing this amazing work.
I have been testing Cartographer with 2D planar lidars such as a SICK TiM or even the RPLIDAR, and it works quite well.
But now, I would like to use some RGB-D cameras because I need a 2D projection of the 3D world (not all obstacles are at the same height). So I use the Orbbec Astra Pro, which is the only one available on the market as far as I know.
The field of view of such cameras is quite narrow compared to 2D planar lidars, so I have decided to use two of them. I use the depthimage_to_laserscan node to get a 2D laser scan from each camera, then I merge the two scans into one to feed Cartographer (custom node + pointcloud_to_laserscan). I also use the odometry of the mobile base (a two-wheel mobile robot with two caster wheels) and an IMU in order to help Cartographer as much as I can.
The issue is finding the correct configuration parameters to get decent results. I have used the turtlebot settings, which are not too bad, but the result is still not quite accurate (the walls shift after a U-turn and are no longer parallel).
So, could you give some documentation or tips to help tuning the parameters? But first of all, do you think that it is possible to use SLAM only with depth cameras? Or maybe should I add even more cameras to get a wider field of view? One looking backward?
You will find my launch file (astras.launch), configuration file and custom merging node in my repository: https://github.com/romainreignier/cartographer_ros/tree/astras
And the bag I use: https://drive.google.com/open?id=0B37uoIn2q4qqU0J4MVRrV2lMSDQ
The tim/scan topic is there as a reference for the expected result. Thanks in advance for your help.