Thank you for the issue!
How performant is this?
I can hardly tell which is better / newer / stronger / more reliable :wink:.
Okay, I'd like to add some metrics and a more detailed description of my packages. (It will take time.)
Additionally, have you done any reliability testing to know how large of a map it can handle before having problems or edge cases?
Testing is still not sufficient, and the code is still being refactored. After refactoring, I'd like to test it with various data and compare it against various other projects.
(PS, you're also missing a license)
Oh. I had forgotten about it. Thank you. I added it.
Hey. Since I've finished refactoring to some extent, I tried comparing it with other OSS projects. I used the following Velodyne VLP-16 data from Tier IV: a roughly 2 km course traveled at about walking speed, taking about 40 minutes. https://data.tier4.jp/rosbag_details/?id=212
Main memory: 16 GB, clock speed: 3.5 GHz, cache memory: 8 MB, cores: 4
This is the result of lidarslam_ros2; you can reproduce it with the demo. Green: path (the grid is 25×25 cells of 10 m × 10 m each).
There was plenty of memory to spare, but CPU usage was a bit harsh when saving the map to PCD while mapping.
Here are the results of LeGO-LOAM, which drifted heavily in the z-direction from the middle onward. (There are many reported issues of z-direction drift with 16-line LiDAR in LeGO-LOAM.) https://github.com/RobustFieldAutonomyLab/LeGO-LOAM
This is the result of hdl_graph_slam. https://github.com/koide3/hdl_graph_slam Sorry if it's hard to see, but self-localization fails partway through, like this. As you can see from the issues on hdl_graph_slam, it does not work well with 16-line LiDAR.
This time I didn't test Cartographer, because my underpowered PC can't run it and it requires an IMU in 3D. https://github.com/cartographer-project/cartographer_ros
I'll update again when I have time. I'd like to make a wiki when it is completed to some extent.
Do you think that this would be suitable for a modern ~i7 mobile (laptop, NUC, etc) CPU for smaller spaces ~10,000 m2? That's about normal for an indoor mobile robot application, around 100k sqft (10,000 m2); they definitely get bigger, but that's probably 60% of use-cases right there.
This looks really impressive. Is there a localization mode / package you can use alongside a map this generates? Is this work motivated / included in an open-source project like Autoware? It seems like you have a bunch of really good projects here that, if they work this well, should get some more use / publicity / sharing with the community. It may also be worth explaining a little compare-and-contrast across all of your repos for the various SLAM / localization solutions, since there are many. Overall, I'd just like to understand your work and see how we can make it more visible / used (if it's not already).
Right now, for at least the navigation side of things (drones, subs, mobile robots, basically non-AVs), we don't have a 3D SLAM option that we're really supporting. This work seems like a great candidate if it can run real-time on that kind of CPU. We have good odometry at lower speeds, so we don't need to run updates at some crazy rate; 1 Hz or every 0.5 meters is pretty standard. Other ones on my short list are HDL SLAM / localization, which I've gotten glowing reviews about from the AV world. If you had interest in this, I'd love to discuss it more.
Thank you for this very useful information; it helps me think about my future course of action for the OSS. It is very enlightening.
Do you think that this would be suitable for a modern ~i7 mobile (laptop, NUC, etc) CPU for smaller spaces ~10,000 m2?
Yes, a laptop PC was the use case I envisioned. In fact, the PC I tested the SLAM on is a cheap, underpowered laptop I bought over 5 years ago (I'm ashamed to say it's the highest-spec of my private PCs…). I think this SLAM will work normally on a NUC... (sorry if it doesn't.)
Is there a localization mode / package you can use alongside a map this generates?
I created a package called "pcl_localization_ros2", below, which is a very simple implementation. I'd like to improve it a bit more as well. https://github.com/rsasaki0109/pcl_localization_ros2
Is this work motivated / included in an open-source project like Autoware?
This package has not been incorporated into any projects.
It seems like you have a bunch of really good projects here
It's a great honor. But unfortunately, there is no package other than lidarslam_ros2 that I can recommend to the community. There are some I could recommend with a little more improvement, but I haven't gotten around to improving them yet.
If you had interest in this, I'd love to discuss it more.
I'm very curious about that. As you can see from my repositories, I like robots and SLAM/Localization and I like to talk about them.
If you give me some instructions, I can run a high-level benchmark on a more current CPU. My laptop has a 7th-gen i7, still an older model, but used on many older robots in service. That would also let me take a peek at the CPU utilization to see if this is viable, since the SLAM can't take up all the cores and CPU; it's just one process in the larger system.
Are you running any filters (PF, KF, etc.) over that matching, or is it just raw NDT matching? That would be really nice (and required) for production use; you don't want a couple of erroneous measurements to totally mess things up outside of reality.
CC @gbiggs @JWhitleyWork what SLAM do you use in Autoware? Does this look compelling to you?
It's a great honor. But unfortunately, there is no package other than lidarslam_ros2 that I can recommend to the community.
You may want to point the readmes for the others here, then, and explain what's potentially "wrong" with them. From the readmes they look totally functional, so if there are issues, you should let people know so that they don't go down a bad rabbit hole. Can you briefly outline which ones you'd recommend with a little improvement?
As you can see from my repositories, I like robots and SLAM/Localization and I like to talk about them.
If you look at my repos / orgs, I'm also interested in similar things :wink: I think getting a rosbag and some instructions to run this would be a good first step so I can get a feel for it. If everything seems good, I think we should discuss maybe having this as an option in Navigation2 and/or Autoware. Now would be a great time to also mention any shortcomings; I fully expect that there are places this can be improved, but having that explicit knowledge is important for getting adoption. Nothing turns people off quite like a nice readme followed by hitting issues.
CC @gbiggs @JWhitleyWork what SLAM do you use in Autoware? Does this look compelling to you?
We use NDT, but LeGO-LOAM has recently surfaced as a potential alternative.
@gbiggs NDT isn't SLAM, it's a method of matching scans. Are you telling me Autoware builds maps by just buffering scans? :cringes: I've heard OK things about that too. Though if you look above, at least on this one dataset, this rocks LeGO-LOAM by a lot. Maybe we should do some benchmarks or something...
I don't know the details, unfortunately. I'm not a SLAM person. But to my knowledge, our SLAM solution is NDT.
Autoware doesn't have a SLAM solution. We do localization with NDT against a pre-recorded map. Same for both Autoware.ai and Autoware.Auto.
So how does that pre-recorded map come to be? Sounds like you guys need a SLAM solution then :smile: maybe we can see if we have some requirements that match and we can explore this process together as a joint autoware-navigation project. Anyhow, this is an aside, we can follow up in another thread for that, but I wanted to bring this to your attention.
I'm a huge fan of simple, to-the-point code, and if this does perform in line with those other options, there's a lot to say for KISS.
If you give me some instructions
I think getting a rosbag and some instructions
I wrote about how to make it work in the demo section "The larger environment" in README.md; is this explanation insufficient? I'm not used to writing these instructions, so if something is unclear, please ask me. (I've been wondering why I haven't gotten more questions about the instructions, even though my GitHub repository keeps getting more stars...)
Are you running any filters (PF, KF, etc.) over that matching, or is it just raw NDT matching?
This package does not use a Gaussian filter, but it does allow you to combine odometry and a 9-axis IMU with the scan matching. For example, here's the code for the odometry fusion.
Variables: https://github.com/rsasaki0109/lidarslam_ros2/blob/master/scanmatcher/include/scanmatcher/scanmatcher_component.h#L167-L171
Receiving and storing odometry messages: https://github.com/rsasaki0109/lidarslam_ros2/blob/master/scanmatcher/src/scanmatcher_component.cpp#L497-L505
Updating the robot's pose to account for timestamps: https://github.com/rsasaki0109/lidarslam_ros2/blob/master/scanmatcher/src/scanmatcher_component.cpp#L302-L325
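As a rough illustration of the idea only (hypothetical names, not the linked component code), the caching and timestamp compensation amount to something like:

#include <Eigen/Core>
#include <nav_msgs/msg/odometry.hpp>
#include <rclcpp/rclcpp.hpp>

nav_msgs::msg::Odometry latest_odom_;  // cached in the odometry callback

void odomCallback(const nav_msgs::msg::Odometry::SharedPtr msg)
{
  latest_odom_ = *msg;  // store the most recent odometry message
}

// Propagate the cached position forward over the gap between the odometry
// stamp and the scan stamp with a constant-velocity model (the twist's
// frame rotation is ignored here for brevity).
Eigen::Vector3d positionAtScanTime(const rclcpp::Time & scan_stamp)
{
  const double dt =
    (scan_stamp - rclcpp::Time(latest_odom_.header.stamp)).seconds();
  const auto & p = latest_odom_.pose.pose.position;
  const auto & v = latest_odom_.twist.twist.linear;
  return Eigen::Vector3d(p.x, p.y, p.z) +
         dt * Eigen::Vector3d(v.x, v.y, v.z);
}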
I just haven't written the handling for when a match fails, but I'd like to implement something simple soon. It will probably look something like this:
registration_->align(*output_cloud, mat);
// registration_ is a pcl::Registration<pcl::PointXYZI, pcl::PointXYZI>::Ptr;
// it executes the alignment with the align method.
if (!registration_->hasConverged()) { // PCL registration has a method for checking convergence.
  return; // Ignore the result of this matching and publish the value updated by odometry instead.
}
You may want to point the readmes for the others here, then, and explain what's potentially "wrong" with them.
These aren't wrong, and I think they all basically work just fine. (Of course, problems can come up when testing in various environments.)
I just wanted to say that, since they are simply and naively implemented, they have no advantage compared to other OSS projects.
For example, kalman_filter_localization is a package that fuses sensors with a Kalman filter to estimate the robot's own pose. It does not estimate the IMU sensor bias or compensate for observation delay as other OSS implementations do.
https://github.com/rsasaki0109/kalman_filter_localization
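For readers unfamiliar with the idea, a toy one-dimensional Kalman predict/correct step (illustrative only, not kalman_filter_localization's actual code) looks like:

// Toy 1-D Kalman filter: fuse a velocity input with a position observation.
struct Kf1d
{
  double x;  // state estimate (position)
  double p;  // estimate variance
};

Kf1d step(Kf1d kf, double v, double dt, double q, double z, double r)
{
  kf.x += v * dt;                      // predict with velocity v over dt
  kf.p += q;                           // inflate variance by process noise q
  const double k = kf.p / (kf.p + r);  // Kalman gain against sensor noise r
  kf.x += k * (z - kf.x);              // correct with position observation z
  kf.p *= (1.0 - k);
  return kf;
}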
Now would be a great time to also mention any shortcomings.
OK. There's a lot to improve on in this package.
・It does not close the loop when the self-position has drifted heavily. Dealing with large drifts would be too heavy to handle in a naive way, so this package doesn't address that. There is a global LiDAR descriptor called Scan Context, which is a powerful way to solve this problem.
Scan Context, a global LiDAR descriptor for place recognition and long-term localization: https://github.com/irapkaist/scancontext However, I didn't adopt it because its license is under discussion with the company. https://github.com/irapkaist/scancontext/issues/6
・IMU fusion only supports 9-axis IMUs; 6-axis IMUs are not supported. This package also does not estimate the IMU bias through tight coupling with the IMU, as Kudan SLAM and LIO-mapping do. https://github.com/hyye/lio-mapping
・Insufficient pre-processing of the point cloud. The only pre-processing is a min-max range filter (sketched after this list) and distortion correction with a 9-axis IMU. I think other necessary features include clustering to remove obstacles.
・No GPS fusion
・No gtest
・No docker
・Long build
・Insufficient visualization
That's all I can think of at the moment.
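For reference, a min-max range filter of the kind mentioned above can be written with PCL roughly like this (a sketch, not the package's exact code):

#include <cmath>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Keep only points whose distance from the sensor lies in [min_r, max_r].
pcl::PointCloud<pcl::PointXYZI>::Ptr rangeFilter(
  const pcl::PointCloud<pcl::PointXYZI>::Ptr & in, double min_r, double max_r)
{
  pcl::PointCloud<pcl::PointXYZI>::Ptr out(new pcl::PointCloud<pcl::PointXYZI>);
  for (const auto & pt : in->points) {
    const double r = std::sqrt(pt.x * pt.x + pt.y * pt.y + pt.z * pt.z);
    if (min_r <= r && r <= max_r) {
      out->points.push_back(pt);
    }
  }
  return out;
}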
@SteveMacenski Tier IV, which develops Autoware, has a subsidiary called Map IV, which I believe does the research and development of NDT SLAM. https://twitter.com/map4_jp/status/1247851869721874433 However, a SLAM map alone is not good enough for automated driving, so they may buy maps in advance from a Mobile Mapping System company that makes high-precision maps with expensive GPS/IMU/odometry/LiDAR. (Maybe Aisan Technology, etc.? https://www.aisantec.co.jp/english/) And I think the projects that use Autoware for things other than self-driving cars use an original NDT SLAM, such as Map IV's.
I'll look at the instructions next week and see what I can do / suggest. I don't want to open that can of worms at 2pm on Friday before a long weekend.
When you say "this package" you mean the PCL localization, right? This has some type of pose-graph or filter in the backend, right?
Dealing with large drifts would be too heavy to handle in a naive way, so this package doesn't address that.
Sounds like something we should talk about / maybe look at. What's "large" to you? That's also something that can be looked at / fixed if it's a big deal at some point. You mention only working with 9-axis IMUs; are you using the mag information in this? Typically I would have thought this would just pull the orientation vector from the IMU and maybe the acceleration.
On Tier IV SLAM: Got it, so they're matching not against SLAM maps but against HD annotated maps from a data company. I suppose at some point SLAM was involved, but not to the end user for that level of autonomous driving. I don't think that means we shouldn't still have one around and supported.
If nothing else, if this is something you'd like to maintain for the foreseeable future / keep developing on (e.g. if this is a "seminal" project for you and not a stepping stone to another one; in that case, maybe we talk about that next one), I don't see a reason why, at the very least, this isn't one of the options we document and show navigation2 integrations with. It never hurts to support more rather than less. But to make it the "recommended" one, I'd need to do a survey of this, LeGO, and HDL, and make sure we can find some common formats to save maps in for localization & a stable localization solution.
I don't want to open that can of worms at 2pm on Friday before a long weekend.
OK. Have a nice weekend.
When you say "this package" you mean the PCL localization, right? This has some type of pose-graph or filter in the backend, right?
"this package" is a lidarslam_ros2. It has a pose-graph optimization using g2o on the backend, but it doesn't have a Gaussian filter like EKF or PF.
What's "large" to you?
Below is a paper about hdl_graph_slam; lidarslam_ros2 also adopts a simple loop closure like its Algorithm 1. This corrects the relative pose with scan matching, but if the drift is large, the scan matching will not converge and loop closure will not be possible (because the distance between pose node p_i and pose node p_j in the algorithm is too large).
"A portable three-dimensional LIDAR-based system for long-term and wide-area people behavior measurement" https://www.researchgate.net/publication/331283709_A_portable_three-dimensional_LIDAR-based_system_for_long-term_and_wide-area_people_behavior_measurement
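Concretely, a radius-limited candidate search of this kind can be sketched as follows (hypothetical names, not the actual lidarslam_ros2 implementation):

#include <vector>
#include <Eigen/Core>

struct PoseNode
{
  int id;                    // insertion order in the graph
  Eigen::Vector3d position;  // estimated position of the node
};

// Return nodes that are old enough (not immediate neighbors) and within
// the search radius; only these are attempted for loop-closure matching,
// which is why drift larger than the radius prevents loop closure.
std::vector<int> findLoopCandidates(
  const std::vector<PoseNode> & graph, const PoseNode & current,
  double search_range, int min_id_gap)
{
  std::vector<int> candidates;
  for (const auto & node : graph) {
    const bool far_in_time = current.id - node.id > min_id_gap;
    const bool near_in_space =
      (current.position - node.position).norm() < search_range;
    if (far_in_time && near_in_space) {
      candidates.push_back(node.id);
    }
  }
  return candidates;
}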
You mention only working with 9-axis IMUs; are you using the mag information in this?
Yes, lidarslam_ros2 uses a geomagnetic sensor.
I don't think using a geomagnetic sensor is that weird.
For example, Autoware.AI's currently available self-localization, ndt_matching, only supports a 9-axis IMU. The following is part of the IMU callback function in ndt_matching.cpp. Since input->orientation of sensor_msgs/Imu is used, Autoware.AI's ndt_matching also requires a geomagnetic sensor.
double imu_roll, imu_pitch, imu_yaw;
tf::Quaternion imu_orientation;
tf::quaternionMsgToTF(input->orientation, imu_orientation);
tf::Matrix3x3(imu_orientation).getRPY(imu_roll, imu_pitch, imu_yaw);
https://gitlab.com/autowarefoundation/autoware.ai/core_perception/-/blob/master/lidar_localizer/nodes/ndt_matching/ndt_matching.cpp#L885 I'm aware of the argument that self-localization is more stable when magnetic sensors are not used. However, in that case, the coordinate conversion for LiDAR distortion correction using the IMU would be troublesome, so lidarslam_ros2 necessarily needs a geomagnetic sensor when the IMU is used. Of course, it would not be difficult to implement IMU fusion without a geomagnetic sensor in lidarslam_ros2.
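For comparison, the ROS 2 / tf2 analogue of that ROS 1 snippet would look roughly like this (a sketch, not code from lidarslam_ros2):

#include <sensor_msgs/msg/imu.hpp>
#include <tf2/LinearMath/Matrix3x3.h>
#include <tf2/LinearMath/Quaternion.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>

// Extract roll/pitch/yaw from the IMU message's orientation field.
void imuCallback(const sensor_msgs::msg::Imu::SharedPtr input)
{
  tf2::Quaternion imu_orientation;
  tf2::fromMsg(input->orientation, imu_orientation);
  double imu_roll, imu_pitch, imu_yaw;
  tf2::Matrix3x3(imu_orientation).getRPY(imu_roll, imu_pitch, imu_yaw);
  // As in ndt_matching, this only works if the driver fills in orientation,
  // which for most IMUs means an onboard filter using the magnetometer.
}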
if this is something you'd like to maintain for the foreseeable future / keep developing on
Yes, I would like to continue the development to make this package better.
Got it on the pose-graph, that's A-OK. I thought you may have been saying that this doesn't have any structure and it's just matching or something.
Your loop closure algorithm looks pretty typical, so I understand. What's the maximum distance / angle offset you have set before it fails to attempt a candidate (I assume that's a parameter somewhere)? That's what I mean by asking what's large. What's the distance you have set that is computationally stable (0.1 m, 1 m, 10 m, 50 m)?
Can you clarify what you mean by "9 DOF IMU required" because the snippet you sent
double imu_roll, imu_pitch, imu_yaw;
tf::Quaternion imu_orientation;
tf::quaternionMsgToTF(input->orientation, imu_orientation);
tf::Matrix3x3(imu_orientation).getRPY(imu_roll, imu_pitch, imu_yaw);
is only using the orientation vector from the IMU, not all 9 DOF (3x angular velocity, 3x magnetometer, 3x accelerometer). If that's what you mean, then that's totally typical. Most ROS things using IMU messages are going to look at the orientation vector only. I would state that requirement as "the IMU message must provide an orientation vector".
Yes, I would like to continue the development to make this package better.
Awesome, I will try to get some time this week to play around with it with datasets. I wish I had more 3D lidar mobile robot datasets to work with; it's hard to get confident in something when you have such limited data. I really would like to have some "official" 3D lidar SLAM and localization support in navigation2. If you are confident in this, though, I think there are a couple of things we could do.
・About loop closure
Yes. The loop closing algorithm of lidarslam_ros2 is typical, as you say.
The above experiment shows that loop closure succeeds with drifts of about 20 meters.
I haven't tried more than that, so I'll report back when I test it.
This is set with the range_of_searching_loop_clousure parameter.
・About IMU: Oh, that's right; I was mistaken. To be precise, when using an IMU, it must provide an orientation vector.
it's hard to get confident in something when you have such limited data.
OK, I'm going to look at a lot of different metrics with a lot of different data, and then I'm going to test it and summarize the results.
I see you use the PCD IO for PCL
There is a localization mode, the pcl_localization_ros2 package below. I would also like to test it together with lidarslam_ros2. pcl_localization_ros2 https://github.com/rsasaki0109/pcl_localization_ros2
I think we (I) need to make a 3D static layer
I don't know what I can do to help with that problem, but if there is something I can do, I will.
The above experiment shows that loop closure succeeds with drifts of about 20 meters.
20 meters is more than OK for mobile robotics, and probably most things. That's totally good :smile:
There is a localization mode, the pcl_localization_ros2 package below.
I thought you said that just did NDT matching? For something like navigation, we need a bit more reliability than that. It can do matching / ray casting / whatever to match things, but it needs to be filtered through an EKF or particle filter to stabilize the readings. Mobile robots work in environments with significantly less sensor coverage, so there's a lot more uncertainty that needs to be filtered out. If you had a 3D lidar only facing forward, for instance, there may be many adjacent regions that look similar; a filter helps make sure you don't jump around. That might be good for first-order testing, but probably couldn't be used in and of itself.
Nothing you need to do on the 3D static layer, that's a me problem. Just a statement. Have you noticed in your experience that there's a consistent filetype used to save the 3D maps from 3D SLAM? I see you use the PCL PCD format; is that typical of others as well? Need to know the standards to know how to load them for that layer.
Part of the problem with datasets is that they're likely going to be for the autonomous driving use-case. While that's better than nothing, it's not really representative of the area I work in. Though if you showed over a few common benchmark sets like KITTI that this outperforms or is on par with other open solutions (LeGO, HDL), that would be a really good signal either way. I happen to have a 3D lidar and a robot, but not data or a space to test in right now with COVID.
One thing I just noticed is that you use the odometry topic for something in this, rather than getting the odom->base_link transform from TF. Can you explain that a little? That's not in the best practices for ROS-based SLAM / localizers (per REP-105). The same way you use TF to get the sensor->base_link transform, you typically use TF to get the odom->base_link transform as well, so you get time-interpolated results and are drop-in compatible with other SLAM systems (and one less parameter to configure).
・About pcl_localization_ros2: I understand. So pcl_localization_ros2 isn't good enough on its own.
Nothing you need to do on the 3D static layer, that's a me problem.
OK.
Need to know the standards to know how to load them for that layer.
I think the PCD format is the standard, at least for 3D SLAM in the ROS community.
hdl_graph_slam and LeGO-LOAM also save the map only in PCD format.
source code of LeGO-LOAM:
pcl::io::savePCDFileASCII(fileDirectory+"cornerMap.pcd", *cornerMapCloudDS);
pcl::io::savePCDFileASCII(fileDirectory+"surfaceMap.pcd", *surfaceMapCloudDS);
pcl::io::savePCDFileASCII(fileDirectory+"trajectory.pcd", *cloudKeyPoses3D);
README.md of hdl_graph_slam (abbreviated):
Services
/hdl_graph_slam/save_map (hdl_graph_slam/SaveMap)
save the generated map as a PCD file.
You can save the generated map by:
rosservice call /hdl_graph_slam/save_map "resolution: 0.05
destination: '/full_path_directory/map.pcd'"
https://github.com/koide3/hdl_graph_slam
Since the PCD format is the standard format for PCL, and PCL was developed at Willow Garage alongside ROS, I think the PCD format was the easiest for SLAM developers to handle. However, I'm not sure about usage outside the ROS community. For example, Google Cartographer saves the map in pcd or ply format.
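For what it's worth, loading such a PCD map with PCL is nearly a one-liner, which may be part of why it became the de facto standard (a minimal sketch with a hypothetical helper name):

#include <stdexcept>
#include <string>
#include <pcl/io/pcd_io.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Load a PCD map from disk, as a map server / 3D static layer would.
pcl::PointCloud<pcl::PointXYZ>::Ptr loadMap(const std::string & path)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr map(new pcl::PointCloud<pcl::PointXYZ>);
  if (pcl::io::loadPCDFile<pcl::PointXYZ>(path, *map) == -1) {
    throw std::runtime_error("failed to load PCD map: " + path);
  }
  return map;
}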
One thing I just noticed is that you use the odometry topic for something in this rather than getting the odom->base_link transform from TF.
There is no particular reason that lidarslam_ros2 gets the odom->base_link transform from the odometry topic rather than from TF; I just didn't know the typical practice. If getting it from TF is typical, I'd like to improve it that way.
That's actually really good to know that PCD is standard, that makes my life much easier.
So, typically, to get odometry you have your TF buffer look up the odom->base_link transform when you require it, rather than subscribing to an odometry topic and caching it. It should help accuracy as well, since TF does time interpolation for more accurate odometry estimates at a given time.
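A minimal sketch of that lookup, assuming the usual REP-105 frame names, would be:

#include <builtin_interfaces/msg/time.hpp>
#include <geometry_msgs/msg/transform_stamped.hpp>
#include <tf2_ros/buffer.h>
#include <tf2_ros/buffer_interface.h>

// Look up the time-interpolated odom->base_link transform at the scan's
// timestamp, instead of caching odometry messages.
geometry_msgs::msg::TransformStamped getOdomPose(
  tf2_ros::Buffer & tf_buffer,
  const builtin_interfaces::msg::Time & scan_stamp)
{
  // tf2_ros::fromMsg converts the message stamp to a tf2::TimePoint.
  return tf_buffer.lookupTransform(
    "odom", "base_link", tf2_ros::fromMsg(scan_stamp));
}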
Thank you for your advice! I have modified the code to get the odom->base_link transform from TF rather than from the odometry topic. I've tested it, and it certainly improved the accuracy.
modified code https://github.com/rsasaki0109/lidarslam_ros2/blob/master/scanmatcher/src/scanmatcher_component.cpp#L296-L309 commit https://github.com/rsasaki0109/lidarslam_ros2/commit/2354189ee006261bac480e369cb34710483cbac3#diff-b4e1f266ac485ebdc879e3d13b1df45aL302-L325
Below is the result of the experiment. The robot only does sequential SLAM and does not perform loop detection. The robot takes the same path three times, and you can see that the modified version has smaller drift.
Green: odometry from TF, Yellow: odometry from an odometry topic
I'll report back when the progress is made.
range_of_searching_loop_clousure: 20.0
range_of_searching_loop_clousure: 40.0
You seem to have added the same parameter twice. Also, that parameter is misspelled. Might want to check that it's not also misspelled in the code.
Wow, that's almost a meter difference! @tfoote will be happy to hear that TF improves SLAM performance so much vs a naive approach :smile:
Oops. Thank you for pointing out my mistakes. I will fix them.
As a quick note: I haven't forgotten about this, I've just been really overloaded lately and need a couple of projects to end before I can add this to my more active queue.
Thank you for taking the time to comment despite being busy.
I ported this package to Foxy, but a package I needed wasn't available, so my progress was halted. The packages I needed were released into Foxy last week, so I'm starting to commit again.
I'm working with some students right now to implement the 3D costmap layer to make use of 3D SLAM maps for navigation, and a map server for .pcd files for 3D SLAM / VSLAM. So there's some movement on the integration side, but not any on the 3D SLAM testing / support side.
Work on this has been started by the navigation working group: benchmarking 3D SLAM systems on datasets to find their CPU / memory usage, before then testing on more representative mobile robot uses.
I am sorry; I was too busy job hunting to respond when you replied (at the time I was very anxious to find a job and had no time to spend on anything else...). Now that I'm a working person, I don't have the spare time to resolve this issue, so I am closing it.
Hi,
How performant is this? E.g., can I run this on a mobile CPU like that found in a NUC, or would you require automotive-grade processing power for autonomous driving? Any metrics here would be helpful.
Additionally, have you done any reliability testing to know how large of a map it can handle before having problems or edge cases? Any comparison you can make to HDL from their datasets (or any of your other projects)?
I've taken a brief look over some of your GitHub projects; these all look very impressive. So many localization and SLAM projects, I can hardly tell which is better / newer / stronger / more reliable :wink:. Some metrics would be great to tell whether these are good options for use. I can say the ROS community and the general open-source community currently lack high-quality 6D localization and SLAM solutions, and these all look interesting, at least from the readme pictures.
(PS, you're also missing a license)
Steve