introlab / rtabmap_ros

RTAB-Map's ROS package.
http://wiki.ros.org/rtabmap_ros
BSD 3-Clause "New" or "Revised" License

Lidar-camera calibration #894

Open | cy-2022 opened this issue 1 year ago

cy-2022 commented 1 year ago

Hi Mathieu @matlabbe, do you have any tips for calibrating a single-line (2D) lidar with a camera? Are there tools or packages you'd recommend? After a brief search, I'd probably go for this one: https://github.com/MegviiRobot/CamLaserCalibraTool. Do you know of it? Thank you very much for your kind help and support all along.

Cheers, Yao

matlabbe commented 1 year ago

Interesting link! I don't have references for doing that. In my experience, I did it "by eye", using rough measurements from a tape measure or a CAD model (then adjusting the TF in RViz) on this robot. I didn't need a lot of accuracy between the camera and the lidar, because the camera was only used to guess the transform when a visual loop closure was found; the corresponding laser scans were then refined by ICP. The 2D map created from the laser scans was still perfectly crisp, but not the 3D point cloud generated from the lidar poses (I didn't really care about the 3D point cloud for that kind of setup). If you are going to generate a 3D point-cloud reconstruction, you do indeed need a good extrinsic calibration between the lidar and the camera.
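For reference, a hand-tuned extrinsic like the one described can be published as a static TF and then tweaked while watching RViz. Below is a minimal sketch using tf2_ros in ROS 1 (Python); the frame names `base_laser` and `camera_link` and the offset values are placeholders, not values from this thread:

```python
#!/usr/bin/env python
# Minimal sketch: publish a hand-measured lidar->camera extrinsic as a static TF.
# Frame names and offsets below are placeholders to replace with your own values.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped
from tf.transformations import quaternion_from_euler

if __name__ == "__main__":
    rospy.init_node("lidar_camera_static_tf")

    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = "base_laser"   # parent: 2D lidar frame
    t.child_frame_id = "camera_link"   # child: camera frame

    # Rough tape-measure / CAD values (meters); tweak while watching RViz.
    t.transform.translation.x = 0.05
    t.transform.translation.y = 0.00
    t.transform.translation.z = 0.80   # e.g. camera mounted above the lidar

    qx, qy, qz, qw = quaternion_from_euler(0.0, 0.0, 0.0)  # roll, pitch, yaw
    t.transform.rotation.x = qx
    t.transform.rotation.y = qy
    t.transform.rotation.z = qz
    t.transform.rotation.w = qw

    broadcaster = tf2_ros.StaticTransformBroadcaster()
    broadcaster.sendTransform(t)
    rospy.spin()
```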

cy-2022 commented 1 year ago

Thanks a lot for your reply, Mathieu. "Do I need a 3D point-cloud reconstruction?" - Well, to answer that, I actually have a long story to tell. Originally, we focused on pure-visual SLAM solutions and soon realized that their well-known limitations were a problem for our application (this could easily turn into another long story; let's just say we work on autonomous indoor and outdoor robots). So we turned to fusing visual sensors with an IMU, which proved quite sensitive at times, and the camera-IMU calibration was also a headache. What's worse, we always have to take the price-performance trade-off into account, meaning we cannot go for promising but expensive devices like the ZED 2, although we have run experiments with it. Therefore, we've also been looking for alternatives. If you remember, I mentioned in a previous issue that we had to use OpenCV 3.4.3 for the Indemind SDK (http://www.indemind.cn/en/).

Anyway, in parallel with these attempts, I thought we could try a combined lidar-visual SLAM solution, considering our robot products already have a 2D lidar and cameras. That is what led me to RTAB-Map (well, partially; it also seems quite popular among educational robots in China - let me know if you'd like to hear more about that).

What I want to get from RTAB-Map is more robust mapping and localization. On our current robots, which use Cartographer for mapping and localization, environments such as server rooms with rows of identical server cabinets can be quite difficult. To work around the localization issue, I had to hack up a kind of combined visual-lidar solution: I used the camera for feature extraction and matching, and linked each visual keyframe to the robot pose estimated by the lidar SLAM. Obviously, RTAB-Map does far more than that. Moreover, generating good grid maps is also quite important, since we need them for navigation. Eventually I think I'd also want the 3D point cloud as another map layer. Sorry for being greedy; I kind of want to exploit everything a visual sensor can offer. With this map layer, obstacle avoidance becomes possible, and I'd probably add some semantic information to the point cloud as well.
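For illustration only (this is not Yao's actual code), that kind of keyframe-to-lidar-pose linking could look roughly like the following sketch, using OpenCV ORB features; the class name, thresholds, and pose representation are all hypothetical:

```python
# Illustrative sketch: store visual keyframes tagged with lidar-SLAM poses,
# then relocalize by matching features against the stored keyframes.
import cv2

class KeyframeDB:
    def __init__(self):
        self.orb = cv2.ORB_create(1000)
        self.matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        self.entries = []  # list of (descriptors, lidar_pose)

    def add_keyframe(self, image, lidar_pose):
        """Store ORB descriptors together with the pose from the 2D lidar SLAM."""
        _, desc = self.orb.detectAndCompute(image, None)
        if desc is not None:
            self.entries.append((desc, lidar_pose))

    def relocalize(self, image, min_matches=30):
        """Return the stored lidar pose of the best-matching keyframe, if any."""
        _, query = self.orb.detectAndCompute(image, None)
        if query is None:
            return None
        best_pose, best_count = None, 0
        for desc, pose in self.entries:
            matches = self.matcher.match(query, desc)
            good = [m for m in matches if m.distance < 50]  # Hamming threshold
            if len(good) > max(best_count, min_matches):
                best_pose, best_count = pose, len(good)
        return best_pose
```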

I hope I'm not oversharing, and thanks for reading. I'd really appreciate it if you have comments and suggestions. Thank you.

And by the way, we just tried the calibration tool I mentioned earlier, https://github.com/MegviiRobot/CamLaserCalibraTool. Looks great. The only caveat is that the camera should not be mounted too high above the lidar, as that makes the calibration really difficult. We haven't gotten around to checking the mapping results in RTAB-Map with these calibration results yet. Once I have some, I'll post them.

(upper: initial values we provided; lower: calibrated values) [image]
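For anyone wanting to feed such a calibration result back into ROS: a tool that outputs a 4x4 homogeneous lidar-to-camera matrix can be converted to the translation-plus-quaternion form that static TF publishers expect. A minimal sketch follows; the matrix values are a made-up example, not the result from this thread:

```python
# Sketch: convert a 4x4 lidar->camera extrinsic (e.g. from a calibration tool)
# into the translation + quaternion expected by ROS static TF publishers.
# The matrix below is a placeholder example, not this thread's calibration.
import numpy as np
from tf.transformations import quaternion_from_matrix

T_laser_camera = np.array([
    [ 0.0,  0.0, 1.0, 0.05],
    [-1.0,  0.0, 0.0, 0.00],
    [ 0.0, -1.0, 0.0, 0.80],
    [ 0.0,  0.0, 0.0, 1.00],
])

x, y, z = T_laser_camera[0:3, 3]           # translation (meters)
qx, qy, qz, qw = quaternion_from_matrix(T_laser_camera)

# Usable with:
#   rosrun tf2_ros static_transform_publisher x y z qx qy qz qw base_laser camera_link
print("translation:", x, y, z)
print("quaternion :", qx, qy, qz, qw)
```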

matlabbe commented 1 year ago

Thanks for sharing your experience and your lidar-camera calibration. There are indeed advantages and inconveniences to going vision-only or lidar-only; RTAB-Map tries to get the best of both worlds, and I hope you will find some success with it. Note that thanks to the database format, you can share your maps here and get feedback on issues or on better parameters.

cheers, Mathieu

cy-2022 commented 1 year ago

Hi Mathieu, thanks a lot for your reply. I'd like to share a database from a mapping run in the circular-corridor scenario I showed in a previous issue. However, the maximum allowed file size here is 25 MB, while my zip file is about 80 MB. How should I proceed? Thanks a lot~~

As for the lidar-camera setup used in this mapping run, it is the one already installed on our robot product: the camera is about 80 cm above the lidar. This arrangement turns out to pose challenges for the lidar-camera calibration. To ensure the lidar can scan the calibration board while the board stays within the camera's FoV, the board has to be held far away (about 2 meters). Our calibration board is 80 x 80 cm, which can simply be too small at 2 meters from the camera. Consequently, the calibration results don't seem to be as accurate as for the setup shown above.
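A quick back-of-the-envelope check illustrates the problem: an 80 x 80 cm board at roughly 2 m subtends only about 23 degrees, and its apparent size in pixels shrinks accordingly. A small sketch of that arithmetic (the focal length is an assumed example value, not from this thread):

```python
# Back-of-envelope: apparent size of an 80x80 cm board held ~2 m away.
import math

board = 0.80   # board side length (m)
dist = 2.0     # distance from camera (m)

angle = 2.0 * math.degrees(math.atan((board / 2.0) / dist))
print("angular size: %.1f deg" % angle)   # ~22.6 deg

# With an assumed focal length of ~600 px (example value), a pinhole model
# gives the board a width of roughly f * board / dist pixels in the image:
f = 600.0
print("board width : ~%.0f px" % (f * board / dist))  # ~240 px
```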

Cheers, Yao

matlabbe commented 1 year ago

A public Google Drive or Dropbox link can do for sharing large files. Some people have used other free online large-file-sharing services (like wetransfer.com).

Interesting info about this kind of calibration FoV issue, thanks for sharing!

cheers, Mathieu

cy-2022 commented 1 year ago

Thanks a lot, Mathieu. I've sent you the database via wetransfer.com to your email address (which I took from your paper "RTAB-Map as an Open-Source Lidar and Visual SLAM Library for Large-Scale and Long-Term Online Operation"). Please let me know if it works. Thank you very much! Today or tomorrow we'll try a mapping session in a server room, which, as I said, could be challenging for 2D lidar-only SLAM. We'll see what the results look like.

Cheers, Yao

matlabbe commented 1 year ago

Sorry Yao, I was quite busy the past two weeks (with the new Windows release) and missed the wetransfer. I see it in my email, but the link is not available anymore. If you still want me to check any new database you create, send me another link.

cheers, Mathieu

cy-2022 commented 1 year ago

No problem at all, Mathieu. I figured you must have been busy. And sure, I'd like to share the databases with you; the new link is already on its way. This time there are three scenarios: the circular corridor, a small server room, and an underground garage (quite a big one, but it was really cold that day, so we only managed to map part of it). Thank you very much!

Cheers, Yao

matlabbe commented 1 year ago

Interesting databases! Here are some comments:

Thanks for sharing, Mathieu

cy-2022 commented 1 year ago

Hi Mathieu,

Thanks a lot for the comments and suggestions; they are very important and valuable, and we'll incorporate them into our further investigations. I was actually about to ask you about the parameter "RGBD/OptimizeMaxError" (we observed a few issues when checking the relocalization performance at start-up) and about how to make better use of the database, e.g. what info can be extracted from it. Thank you for already sharing some tips with us. As I said, I've been summarizing the results we've collected so far, and I also want to compile a list of questions we've had, especially concerning localization. I'll send this summary to you by Monday. Looking forward to further discussion with you. Thanks a lot!
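For context, RGBD/OptimizeMaxError is the RTAB-Map parameter used to reject loop closures whose acceptance would increase the graph-optimization error beyond a threshold. One hedged way to experiment with it at runtime in ROS 1, assuming the node is named "rtabmap" and exposes the standard update_parameters service:

```python
#!/usr/bin/env python
# Sketch: change RGBD/OptimizeMaxError on a running rtabmap node. Assumes the
# node is named "rtabmap" and offers the update_parameters service; the value
# 3.0 is an example, not a recommendation from this thread.
import rospy
from std_srvs.srv import Empty

rospy.init_node("tune_optimize_max_error")

# RTAB-Map parameters are stored as strings on the parameter server.
rospy.set_param("/rtabmap/RGBD/OptimizeMaxError", "3.0")

# Ask rtabmap to re-read its parameters from the server.
rospy.wait_for_service("/rtabmap/update_parameters")
update = rospy.ServiceProxy("/rtabmap/update_parameters", Empty)
update()
```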

Have a great weekend.

Cheers, Yao