johannes-graeter / limo

Lidar-Monocular Visual Odometry
GNU General Public License v3.0

About the principle #4

Closed YiChenCityU closed 6 years ago

YiChenCityU commented 6 years ago

Hi, this is great work. Do you have a paper or other materials introducing the principles behind the code? How do you fuse the lidar and visual data in SLAM? Thanks very much.

johannes-graeter commented 6 years ago

Thanks for your interest in my work! The paper is currently under review at IROS. By the end of next week I will know whether it has been accepted, and then I can tell you more. The basics are simple: the depth of each measurement in the camera image is estimated by fitting a local plane through the surrounding lidar points. To make this work we need some heuristics to segment the foreground, and features on the ground need special treatment. In the bundle adjustment backend we use both "monocular" landmarks, which we get from traditional mono SLAM, and "depth-enhanced" landmarks, from which we get the scale.
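To make the plane-fitting idea concrete, here is a minimal sketch (an illustration under assumed conventions, not the actual limo code): collect the lidar points whose image projections lie closest to the feature, fit a plane to them, and intersect the feature's viewing ray with that plane. The function name and interface are hypothetical.

```cpp
#include <cmath>
#include <vector>
#include <Eigen/Dense>

// Sketch only: estimate the depth of an image feature by fitting a local
// plane through nearby lidar points (assumed already transformed into the
// camera frame) and intersecting the feature's viewing ray with the plane.
double estimateDepth(const Eigen::Vector3d& viewing_ray,               // normalized ray through the feature
                     const std::vector<Eigen::Vector3d>& neighbors) {  // >= 3 nearby lidar points
  // Least-squares plane fit: centroid plus the direction of least variance.
  Eigen::Vector3d centroid = Eigen::Vector3d::Zero();
  for (const auto& p : neighbors) centroid += p;
  centroid /= static_cast<double>(neighbors.size());

  Eigen::MatrixXd centered(neighbors.size(), 3);
  for (size_t i = 0; i < neighbors.size(); ++i)
    centered.row(i) = (neighbors[i] - centroid).transpose();

  Eigen::JacobiSVD<Eigen::MatrixXd> svd(centered, Eigen::ComputeThinV);
  const Eigen::Vector3d normal = svd.matrixV().col(2);  // plane normal

  // Intersect the ray x = t * viewing_ray with the plane
  // normal . (x - centroid) = 0.
  const double denom = normal.dot(viewing_ray);
  if (std::abs(denom) < 1e-6) return -1.0;  // ray nearly parallel to plane: reject
  return normal.dot(centroid) / denom;      // caller should reject non-positive depths
}
```

And a hypothetical Ceres cost functor showing how a "depth-enhanced" landmark could pin down the scale in the bundle adjustment backend; this is a sketch of the idea, not limo's actual cost function, and standard mono reprojection residuals would be added alongside it:

```cpp
#include <ceres/ceres.h>
#include <ceres/rotation.h>

// Hypothetical residual: penalize the difference between a landmark's depth
// in the camera frame and the depth measured via the lidar plane fit.
struct DepthResidual {
  explicit DepthResidual(double measured_depth) : measured_depth_(measured_depth) {}

  template <typename T>
  bool operator()(const T* const pose,      // [0..2] angle-axis rotation, [3..5] translation (world -> camera)
                  const T* const landmark,  // 3D landmark in world coordinates
                  T* residual) const {
    T p_cam[3];
    ceres::AngleAxisRotatePoint(pose, landmark, p_cam);
    p_cam[2] += pose[5];  // only the z component matters for the depth
    residual[0] = p_cam[2] - T(measured_depth_);
    return true;
  }

  static ceres::CostFunction* Create(double measured_depth) {
    return new ceres::AutoDiffCostFunction<DepthResidual, 1, 6, 3>(
        new DepthResidual(measured_depth));
  }

  double measured_depth_;
};
```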

salahkhan94 commented 6 years ago

This looks like it has a pose-graph generator (front-end) and a bundle adjustment optimizer (back-end), but you refer to it as Visual Odometry as opposed to SLAM. 1) Is that because of the lack of loop-closing capability? 2) Is the LIDAR only used to get the 3D coordinates of the monocular features? In that case, is this Visual SLAM? 3) Does it only take monocular camera images and a LIDAR point cloud as input, or can it optionally also take external odometry information to improve the SLAM?

Also, do share your paper once it gets accepted. Great work by the way :)

YiChenCityU commented 6 years ago

Great. Looking forward to your paper.

johannes-graeter commented 6 years ago

Hi salahkhan94,

  1. In my opinion, the terms SLAM, Visual Odometry, Structure from Motion, and Visual Mapping and Localization are not used very distinctly or consistently in the literature. For me, Visual Odometry is an application and SLAM is a method to solve it with. You are right that "Visual Odometry" implies that loop closure is not relevant, since you only keep a local map (when you drive in circles, your routing is usually bad ;) ).
  2. Yep, LIDAR is only used for the depth. In contrast to Zhang et al., who use intensive LIDAR methods such as ICP, I wanted to see how far I can get by only applying the depth and otherwise using mono techniques. With "Visual Odometry" I wanted to emphasize that I mostly use mono techniques and only use LIDAR for the depth, but I must admit that it is a bit misleading (and some people see LIDAR as a visual sensor for infrared light with a projector, so I found it OKish...).
  3. I found that most systems out there are end-to-end: you put an image in and get a trajectory out. I wanted something more modular, so I designed the system to "refine" an input pose-graph. In the original version, the input was just a tf2 message, so you can put anything in there, even mono estimates without scale (e.g. from https://github.com/johannes-graeter/momo), since I do a pose-only adjustment in a first step. In this version I skipped that for clarity, but since so many people are interested in the library, I will update it soon. Long story short: yes, you can put in wheel odometry if you want to (see the sketch below for one possible way to feed it in) :)
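For completeness, here is a minimal ROS1 sketch of how such an external input could be fed in: it republishes wheel odometry as a tf2 transform that a pose-graph refiner listening to tf2 could consume as a prior. This is an illustration, not part of limo; the node, topic, and frame names are assumptions.

```cpp
#include <geometry_msgs/TransformStamped.h>
#include <nav_msgs/Odometry.h>
#include <ros/ros.h>
#include <tf2_ros/transform_broadcaster.h>

// Sketch only: subscribe to a (hypothetical) wheel odometry topic and
// rebroadcast each pose as a tf2 transform.
class OdomToTf {
 public:
  explicit OdomToTf(ros::NodeHandle& nh)
      : sub_(nh.subscribe("/wheel_odom", 10, &OdomToTf::callback, this)) {}

 private:
  void callback(const nav_msgs::Odometry::ConstPtr& msg) {
    geometry_msgs::TransformStamped tf_msg;
    tf_msg.header = msg->header;  // stamp and parent frame (e.g. "odom")
    tf_msg.child_frame_id =
        msg->child_frame_id.empty() ? "base_link" : msg->child_frame_id;
    tf_msg.transform.translation.x = msg->pose.pose.position.x;
    tf_msg.transform.translation.y = msg->pose.pose.position.y;
    tf_msg.transform.translation.z = msg->pose.pose.position.z;
    tf_msg.transform.rotation = msg->pose.pose.orientation;
    broadcaster_.sendTransform(tf_msg);
  }

  ros::Subscriber sub_;
  tf2_ros::TransformBroadcaster broadcaster_;
};

int main(int argc, char** argv) {
  ros::init(argc, argv, "odom_to_tf");
  ros::NodeHandle nh;
  OdomToTf node(nh);
  ros::spin();
  return 0;
}
```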

thanks for the interest in my work,

Johannes

johannes-graeter commented 6 years ago

The paper was accepted at IROS :) I submitted it to arXiv; it should be up by tomorrow.

salahkhan94 commented 6 years ago

Great!

johannes-graeter commented 6 years ago

It is now linked on the KITTI benchmark.