YiChenCityU closed this issue 6 years ago.
Thanks for your interest in my work! The paper is currently under revision at IROS. By the end of next week I will know whether it has been accepted, and then I can tell you more. The basics are simple: the depth of each measurement in the camera image is estimated by fitting a local plane through the surrounding lidar points. To make this work we need some heuristics to segment the foreground, and features on the ground need special treatment. In the bundle adjustment backend we use both "monocular" landmarks, which we get from traditional mono SLAM, and "depth enhanced" landmarks, from which we recover the scale.
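For intuition, here is a minimal sketch of the plane-fit depth estimation described above. This is not the repository's actual code: the function name, the k-nearest-neighbor selection, and the assumption that lidar points are already in the camera frame are all illustrative choices, and the paper's foreground segmentation and ground-feature handling are omitted.

```python
import numpy as np

def estimate_feature_depth(feature_uv, lidar_pts_cam, K, k_neighbors=5):
    """Estimate the depth of a 2D image feature by fitting a plane
    through the lidar points that project closest to it.

    feature_uv    : (2,) pixel coordinates of the tracked feature
    lidar_pts_cam : (N, 3) lidar points already transformed into the
                    camera frame (extrinsic calibration applied)
    K             : (3, 3) camera intrinsic matrix
    """
    # Project all lidar points into the image plane (pinhole model).
    proj = (K @ lidar_pts_cam.T).T          # (N, 3)
    uv = proj[:, :2] / proj[:, 2:3]

    # Pick the k lidar points whose projections fall nearest the feature.
    dists = np.linalg.norm(uv - feature_uv, axis=1)
    neighbors = lidar_pts_cam[np.argsort(dists)[:k_neighbors]]

    # Fit a local plane n . x = d through the neighbors via SVD;
    # the normal is the right singular vector with the smallest
    # singular value.
    centroid = neighbors.mean(axis=0)
    _, _, vt = np.linalg.svd(neighbors - centroid)
    n = vt[-1]
    d = n @ centroid

    # Intersect the feature's viewing ray with the plane.  With
    # ray = K^-1 [u, v, 1]^T the ray has unit z, so the intersection
    # parameter t = d / (n . ray) is directly the metric depth.
    ray = np.linalg.inv(K) @ np.array([*feature_uv, 1.0])
    return d / (n @ ray)
```

In the real system this local plane fit would be wrapped in the heuristics mentioned above (rejecting neighbors that straddle a depth discontinuity, treating ground points separately), since a naive nearest-neighbor fit smears depth across foreground/background boundaries.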
This looks like it has a pose-graph generator (front end) and a BA optimizer (back end), but you refer to it as visual odometry rather than SLAM. 1) Is that because it lacks loop-closing capability? 2) Is the LIDAR only used to obtain the 3D coordinates of the monocular features? In that case, is this visual SLAM? 3) Does it take only monocular camera images and the LIDAR point cloud as input, or can it optionally also take external odometry information to improve the SLAM result?
Also, do share your paper once it gets accepted. Great work by the way :)
Great. Looking forward to your paper.
Hi salahkan,
thanks for the interest in my work,
Johannes
The paper was accepted at IROS :) I submitted it to arXiv; it should be up by tomorrow.
Great!
It was linked on the KITTI benchmark.
Hi, this is great work. Do you have a paper or other materials that introduce the principles behind the code? How do you fuse the lidar and visual data in SLAM? Thanks very much.