johannes-graeter / limo

Lidar-Monocular Visual Odometry
GNU General Public License v3.0

Limo


CALL FOR HACKATHON: Against all my expectations, my work still attracts the interest of many. Since it has aged quite a bit by now, I would love to organize a hackathon where people who, like me, love to code take part to refurbish this repo. Possible things to do are:

  1. Make a standalone API that works without ROS
  2. Make a Python API (already started)
  3. Bring the repo to C++20 as much as possible (the basis is a ROS-independent app)
  4. constexpr everything (as much as possible)
  5. Port to ROS2
  6. Remake the lidar point depth extraction module

But for this I need YOUR HELP! Your reward will be a lot of fun, working together on a project with experienced devs, and of course a contribution record (which looks pretty neat in job applications ;) ). If you are interested, please contact me by mail or open an issue. If more than two people besides me are interested, I will organize a date :)


Lidar-Monocular Visual Odometry. This library is designed to be an open platform for visual odometry algorithm development. We focus explicitly on the simple integration of the following key methodologies:

The core library, keyframe_bundle_adjustment, is a backend that is meant to make it easy to swap these modules and develop such algorithms.

Details

This work was accepted at IROS 2018. See https://arxiv.org/pdf/1807.07524.pdf .

If you refer to this work, please cite:

@inproceedings{graeter2018limo,
  title={LIMO: Lidar-Monocular Visual Odometry},
  author={Graeter, Johannes and Wilczynski, Alexander and Lauer, Martin},
  booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages={7872--7879},
  year={2018},
  organization={IEEE}
}

Please note that Limo2 differs from the publication. We improved the speed slightly and added ground-plane reconstruction for pure monocular visual odometry, as well as a combination of scale from LIDAR and the ground plane (best performing on KITTI). For information on Limo2, please see my dissertation https://books.google.de/books?hl=en&lr=&id=cZW8DwAAQBAJ&oi .
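As a rough sketch of the ground-plane scale idea (my notation, not taken from the dissertation): if the camera is mounted at a known metric height $h$ above the road, and the scale-ambiguous monocular reconstruction estimates the camera-to-ground-plane distance as $\hat{d}$ in its arbitrary units, the metric scale factor is

```latex
s = \frac{h}{\hat{d}}, \qquad t_{\mathrm{metric}} = s \, t_{\mathrm{mono}}
```

i.e. all translations $t_{\mathrm{mono}}$ of the monocular trajectory are multiplied by $s$ to obtain metric units; this estimate can then be fused with scale observed from LIDAR depth.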

Installation

Docker

To facilitate development, I created a standalone Dockerfile.
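A typical build-and-run sequence with a standalone Dockerfile might look like the following; the image name and any flags are generic Docker usage, not commands taken from this repo, so check the Dockerfile for the exact invocation.

```shell
# Clone the repo and build an image from its Dockerfile
# (image tag "limo" is an arbitrary choice).
git clone https://github.com/johannes-graeter/limo.git
cd limo
docker build -t limo .

# Start an interactive container from the built image.
docker run -it --rm limo
```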

Semantic segmentation

The monocular variant expects a semantic segmentation of the images. You can produce it, for example, with my fork of NVIDIA's semantic segmentation:

  1. Clone my fork

    git clone https://github.com/johannes-graeter/semantic-segmentation
  2. Download best_kitti.pth as described in the README.md from NVIDIA and put it in the semantic-segmentation folder

  3. I installed via their Docker setup, for which you must be registered and logged in at https://ngc.nvidia.com/

  4. Build the container with

    docker-compose build semantic-segmentation
  5. Run the segmentation with

    docker-compose run semantic-segmentation

    Note that without a GPU this will take some time. With the NVIDIA Quadro P2000 in my laptop, it took around 6 seconds per image.

Requirements

In any case:

Build

Run

Known issues