CALL FOR HACKATHON: Against all my expectations, my work still attracts the interest of many. Since it has aged quite a bit by now, I would love to organize a hackathon where people who love to code, like me, take part in refurbishing this repo. There are plenty of possible things to do.
But for this I need YOUR HELP! Your reward will be a lot of fun, working together with experienced devs on a project, and of course a contribution record (which looks pretty neat in applications ;) ). If you are interested, please contact me by mail or open an issue. If more than two people besides me are interested, I will organize a date :)
Lidar-Monocular Visual Odometry. This library is designed to be an open platform for visual odometry algorithm development. We focus explicitly on the simple integration and exchange of the key methodologies involved.
The core library, keyframe_bundle_adjustment, is a backend that facilitates swapping these modules and developing such algorithms with little effort.
It is meant as an add-on module that performs temporal inference on the optimization graph in order to smooth the result.
To do this online, a windowed approach is used.
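To make concrete what such a backend optimizes: below is a minimal sketch of a reprojection-error residual of the kind a windowed bundle adjustment minimizes, written against the Ceres solver API. The names, the normalized pinhole model, and the loss parameters are assumptions for illustration, not limo's actual cost functors.

```cpp
// Minimal sketch of a reprojection residual for windowed bundle adjustment.
// Assumptions (not limo's implementation): normalized image coordinates,
// pose parameterized as angle-axis rotation (pose[0..2]) + translation (pose[3..5]).
#include <ceres/ceres.h>
#include <ceres/rotation.h>

struct ReprojectionError {
    ReprojectionError(double u, double v) : u_(u), v_(v) {}

    template <typename T>
    bool operator()(const T* const pose, const T* const landmark, T* residual) const {
        T p[3];
        ceres::AngleAxisRotatePoint(pose, landmark, p);  // rotate landmark into camera frame
        p[0] += pose[3];
        p[1] += pose[4];
        p[2] += pose[5];                                  // translate into camera frame
        residual[0] = p[0] / p[2] - T(u_);                // normalized pinhole projection
        residual[1] = p[1] / p[2] - T(v_);
        return true;
    }

    double u_, v_;  // measured feature position (normalized coordinates)
};

// One residual block per observation of a landmark in a keyframe, e.g.:
// problem.AddResidualBlock(
//     new ceres::AutoDiffCostFunction<ReprojectionError, 2, 6, 3>(
//         new ReprojectionError(u, v)),
//     new ceres::HuberLoss(0.1), keyframe_pose, landmark);
```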
Keyframes are instants in time that are used for the bundle adjustment; one keyframe may have several cameras (and therefore images) associated with it.
Keyframe selection tries to reduce the amount of redundant information while extending the time span covered by the optimization window, in order to reduce drift.
Several methodologies for keyframe selection are available; a simplified sketch is given below.
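As an illustration of such selection criteria, the sketch below promotes a frame to a keyframe when enough time has passed or enough motion has accumulated since the last keyframe, and keeps a bounded sliding window. All types, names, and thresholds are hypothetical, not the library's API.

```cpp
#include <cmath>
#include <cstddef>
#include <deque>

// Hypothetical frame summary; limo's real interfaces differ.
struct Frame {
    double timestamp;     // seconds
    double tx, ty, tz;    // camera position in the world frame
};

class KeyframeSelector {
public:
    bool isKeyframe(const Frame& f) {
        if (window_.empty()) return accept(f);
        const Frame& last = window_.back();
        double dt = f.timestamp - last.timestamp;
        double dx = f.tx - last.tx, dy = f.ty - last.ty, dz = f.tz - last.tz;
        double dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        // Select on difference in time or in motion: skip redundant frames,
        // but never let the optimization window go stale.
        if (dt > max_dt_ || dist > min_motion_) return accept(f);
        return false;
    }

private:
    bool accept(const Frame& f) {
        window_.push_back(f);
        if (window_.size() > max_window_size_) window_.pop_front();  // sliding window
        return true;
    }

    std::deque<Frame> window_;
    double max_dt_ = 0.5;             // [s] force a keyframe after this long ...
    double min_motion_ = 0.3;         // [m] ... or after this much translation
    std::size_t max_window_size_ = 20;
};
```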
We use this library to combine LIDAR with monocular vision.
Limo2 on KITTI is LIDAR with monocular visual odometry, supported by a ground-plane constraint.
Video: https://youtu.be/wRemjJBjp64
We have now switched from ROS kinetic to melodic.
This work was accepted at IROS 2018. See https://arxiv.org/pdf/1807.07524.pdf .
If you refer to this work please cite:
@inproceedings{graeter2018limo,
title={LIMO: Lidar-Monocular Visual Odometry},
author={Graeter, Johannes and Wilczynski, Alexander and Lauer, Martin},
booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
pages={7872--7879},
year={2018},
organization={IEEE}
}
Please note that Limo2 differs from the publication. We improved the speed slightly and added ground-plane reconstruction for pure monocular visual odometry, as well as a combination of scale from LIDAR and the ground plane (best performing on KITTI). For information on Limo2, please see my dissertation https://books.google.de/books?hl=en&lr=&id=cZW8DwAAQBAJ&oi .
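To sketch the idea behind ground-plane scale recovery (an illustration of the geometry, not Limo2's code): a monocular reconstruction is defined only up to scale, but on a car the camera's true height over ground is approximately known, so the ratio between that height and the camera's distance to the estimated ground plane yields the missing scale factor.

```cpp
#include <cmath>

// Sketch: recover the global scale of a monocular reconstruction from the
// estimated ground plane. Assumptions (hypothetical, for illustration): the
// plane is given in the camera frame as n.x + d = 0, and the true camera
// height above ground is known, e.g. from the sensor mounting.
struct Plane {
    double nx, ny, nz;  // plane normal
    double d;           // offset, in the arbitrary scale of the reconstruction
};

double recoverScale(const Plane& ground, double true_camera_height_m) {
    // Distance of the camera (origin of the camera frame) to the plane,
    // still in the arbitrary reconstruction scale.
    double norm = std::sqrt(ground.nx * ground.nx + ground.ny * ground.ny + ground.nz * ground.nz);
    double estimated_height = std::fabs(ground.d) / norm;
    // Multiplying all translations and landmarks by this factor makes the
    // reconstruction metric.
    return true_camera_height_m / estimated_height;
}
```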
To facilitate development, I created a standalone Dockerfile.
# This is where you put the rosbags; they will be available at /limo_data in the container
mkdir $HOME/limo_data
cd limo/docker
docker-compose build limo
docker-compose run limo bash
Go to the step Run in this tutorial and use tmux to open multiple terminals.
docker-compose up limo
and open the link suggested in the output in a browser.
The monocular variant expects semantic segmentation of the images. You can produce this, for example, with my fork of NVIDIA's semantic segmentation:
Clone my fork
git clone https://github.com/johannes-graeter/semantic-segmentation
Download best_kitti.pth as described in the README.md from NVIDIA and put it in the semantic-segmentation folder
I installed via their Docker image, for which you must register (if necessary) and log in at https://ngc.nvidia.com/
Build the container with
docker-compose build semantic-segmentation
Run the segmentation with
docker-compose run semantic-segmentation
Note that without a GPU this will take some time. With the NVIDIA Quadro P2000 on my laptop, it took around 6 seconds per image.
In any case, run
sudo make install
to install the headers. Then install the remaining dependencies:
sudo apt-get install libpng++-dev
sudo apt-get install python-catkin-tools
sudo apt-get install ros-melodic-opencv-apps
sudo apt-get install git
initiate a catkin workspace:
cd ${your_catkin_workspace}
catkin init
clone limo into src of workspace:
mkdir ${your_catkin_workspace}/src
cd ${your_catkin_workspace}/src
git clone https://github.com/johannes-graeter/limo.git
clone dependencies and build repos:
cd ${your_catkin_workspace}/src/limo
bash install_repos.sh
run the unit tests:
cd ${your_catkin_workspace}/src/limo
catkin run_tests --profile limo_release
get the test data: Sequence 04 or Sequence 01. Each is a bag file generated from the corresponding KITTI sequence with added semantic labels.
in different terminals (for example with tmux), run:
roscore
rosbag play 04.bag -r 0.1 --pause --clock # play at 0.1x speed, start paused, publish the simulated clock
source ${your_catkin_workspace}/devel_limo_release/setup.sh
roslaunch demo_keyframe_bundle_adjustment_meta kitti_standalone.launch
rviz -d ${your_catkin_workspace}/src/demo_keyframe_bundle_adjustment_meta/res/default.rviz
watch limo trace the trajectory in rviz :)
Before submitting an issue, please have a look at the section Known issues.