(Top) Camera frames. (Bottom) Depth maps of the keyframes from Monodepth.
Demo: https://www.youtube.com/watch?v=4oTzwoby3jw
Paper: CNN-SVO: Improving the Mapping in Semi-Direct Visual Odometry Using Single-Image Depth Prediction
This is an extension of SVO, with major improvements for forward-facing cameras achieved with the help of monocular depth estimation from a CNN.
We tested this code with ROS Kinetic on Ubuntu 16.04.
There are two ways to make it work: Online mode, where depth maps are predicted on the fly with the Monodepth C++ library, or Offline mode, where pre-computed depth maps are loaded from disk.
We use the KITTI Odometry sequences and the Oxford RobotCar dataset to evaluate the robustness of our method.
We provide camera parameters for both datasets. Running a sequence requires setting the directory that contains the images (and, if you are using Offline mode, the directory of their corresponding depth maps) in `rpg_svo/svo_ros/param/vo_kitti.yaml`.
Unfortunately, getting the Oxford RobotCar dataset to work is not straightforward. Here is the rough idea of what you need to do to preprocess the Oxford RobotCar images:
NOTE: If you are having difficulty compiling the Monodepth library, you can still visualize the results using Offline mode.
For Offline mode, modify `monodepth_main.py` so that it saves the disparity maps as numpy (`.npy`) files in one folder (per sequence), and set that folder in `rpg_svo/svo_ros/param/vo_kitti.yaml`.
NOTE: The saved disparity maps must be named `depth_x.npy`, where x is 0-based with no leading zeros.
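For reference, below is a minimal sketch of that saving step, assuming you already have the per-frame disparity arrays in hand; the function name, the `disparities` variable, and the example output path are placeholders rather than anything defined by Monodepth or this repo.

```python
# Hedged sketch (not part of the original Monodepth code): dump one
# disparity map per frame as depth_x.npy, with x 0-based and no
# leading zeros, all inside a single folder for the sequence.
import os
import numpy as np

def save_disparities(disparities, out_dir):
    # `disparities` is a placeholder for whatever iterable of per-frame
    # disparity arrays your modified monodepth_main.py produces.
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)
    for i, disp in enumerate(disparities):
        out_path = os.path.join(out_dir, 'depth_{}.npy'.format(i))
        np.save(out_path, np.asarray(disp, dtype=np.float32))

# Example with a hypothetical output folder:
# save_disparities(disparities, '/path/to/kitti/sequences/00/depth')
```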
Clone this repo
cd ~/catkin_ws/src
git clone https://github.com/yan99033/CNN-VO
(OPTIONAL) Clone the monodepth-cpp repo (only needed for Online mode)
cd ~/catkin_ws/src/CNN-VO
git clone https://github.com/yan99033/monodepth-cpp
Enable/disable Online mode by toggling TRUE/FALSE in svo_ros/CMakeLists.txt and svo/CMakeLists.txt. If Online mode is used, also make sure that the Monodepth library and header file are linked properly. Note that Online mode is disabled by default.
Compile the project
cd ~/catkin_ws
catkin_make
Launch the project
roscore
rosrun rviz rviz -d ~/catkin_ws/src/CNN-VO/rpg_svo/svo_ros/rviz_kitti.rviz
roslaunch svo_ros kittiOffline00-02.launch
To try another KITTI sequence, update the image (and depth map) directories in `rpg_svo/svo_ros/param/vo_kitti.yaml`.
We used the absolute trajectory error (ATE) evaluation code provided by ORB-SLAM, with slight modifications.
cd eval
python2 evaluate_ate_scale.py PATH_TO_GROUNDTRUTH PATH_TO_YOUR_OUTPUT PATH_TO_TIMESTAMPS_FILE --plot PATH_TO_PLOT_FILE_YOU_WANT_TO_SAVE
As an example:
python2 evaluate_ate_scale.py ground_truth/kitti_ground_truth/00.txt sample/seq00_CNN_SVO_KITTI_SAMPLE.txt ground_truth/kitti_ground_truth/00_times.txt --plot 00.png
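For reference, here is a minimal sketch of what the scale-aligned ATE metric boils down to; this is an illustration only, not the `evaluate_ate_scale.py` script above, and it assumes the estimated and ground-truth positions are already associated frame by frame.

```python
# Hedged sketch of scale-aligned ATE: align the estimated trajectory to
# the ground truth with a similarity transform (Umeyama), then report
# the RMSE of the translational residuals. est and gt are (N, 3) arrays
# of already-associated camera positions.
import numpy as np

def align_sim3(est, gt):
    """Find s, R, t minimizing ||gt_i - (s * R * est_i + t)||^2."""
    mu_est, mu_gt = est.mean(axis=0), gt.mean(axis=0)
    x, y = est - mu_est, gt - mu_gt
    cov = y.T.dot(x) / est.shape[0]            # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                          # enforce a proper rotation
    R = U.dot(S).dot(Vt)
    var_est = (x ** 2).sum() / est.shape[0]     # variance of the estimate
    s = np.trace(np.diag(D).dot(S)) / var_est
    t = mu_gt - s * R.dot(mu_est)
    return s, R, t

def ate_rmse(est, gt):
    """Root-mean-square translational error after Sim(3) alignment."""
    s, R, t = align_sim3(est, gt)
    aligned = s * est.dot(R.T) + t
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))
```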
We tried our best to improve the existing SVO, but this code is by no means perfect. In particular, we would like to point out some noticeable problems in our code:
We hope that you can further extend the functionality of this work and make the existing SVO, which is an impressive piece of work in its own right, even better.
The authors take no credit for SVO or Monodepth; therefore, their licenses should remain intact. Please cite their work if you find it helpful.