Hi all,
Firstly, thanks to the author for providing us access to your work. Generally speaking, after some hours working with it I managed to get it to run in custom Gazebo simulations and on 'real robots'. That said, I would like to record some observations in the issues as a guide for future work. I have been working in robotics for quite a while and eventually found Kimera as a VIO solution, among other things. Especially considering that the authors published their papers and won awards for it, I decided to test the stack.
General Thoughts and Directions
I got quite frustrated by the gap between the visibility this project has reached and the overall quality of the solution. My general thoughts about the library are:
The architecture of the system is not as transparent and clean as in other ROS packages, especially considering that they provide a server that receives data from a data-provider structure. This architecture looks more like their own internal use case than a clean, understandable way for the actual robotics community surrounding ROS to use the system. What leads me to think that are the issues piling up in the abandoned repository and my own experience trying to deploy it.
It is not going to work as an out-of-the-box solution if you are on the wrong OS/ROS combination, and it does not work as a 'library', but rather as a black box coupled with several things that you might not need.
In my adventures getting it to work, I faced several issues, as did all of you who are posting here, and my general conclusion is that the stack can serve as an example and might possibly help you as a baseline.
If you are having trouble understanding it, it can be broken down into the following systems:
A coupled mesh reconstruction that does not have much utility for most use cases.
A visual-odometry system, tightly coupled with a pose-graph optimization that is essentially a wrapper around several calls to the GTSAM library from Georgia Tech: https://gtsam.org/.
If you use the semantics from the other repository, you also get a 3D reconstruction system based on the Voxblox library from ETH Zurich: https://github.com/ethz-asl/voxblox.
Assuming that you know ROS and work with robotics, for a cleaner system deployment of your stack I would consider the following:
Deploy a minimal working simulation or stack that provides you with an odometry source. You can use any VIO and hardware for that.
Deploy your own pose-graph optimization based on GTSAM that takes features from the environment and provides corrections based on them. You can use anything as the features in its factors, it doesn't matter, and there are several tutorials on how to do it (see the sketch below)!
Add loop closure to your GTSAM integration if you need it! You can find several tutorials on that too; it is just one more between-factor connecting non-consecutive poses, as in the sketch below.
You can create a map, if needed, in a separate system from your pose-graph, using the measurements you stored.
You can share GTSAM structures among robots when they are in communication range, if you need that; this is a common use case and shouldn't be a significant problem for academic purposes!
This is going to give you an architecture composed of three ROS nodes: CleanVIO (odometry), PoseCorrectionGTSam (pose-estimation correction), and MapBuilder (a clean representation for planning).
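To give an idea of what the core of that PoseCorrectionGTSam node could look like, here is a minimal pose-graph sketch using the recent GTSAM Python bindings, with a chain of odometry between-factors and one loop-closure factor. All poses and noise values are made up for illustration; the tutorials on gtsam.org walk through the same pattern:

```python
import numpy as np
import gtsam

# Small 2D pose graph: a prior, a chain of odometry between-factors,
# and one loop-closure factor. All numbers are illustrative, not tuned.
graph = gtsam.NonlinearFactorGraph()

prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

# Anchor the first pose at the origin.
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))

# Odometry edges (e.g. coming from your CleanVIO node): drive a square.
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))
graph.add(gtsam.BetweenFactorPose2(2, 3, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))

# Loop closure: pose 3 re-observes pose 0. Nothing special, just
# another between-factor connecting non-consecutive poses.
graph.add(gtsam.BetweenFactorPose2(3, 0, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))

# Initial guesses, deliberately perturbed to show the correction.
initial = gtsam.Values()
initial.insert(0, gtsam.Pose2(0.1, -0.1, 0.05))
initial.insert(1, gtsam.Pose2(2.2, 0.1, np.pi / 2 + 0.1))
initial.insert(2, gtsam.Pose2(2.1, 2.1, np.pi - 0.1))
initial.insert(3, gtsam.Pose2(-0.1, 1.9, -np.pi / 2))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(3))  # corrected pose estimate
```

In the real node you would add a between-factor per incoming odometry edge and re-optimize (or use iSAM2 for incremental updates), but the structure is exactly this.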
I found that all the other stuff/architecture shipped with the software is not very usable for academic purposes, as some of you just want a VIO solution that integrates easily with robots in ROS and lets you build maps for planning.
Make it Work Out-of-the-box
To make it work in any stack or deployment, without the headache of trying to port this to ROS2 or Noetic, consider the following:
Install Docker.
Create an image based on Ubuntu 18.04.
Create a container from it and install ROS Melodic.
Clone the Kimera_VIO_ROS repository in the container and install it and its dependencies. This step is not going to work with the Dockerfile they provide, because the dependencies are pinned to the wrong versions, so you need to fix that yourself (search around for it).
When running this container you need to expose its network to the host OS. To do this, run it with the flag "--network host", and also expose your USB devices as needed for cameras etc. with a flag like "-v /dev/bus/usb:/dev/bus/usb" (see the command sketch after this list).
Leave this container running as part of your ROS stack.
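For reference, the commands I mean look something like this; the image name "kimera-melodic" is a placeholder for whatever you built, and the exact device flags depend on your machine:

```bash
# Build the Ubuntu 18.04 + ROS Melodic image from your fixed Dockerfile.
docker build -t kimera-melodic .

# Host networking so ROS master/topics are shared with the host OS;
# USB passthrough so the container can see the camera.
docker run -it \
    --network host \
    -v /dev/bus/usb:/dev/bus/usb \
    --privileged \
    kimera-melodic
```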
In your host OS:
Install any ROS you want (to be sure it is going to work, try a distro newer than the one in your Ubuntu 18 container; in my case I use Noetic).
Create a launch file for your robot that specifies a minimal deployment and provides a transform tree for your camera or robot; it can be for a real robot or a robot in Gazebo (a minimal example follows this list).
Ensure that you have proper stereo images without IR patterns.
On the host machine, run the stack for your robot (ensure that the TFs are correct! This is an important step!).
In the running container, run the realsense_IR launch file they provide (if you are using a RealSense camera) or the one you created for your own camera.
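Regarding the host-side launch file mentioned above, here is a hypothetical minimal sketch; the frame names and offsets are placeholders and must match your camera driver's frames:

```xml
<!-- minimal_robot.launch (hypothetical): host-side deployment that
     publishes the transform tree the VIO stack expects. -->
<launch>
  <!-- Static transform base_link -> camera_link; replace the offsets
       and frame names with your robot's actual extrinsics. -->
  <node pkg="tf2_ros" type="static_transform_publisher"
        name="base_to_camera"
        args="0.1 0 0.2 0 0 0 base_link camera_link" />
  <!-- Your robot description, drivers, or Gazebo spawner go here. -->
</launch>
```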
Sensor:
Make sure you have "enable_gyro" and "enable_accel" enabled for RealSense cameras.
Make sure that you have "unite_imu_method" set to "copy" or "linear_interpolation" if you are using RealSense cameras.
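For a RealSense, that boils down to something like the following; the argument names follow the realsense2_camera driver and may differ between driver versions, and disabling the emitter (for the pattern-free stereo images mentioned earlier) goes through dynamic reconfigure:

```bash
roslaunch realsense2_camera rs_camera.launch \
    enable_gyro:=true \
    enable_accel:=true \
    unite_imu_method:=linear_interpolation \
    enable_infra1:=true \
    enable_infra2:=true

# Turn the IR emitter off so the stereo pair has no projected pattern.
rosrun dynamic_reconfigure dynparam set /camera/stereo_module emitter_enabled 0
```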
These directions should work flawlessly, because you are eliminating the error sources introduced by library changes in distributions different from the one the authors developed against, provided you have the right transform trees in the minimal stack on your host machine. The stack in the docker container + host machine should then output some topics and transforms, such as odometry, etc.
I found that the output odometry actually seems to be the RAW VIO (if you compare it with the odometry edges from the factor graph) and that the "optimized_odometry" topic is not being published. To work around that:
Simply get the last pose from the "/kimera_vio_ros/optimized_trajectory" topic and use that in your transform.
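A tiny helper node can do that. Here is a sketch assuming the trajectory topic is a nav_msgs/Path (check with rostopic info) and using a placeholder child frame name:

```python
#!/usr/bin/env python
# Hypothetical helper: rebroadcast the most recent pose from the
# optimized trajectory as a TF. Assumes the topic is a nav_msgs/Path.
import rospy
import tf2_ros
from nav_msgs.msg import Path
from geometry_msgs.msg import TransformStamped

def trajectory_cb(msg):
    if not msg.poses:
        return
    last = msg.poses[-1]  # latest optimized pose in the trajectory
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = msg.header.frame_id
    t.child_frame_id = "base_link_optimized"  # placeholder frame name
    t.transform.translation.x = last.pose.position.x
    t.transform.translation.y = last.pose.position.y
    t.transform.translation.z = last.pose.position.z
    t.transform.rotation = last.pose.orientation
    broadcaster.sendTransform(t)

if __name__ == "__main__":
    rospy.init_node("optimized_pose_relay")
    broadcaster = tf2_ros.TransformBroadcaster()
    rospy.Subscriber("/kimera_vio_ros/optimized_trajectory", Path, trajectory_cb)
    rospy.spin()
```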
Now, if you manage to get this working following these steps, you are going to face:
Several errors, warnings, and maybe system freezes during runtime, probably because of improper error handling in the codebase, plus features that you don't need but that will still be running in the stack, such as the mesh reconstruction.
The stack also gets slower as measurements accumulate; I had to run it in parallel mode to avoid that.
However, for most scenarios it should allow you to perform experiments such as calculating drift errors and getting some estimates to serve as a baseline.
Here is a deployment example: youtube
Here is a deployment with 3D reconstruction in the custom stack: youtube