WE NOW USE MIT-SPARK/Kimera-VIO-Evaluation, not this one.

Kimera VIO Evaluation

Code to evaluate and tune Kimera-VIO pipeline on Euroc's dataset.

This repository contains two main scripts: main_evaluation.py and regression_tests.py, both described below.

Prerequisites

We strongly recommend setting up a new virtual environment to avoid conflicts with system-wide installations:

sudo apt-get install virtualenv
virtualenv -p python2.7 ./venv
source ./venv/bin/activate

Installation

git clone https://github.com/ToniRV/Kimera-VIO-Evaluation
cd Kimera-VIO-Evaluation
# to use the Jupyter notebooks, install with the notebook extra instead:
# pip install .[notebook]
pip install .
python setup.py develop

Example Usage

Main Evaluation

The script main_evaluation.py runs the VIO pipeline and evaluates its performance by aligning the estimated and ground-truth trajectories and computing error metrics. It then saves plots showing the pipeline's performance.
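Under the hood, the trajectory alignment and error metrics build on the evo package (whose evo_config tool is mentioned in the Notes section below). As a minimal sketch, assuming a ground truth in Euroc CSV format and a VIO estimate in TUM format (file paths are placeholders, not the script's actual code), the align-then-evaluate step could look like:

from evo.core import metrics, sync
from evo.tools import file_interface

# Load ground truth (Euroc CSV) and VIO estimate (TUM format); paths are placeholders.
traj_ref = file_interface.read_euroc_csv_trajectory(
    "mav0/state_groundtruth_estimate0/data.csv")
traj_est = file_interface.read_tum_trajectory_file("output/traj_es.csv")

# Associate poses by timestamp, then align the estimate to the ground truth (SE(3), no scale).
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est)
traj_est.align(traj_ref, correct_scale=False)

# Absolute translation error (APE trans); RPE works analogously with metrics.RPE.
ape = metrics.APE(metrics.PoseRelation.translation_part)
ape.process_data((traj_ref, traj_est))
print(ape.get_all_statistics())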

The script expects an experiment yaml file with the following syntax:

executable_path: '$HOME/Code/spark_vio/build/stereoVIOEuroc'
results_dir: '$HOME/Code/spark_vio_evaluation/results'
params_dir: '$HOME/Code/spark_vio_evaluation/experiments/params'
dataset_dir: '$HOME/datasets/euroc'

datasets_to_run:
 - name: V1_01_easy
   segments: [1, 5]
   pipelines: ['S']
   discard_n_start_poses: 10
   discard_n_end_poses: 10
   initial_frame: 100
   final_frame: 2100
 - name: MH_01_easy
   segments: [5, 10]
   pipelines: ['S', 'SP', 'SPR']
   discard_n_start_poses: 0
   discard_n_end_poses: 10
   initial_frame: 100
   final_frame: 2500

The experiment yaml file thus specifies the path to the VIO executable (executable_path), where to store results (results_dir), where to find the VIO parameters (params_dir), and where the Euroc dataset lives (dataset_dir), together with the list of datasets to run (datasets_to_run) and their per-dataset settings. To run the evaluation on such a file:

./evaluation/main_evaluation.py -r -a --save_plots --save_results --save_boxplots experiments/example_euroc.yaml

where the -r flag runs the VIO pipeline given in executable_path and the -a flag analyzes its output (see the Usage section below for all flags).

Makefile

A commonly used command for local evaluation is:

make euroc_evaluation

which runs the following command from the Makefile:

    @evaluation/main_evaluation.py -r -a -v --save_plots --save_boxplots --save_results --write_website experiments/full_euroc.yaml


Regression Tests

The regression_tests.py script is very similar to the main_evaluation.py script: it runs the VIO pipeline, computes error metrics, and displays results. The only difference is that its experiment yaml file expects two extra fields: regression_tests_dir, the path where the regression test results are stored, and regression_parameters, the list of parameters to regress together with the values to test.

For example, with the file below we expect the VIO pipeline to run once for each value of the smartNoiseSigma parameter, reporting results in regression_tests_dir:

# Here goes the same as in a main_evaluation experiment file [...]
# This is the path where to store the regression tests.
regression_tests_dir: '$HOME/Code/spark_vio_evaluation/regression_tests'
# Here goes the datasets_to_run
# This is the list of parameters to regress, and the values to test.
regression_parameters:
  - name: 'smartNoiseSigma'
    values: [1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6, 2.8, 3.0, 3.2, 3.4]

Check the experiments folder for an example of a complete regression_test.yaml experiment file.
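For intuition, the sweep that regression_tests.py performs boils down to something like the following hypothetical sketch (names and structure are illustrative, not the script's actual code):

import os
import yaml

# Hypothetical sketch of the regression sweep; the real script also edits the
# VIO parameter files and evaluates each run.
with open("experiments/regression_test.yaml") as f:
    experiment = yaml.safe_load(f)

for param in experiment["regression_parameters"]:
    for value in param["values"]:
        # One results folder per (parameter, value) pair,
        # e.g. regression_tests/smartNoiseSigma/1.0/
        run_dir = os.path.join(
            os.path.expandvars(experiment["regression_tests_dir"]),
            param["name"], str(value))
        os.makedirs(run_dir)
        # ... write `value` into the VIO params, then run the pipeline
        # (e.g. with subprocess) and evaluate its output.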

Once the regression tests have finished running, you can visualize the results using the plot_regression_tests.ipynb jupyter notebook. The notebook pulls the results from the root of the regression tests directory, saves all statistics in a file named all_stats.yaml, and plots the results.

Note that the notebook reloads all_stats.yaml if it finds one, instead of re-pulling all statistics from the results directory. If you want the notebook to query the results directory again, remove the all_stats.yaml file at its root.
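If you want to inspect the cached statistics outside the notebook, something like this works (the structure of all_stats.yaml is assumed here and depends on your experiments):

import yaml

# Load the statistics cached by the notebook; the path is a placeholder.
with open("regression_tests/all_stats.yaml") as f:
    all_stats = yaml.safe_load(f)

# Print whatever top-level entries were collected.
for key, stats in all_stats.items():
    print(key, stats)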

Usage

Run ./evaluation/main_evaluation.py --help to get usage information.

usage: main_evaluation.py [-h] [-r] [-a] [--plot]
                          [--plot_colormap_max PLOT_COLORMAP_MAX]
                          [--plot_colormap_min PLOT_COLORMAP_MIN]
                          [--plot_colormap_max_percentile PLOT_COLORMAP_MAX_PERCENTILE]
                          [--save_plots] [--save_boxplots] [--save_results]
                          experiments_path

Full evaluation of SPARK VIO pipeline (APE trans + RPE trans + RPE rot) metric
app

optional arguments:
  -h, --help            show this help message and exit

input options:
  experiments_path      Path to the yaml file with experiments settings.

algorithm options:
  -r, --run_pipeline    Run vio?
  -a, --analyse_vio     Analyse vio, compute APE and RPE

output options:
  --plot                show plot window
  --plot_colormap_max PLOT_COLORMAP_MAX
                        The upper bound used for the color map plot (default:
                        maximum error value)
  --plot_colormap_min PLOT_COLORMAP_MIN
                        The lower bound used for the color map plot (default:
                        minimum error value)
  --plot_colormap_max_percentile PLOT_COLORMAP_MAX_PERCENTILE
                        Percentile of the error distribution to be used as the
                        upper bound of the color map plot (in %, overrides
                        --plot_colormap_min)
  --save_plots          Save plots?
  --save_boxplots       Save boxplots?
  --save_results        Save results?
  -v, --verbose_sparkvio
                        Make Kimera-VIO log all verbosity to console. Useful
                        for debugging if a run failed.

Run ./evaluation/regression_tests.py --help to get usage information.

usage: regression_tests.py [-h] [-r] [-a] [--plot] [--save_plots]
                           [--save_boxplots] [--save_results]
                           experiments_path

Regression tests of SPARK VIO pipeline.

optional arguments:
  -h, --help          show this help message and exit

input options:
  experiments_path    Path to the yaml file with experiments settings.

algorithm options:
  -r, --run_pipeline  Run vio?
  -a, --analyse_vio   Analyse vio, compute APE and RPE

output options:
  --plot              show plot window
  --save_plots        Save plots?
  --save_boxplots     Save boxplots?
  --save_results      Save results?

Jupyter Notebooks

Jupyter notebooks are provided for extra plotting, especially of the debug output from Kimera-VIO. Follow the steps below to run them.

  1. Set up Kimera-VIO-Evaluation as described above using the notebook extra, or install the required dependencies manually:
    pip install jupyter jupytext
  2. Open the notebooks folder in the Jupyter browser
    cd Kimera-VIO-Evaluation/notebooks
    jupyter notebook
  3. If the contents of the folder appear empty in your web browser, you may have to manually add the jupytext content manager, as described in the jupytext documentation.
  4. Open the notebook corresponding to what you want to analyze first. plot-frontend.py is a good place to start.
  5. Provide the path to the folder with Kimera's debug information from your dataset (typically Kimera-VIO-ROS/output_logs/<yourdatasetname>)
  6. Run the notebooks! Plenty of beginner tutorials for Jupyter notebooks are available online. A guide for interpreting the output is coming soon.
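As a rough idea of what these notebooks do, the sketch below loads one of Kimera-VIO's debug CSV logs with pandas. The file name is an assumption and depends on your Kimera-VIO version, so list your output folder to see what was actually logged:

import pandas as pd
import matplotlib.pyplot as plt

# Your dataset's debug output folder (see step 5 above).
log_dir = "Kimera-VIO-ROS/output_logs/V1_01_easy"

# The file name is an assumption; check your own output_logs folder.
df = pd.read_csv(log_dir + "/output_frontend_stats.csv")
print(df.columns)

# Plot the second logged column against the first (usually a timestamp).
df.plot(x=df.columns[0], y=df.columns[1])
plt.show()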

Chart of implementation details: [Kimera-VIO evaluation diagram]

Notes

The behaviour of the plots also depends on evo_config. For example, on Jenkins we use the default evo_config, which does not split plots. Locally, you can use evo_config to plot figures separately, which is convenient when adding them to a paper.
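For example, assuming a recent version of evo (check evo_config --help for the exact syntax), split plots can be enabled locally with:

evo_config set plot_split true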

References

@InProceedings{Rosinol19icra-incremental,
  title = {Incremental Visual-Inertial {3D} Mesh Generation with Structural Regularities},
  author = {Rosinol, Antoni and Sattler, Torsten and Pollefeys, Marc and Carlone, Luca},
  year = {2019},
  booktitle = {2019 International Conference on Robotics and Automation (ICRA)},
  pdf = {https://arxiv.org/pdf/1903.01067.pdf}
}
@InProceedings{Rosinol20rss-dynamicSceneGraphs,
  title = {{3D} Dynamic Scene Graphs: Actionable Spatial Perception with Places, Objects, and Humans},
  author = {A. Rosinol and A. Gupta and M. Abate and J. Shi and L. Carlone},
  year = {2020},
  booktitle = {Robotics: Science and Systems (RSS)},
  pdf = {https://arxiv.org/pdf/2002.06289.pdf}
}