
License: MIT

SARosPerceptionKitti

ROS package for the Perception (Sensor Processing, Detection, Tracking and Evaluation) of the KITTI Vision Benchmark Suite

Demo

Setup

Sticking to this folder structure is highly recommended:

    ~                                        # Home directory
    ├── catkin_ws                            # Catkin workspace
    │   ├── src                              # Source folder
    │       └── SARosPerceptionKitti         # Repo
    ├── kitti_data                           # Dataset
    │   ├── 0012                             # Demo scenario 0012
    │   │   └── synchronized_data.bag        # Synchronized ROSbag file

1) Install ROS and create a catkin workspace in your home directory:

mkdir -p ~/catkin_ws/src
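
The command above only creates the workspace; if ROS itself is not installed yet, the following is a minimal sketch for ROS Kinetic on Ubuntu 16.04, assuming the ROS apt repository has already been added to your sources:

# Install ROS Kinetic and the rosdep dependency tool (assumes the ROS apt repository is configured)
sudo apt-get update
sudo apt-get install ros-kinetic-desktop-full python-rosdep
sudo rosdep init
rosdep update
# Make the ROS environment available in every new shell
echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
source ~/.bashrc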

2) Clone this repository into the catkin workspace's source folder (src) and build it:

cd ~/catkin_ws/src
git clone https://github.com/appinho/SARosPerceptionKitti.git
cd ~/catkin_ws
catkin_make
source devel/setup.bash
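
If catkin_make fails because of missing ROS package dependencies, rosdep can usually resolve them. A sketch, assuming rosdep has already been initialized with rosdep init and rosdep update:

cd ~/catkin_ws
# Install all system dependencies declared by the packages in src
rosdep install --from-paths src --ignore-src -r -y
catkin_make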

3) Download a preprocessed scenario and unzip it into a separate kitti_data directory, also stored under your home directory:

mkdir ~/kitti_data && cd ~/kitti_data/
mv ~/Downloads/0012.zip .
unzip 0012.zip
rm 0012.zip
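
To check that the bag file is in place and readable, rosbag info prints its duration, topics and message counts; the path below assumes the folder structure recommended above:

rosbag info ~/kitti_data/0012/synchronized_data.bag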

Usage

1) Launch one of the following launch files to run and visualize the pipeline (Sensor Processing -> Object Detection -> Object Tracking) step by step:

source devel/setup.bash
roslaunch sensor_processing sensor_processing.launch home_dir:=/home/YOUR_USERNAME
roslaunch detection detection.launch home_dir:=/home/YOUR_USERNAME
roslaunch tracking tracking.launch home_dir:=/home/YOUR_USERNAME

Without assigning any of the parameters above, the demo scenario 0012 is replayed at 20% of its original speed with a 3-second delay, so RViz has enough time to boot up.
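
As a purely illustrative example, overriding the replay behaviour could look like the line below; the parameter names scenario, speed and delay are assumptions based on the description above, so check the launch files for the exact argument names and value conventions:

# Hypothetical: replay scenario 0012 at 20% speed with a 3 second delay
roslaunch tracking tracking.launch home_dir:=/home/YOUR_USERNAME scenario:=0012 speed:=0.2 delay:=3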

2) Write the results to file and evaluate them:

roslaunch evaluation evaluation.launch home_dir:=/home/YOUR_USERNAME
cd ~/catkin_ws/src/SARosPerceptionKitti/benchmark/python
python evaluate_tracking.py

Results for demo scenario 0012

Class        MOTA       MOTP       MOTAL      MODA       MODP
Car          0.881119   0.633595   0.881119   0.881119   0.642273
Pedestrian   0.546875   0.677919   0.546875   0.546875   0.836921
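
For reference, MOTA and MOTP are the standard CLEAR MOT metrics. A sketch of their usual definitions in LaTeX notation (in the KITTI evaluation script the per-match error d is typically a bounding-box overlap score rather than a metric distance):

% Multi-object tracking accuracy: penalizes false negatives, false positives and identity switches
\mathrm{MOTA} = 1 - \frac{\sum_t \left(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t\right)}{\sum_t \mathrm{GT}_t}
% Multi-object tracking precision: average matching score d over all matched pairs c
\mathrm{MOTP} = \frac{\sum_{t,i} d_{t,i}}{\sum_t c_t}

MOTAL, MODA and MODP are closely related variants: MOTAL log-scales the identity-switch term, while MODA and MODP evaluate the detections only and ignore identity switches.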

Contact

If you have any questions, features you would love to add, or ideas on how to realize the points in the Area of Improvements, send me an email at simonappel62@gmail.com! I am more than interested in collaborating and hearing any kind of feedback.