marcelsheeny / radiate_sdk

SDK developed to access data from RADIATE dataset
MIT License

RADIATE Dataset

RADIATE (RAdar Dataset In Adverse weaThEr) is a new automotive dataset created by Heriot-Watt University which includes radar, LiDAR, stereo camera and GPS/IMU data.

We collected data in different weather scenarios (sunny, overcast, night, fog, rain and snow) to help the research community develop new methods of vehicle perception.

Dataset size

We annotated the radar images in 7 different scenarios: Sunny (Parked), Sunny/Overcast (Urban), Overcast (Motorway), Night (Motorway), Rain (Suburban), Fog (Suburban) and Snow (Suburban). We annotated 8 different types of objects (car, van, truck, bus, motorbike, bicycle, pedestrian and group of pedestrians). Below we show a graph with the number of individual instances labelled.

The size of each scenario can be visualised below:

Comparison with other datasets

RADIATE is the first public high-resolution radar dataset which includes a large number of labelled road actors on public roads. It includes multi-modal sensor data collected in challenging weather conditions, such as dense fog and heavy snowfall. Camera, LiDAR and GPS data are also provided for all sequences. The table below shows a comparison with other relevant automotive datasets with radar (NuScenes, Oxford Radar RobotCar, MulRan and Astyx).

Sensors

Folder Structure and File Format

Sensor Calibration

Sensor calibration is required for multi-sensor fusion and for feature and actor correspondence. The intrinsic parameters and distortion coefficients of the stereo camera were calibrated using the Matlab camera calibration toolbox; rectified images can then be generated to calculate depth. For extrinsic calibration, the radar sensor is chosen as the origin of the local coordinate frame as it is the main sensor. The extrinsic parameters for the radar, camera and LiDAR are represented as 6 degree-of-freedom transformations (translation and rotation). They were obtained by first explicitly measuring the distances between the sensors and then fine-tuning by aligning measurements between each pair of sensors. The sensor calibration parameters are provided in the config/default-calib.yaml file. The sensors operate at different frame rates, and we simply adopt each sensor measurement's time of arrival as its timestamp.

The calculated sensor calibration parameters are given below.

# Radar calibration parameters
radar_calib:
    T: [0.0, 0.0, 0.0]
    R: [0.0, 0.0, 0.0]

# Lidar calibration parameters
lidar_calib:
    T: [0.6003, -0.120102, 0.250012]
    R: [0.0001655, 0.000213, 0.000934]

# Left camera calibration parameters
left_cam_calib:
    T: [0.34001, -0.06988923, 0.287893]
    R: [1.278946, -0.530201, 0.000132]
    fx: 3.379191448899105e+02
    fy: 3.386957068549526e+02
    cx: 3.417366010946575e+02
    cy: 2.007359735313929e+02
    k1: -0.183879883467351
    k2: 0.0308609205858947
    k3: 0
    p1: 0
    p2: 0
    res: [672, 376]

# Right camera calibration parameters
right_cam_calib:
    T: [0.4593822, -0.0600343, 0.287433309324]
    R: [0.8493049332, 0.37113944, 0.000076230]
    fx: 337.873451599077
    fy: 338.530902554779
    cx: 329.137695760749
    cy: 186.166590759716
    k1: -0.181771143569008
    k2: 0.0295682692890613
    k3: 0
    p1: 0
    p2: 0
    res: [672, 376]

# Stereo calibration parameters
stereo_calib:
    TX: -120.7469
    TY: 0.1726
    TZ: 1.1592
    CV: 0.0257154
    RX: -0.0206928
    RZ: -0.000595637
    R: [[0.999983541478846,   0.000655753417350, -0.005699715684273],
        [-0.000622470939159,   0.999982758359834,   0.005839136322126],
        [0.005703446445424, -0.005835492311203,   0.9999667083098977]]
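
For multi-sensor fusion, each extrinsic entry above can be turned into a homogeneous transform into the radar frame, which is the origin of the local coordinate system. The snippet below is a minimal sketch, assuming the R values are Euler angles composed in Z-Y-X order; the exact units and rotation convention should be checked against the SDK source, and this is not the SDK's own calibration code.

import numpy as np
import yaml

def euler_to_matrix(rx, ry, rz):
    # Rotation matrix from Euler angles, composed as Rz @ Ry @ Rx (assumption).
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def sensor_to_radar(calib_entry):
    # 4x4 homogeneous transform from a sensor frame to the radar (origin) frame.
    M = np.eye(4)
    M[:3, :3] = euler_to_matrix(*calib_entry['R'])
    M[:3, 3] = calib_entry['T']
    return M

# Load the calibration file and map a LiDAR point cloud (Nx3) into the radar frame.
with open('config/default-calib.yaml') as f:
    calib = yaml.safe_load(f)

lidar_to_radar = sensor_to_radar(calib['lidar_calib'])
points = np.zeros((100, 3))                        # placeholder point cloud
points_h = np.hstack([points, np.ones((100, 1))])  # homogeneous coordinates
points_radar = (lidar_to_radar @ points_h.T).T[:, :3]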

Annotation Structure

The annotation is a .json file, where each entry of a list contains id, class_name and bboxes. id is the object identifier. class_name is a string with the class name. bboxes contains position: (x, y, width, height), where (x, y) is the upper-left pixel location of the bounding box with the given width and height, and rotation is the angle in degrees, measured counter-clockwise.

Example:
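
A purely illustrative sketch of the structure described above is shown next; the object, values and number of boxes are made up, and the exact nesting should be verified against a real annotation file.

[
  {
    "id": 1,
    "class_name": "car",
    "bboxes": [
      {"position": [112.5, 340.2, 28.0, 55.5], "rotation": 35.7},
      {"position": [114.1, 338.9, 28.0, 55.5], "rotation": 36.2}
    ]
  }
]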

RADIATE SDK

Software development kit (SDK) to use the RADIATE dataset. The SDK works with Python 3.7 or greater and is used for data calibration, visualisation and pre-processing. Below is an example of the information which can be retrieved from the SDK.

Installation

git clone https://github.com/marcelsheeny/radiate_sdk.git
cd radiate_sdk
pip install -r requirements.txt

Run demo.py to visualise the dataset.

Dependencies

matplotlib
opencv-python
pandas
numpy
pyyaml

How to use

The file 'config/config.yaml' controls which sensors to use and configures their parameters.
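
As a hypothetical illustration only (the key names below are assumptions, not taken from the actual file), such a configuration would enable or disable individual sensor streams:

# Hypothetical example; check config/config.yaml for the real key names.
use_radar_cartesian: true
use_lidar_bev_image: false
use_camera_left_rect: true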

The file demo.py contains a small example which simply displays the annotations.

import radiate
import numpy as np
import os

# path to the sequence
root_path = 'data/radiate/'
sequence_name = 'tiny_foggy'

# time (s) to retrieve next frame
dt = 0.25

# load sequence
seq = radiate.Sequence(os.path.join(root_path, sequence_name))

# play sequence
for t in np.arange(seq.init_timestamp, seq.end_timestamp, dt):
    output = seq.get_from_timestamp(t)
    seq.vis_all(output, 0)

The variable 'output' is a dictionary containing each sensor and its corresponding annotation, from which the annotation values can be retrieved.

Example:

output['sensors']['radar_cartesian'] contains a NumPy array with the radar image.

output['annotations']['radar_cartesian'] contains a list of bounding boxes with id, class_name and bbox. bbox : position is represented as (x, y, width, height) and bbox : rotation is the angle in degrees, measured counter-clockwise. This is exemplified below:

'id':1
'class_name':'bus'
'bbox':{'position': [603.5340471042896, 149.7590074419735, 26.620884098218767, 73.56976270380676], 'rotation': 177.69489304897752}
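
As a rough alternative to seq.vis_all, the boxes in 'output' can be drawn directly with OpenCV. The sketch below assumes the rotation is applied about the box centre; the sign and pivot of the rotation should be verified against the SDK's own visualisation code.

import cv2
import numpy as np

radar_image = output['sensors']['radar_cartesian'].copy()
for obj in output['annotations']['radar_cartesian']:
    x, y, w, h = obj['bbox']['position']
    angle = obj['bbox']['rotation']
    # Build the four corners of a box rotated about its centre (assumption).
    centre = (x + w / 2.0, y + h / 2.0)
    corners = cv2.boxPoints((centre, (w, h), angle))
    cv2.polylines(radar_image, [np.int32(corners)], True, (0, 255, 0), 2)
    cv2.putText(radar_image, obj['class_name'], (int(x), int(y) - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

cv2.imshow('radar annotations', radar_image)
cv2.waitKey(0)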

The documentation of all radiate methods can be seen at: https://marcelsheeny.github.io/radiate_sdk/radiate.html

Vehicle Detection

As a first baseline, we evaluated vehicle detection from single images. We defined a vehicle as one of the following classes: car, van, truck, bus, motorbike and bicycle.

We adopted the popular Faster R-CNN [29] architecture to demonstrate the use of RADIATE for radar-based object detection. Two modifications were made to the original architecture to better suit radar detection:

To investigate the impact of weather conditions, the models were trained on two different training sets: data from good weather only, and data from both good and bad weather. ResNet-50 and ResNet-101 were chosen as backbone models. The trained models were tested on a test set collected from all weather conditions and driving scenarios. The evaluation metric was Average Precision at an Intersection over Union (IoU) threshold of 0.5, the same metric used for PASCAL VOC and DOTA.
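
The IoU matching criterion can be illustrated with a short sketch. Note that RADIATE bounding boxes carry a rotation, so the full evaluation needs rotated-box overlap; the axis-aligned version below, with hypothetical box values, only illustrates the 0.5 threshold and is not the evaluation code used for the results.

def iou(box_a, box_b):
    # Intersection over Union for axis-aligned boxes given as (x, y, width, height),
    # with (x, y) the upper-left corner.
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    inter_w = max(0.0, min(xa + wa, xb + wb) - max(xa, xb))
    inter_h = max(0.0, min(ya + ha, yb + hb) - max(ya, yb))
    inter = inter_w * inter_h
    union = wa * ha + wb * hb - inter
    return inter / union if union > 0 else 0.0

# A detection is a true positive when it matches an unmatched ground-truth box
# of the same class with IoU >= 0.5.
print(iou((10, 10, 40, 20), (20, 12, 40, 20)) >= 0.5)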

Below we can visualise a table with the results for each scenario and the Precision Recall curve for each network trained.

The figure below illustrates some qualitative results of radar-based vehicle detection in various driving scenarios and weather conditions, using Faster R-CNN ResNet-101 trained on good weather only.

The code and the trained weights for radar-based vehicle detection are available at https://github.com/marcelsheeny/radiate_sdk/tree/master/vehicle_detection