
RCooper: A Real-world Large-scale Dataset for Roadside Cooperative Perception

paper | supp | arXiv | ckpts | video | poster

This is the official implementation of the CVPR 2024 paper ["RCooper: A Real-world Large-scale Dataset for Roadside Cooperative Perception"](https://openaccess.thecvf.com/content/CVPR2024/html/Hao_RCooper_A_Real-world_Large-scale_Dataset_for_Roadside_Cooperative_Perception_CVPR_2024_paper.html). Ruiyang Hao*, Siqi Fan*, Yingru Dai, Zhenlin Zhang, Chenxi Li, Yuntian Wang, Haibao Yu, Wenxian Yang, Jirui Yuan, Zaiqing Nie

Overview

Data Download

Please check the bottom of the project website to download the data.

After downloading the data, please organize it in the following structure:

├── RCooper
│   ├── calib
│   │   ├── lidar2cam
│   │   └── lidar2world
│   ├── data
│   │   └── folders named by scene index
│   ├── labels
│   │   └── folders named by scene index
│   └── original_label
│       └── folders named by scene index
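
A quick shell check such as the one below can confirm the layout (a minimal sketch; RCOOPER_ROOT is a placeholder for wherever you unpacked the dataset):

RCOOPER_ROOT=./RCooper   # placeholder: set to your download location
for d in calib/lidar2cam calib/lidar2world data labels original_label; do
    [ -d "$RCOOPER_ROOT/$d" ] || echo "missing: $RCOOPER_ROOT/$d"
done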

Data Conversion

To facilitate research on cooperative perception methods with RCooper, we provide format converters from RCooper to other popular public cooperative perception datasets. After conversion, researchers can directly apply these methods using several open-sourced frameworks.

We now support the following conversions:

RCooper to V2V4Real

Set up the dataset path in codes/dataset_convertor/converter_config.py, and complete the conversion:

cd codes/dataset_converter
python rcooper2vvreal.py

RCooper to OPV2V

Set up the dataset path in codes/dataset_convertor/converter_config.py, and complete the conversion:

cd codes/dataset_converter
python rcooper2opv2v.py

RCooper to DAIR-V2X

Set up the dataset path in codes/dataset_convertor/converter_config.py, and complete the conversion:

cd codes/dataset_converter
python rcooper2dair.py
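
If you need more than one target format, the three converters can also be run back to back. A minimal sketch, assuming the dataset paths in codes/dataset_convertor/converter_config.py are already set as described above:

cd codes/dataset_converter
for script in rcooper2vvreal.py rcooper2opv2v.py rcooper2dair.py; do
    python "$script"   # each script writes its own converted copy of the dataset
done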

Quick Start

For detection training & inference, you can find detailed instructions in docs/corridor_scene or docs/intersection_scene. (Note: you may need to set PYTHONPATH so the modified code is used instead of the pip-installed packages; see the sketch below.)
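
A minimal sketch of such an override, run from the repository root (the exact subdirectory to prepend is an assumption; use the one containing the modified framework you need):

export PYTHONPATH=/path/to/RCooper/codes:$PYTHONPATH   # placeholder path; prepend so local code shadows pip-installed packages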

For tracking, you can find detailed instructions in docs/tracking.md.

All the checkpoints are released via the links in the tables below; you can save them in codes/ckpts/.
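
For example (the checkpoint filename is a placeholder; use the files downloaded via the table links):

mkdir -p codes/ckpts   # create the expected checkpoint folder
mv /path/to/downloaded_checkpoint.pth codes/ckpts/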

Benchmark

Results of Cooperative 3D object detection for corridor scenes

| Method | AP@0.3 | AP@0.5 | AP@0.7 | Download Link |
| ------ | ------ | ------ | ------ | ------------- |
| No Fusion | 40.0 | 29.2 | 11.1 | url |
| Late Fusion | 44.5 | 29.9 | 10.8 | url |
| Early Fusion | 69.8 | 54.7 | 30.3 | url |
| AttFuse | 62.7 | 51.6 | 32.1 | url |
| F-Cooper | 65.9 | 55.8 | 36.1 | url |
| Where2Comm | 67.1 | 55.6 | 34.3 | url |
| CoBEVT | 67.6 | 57.2 | 36.2 | url |

Results of Cooperative 3D object detection for intersection scenes

| Method | AP@0.3 | AP@0.5 | AP@0.7 | Download Link |
| ------ | ------ | ------ | ------ | ------------- |
| No Fusion | 58.1 | 44.1 | 23.8 | url |
| Late Fusion | 65.1 | 47.6 | 24.4 | url |
| Early Fusion | 50.0 | 33.9 | 18.3 | url |
| AttFuse | 45.5 | 40.9 | 27.9 | url |
| F-Cooper | 49.5 | 32.0 | 12.9 | url |
| Where2Comm | 50.5 | 42.2 | 29.9 | url |
| CoBEVT | 53.5 | 45.6 | 32.6 | url |

Results of Cooperative tracking for corridor scenes

| Method | AMOTA(↑) | AMOTP(↑) | sAMOTA(↑) | MOTA(↑) | MT(↑) | ML(↓) |
| ------ | -------- | -------- | --------- | ------- | ----- | ----- |
| No Fusion | 8.28 | 22.74 | 34.05 | 23.89 | 17.34 | 42.71 |
| Late Fusion | 9.60 | 25.77 | 35.64 | 24.75 | 24.37 | 42.96 |
| Early Fusion | 23.78 | 38.18 | 59.16 | 44.30 | 53.02 | 12.81 |
| AttFuse | 21.75 | 35.31 | 57.43 | 44.50 | 45.73 | 22.86 |
| F-Cooper | 22.47 | 35.54 | 58.49 | 45.94 | 47.74 | 22.11 |
| Where2Comm | 22.55 | 36.21 | 59.60 | 46.11 | 50.00 | 19.60 |
| CoBEVT | 21.54 | 35.69 | 53.85 | 47.32 | 47.24 | 18.09 |

Results of Cooperative tracking for intersection scenes

| Method | AMOTA(↑) | AMOTP(↑) | sAMOTA(↑) | MOTA(↑) | MT(↑) | ML(↓) |
| ------ | -------- | -------- | --------- | ------- | ----- | ----- |
| No Fusion | 18.11 | 39.71 | 58.29 | 49.16 | 35.32 | 41.64 |
| Late Fusion | 21.57 | 43.40 | 63.02 | 50.58 | 42.75 | 34.20 |
| Early Fusion | 21.38 | 47.71 | 62.93 | 50.15 | 36.80 | 42.75 |
| AttFuse | 11.84 | 36.63 | 46.92 | 39.32 | 29.00 | 53.90 |
| F-Cooper | -4.86 | 14.71 | 0.00 | -45.66 | 11.52 | 50.56 |
| Where2Comm | 14.21 | 38.48 | 50.97 | 42.27 | 29.00 | 45.72 |
| CoBEVT | 14.82 | 38.71 | 49.04 | 44.67 | 33.83 | 35.69 |

Citation

If you find RCooper useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry:

@inproceedings{hao2024rcooper,
  title={RCooper: A Real-world Large-scale Dataset for Roadside Cooperative Perception},
  author={Hao, Ruiyang and Fan, Siqi and Dai, Yingru and Zhang, Zhenlin and Li, Chenxi and Wang, Yuntian and Yu, Haibao and Yang, Wenxian and Yuan, Jirui and Nie, Zaiqing},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024},
  pages={22347-22357}
}

Acknowledgment

Sincere appreciation to the open-source projects this repository builds upon for their great contributions.