# DAIR-V2X and OpenDAIRV2X: Towards General and Real-World Cooperative Autonomous Driving

Project Page | Dataset Download | arXiv | OpenDAIRV2X




Table of Contents:

  1. Highlights
  2. News
  3. Dataset Download
  4. Getting Started
  5. Major Features
  6. Benchmark
  7. Citation
  8. Contact

Highlights

News

Dataset Download

Getting Started

Please refer to getting_started.md for usage instructions and benchmark reproduction on the DAIR-V2X dataset.

Please refer to get_started_spd.md for usage instructions and benchmark reproduction on the V2X-Seq-SPD dataset.
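As a quick orientation before diving into those guides, the sketch below shows how one might iterate over the cooperative split of DAIR-V2X-C. It is a minimal sketch, assuming the dataset is extracted under a local `data/` directory and indexed by `data_info.json` files as in the released layout; exact field names vary by split, so treat the printed keys, and the guides above, as the source of truth.

```python
import json
from pathlib import Path

# Hypothetical local path; point this at the extracted DAIR-V2X-C root.
DATA_ROOT = Path("data/DAIR-V2X/cooperative-vehicle-infrastructure")

# Each split ships a data_info.json index; the cooperative one pairs
# vehicle-side and infrastructure-side frames.
with open(DATA_ROOT / "cooperative" / "data_info.json") as f:
    frame_pairs = json.load(f)

print(f"{len(frame_pairs)} cooperative frame pairs")

# Inspect the schema of one entry rather than hard-coding field names.
for key, value in frame_pairs[0].items():
    print(f"{key}: {value}")
```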

Benchmark

You can find more benchmarks in SV3D-Veh, SV3D-Inf, VIC3D, and VIC3D-SPD.

Part of the VIC3D detection benchmarks based on the DAIR-V2X-C dataset (AP-3D and AP-BEV at IoU=0.5, broken down by range):

| Modality | Fusion | Model | Dataset | AP-3D Overall | AP-3D 0-30m | AP-3D 30-50m | AP-3D 50-100m | AP-BEV Overall | AP-BEV 0-30m | AP-BEV 30-50m | AP-BEV 50-100m | AB (Byte) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Image | Veh-Only | ImvoxelNet | VIC-Sync | 9.13 | 19.06 | 5.23 | 0.41 | 10.96 | 21.93 | 7.28 | 0.78 | 0 |
| Image | Late-Fusion | ImvoxelNet | VIC-Sync | 18.77 | 33.47 | 9.43 | 8.62 | 24.85 | 39.49 | 14.68 | 14.96 | 309.38 |
| Pointcloud | Veh-Only | PointPillars | VIC-Sync | 48.06 | 47.62 | 63.51 | 44.37 | 52.24 | 30.55 | 66.03 | 48.36 | 0 |
| Pointcloud | Early-Fusion | PointPillars | VIC-Sync | 62.61 | 64.82 | 68.68 | 56.57 | 68.91 | 68.92 | 73.64 | 65.66 | 1382275.75 |
| Pointcloud | Late-Fusion | PointPillars | VIC-Sync | 56.06 | 55.69 | 68.44 | 53.60 | 62.06 | 61.52 | 72.53 | 60.57 | 478.61 |
| Pointcloud | Late-Fusion | PointPillars | VIC-Async-2 | 52.43 | 51.13 | 67.09 | 49.86 | 58.10 | 57.23 | 70.86 | 55.78 | 478.01 |
| Pointcloud | TCLF | PointPillars | VIC-Async-2 | 53.37 | 52.41 | 67.33 | 50.87 | 59.17 | 58.25 | 71.20 | 57.43 | 897.91 |
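For intuition on the fusion strategies in the table: Late-Fusion transmits only infrastructure-side detection boxes (hence the small AB values), which are transformed into the vehicle frame and merged with vehicle-side detections, while TCLF (Time Compensation Late Fusion) additionally propagates infrastructure boxes forward to compensate for the latency of asynchronous frames. The following is a minimal sketch of that idea, not the OpenDAIRV2X implementation: boxes are reduced to BEV centers, and `T_infra_to_veh`, the velocity estimates, and the matching radius are all assumed inputs.

```python
import numpy as np

def transform_to_vehicle_frame(infra_centers: np.ndarray,
                               T_infra_to_veh: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to Nx2 BEV box centers (z taken as 0)."""
    n = len(infra_centers)
    homo = np.c_[infra_centers, np.zeros(n), np.ones(n)]  # Nx4 homogeneous points
    return (homo @ T_infra_to_veh.T)[:, :2]

def tclf_compensate(centers: np.ndarray, velocities: np.ndarray,
                    latency_s: float) -> np.ndarray:
    """Time compensation: push infrastructure boxes forward by the latency."""
    return centers + velocities * latency_s

def late_fuse(veh: np.ndarray, infra: np.ndarray,
              match_radius: float = 2.0) -> np.ndarray:
    """Keep all vehicle boxes; add infrastructure boxes with no nearby match."""
    if len(veh) == 0:
        return infra
    dists = np.linalg.norm(infra[:, None, :] - veh[None, :, :], axis=-1)  # MxN
    unmatched = dists.min(axis=1) > match_radius
    return np.vstack([veh, infra[unmatched]])
```

The released code operates on full 3D boxes with confidence scores; the center-distance matching above is only a stand-in for the IoU- and score-based merging used in practice.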

Part of the VIC3D detection and tracking benchmarks based on the V2X-Seq-SPD dataset (AP-3D and AP-BEV at IoU=0.5):

| Modality | Fusion | Model | Dataset | AP-3D | AP-BEV | MOTA | MOTP | AMOTA | AMOTP | IDs | AB (Byte) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Image | Veh-Only | ImvoxelNet | VIC-Sync-SPD | 8.55 | 10.32 | 10.19 | 57.83 | 1.36 | 14.75 | 4 | |
| Image | Late-Fusion | ImvoxelNet | VIC-Sync-SPD | 17.31 | 22.53 | 21.81 | 56.67 | 6.22 | 25.24 | 47 | 3300 |
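The AB (Byte) column reports the average number of bytes transmitted from the infrastructure to the vehicle per frame, which is what separates early fusion (raw point clouds, megabytes) from late fusion (detected boxes, hundreds of bytes). A back-of-the-envelope sketch, assuming float32 payloads with four values per point and eight per box; the actual wire format used for the benchmark may differ:

```python
# Rough per-frame transmission cost, assuming float32 payloads.
POINT_FLOATS = 4       # x, y, z, intensity
BOX_FLOATS = 8         # x, y, z, l, w, h, yaw + confidence score
BYTES_PER_FLOAT = 4

def early_fusion_bytes(num_points: int) -> int:
    """Early fusion ships the raw infrastructure point cloud."""
    return num_points * POINT_FLOATS * BYTES_PER_FLOAT

def late_fusion_bytes(num_boxes: int) -> int:
    """Late fusion ships only the detected boxes."""
    return num_boxes * BOX_FLOATS * BYTES_PER_FLOAT

# ~86k points lands early fusion in the megabyte range seen in the AB column,
# while ~15 boxes keeps late fusion in the hundreds of bytes.
print(early_fusion_bytes(86_000))  # 1376000
print(late_fusion_bytes(15))       # 480
```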

TODO List

Citation

If this project helps your research, please consider citing our papers with the following BibTeX:

@inproceedings{v2x-seq,
  title={V2X-Seq: A large-scale sequential dataset for vehicle-infrastructure cooperative perception and forecasting},
  author={Yu, Haibao and Yang, Wenxian and Ruan, Hongzhi and Yang, Zhenwei and Tang, Yingjuan and Gao, Xu and Hao, Xin and Shi, Yifeng and Pan, Yifeng and Sun, Ning and Song, Juan and Yuan, Jirui and Luo, Ping and Nie, Zaiqing},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2023},
}
@inproceedings{dair-v2x,
  title={Dair-v2x: A large-scale dataset for vehicle-infrastructure cooperative 3d object detection},
  author={Yu, Haibao and Luo, Yizhen and Shu, Mao and Huo, Yiyi and Yang, Zebang and Shi, Yifeng and Guo, Zhenglong and Li, Hanyu and Hu, Xing and Yuan, Jirui and Nie, Zaiqing},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={21361--21370},
  year={2022}
}

Contact

If you have any questions or suggestions, please email dair@air.tsinghua.edu.cn.

Related Resources

Awesome