This repository contains the official PyTorch implementation of
Collaboration Helps Camera Overtake LiDAR in 3D Detection
Yue Hu, Yifan Lu, Runsheng Xu, Weidi Xie, Siheng Chen, Yanfeng Wang
Presented at CVPR 2023
Abstract: Camera-only 3D detection provides an economical solution with a simple configuration for localizing objects in 3D space, compared to LiDAR-based detection systems. However, a major challenge lies in precise depth estimation due to the lack of direct 3D measurements in the input. Many previous methods attempt to improve depth estimation through network designs, e.g., deformable layers and larger receptive fields. This work proposes an orthogonal direction: improving camera-only 3D detection by introducing multi-agent collaboration. Our preliminary results show the potential that, with sufficient collaboration, cameras might overtake LiDAR in some practical scenarios.
Features:
- Dataset Support
- SOTA collaborative perception method support
- Visualization
Please refer to INSTALL.md for detailed documentation.
We adopt the same setting as OpenCOOD, which uses yaml files to configure all the parameters for training. To train your own model from scratch or continue training from a checkpoint, run the following commands:
python opencood/tools/train.py --hypes_yaml ${CONFIG_FILE} [--model_dir ${CHECKPOINT_FOLDER}]
Arguments Explanation:
- hypes_yaml: the path of the training configuration file, e.g., opencood/hypes_yaml/second_early_fusion.yaml, which trains an early fusion model that uses SECOND as the backbone. See Tutorial 1: Config System to learn more about the rules of the yaml files.
- model_dir (optional): the path of the checkpoints, used to fine-tune trained models. When model_dir is given, the trainer will discard hypes_yaml and load the config.yaml in the checkpoint folder instead.
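For example, to train the early fusion model above from scratch, and then to resume fine-tuning from its checkpoints (the opencood/logs/second_early_fusion path below is a hypothetical placeholder for wherever your checkpoints were saved):
python opencood/tools/train.py --hypes_yaml opencood/hypes_yaml/second_early_fusion.yaml
python opencood/tools/train.py --hypes_yaml opencood/hypes_yaml/second_early_fusion.yaml --model_dir opencood/logs/second_early_fusion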
Before you run the following command, first make sure the validation_dir in config.yaml under your checkpoint folder points to the testing dataset path, e.g., opv2v_data_dumping/test.
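As a quick sanity check, you can print the current value before running inference (reusing the ${CHECKPOINT_FOLDER} placeholder from above):
grep validation_dir ${CHECKPOINT_FOLDER}/config.yaml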
python opencood/tools/inference.py --model_dir ${CHECKPOINT_FOLDER} --fusion_method ${FUSION_STRATEGY} --save_vis_n ${amount}
Arguments Explanation:
- model_dir: the path to your saved model.
- fusion_method: indicates the fusion strategy; currently supports 'early', 'late', 'intermediate', 'no' (no fusion, i.e., a single agent), and 'intermediate_with_comm' (adopts intermediate fusion and also outputs the communication cost).
- save_vis_n: the number of visualization results to save; default 10.

The evaluation results will be dumped in the model directory.
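For example, to evaluate an early fusion checkpoint and save 10 visualized results (the checkpoint path below is a hypothetical placeholder for your own model directory):
python opencood/tools/inference.py --model_dir opencood/logs/second_early_fusion --fusion_method early --save_vis_n 10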
Thanks to the excellent cooperative perception codebases OpenCOOD and CoPerception.
Thanks to the excellent cooperative perception datasets DAIR-V2X, OPV2V and V2X-SIM.
Thanks to the insightful previous works in the cooperative perception field: Where2comm (NeurIPS 2022), CoAlign (ICRA 2023), V2VNet (ECCV 2020), When2com (CVPR 2020), Who2com (ICRA 2020), DiscoNet (NeurIPS 2021), V2X-ViT (ECCV 2022), STAR (CoRL 2022), CoBEVT (CoRL 2022).
If you have any problem with this code, please feel free to contact 18671129361@sjtu.edu.cn.
If you find this code useful in your research, please cite:
@inproceedings{CoCa3D:23,
author = {Hu, Yue and Lu, Yifan and Xu, Runsheng and Xie, Weidi and Chen, Siheng and Wang, Yanfeng},
title = {Collaboration Helps Camera Overtake LiDAR in 3D Detection},
booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2023}
}