Official implementation of 'MonoDETR: Depth-guided Transformer for Monocular 3D Object Detection'.
The paper has been accepted by ICCV 2023 🎉.
MonoDETR is the first DETR-based model for monocular 3D detection that requires no additional depth supervision, anchors, or NMS. We enable the vanilla transformer in DETR to be depth-guided, achieving scene-level geometric perception. In this way, each object adaptively estimates its 3D attributes from the depth-informative regions of the image, rather than being limited to center-around features.
Note that the randomness of training for monocular detection can cause a variance of about ±1 AP3D on KITTI.
The official results in the paper (Val, AP3D|R40):

| Models | Easy | Mod. | Hard |
|:---:|:---:|:---:|:---:|
| MonoDETR | 28.84% | 20.61% | 16.38% |
New and better results in this repo (Val, AP3D|R40):

| Models | Easy | Mod. | Hard | Logs | Ckpts |
|:---:|:---:|:---:|:---:|:---:|:---:|
| MonoDETR | 28.79% | 20.83% | 17.47% | log | ckpt |
| | 29.36% | 20.64% | 17.30% | log | ckpt |
| | 27.58% | 20.14% | 16.98% | log | ckpt |
Clone this project and create a conda environment:

```bash
git clone https://github.com/ZrrSkywalker/MonoDETR.git
cd MonoDETR
conda create -n monodetr python=3.8
conda activate monodetr
```
Install PyTorch and torchvision matching your CUDA version:

```bash
conda install pytorch torchvision cudatoolkit
# We adopt torch 1.9.0+cu111
```
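Since the deformable-attention ops compiled in the next step must match the installed CUDA build, it can save debugging time to verify the pairing first. This small helper is not part of the repo; it simply reports which torch/CUDA build (if any) is installed:

```python
# Sanity-check helper (not part of MonoDETR): report the installed
# torch build and the CUDA version it was compiled against.
def torch_build_info():
    try:
        import torch
        return f"torch {torch.__version__}, CUDA {torch.version.cuda}"
    except ImportError:
        # torch is missing; install it before compiling the ops below.
        return "torch is not installed"

if __name__ == "__main__":
    print(torch_build_info())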
Install the requirements and compile the deformable attention ops:

```bash
pip install -r requirements.txt
cd lib/models/monodetr/ops/
bash make.sh
cd ../../../..
```
Make a directory for saving training losses:

```bash
mkdir logs
```
Download the KITTI dataset and prepare the directory structure as:

```
│MonoDETR/
├──...
├──data/KITTIDataset/
│   ├──ImageSets/
│   ├──training/
│   ├──testing/
├──...
```
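Before launching training, it can help to confirm the expected subdirectories exist under the data root. This is a hypothetical helper, not part of the repo, assuming the layout shown above:

```python
from pathlib import Path

# Hypothetical check (not part of MonoDETR): verify the KITTI layout
# described above exists before starting training.
def check_kitti_layout(root):
    root = Path(root)
    required = ["ImageSets", "training", "testing"]
    # Return the names of any missing subdirectories (empty = layout OK).
    return [d for d in required if not (root / d).is_dir()]

if __name__ == "__main__":
    missing = check_kitti_layout("data/KITTIDataset")
    if missing:
        print("Missing subdirectories:", ", ".join(missing))
    else:
        print("KITTI layout looks correct.")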
You can also change the data path at "dataset/root_dir" in configs/monodetr.yaml.
You can modify the model and training settings in configs/monodetr.yaml and specify the GPU in train.sh:

```bash
bash train.sh configs/monodetr.yaml > logs/monodetr.log
```
The best checkpoint is evaluated by default. You can change it at "tester/checkpoint" in configs/monodetr.yaml:

```bash
bash test.sh configs/monodetr.yaml
```
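To evaluate a specific checkpoint instead of the best one, point the tester at it in configs/monodetr.yaml. The key nesting and the checkpoint path below are assumptions inferred from the "tester/checkpoint" path quoted above, not the repo's exact config:

```yaml
# Hypothetical fragment of configs/monodetr.yaml; the key names follow
# the "tester/checkpoint" path mentioned above, and the path is illustrative.
tester:
  checkpoint: outputs/monodetr/checkpoint_best.pth
```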
This repo benefits from the excellent Deformable-DETR and MonoDLE.
```bibtex
@article{zhang2022monodetr,
    title={MonoDETR: Depth-guided Transformer for Monocular 3D Object Detection},
    author={Zhang, Renrui and Qiu, Han and Wang, Tai and Xu, Xuanzhuo and Guo, Ziyu and Qiao, Yu and Gao, Peng and Li, Hongsheng},
    journal={ICCV 2023},
    year={2022}
}
```
If you have any questions about this project, please feel free to contact zhangrenrui@pjlab.org.cn.