PointDistiller

PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection, CVPR'23
Linfeng Zhang*, Runpei Dong*, Hung-Shuo Tai, and Kaisheng Ma

OpenAccess | arXiv (https://arxiv.org/abs/2205.11098) | Logs

This repository contains the implementation of the paper PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection (CVPR 2023).


Environment

This codebase was tested with the following environment configurations. It may work with other versions.

1. Installation

Please refer to getting_started.md for installation.

2. Datasets

We use the KITTI and nuScenes datasets. Please follow the official instructions to set them up.

3. How to Run

Please make sure you have set up the environment. Then you can start knowledge distillation by running:

DEVICE_ID=<gpu_id>
CUDA_VISIBLE_DEVICES=$DEVICE_ID python tools/train.py <student_cfg> --use-kd  # single GPU
bash ./tools/dist_train.sh <student_cfg> 8 --use-kd  # multiple GPUs (8 here)
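For instance, a single-GPU distillation run might look like the sketch below. Note that the config path is a hypothetical placeholder, not a file guaranteed to ship with this repository; substitute your actual student config.

```shell
# Hypothetical example: pin training to GPU 0 and enable knowledge
# distillation with --use-kd. Replace the config path with your own
# student config file from this repository.
DEVICE_ID=0
CUDA_VISIBLE_DEVICES=$DEVICE_ID python tools/train.py \
    configs/kd/pointpillars_student_kitti.py --use-kd
```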

License

PointDistiller is released under the MIT License. See the LICENSE file for more details.

Acknowledgements

Many thanks to the following codebases, which helped us a lot in building this one:

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{pointdistiller23,
  title={PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection},
  author={Linfeng Zhang and Runpei Dong and Hung-Shuo Tai and Kaisheng Ma},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023},
}