
Cross-Domain Few-Shot Object Detection via Enhanced Open-Set Object Detector


In this paper, we: 1) reorganize a benchmark for Cross-Domain Few-Shot Object Detection (CD-FSOD); 2) conduct an extensive study on several different kinds of detectors (Tab. 1 in the paper); 3) propose a novel CD-ViTO method by enhancing an existing open-set detector (DE-ViT).

In this repo, we provide: 1) links and splits for the target datasets; 2) code for our CD-ViTO method; 3) code for the DE-ViT-FT baseline (in case you would like to build new methods on top of it).

Datasets

We take COCO as source training data and ArTaxOr, Clipart1k, DIOR, DeepFish, NEU-DET, and UODD as targets.


Also, as stated in the paper, we adopt the "pretraining, finetuning, and testing" pipeline. The pretraining stage on COCO is taken directly from DE-ViT, so in practice only the target datasets are needed to run our experiments.

The target datasets can be easily downloaded via the following links (if you use the datasets, please cite them properly, thanks):

To train CD-ViTO on a custom dataset, please refer to DATASETS.md for detailed instructions.
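
For orientation only, the snippet below is a minimal sketch of what registering a custom target dataset could look like, assuming a Detectron2-style, COCO-format registration (DE-ViT, which CD-ViTO builds on, uses Detectron2). The dataset name and paths are hypothetical placeholders; DATASETS.md remains the authoritative reference.

```python
# Hypothetical sketch: register a custom target dataset in COCO format,
# assuming a Detectron2-style codebase. Names and paths are placeholders.
from detectron2.data.datasets import register_coco_instances

splits = [
    # (dataset name, annotation json, image folder)
    ("my_target_train_1shot", "datasets/my_target/annotations/1_shot.json", "datasets/my_target/train"),
    ("my_target_test",        "datasets/my_target/annotations/test.json",   "datasets/my_target/test"),
]

for name, json_file, image_root in splits:
    # Makes each split available to the data loader under `name`.
    register_coco_instances(name, {}, json_file, image_root)
```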

Methods

Setup

An Anaconda environment is suggested; take the name "cdfsod" as an example:

git clone git@github.com:lovelyqian/CDFSOD-benchmark.git
conda create -n cdfsod python=3.9
conda activate cdfsod
pip install -r CDFSOD-benchmark/requirements.txt 
pip install -e ./CDFSOD-benchmark
cd CDFSOD-benchmark
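
After installation, a quick sanity check like the one below can confirm that the core dependencies import correctly. This is a minimal sketch, assuming PyTorch and Detectron2 are installed via requirements.txt; it is not a script shipped with this repo.

```python
# Hypothetical environment sanity check; not part of the repository.
import torch
import detectron2

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("detectron2:", detectron2.__version__)
```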

Run CD-ViTO

  1. Download weights: download the pretrained model from DE-ViT.

  2. Run the script:

    bash main_results.sh

Run DE-ViT-FT

Add the --controller flag to main_results.sh, then run:

bash main_results.sh

Acknowledgement

Our work is built upon DE-ViT, and we also use the code of ViTDet and Detic to test them under this new benchmark. Thanks for their work.

Citation

If you find our paper or this code useful for your research, please consider citing us (●°u°●)」:

@article{fu2024cross,
  title={Cross-Domain Few-Shot Object Detection via Enhanced Open-Set Object Detector},
  author={Fu, Yuqian and Wang, Yu and Pan, Yixuan and Huai, Lian and Qiu, Xingyu and Shangguan, Zeyu and Liu, Tong and Kong, Lingjie and Fu, Yanwei and Van Gool, Luc and others},
  journal={arXiv preprint arXiv:2402.03094},
  year={2024}
}