
Few-Shot Object Detection (FsDet)


FsDet contains the official few-shot object detection implementation of the ICML 2020 paper Frustratingly Simple Few-Shot Object Detection.

(Figure: illustration of the TFA approach.)

In addition to the benchmarks used by previous works, we introduce new benchmarks on three datasets: PASCAL VOC, COCO, and LVIS. We sample multiple groups of few-shot training examples for multiple runs of the experiments and report evaluation results on both the base classes and the novel classes. These are described in more detail in Data Preparation.
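To make the sampling concrete, here is a minimal sketch of how one group of K-shot training examples per class could be drawn for a given seed. This is an illustrative sketch only, not FsDet's actual sampling code; see Data Preparation for the real procedure.

    import random

    # Draw k annotated examples per class for one few-shot group (one seed).
    # annotations_by_class maps each class name to a list of its annotations.
    def sample_k_shots(annotations_by_class, k, seed):
        rng = random.Random(seed)
        return {cls: rng.sample(anns, k)
                for cls, anns in annotations_by_class.items()}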

We also provide benchmark results and pre-trained models for our two-stage fine-tuning approach (TFA). In TFA, we first train the entire object detector on the data-abundant base classes, and then only fine-tune the last layers of the detector on a small balanced training set. See Models for our provided models and Getting Started for instructions on training and evaluation.
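As a rough illustration of the second stage, the sketch below freezes everything except the final box predictor. It assumes a Detectron2-style GeneralizedRCNN; the attribute name is illustrative and not necessarily FsDet's actual parameter naming.

    # Conceptual sketch of TFA stage two: keep only the last layers trainable.
    # Assumes a Detectron2-style model; the parameter name is illustrative.
    def freeze_all_but_box_predictor(model):
        for name, param in model.named_parameters():
            # Only the final classifier / box regressor is fine-tuned.
            param.requires_grad = "roi_heads.box_predictor" in name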

FsDet is well-modularized so you can easily add your own datasets and models. The goal of this repository is to provide a general framework for few-shot object detection that can be used for future research.

If you find this repository useful for your publications, please consider citing our paper.

@inproceedings{wang2020few,
    title = {Frustratingly Simple Few-Shot Object Detection},
    author = {Wang, Xin and Huang, Thomas E. and Darrell, Trevor and Gonzalez, Joseph E. and Yu, Fisher},
    booktitle = {International Conference on Machine Learning (ICML)},
    month = {July},
    year = {2020}
}

Updates

Table of Contents

  - Installation
  - Code Structure
  - Data Preparation
  - Models
  - Getting Started

Installation

Requirements

Build FsDet

To install dependencies (example for PyTorch v1.6.0, CUDA v10.2, Detectron2 v0.3):
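One plausible set of commands for this combination, assuming prebuilt wheels are available for PyTorch 1.6.0 with CUDA 10.2 and that the wheel index follows Detectron2's documented URL pattern:

    python3 -m pip install torch==1.6.0 torchvision==0.7.0
    python3 -m pip install detectron2==0.3 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.6/index.html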

Code Structure

Data Preparation

We evaluate our models on three datasets:

  - PASCAL VOC
  - COCO
  - LVIS

See datasets/README.md for more details.

If you would like to use your own custom dataset, see CUSTOM.md for instructions. If you would like to contribute your custom dataset to our codebase, feel free to open a PR.
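As a rough idea of what adding a dataset involves, the sketch below registers a hypothetical dataset through Detectron2's dataset catalog, which FsDet builds on; the dataset name and contents are made up, and CUSTOM.md documents the actual steps.

    from detectron2.data import DatasetCatalog, MetadataCatalog

    # Hypothetical example: register a custom dataset with Detectron2.
    def load_my_dataset():
        # Return a list of dicts in Detectron2's standard dataset format.
        return [{"file_name": "images/0001.jpg", "image_id": 1,
                 "height": 480, "width": 640, "annotations": []}]

    DatasetCatalog.register("my_few_shot_dataset", load_my_dataset)
    MetadataCatalog.get("my_few_shot_dataset").thing_classes = ["my_novel_class"]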

Models

We provide a set of benchmark results and pre-trained models, available for download in MODEL_ZOO.md.

Getting Started

Inference Demo with Pre-trained Models

  1. Pick a model and its config file from the model zoo, for example, COCO-detection/faster_rcnn_R_101_FPN_ft_all_1shot.yaml.
  2. We provide demo.py, which can run inference with built-in standard models. Run it with:
    python3 -m demo.demo --config-file configs/COCO-detection/faster_rcnn_R_101_FPN_ft_all_1shot.yaml \
    --input input1.jpg input2.jpg \
    [--other-options] \
    --opts MODEL.WEIGHTS fsdet://coco/tfa_cos_1shot/model_final.pth

    The configs are made for training, so for evaluation we need to point MODEL.WEIGHTS to a model from the model zoo. This command runs inference and shows the visualizations in an OpenCV window.
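For reference, the --opts mechanism corresponds roughly to the following config calls. This is an illustrative sketch assuming FsDet follows Detectron2's config API; the checkpoint path is hypothetical.

    from detectron2.config import get_cfg

    # Illustrative: what --opts does in a Detectron2-style config system.
    cfg = get_cfg()
    cfg.merge_from_file("configs/COCO-detection/faster_rcnn_R_101_FPN_ft_all_1shot.yaml")
    # Equivalent of passing --opts MODEL.WEIGHTS <path> on the command line:
    cfg.merge_from_list(["MODEL.WEIGHTS", "checkpoints/model_final.pth"])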

For details of the command-line arguments, see demo.py -h or look at its source code to understand its behavior.

Training & Evaluation in Command Line

To train a model, run

python3 -m tools.train_net --num-gpus 8 \
        --config-file configs/PascalVOC-detection/split1/faster_rcnn_R_101_FPN_base1.yaml

To evaluate the trained models, run

python3 -m tools.test_net --num-gpus 8 \
        --config-file configs/PascalVOC-detection/split1/faster_rcnn_R_101_FPN_ft_all1_1shot.yaml \
        --eval-only

For more detailed instructions on the training procedure of TFA, see TRAIN_INST.md.

Multiple Runs

For ease of training and evaluation over multiple runs, we provide several helpful scripts in tools/.

You can use tools/run_experiments.py to do the training and evaluation. For example, to experiment on 30 seeds (--seeds 0 30 covers seeds 0 through 29) of the first split of PASCAL VOC on all shots, run

python3 -m tools.run_experiments --num-gpus 8 \
        --shots 1 2 3 5 10 --seeds 0 30 --split 1

After training and evaluation, you can use tools/aggregate_seeds.py to aggregate the results over all the seeds to obtain one set of numbers. To aggregate the 3-shot results of the above command, run

python3 -m tools.aggregate_seeds --shots 3 --seeds 30 --split 1 \
        --print --plot