
Attentive Few-Shot Object Detection Network (AttFDNet)

This repository implements the Attentive Few-Shot Object Detection Network (AttFDNet).

Disclaimer

We adopt the official implementation of RFBNet as the baseline model for few-shot object detection, and we use the Boolean Map Saliency (BMS) algorithm to extract the human saliency map of a given image. Please refer to those repositories for further README information.

Requirements

  1. PyTorch. We use PyTorch 0.4.1 in our experiments.

  2. Python 3.6+

  3. We also provide the conda environment file RFBNet.yaml; you can directly run

$ conda env create -f RFBNet.yaml

to create the same environment in which we successfully ran the code.

  4. You also need to execute the following command

$ sh make.sh

in the repository folder to compile the NMS and other related modules.
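
Assuming the environment defined in RFBNet.yaml is named RFBNet (check the name field at the top of the yaml file), you can activate it and verify the PyTorch version with

$ conda activate RFBNet
$ python -c "import torch; print(torch.__version__)"

which should print 0.4.1.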

Dataset

We provide the split VOC dataset in the Link.

  1. You need to manually change the home directory in the code.
  2. You need to manually change the classes of each split (e.g., split 1, split 2, split 3) in the code according to the given task, as well as the training stage (e.g., base stage or novel stage).
  3. For the training dataset loader, we directly run BMS inside the VOC dataloader to obtain the human saliency prediction (see the sketch after this list).
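
For reference, the core idea of BMS can be sketched in a few lines of NumPy/OpenCV. This is a minimal illustration of the algorithm, not the repository's implementation; the function name and parameters here are ours.

import cv2
import numpy as np

def bms_saliency(image_bgr, step=16):
    """Minimal Boolean Map Saliency (BMS) sketch."""
    # work in the Lab color space, as in the original BMS paper
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2Lab).astype(np.float32)
    h, w = lab.shape[:2]
    attention = np.zeros((h, w), dtype=np.float32)
    n_maps = 0
    for c in range(3):
        channel = lab[..., c]
        for thresh in np.arange(channel.min(), channel.max(), step):
            # a boolean map and its complement
            for bool_map in (channel > thresh, channel <= thresh):
                bm = bool_map.astype(np.uint8)
                # keep only connected regions that do not touch the border
                _, labels = cv2.connectedComponents(bm)
                border = np.unique(np.concatenate(
                    [labels[0], labels[-1], labels[:, 0], labels[:, -1]]))
                surrounded = (~np.isin(labels, border)) & (labels > 0)
                attention += surrounded.astype(np.float32)
                n_maps += 1
    attention /= max(n_maps, 1)  # mean attention map
    return cv2.normalize(attention, None, 0.0, 1.0, cv2.NORM_MINMAX)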

Start training

As an example, for the base stage of training on split 1, you can directly use the following command to train the model:

$ python train_RFB.py --split split1 --save_folder ./weights/task1/source_300_0712_320embedding_20200227/

Training requires the pretrained backbone vgg16_reducedfc.pth, which we include in the pretrained models Link.
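
Where the script looks for this file depends on its arguments (the training script is inherited from RFBNet); assuming the usual RFBNet convention of keeping the backbone under ./weights/, placing it would look like

$ mkdir -p ./weights && mv vgg16_reducedfc.pth ./weights/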

As an example, for the novel stage of training on split 1, you can directly use the following command to train the model, where --shots sets the number of annotated examples per novel class and --resume_net initializes the model from the base-stage weights:

$ python train_RFB_target.py --split split1 --shots 2 --save_folder ./weights/task1/novel_2shot_05kd_seed0_2dist_div8_new/ --resume_net ./weights/task1/source_300_0712_320embedding_20200227/Final_RFB_vgg_VOC.pth
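
The same script covers other shot settings by changing --shots; for example (the shot count and folder name here are purely illustrative):

$ python train_RFB_target.py --split split1 --shots 5 --save_folder ./weights/task1/novel_5shot_example/ --resume_net ./weights/task1/source_300_0712_320embedding_20200227/Final_RFB_vgg_VOC.pth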

Evaluation

As an example, for evaluation on the base classes of split 1, you can directly use the following command to evaluate the model:

$ python test_RFB.py --split split1 --trained_model ./weights/task1/source_300_0712_320embedding_20200227/Final_RFB_vgg_VOC.pth

As an example, for evaluation on the novel classes of split 1, you can directly use the following command to evaluate the model:

$ python test_RFB_target.py --trained_model ./weights/task1/novel_2shot_05kd_seed0_2dist_div8_new/Final_RFB_vgg_VOC.pth

To evaluate on a different dataset, you need to remove the cached file annots.pkl from the directory ./data/VOCdevkit/annotations_cache/ where you put your dataset.
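
For example:

$ rm ./data/VOCdevkit/annotations_cache/annots.pkl

The cache is rebuilt from the new dataset's annotations on the next evaluation run.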

Pretrained models

We also provide some of the pretrained models in the Link.

It includes three models.

You can run the following command to evaluate the model in ./weights/task1/novel_2shot_05kd_seed0_2dist_div8_new:

$ python test_RFB_target.py --trained_model ./weights/task1/novel_2shot_05kd_seed0_2dist_div8_new/Final_RFB_vgg_VOC.pth