This code implements the Attentive Few-Shot Object Detection Network (AttFDNet).
We adopt the official implementation of RFBNet
as the baseline model for few-shot object detection. We also use the Boolean Map Saliency (BMS) algorithm
to extract the human saliency map of a given image. Please refer to these links for further README information.
Requirements:
Python 3.6+
PyTorch (we use PyTorch 0.4.1 in our experiments)
We also provide the conda environment file RFBNet.yaml; you can directly run
$ conda env create -f RFBNet.yaml
to create the same environment in which the code was successfully run.
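After creating the environment, activate it before running the commands below (a minimal sketch, assuming the environment name defined in RFBNet.yaml is RFBNet):
$ conda activate RFBNet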
Then run
$ sh make.sh
in this repository folder to compile the NMS and other related modules.
We provide the split VOC dataset in the Link.
As an example, for the base training stage of split 1, you can directly use the following command to train the model:
$ python train_RFB.py --split split1 --save_folder ./weights/task1/source_300_0712_320embedding_20200227/
Training requires vgg16_reducedfc.pth,
which we include in the pretrained models Link.
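Where to place vgg16_reducedfc.pth depends on the basenet/weights arguments in train_RFB.py; as a sketch, assuming the script looks for it under ./weights/ (the usual RFBNet default), you could do:
$ mkdir -p ./weights
$ mv vgg16_reducedfc.pth ./weights/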
As an example, for the novel training stage of split 1, you can directly use the following command to train the model:
$ python train_RFB_target.py --split split1 --shots 2 --save_folder ./weights/task1/novel_2shot_05kd_seed0_2dist_div8_new/ --resume_net ./weights/task1/source_300_0712_320embedding_20200227/Final_RFB_vgg_VOC.pth
As an example, to evaluate the base classes of split 1, you can directly use the following command:
$ python test_RFB.py --split split1 --trained_model ./weights/task1/source_300_0712_320embedding_20200227/Final_RFB_vgg_VOC.pth
As an example, to evaluate the novel classes of split 1, you can directly use the following command:
$ python test_RFB_target.py --trained_model ./weights/task1/novel_2shot_05kd_seed0_2dist_div8_new/Final_RFB_vgg_VOC.pth
To evaluate on a different dataset, you need to remove the cached file "annots.pkl" from the folder "./data/VOCdevkit/annotations_cache/" where you put your dataset.
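For example, assuming the default cache location above, the cache can be cleared with:
$ rm ./data/VOCdevkit/annotations_cache/annots.pkl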
We also provide some pretrained models at the Link.
It includes three models:
Base model for split 1 named ./weights/task1/source_300_0712_320embedding_20200227
Novel model for split 1 for 2 shots scenario named ./weights/task1/novel_2shot_05kd_seed0_2dist_div8_new
Novel model for split 1 for 3 shots scenario named ./weights/task1/novel_3shot_05kd_seed0_2dist_div8_new
You can run the following command to evaluate the 2-shot model ./weights/task1/novel_2shot_05kd_seed0_2dist_div8_new:
$ python test_RFB_target.py --trained_model ./weights/task1/novel_2shot_05kd_seed0_2dist_div8_new/Final_RFB_vgg_VOC.pth
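Similarly, for the 3-shot model, assuming its final checkpoint follows the same naming convention (Final_RFB_vgg_VOC.pth), you can run:
$ python test_RFB_target.py --trained_model ./weights/task1/novel_3shot_05kd_seed0_2dist_div8_new/Final_RFB_vgg_VOC.pth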