This is the source code for our paper "Boosting Few-shot Semantic Segmentation with Feature-Enhanced Context-Aware Network" (FECANet).
The architecture of our proposed model is as follows:
https://github.com/NUST-Machine-Intelligence-Laboratory/FECANET.git
1.Download the PASCAL-5i (VOCdevkit) and COCO-20i datasets and put them under the /data folder. The vgg16/resnet50 backbones use ImageNet-pretrained weights from torchvision.
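Based on the --datapath values used in the commands below, the expected layout of the data folder is roughly the following (an assumption; adjust the paths to your setup):

```
/data
├── VOCdevkit/   # PASCAL-5i
└── coco/        # COCO-20i
```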
2.To train the model on PASCAL-5i, run:

python train.py --datapath=/data/VOCdevkit --benchmark=pascal --backbone=vgg16/resnet50 --fold=0 --bsz=20 --lr=1e-3
3.To reproduce the 1-shot results reported in Table 2, run the script to train and test on folds 0, 1, 2, and 3. The example for fold 0 is:
python train.py --datapath=/data/coco --benchmark=coco --backbone=vgg16/resnet50 --fold=0 --bsz=20 --lr=1e-3 --logpath=default_logpath
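Reproducing a full table row means repeating the command above once per fold. A minimal sketch that generates the four per-fold command lines (the train.py flags are taken from this README; the loop itself, the resnet50 choice, and the per-fold log-path naming are our assumptions):

```python
# Sketch: build one training command line per fold (0-3), reusing the
# flags from the README. Actually executing them (e.g. via subprocess)
# is left to the reader.
def train_cmd(fold: int) -> str:
    return (
        "python train.py --datapath=/data/coco --benchmark=coco "
        "--backbone=resnet50 --bsz=20 --lr=1e-3 "
        f"--fold={fold} --logpath=coco_fold{fold}"
    )

commands = [train_cmd(f) for f in range(4)]
for cmd in commands:
    print(cmd)
```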
4.To resume training from a saved model, add extra parameters, e.g.:
python train.py --datapath=/data/coco --benchmark=coco --backbone=vgg16/resnet50 --fold=0 --bsz=20 --lr=1e-3 --resume --loadpath=dir/best_model.pt
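Conceptually, resuming amounts to restoring the training state that was saved alongside the best model. A generic sketch of that idea (this is not FECANet's actual checkpoint code; the state keys and file handling here are illustrative only):

```python
# Generic checkpoint sketch (illustrative, not FECANet's implementation):
# save training state periodically, restore it when --resume is passed.
import os
import pickle
import tempfile

def save_checkpoint(path, state):
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint(path):
    with open(path, "rb") as f:
        return pickle.load(f)

# Example: resume picks up the stored epoch and best validation score.
path = os.path.join(tempfile.mkdtemp(), "best_model.pt")
save_checkpoint(path, {"epoch": 37, "best_miou": 61.2})
state = load_checkpoint(path)
print(state["epoch"])  # 37
```

In a real training loop the state would also hold the model and optimizer weights; pickle stands in here for torch.save/torch.load.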
To evaluate a trained model on PASCAL-5i, run:

python test.py --datapath=/data/VOCdevkit --benchmark=pascal --backbone=vgg16/resnet50 --nshot=5 --use_original_imgsize --fold=0 --load=dir/best_model.pt --lr=1e-3 --bsz=20
To evaluate on COCO-20i, run:

python test.py --datapath=/data/coco --benchmark=coco --backbone=vgg16/resnet50 --nshot=5 --use_original_imgsize --fold=0 --load=dir/best_model.pt --lr=1e-3 --bsz=20
To visualize the predictions, run:

python test.py --datapath=/data/VOCdevkit --benchmark=pascal --backbone=vgg16/resnet50 --nshot=1/5 --fold=0 --load=dir/best_model.pt --lr=1e-3 --bsz=20 --visualize --visual_fold_name=default_visual_fold_name
To test with our pretrained models, download them with

wget https://fecanet.oss-cn-shanghai.aliyuncs.com/pretrained_model.zip

unzip the archive, and put the extracted pretrained_model folder under the FECANet folder. Then run:
python test.py --datapath=/data/VOCdevkit --benchmark=pascal --backbone=vgg16/resnet50 --nshot=1/5 --fold=0 --load=/pretrained_model/dense+our_pascal{0}_resnet.log/best_model.pt --lr=1e-3 --bsz=20 --use_original_imgsize
Experimental results on the PASCAL-5i dataset:

Experimental results on the COCO-20i dataset:
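The headline number on each benchmark is conventionally the mean mIoU over folds 0-3, i.e. a plain arithmetic mean of the four per-fold scores. A small sketch of that aggregation (the scores below are made-up placeholders, not our results):

```python
# Sketch: final benchmark score = arithmetic mean of per-fold mIoU.
def mean_miou(fold_scores):
    """Average mIoU over the four folds of PASCAL-5i / COCO-20i."""
    assert len(fold_scores) == 4, "expected one score per fold (0-3)"
    return sum(fold_scores) / len(fold_scores)

# Placeholder per-fold scores, for illustration only:
print(mean_miou([60.0, 62.0, 58.0, 64.0]))  # 61.0
```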
We borrow code from public projects (huge thanks to all of them), mainly from HSNet.