This is the code for our paper "MaskSplit: Self-supervised Meta-learning for Few-shot Semantic Segmentation", accepted to WACV 2022 and available at [arXiv].
Create the following directory structure before running the code:
data
├── coco
│   ├── train2014
│   ├── train2014gt
│   ├── train2014saliency
│   ├── val2014
│   └── val2014gt
└── pascal
    ├── JPEGImages
    ├── SegmentationClassAug
    └── saliency_unsupervised_model
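The skeleton above can be created with a few lines of Python before the datasets are downloaded (a minimal sketch; the dataset contents still have to be placed into these folders manually):

```python
import os

# Directory skeleton from the tree above; dataset files are added later.
DIRS = [
    "data/coco/train2014",
    "data/coco/train2014gt",
    "data/coco/train2014saliency",
    "data/coco/val2014",
    "data/coco/val2014gt",
    "data/pascal/JPEGImages",
    "data/pascal/SegmentationClassAug",
    "data/pascal/saliency_unsupervised_model",
]

for d in DIRS:
    # exist_ok lets the script be re-run safely on a partially built tree.
    os.makedirs(d, exist_ok=True)
```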
1. PASCAL-5i
Download PASCAL VOC2012 devkit (train/val data):
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
Both SegmentationClassAug and saliency_unsupervised_model can be downloaded from the links we provide: [SegmentationClassAug] and [saliency_unsupervised_model].
2. COCO-20i
Download COCO2014 train/val images:
wget http://images.cocodataset.org/zips/train2014.zip
wget http://images.cocodataset.org/zips/val2014.zip
COCO2014 train/val annotations and the train2014saliency folder can be downloaded from the following links: [train2014gt.zip], [val2014gt.zip], [train2014saliency.zip].
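After downloading and extracting everything, a quick check that the expected COCO-20i folders are in place can save a failed training run. A minimal sketch (the helper name is ours, not part of the repo; folder names are taken from the tree above):

```python
import os

# Expected COCO-20i layout, matching the directory tree in this README.
EXPECTED_COCO_DIRS = [
    "data/coco/train2014",
    "data/coco/train2014gt",
    "data/coco/train2014saliency",
    "data/coco/val2014",
    "data/coco/val2014gt",
]

def missing_dirs(expected=EXPECTED_COCO_DIRS):
    """Return the expected directories that do not exist yet."""
    return [d for d in expected if not os.path.isdir(d)]
```

Running `missing_dirs()` from the repository root should return an empty list once the data is in place.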
To train the models, pretrained backbones are needed; they can be downloaded from: https://drive.google.com/drive/folders/1gYzrgP5oxAKBlWrloozXPqbQeZbdYTus?usp=sharing.
We also provide models pretrained with our approach; these can be found at [PASCAL-5i].
Before starting training, there are some steps that should be taken:
Then, run the command:
python src/train.py --config {path_to_config_file}
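For illustration, the `--config` entry point could be wired roughly as follows (a hypothetical sketch; the actual `src/train.py` in the repo may parse arguments differently):

```python
import argparse

def parse_args(argv=None):
    """Parse the command-line interface assumed by the command above."""
    parser = argparse.ArgumentParser(description="MaskSplit training")
    parser.add_argument("--config", required=True,
                        help="path to the training config file")
    return parser.parse_args(argv)
```

With this interface, `python src/train.py --config configs/pascal.yaml` would make the config path available as `args.config` (the config filename here is only an example).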
To test a model, first do the following:
Then, run the command:
python src/test.py --config {path_to_config_file}
We are grateful to the authors of https://github.com/mboudiaf/RePRI-for-Few-Shot-Segmentation, whose code inspired parts of ours.