Implementation used in our paper:
Adaptive Masked Proxies for Few Shot Segmentation
Extended Version: Accepted at ICCV'19.
Workshop Paper: Accepted at the Learning from Limited Labelled Data Workshop, held in conjunction with ICLR'19.
Deep learning has thrived by training on large-scale datasets. However, for continual learning in applications such as robotics, it is critical to incrementally update the model in a sample-efficient manner. We propose a novel method that constructs the new class weights from the few labelled samples in the support set without back-propagation, relying on our adaptive masked proxies approach. It utilizes multi-resolution average pooling on the output embeddings, masked with the label, to act as a positive proxy for the new class, while fusing it with the previously learned class signatures. Our proposed method is evaluated on the PASCAL-5i dataset and outperforms the state of the art in 5-shot semantic segmentation. Unlike previous methods, our approach does not require a second branch to estimate parameters or prototypes, which enables it to be used with 2-stream motion- and appearance-based segmentation networks. The proposed adaptive proxies also allow the method to be used with a continuous data stream. Our online adaptation scheme is evaluated on the DAVIS and FBMS video object segmentation benchmarks. We further propose a novel setup for evaluating continual learning of object segmentation, which we name incremental PASCAL (iPASCAL), where our method is shown to outperform the baseline method.
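The core idea above — masked average pooling to build a positive proxy for the new class, then fusing it with previously learned class signatures — can be sketched in a few lines. This is a simplified NumPy illustration, not the repository's implementation: it uses a single resolution rather than multi-resolution pooling, and the function names and the fusion coefficient `alpha` are assumptions for illustration only.

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Average-pool the output embeddings over the support-mask foreground.

    features: (C, H, W) embeddings from the segmentation backbone.
    mask:     (H, W) binary label mask for the new class.
    Returns a (C,) vector acting as a positive proxy for the new class.
    """
    masked = features * mask[None, :, :]      # zero out background pixels
    denom = mask.sum() + 1e-8                 # number of foreground pixels
    return masked.sum(axis=(1, 2)) / denom

def fuse_proxy(old_weight, proxy, alpha=0.3):
    """Fuse the new proxy with a previously learned class signature.

    A simple convex combination; alpha is an assumed hyperparameter here.
    The result is re-normalized so it can be used with cosine scoring.
    """
    w = (1.0 - alpha) * old_weight + alpha * proxy
    return w / (np.linalg.norm(w) + 1e-8)

# Toy example: a 4-channel embedding on a 3x3 grid with one foreground pixel.
feats = np.ones((4, 3, 3))
mask = np.zeros((3, 3))
mask[1, 1] = 1.0
proxy = masked_average_pooling(feats, mask)   # -> vector of ones, shape (4,)
```

Because the pooling happens entirely in the forward pass, no back-propagation is needed to register a new class: the fused vector is simply installed as the new class's weight in the final (cosine-similarity) classification layer.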
1-way 1-shot segmentation
2-way 1-shot segmentation
The current code is tested with torch 0.4.1, torchvision 0.2.0, and Python 3.6.9.
virtualenv --system-site-packages -p python3 ./venv
source venv/bin/activate
pip install -r requirements.txt
Download the trained weights here: fcn8s_pasal_normalize_training.zip
To use with Google Colab, upload the notebook using the following URL: Demo
python train.py --config configs/fcn8s_pascal.yaml
python fewshot_imprinted.py --binary BINARY_FLAG --config configs/fcn8s_pascal_imprinted.yml --model_path MODEL_PATH --out_dir OUT_DIR
Updated results after training with the CosineSimLayer, which normalizes both the features and the weights:
| Fold | 0 | 1 | 2 | 3 | mIoU |
|---|---|---|---|---|---|
| AMP - 1shot | 39.6 | 52.1 | 46.7 | 34.6 | 43.3 |
| AMP - 5shot | 44.5 | 57.3 | 50.8 | 41.4 | 48.5 |
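The CosineSimLayer mentioned above scores each pixel against each class weight by cosine similarity, i.e. both the pixel embeddings and the class weights are L2-normalized before the dot product. A minimal NumPy sketch of that scoring step (the function name and the `scale` temperature are assumptions, not the repository's exact API):

```python
import numpy as np

def cosine_sim_scores(features, weights, scale=10.0):
    """Score each pixel against each class weight with cosine similarity.

    features: (C, H, W) pixel embeddings.
    weights:  (K, C) class weights, one row per class (including imprinted ones).
    scale:    assumed temperature applied before the softmax/argmax.
    """
    # Normalize features per pixel (over channels) and weights per class row.
    f = features / (np.linalg.norm(features, axis=0, keepdims=True) + 1e-8)
    w = weights / (np.linalg.norm(weights, axis=1, keepdims=True) + 1e-8)
    C, H, W = features.shape
    # (K, C) @ (C, H*W) -> (K, H, W) cosine scores in [-1, 1], then scaled.
    return scale * (w @ f.reshape(C, H * W)).reshape(-1, H, W)
```

Normalizing both sides keeps the imprinted proxy weights on the same scale as the trained class weights, which is what makes mixing them in one classifier well-behaved.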
python vis_preds.py VIS_FOLDER
Check Experiments.md. Results reported in the short-version paper used foreground IoU, and the dataloader provided random pairs that were not exactly the same as the ones used by OSLSM. The corrected results in the extended version are reported using foreground IoU per class, with exactly the pairs generated by the OSLSM code.
To reproduce results using our dataloader, instead of reading the random pairs generated by the OSLSM code, check the prev_results branch.
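To make the distinction above concrete: foreground IoU measures the overlap of the predicted and ground-truth foreground regions in each episode, and the per-class variant averages those scores per test class before taking the mean. A small NumPy sketch (function names and the dict-of-lists grouping are illustrative assumptions):

```python
import numpy as np

def foreground_iou(pred, gt):
    """IoU of the foreground (label 1) between binary prediction and ground truth."""
    inter = np.logical_and(pred == 1, gt == 1).sum()
    union = np.logical_or(pred == 1, gt == 1).sum()
    return inter / union if union > 0 else 1.0

def per_class_foreground_iou(ious_by_class):
    """Average foreground IoU per test class, then average over classes.

    ious_by_class: dict mapping class id -> list of episode IoUs for that class.
    """
    return float(np.mean([np.mean(v) for v in ious_by_class.values()]))
```

Averaging over episodes directly (the short-version metric) weights classes by how often they are sampled; averaging per class first removes that sampling bias, which is why the two reports differ.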
Please cite our paper if you find it useful in your research:
@InProceedings{Siam_2019_ICCV,
author = {Siam, Mennatullah and Oreshkin, Boris N. and Jagersand, Martin},
title = {AMP: Adaptive Masked Proxies for Few-Shot Segmentation},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}