
Code for the WACV 2022 paper: "SEGA: Semantic Guided Attention on Visual Prototype for Few-Shot Learning"

SEGA: Semantic Guided Attention on Visual Prototype for Few-Shot Learning
(WACV 2022)

Fengyuan Yang, Ruiping Wang, Xilin Chen

[Paper link], [Supp link]

1. Requirements

2. Datasets

Note: the above datasets are the same as in previous works (e.g. FewShotWithoutForgetting, DeepEMD), EXCEPT that we include additional semantic embeddings (GloVe word embeddings for the first three datasets and attribute embeddings for CUB-FS). Thus, remember to set the argparse argument `semantic_path` in the training and testing scripts.
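For reference, below is a minimal sketch of pointing the scripts at a semantic embedding file and inspecting it. The argument name `semantic_path` comes from the scripts, but the default file name and the dictionary layout (class name mapped to an embedding vector) are assumptions for illustration only.

```python
import argparse
import pickle

# Parse the semantic embedding path the same way the training/testing scripts do.
parser = argparse.ArgumentParser()
parser.add_argument('--semantic_path', type=str,
                    default='data/miniimagenet_glove_300d.pkl',  # hypothetical file name
                    help='path to the class-level semantic embeddings (GloVe / attributes)')
args = parser.parse_args()

# Assumed layout: a dict mapping class name -> embedding vector (e.g. 300-d GloVe).
with open(args.semantic_path, 'rb') as f:
    semantic_embeddings = pickle.load(f)
print(len(semantic_embeddings), 'classes loaded')
```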

3. Usage

Our training and testing scripts are all under `scripts/` and are provided as Jupyter notebooks, where both the argparse arguments and the output logs can be easily found.

Let's take the training and testing pipeline on miniImageNet as an example. For the first-stage training, run all cells in `scripts/01_miniimagenet_stage1.ipynb`. For the second-stage training and final testing, run all cells in `scripts/01_miniimagenet_stage2_SEGA_5W1S.ipynb`.
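If you prefer to execute a notebook non-interactively (for example on a remote server), here is a minimal sketch using `nbformat` and nbconvert's `ExecutePreprocessor`; these packages and the kernel name are assumptions on your environment, not part of this repository.

```python
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

# Stage-1 training notebook for miniImageNet (path taken from the Usage section above).
nb_path = 'scripts/01_miniimagenet_stage1.ipynb'
nb = nbformat.read(nb_path, as_version=4)

# Run every cell in order, as you would with "Run All" in Jupyter.
ep = ExecutePreprocessor(timeout=None, kernel_name='python3')  # kernel name is an assumption
ep.preprocess(nb, {'metadata': {'path': 'scripts/'}})

# Save the executed notebook (with its output logs) next to the original.
nbformat.write(nb, nb_path.replace('.ipynb', '_executed.ipynb'))
```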

4. Results

The 1-shot and 5-shot classification results can be found in the corresponding Jupyter notebooks.

5. Pre-trained Models

The pre-trained models for all 4 datasets after our first training stage can be downloaded from here.
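A minimal sketch of loading a downloaded first-stage checkpoint with PyTorch is shown below; the file path and the checkpoint keys are assumptions and may differ from the actual files, so adjust them to what you download.

```python
import torch

# Hypothetical path to a downloaded first-stage checkpoint.
ckpt_path = 'checkpoints/miniimagenet_stage1.pth'

# Load on CPU so the snippet also works on machines without a GPU.
checkpoint = torch.load(ckpt_path, map_location='cpu')

# Many PyTorch checkpoints wrap the weights in a dict; the 'state_dict' key is an assumption.
state_dict = checkpoint['state_dict'] if isinstance(checkpoint, dict) and 'state_dict' in checkpoint else checkpoint

# Inspect which layers are included before plugging the weights into the stage-2 scripts.
for name in list(state_dict)[:5]:
    print(name, tuple(state_dict[name].shape))
```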

Citation

If you find our paper or code useful, please consider citing our paper:

@inproceedings{yang2022sega,
  title={SEGA: Semantic Guided Attention on Visual Prototype for Few-Shot Learning},
  author={Yang, Fengyuan and Wang, Ruiping and Chen, Xilin},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={1056--1066},
  year={2022}
}

Acknowledgments

Our code is based on Dynamic Few-Shot Visual Learning without Forgetting and MetaOptNet, and we really appreciate their work.

Further

If you have any questions, feel free to contact me at fengyuan.yang@vipl.ict.ac.cn.