ProtoDiff

This repository contains the code for ProtoDiff: Learning to Learn Prototypical Networks by Task-Guided Diffusion.

Citation

@inproceedings{protodiff,
  title={ProtoDiff: Learning to Learn Prototypical Networks by Task-Guided Diffusion},
  author={Du, Yingjun and Xiao, Zehao and Liao, Shengcai and Snoek, Cees},
  booktitle={Advances in Neural Information Processing Systems},
  year={2023}
}

Running the code

Preliminaries

Environment

Datasets

Download the datasets and link the folders into materials/ with the names mini-imagenet, tiered-imagenet, and imagenet. Note that imagenet refers to the ILSVRC-2012 1K dataset, which contains two directories, train and val, each holding per-class folders.
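The layout above can be set up with symlinks, for example as follows (a sketch: the source paths under /data/ are placeholders for wherever you downloaded the datasets):

```python
import os

# Map the folder names expected under materials/ to the downloaded
# dataset locations (placeholder paths; adjust to your machine).
datasets = {
    'mini-imagenet': '/data/mini-imagenet',
    'tiered-imagenet': '/data/tiered-imagenet',
    'imagenet': '/data/ILSVRC2012',  # ILSVRC-2012 1K, containing train/ and val/
}

os.makedirs('materials', exist_ok=True)
for name, src in datasets.items():
    link = os.path.join('materials', name)
    if not os.path.islink(link):
        os.symlink(src, link)
```

Equivalently, `ln -s` from the shell works; the scripts only care that `materials/<name>` resolves to the dataset root.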

When running the Python programs, use --gpu to specify which GPUs to run on (e.g. --gpu 0,1). For Classifier-Baseline, we train with 4 GPUs on miniImageNet and tieredImageNet, and with 8 GPUs on ImageNet-800. Meta-Baseline uses half as many GPUs in each case.
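A common pattern for honoring such a --gpu flag is to restrict CUDA's visible devices before any framework initialization; the sketch below illustrates that pattern (the flag name comes from this README, but the parsing code here is an assumption, not necessarily the repo's implementation):

```python
import argparse
import os

# Hypothetical sketch of --gpu handling: export CUDA_VISIBLE_DEVICES
# before torch/CUDA is initialized so only the listed devices are used.
parser = argparse.ArgumentParser()
parser.add_argument('--gpu', default='0',
                    help='comma-separated GPU ids, e.g. 0,1')
args = parser.parse_args(['--gpu', '0,1'])  # example invocation
os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu
print(os.environ['CUDA_VISIBLE_DEVICES'])
```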

In the following we take miniImageNet as an example. For other datasets, replace mini with tiered or im800. By default the setting is 1-shot; modify shot in the config file for other shot counts. Models are saved in save/.

1. Training Classifier-Baseline

python train_classifier.py --config configs/train_classifier_mini.yaml

2. Training ProtoDiff

python train_1.py --config configs/train_meta_mini.yaml