Download the datasets from here:

| Dataset | IMDB | YELP | GDRD | PPR |
| --- | --- | --- | --- | --- |
| Original link | Link | Link | Link | Link |
and organize the data as follows:

```
|-- $corpus
|   |-- imdb2
|   |   |-- a
|   |   |-- b
|   |-- yelp2
|   |   |-- a
|   |   |-- b
|   |-- gdrd
|   |   |-- a
|   |   |-- b
|   |-- ppr
|   |   |-- a
|   |   |-- b
```
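The layout above can be created in one loop. This is a minimal sketch: the `./corpus` root path is a placeholder for `$corpus`, and the `a`/`b` subfolder names are reproduced exactly as shown in the tree.

```shell
# Sketch: create the expected corpus skeleton.
# "./corpus" stands in for $corpus; adjust to your data root.
corpus=./corpus
for ds in imdb2 yelp2 gdrd ppr; do
  # Each dataset has the two subfolders shown in the tree above.
  mkdir -p "$corpus/$ds/a" "$corpus/$ds/b"
done
```

The downloaded dataset files then go into the matching subfolders.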
Install the required packages:

```shell
pip install -r requirement.txt
```
Configure the runs via the configuration files:

```
|-- $cfgs
|   |-- {Model}_model.yml  # configurations for each base model
|   |-- config.py          # configurations for running settings
|   |-- constants          # constant configurations
```
Train and evaluate in the full-shot scenario (e.g., on the IMDB dataset):

```shell
python run.py --run train --version test_fullshot --gpu 0 --dataset imdb2a --model bert
```
Train and evaluate in the few-shot scenario (e.g., on the IMDB dataset):

```shell
python run.py --run fewshot --version test_fewshot --gpu 0 --dataset imdb2b --model bert
```
Evaluate in the zero-shot scenario (e.g., on the IMDB dataset):

```shell
python run.py --run zeroshot --version test_zeroshot --gpu 0 --dataset imdb2b --model bert
```
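The three scenario commands above can be generated from one loop. This is a sketch only: it prints the commands rather than executing them, and the run modes, version tags, and dataset ids are taken verbatim from the examples in this README.

```shell
# Sketch: build the three scenario commands for the IMDB data.
# Each spec is "run-mode version-tag dataset-id", copied from the examples above.
cmds=""
for spec in "train test_fullshot imdb2a" \
            "fewshot test_fewshot imdb2b" \
            "zeroshot test_zeroshot imdb2b"; do
  set -- $spec  # split the spec into positional parameters $1 $2 $3
  cmds="$cmds
python run.py --run $1 --version $2 --gpu 0 --dataset $3 --model bert"
done
echo "$cmds"
```

Drop the `echo` and run each line directly (or pipe into `sh`) to execute the full sweep.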
Awaiting update.