Authors: Wenxuan Zhang, Paul Janson, Rahaf Aljundi, Mohamed Elhoseiny

KAUST Vision-CAIR, TME
Use this repo to reproduce the results of our method and the baselines.

Set up the conda environment:

```bash
conda env create -f environment.yml
conda activate clip
```
Install the learning rate scheduler:

```bash
pip install 'git+https://github.com/katsura-jp/pytorch-cosine-annealing-with-warmup'
```
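For reference, this package exposes `CosineAnnealingWarmupRestarts`. Below is a minimal usage sketch; the model, optimizer, and all hyperparameter values are illustrative placeholders, not the settings this repo trains with.

```python
import torch
from cosine_annealing_warmup import CosineAnnealingWarmupRestarts

model = torch.nn.Linear(10, 2)  # placeholder model, not the CLIP backbone
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = CosineAnnealingWarmupRestarts(
    optimizer,
    first_cycle_steps=1000,  # steps per cosine cycle
    max_lr=1e-3,             # peak learning rate reached after warmup
    min_lr=1e-6,             # floor of the cosine decay
    warmup_steps=100,        # linear warmup steps at the start of each cycle
)
for step in range(1000):
    # optimizer.step() would run here in a real training loop
    scheduler.step()
```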
Prepare the datasets by following the instructions in the `data` folder.
Run our method on one of the supported datasets:

```bash
python main.py dataset=[cifar100 | cub | cars | aircraft | gtsrb | birdsnap]
```
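The `dataset=...` override above, and the `baseline@_global_=...` form used below, follow Hydra's command-line override grammar (the `@_global_` package syntax is the giveaway). As a rough sketch of how such an entry point is wired, assuming the repo uses Hydra; the `configs` path and `config` name here are assumptions, not the repo's actual layout:

```python
# Minimal Hydra entry point (illustrative, not the repo's actual main.py).
# `dataset=cifar100` would select configs/dataset/cifar100.yaml;
# `baseline@_global_=flyp` would merge configs/baseline/flyp.yaml at the global level.
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(version_base=None, config_path="configs", config_name="config")
def main(cfg: DictConfig) -> None:
    print(OmegaConf.to_yaml(cfg))  # inspect the composed configuration

if __name__ == "__main__":
    main()
```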
Supported baselines:

- `flyp`: Finetune like you pretrain [paper]
- `er`: Experience replay [paper]
- `lwf`: Learning without forgetting [paper]
- `mas`: Memory aware synapses [paper]
- `prd`: Prototype-sample relation distillation [paper]
- `loraewc`: LoRA finetune with EWC regularization [paper]
- `slca`: Slow learner with classifier alignment [paper]
- `sparsecl`: Sparse Continual Learning [paper]
- `spg`: Soft-masks parameter updating [paper]
- `zscl`: Zero-shot Continual Learning [paper]

Run a baseline with:

```bash
python main.py \
    dataset=[cifar100 | cub | cars | aircraft | gtsrb | birdsnap] \
    baseline@_global_=[flyp | er | lwf | mas | prd | loraewc | slca | sparsecl | spg | zscl]
```
For replay-based methods, set `balanced_buffer=False` to sample uniformly from the pooled buffer and current-task data (see the sketch after the command):

```bash
python main.py dataset=your_dataset baseline@_global_=your_baseline balanced_buffer=False
```
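To make the distinction concrete, here is an illustrative sketch of the two sampling strategies. This is not the repo's buffer implementation, and the half/half split under the balanced setting is an assumption about what "balanced" means here:

```python
# Contrast between balanced and uniform replay sampling (illustrative only).
import random

def sample_batch(buffer, current, batch_size, balanced=True):
    if balanced:
        # balanced (assumed): a fixed half/half split between buffer and current task
        half = batch_size // 2
        return random.sample(buffer, half) + random.sample(current, batch_size - half)
    # balanced_buffer=False: draw uniformly from the pooled buffer + current data,
    # so each example has equal probability regardless of origin
    return random.sample(buffer + current, batch_size)

buffer = [("old", i) for i in range(100)]
current = [("new", i) for i in range(400)]
print(sample_batch(buffer, current, 8, balanced=False))
```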
Set `joint=True` for joint training:

```bash
python main.py dataset=your_dataset baseline@_global_=your_baseline joint=True
```
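Joint training is the usual continual-learning upper bound: the model sees the union of all tasks at once instead of sequentially. A minimal sketch of the idea with dummy data, not the repo's implementation:

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# two dummy per-task datasets standing in for the real task splits
task_datasets = [
    TensorDataset(torch.randn(50, 8), torch.zeros(50, dtype=torch.long)),
    TensorDataset(torch.randn(50, 8), torch.ones(50, dtype=torch.long)),
]
# joint training: one loader over all tasks, shuffled together
joint_loader = DataLoader(ConcatDataset(task_datasets), batch_size=16, shuffle=True)
for x, y in joint_loader:
    pass  # a single model would be trained on all tasks simultaneously here
```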
Adjust `buffer_size` to scale the buffer size up or down:

```bash
python main.py dataset=your_dataset baseline@_global_=your_baseline buffer_size=0.5
```
Adjust `num_tasks` to change the number of splits of the dataset:

```bash
python main.py dataset=your_dataset baseline@_global_=your_baseline num_tasks=20
```
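For intuition, a class-incremental split typically partitions the label space into `num_tasks` disjoint groups. An illustrative sketch, not the repo's splitting logic:

```python
import numpy as np

def split_classes(num_classes: int, num_tasks: int, seed: int = 0):
    # shuffle the class ids, then cut them into num_tasks disjoint groups
    classes = np.random.default_rng(seed).permutation(num_classes)
    return np.array_split(classes, num_tasks)

# e.g. a 100-class dataset with num_tasks=20 yields 20 tasks of 5 classes each
for t, cls in enumerate(split_classes(100, 20)):
    print(f"task {t}: classes {sorted(cls.tolist())}")
```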
If you find this work useful, please cite:

```bibtex
@inproceedings{zhang2024overcoming,
  title={Overcoming Generic Knowledge Loss with Selective Parameter Update},
  author={Zhang, Wenxuan and Janson, Paul and Aljundi, Rahaf and Elhoseiny, Mohamed},
  booktitle={CVPR},
  year={2024}
}
```