
# A Unified Continual Learning Framework with General Parameter-Efficient Tuning (ICCV 2023)

[Qiankun Gao](https://github.com/gqk), [Chen Zhao](https://zhao-chen.com/), [Yifan Sun](https://yifansun-reid.github.io), Teng Xi, Gang Zhang, [Bernard Ghanem](https://www.bernardghanem.com/), [Jian Zhang](https://github.com/jianzhangcs) [[`Paper`](https://github.com/gqk/LAE/files/12387816/03318.pdf)] [[`Supp`](https://github.com/gqk/LAE/files/12387826/03318-supp.pdf)] [[`arXiv`](https://arxiv.org/abs/2303.10070)] [[`BibTex`](#citation)]

## News

## Installation

## Dataset

  1. Create a dataset root directory, e.g., `data`.
  2. The CIFAR100 and ImageNet-R datasets are downloaded automatically, while DomainNet must be downloaded manually.
  3. Overview of the dataset root directory (a preparation sketch follows this section):

    ├── cifar100
    │   └── cifar-100-python
    ├── domainnet
    │   ├── clipart
    │   ├── infograph
    │   ├── painting
    │   ├── quickdraw
    │   ├── real
    │   └── sketch
    └── imagenet-r
        ├── imagenet-r
        ├── train_list.txt
        └── val_list.txt

    :warning: The train/validation split of the ImageNet-R dataset is consistent with the L2P JAX code. Replace `train_list.txt` and `val_list.txt` with `train_list_coda-p.txt` and `val_list_coda-p.txt` if you want to use the train/validation split of CODA-Prompt.
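
The layout above can be double-checked with a short script. The snippet below is a minimal sketch, not the repository's own data pipeline: the `data` root path and the use of torchvision to fetch CIFAR-100 are assumptions, and DomainNet / ImageNet-R are only checked, not downloaded.

```python
# Minimal sketch for preparing the dataset root described above.
# Assumptions: the root is "./data" and torchvision is installed; the
# repository's own data pipeline may download the datasets differently.
from pathlib import Path

from torchvision.datasets import CIFAR100

data_root = Path("data")

# torchvision extracts the archive to <root>/cifar-100-python, which matches
# the "cifar100/cifar-100-python" layout shown in the tree above.
CIFAR100(root=str(data_root / "cifar100"), train=True, download=True)
CIFAR100(root=str(data_root / "cifar100"), train=False, download=True)

# DomainNet must be downloaded manually, so only check for the expected folders.
domains = ["clipart", "infograph", "painting", "quickdraw", "real", "sketch"]
missing = [d for d in domains if not (data_root / "domainnet" / d).is_dir()]
if missing:
    print(f"DomainNet domains still missing: {missing}")

# ImageNet-R is downloaded automatically by the training code; once it is in
# place, the split files should sit next to the "imagenet-r" image folder.
for split_file in ("train_list.txt", "val_list.txt"):
    path = data_root / "imagenet-r" / split_file
    print(f"{split_file}: {'found' if path.is_file() else 'not found yet'}")
```

Running it once before training should leave `data/` matching the tree above for CIFAR-100 and print a reminder for anything that still needs manual setup.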

## Experiment

## Acknowledgement

## Citation

    @inproceedings{gao2023lae,
      title={A Unified Continual Learning Framework with General Parameter-Efficient Tuning},
      author={Gao, Qiankun and Zhao, Chen and Sun, Yifan and Xi, Teng and Zhang, Gang and Ghanem, Bernard and Zhang, Jian},
      booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
      pages={11483--11493},
      year={2023}
    }