jianghaojun / Awesome-Parameter-Efficient-Transfer-Learning

A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains.
MIT License


Content

Why Parameter Efficient?

Pre-training followed by full fine-tuning is a long-standing paradigm in deep learning. However, as pre-trained models scale up, e.g., GPT-3 (175B params), fully fine-tuning them on various downstream tasks carries a high risk of overfitting. Moreover, in practice, it is costly to train and store a separate copy of a large model for each task. To overcome these issues, researchers have started to explore Parameter-Efficient Transfer Learning, which aims to adapt large-scale pre-trained models to various downstream tasks by modifying as few parameters as possible. Inspired by the great advances in the NLP domain and the continuing trend of scaling up models, researchers in the computer vision and multimodal domains have joined this line of research.
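To make the idea concrete, here is a minimal, illustrative sketch (not any specific paper's method) in the spirit of low-rank adapters: the pre-trained weight matrix is frozen, and only a small bottleneck module is trainable, so the fraction of tunable parameters stays tiny. All names and sizes below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512  # hidden size of the "pre-trained" layer (illustrative)
r = 8    # adapter bottleneck rank, r << d

W = rng.standard_normal((d, d))         # pre-trained weight: FROZEN
A = np.zeros((d, r))                    # adapter down-projection: trainable
B = rng.standard_normal((r, d)) * 0.01  # adapter up-projection: trainable

def forward(x):
    # Frozen backbone path plus the lightweight trainable adapter path.
    # With A initialized to zeros, the adapter is a no-op before tuning.
    return x @ W + (x @ A) @ B

frozen = W.size
trainable = A.size + B.size
print(f"trainable fraction: {trainable / (frozen + trainable):.2%}")
```

Only `A` and `B` would receive gradient updates during adaptation; here they account for roughly 3% of all parameters, and the ratio shrinks further as `d` grows relative to `r`.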

Keywords Convention

We follow the general idea of PromptPapers to label the papers.

The abbreviation of the work.

The main explored task of the work.

Other important information of the work.

Papers

Prompt

Adapter

Unified

Others

Contribution

Contributors

Contributing to this paper list

Acknowledgement

The structure of this repository follows thunlp/DeltaPapers, which focuses on collecting awesome parameter-efficient transfer learning papers in the natural language processing domain. Check out their repository if you are interested in progress in the NLP domain.