Jiphyeonjeon Season 3

DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter #38

Open jinmang2 opened 2 years ago

jinmang2 commented 2 years ago

Jiphyeonjeon intermediate-level study group

Abstract

As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pre-training phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pre-training, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study.
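For reference, the "triple loss" mentioned in the abstract (language modeling + distillation + cosine-distance) can be sketched roughly as below. This is a minimal PyTorch sketch, not the authors' implementation: the function name, loss weights, and tensor shapes are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def distilbert_triple_loss(student_logits, teacher_logits,
                           student_hidden, teacher_hidden,
                           labels, temperature=2.0,
                           alpha_ce=5.0, alpha_mlm=2.0, alpha_cos=1.0):
    """Illustrative triple loss: soft-target distillation + MLM + cosine alignment.

    Assumed shapes: logits (batch, seq, vocab), hidden states (batch, seq, dim),
    labels (batch, seq) with -100 marking non-masked positions.
    The alpha_* weights are placeholders, not the paper's exact values.
    """
    t = temperature

    # 1) Distillation loss: KL divergence between the temperature-softened
    #    teacher and student output distributions (scaled by t^2, as is standard).
    loss_ce = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t ** 2)

    # 2) Masked language modeling loss on the hard (masked-token) labels.
    loss_mlm = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )

    # 3) Cosine embedding loss aligning student and teacher hidden-state directions.
    target = torch.ones(
        student_hidden.size(0) * student_hidden.size(1),
        device=student_hidden.device,
    )
    loss_cos = F.cosine_embedding_loss(
        student_hidden.view(-1, student_hidden.size(-1)),
        teacher_hidden.view(-1, teacher_hidden.size(-1)),
        target,
    )

    return alpha_ce * loss_ce + alpha_mlm * loss_mlm + alpha_cos * loss_cos
```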

jinmang2 commented 2 years ago

Presentation slides: https://drive.google.com/file/d/1qTECNfIimcq-MSreySNGy3ziMmlXV--3/view?usp=sharing

jinmang2 commented 1 year ago
mycogno commented 1 year ago

The presentation link above points to a different talk, so I'm re-posting the link.