Jiphyeonjeon Season 3

RoBERTa: A Robustly Optimized BERT Pretraining Approach #29

Open · jinmang2 opened 2 years ago

jinmang2 commented 2 years ago

Jiphyeonjeon intermediate-level study group

Abstract

Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.

schema-0 commented 2 years ago

Here are the RoBERTa presentation slides from Team 11 of Jiphyeonjeon Season 3: https://drive.google.com/file/d/1vokSJX2UBOZz4aqqz7G4KastPl0m_02g/view?usp=sharing

jinmang2 commented 2 years ago