morioka / reading


Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning #35

Open morioka opened 4 years ago

morioka commented 4 years ago

https://twitter.com/arxiv_org/status/1324123219612246018

https://arxiv.org/abs/2011.01403

Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning
Beliz Gunel, Jingfei Du, Alexis Conneau, Ves Stoyanov (Stanford, FAIR)

> State-of-the-art natural language understanding classification models follow two stages: pre-training a large language model on an auxiliary task, and then fine-tuning the model on a task-specific labeled dataset using cross-entropy loss. Cross-entropy loss has several shortcomings that can lead to sub-optimal generalization and instability. Driven by the intuition that good generalization requires capturing the similarity between examples in one class and contrasting them with examples in other classes, we propose a supervised contrastive learning (SCL) objective for the fine-tuning stage. Combined with cross-entropy, the SCL loss we propose obtains improvements over a strong RoBERTa-Large baseline on multiple datasets of the GLUE benchmark in both the high-data and low-data regimes, and it does not require any specialized architecture, data augmentation of any kind, memory banks, or additional unsupervised data. We also demonstrate that the new objective leads to models that are more robust to different levels of noise in the training data, and can generalize better to related tasks with limited labeled task data.

![image](https://user-images.githubusercontent.com/1615546/98460751-6ece2680-21ea-11eb-9d89-ac96cd4c3c0e.png)
![image](https://user-images.githubusercontent.com/1615546/98460763-85747d80-21ea-11eb-99f2-07bedf55117b.png)
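For reference, the batch-wise supervised contrastive term described in the abstract can be sketched roughly as follows (a minimal NumPy sketch, not the authors' implementation; the function name `scl_loss` and the temperature default are my own choices). For each anchor, it averages the negative log-probability of its same-label examples under a softmax over all other examples in the batch; in the paper this term is mixed with cross-entropy as `(1 - lam) * ce + lam * scl`.

```python
import numpy as np

def scl_loss(features, labels, tau=0.3):
    """Supervised contrastive loss over a batch of encoder features.

    features: (n, d) array of representations (will be L2-normalized)
    labels:   (n,) array of integer class labels
    tau:      temperature scaling the pairwise similarities
    """
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / tau                      # pairwise cosine similarities
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    # Exclude each anchor from its own softmax denominator.
    logits = np.where(self_mask, -np.inf, sim)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss = 0.0
    for i in range(n):
        positives = (labels == labels[i]) & ~self_mask[i]
        if positives.any():
            # Average over this anchor's same-class examples.
            loss += -log_prob[i, positives].mean()
    return loss / n
```

Clustered same-class features should yield a lower loss than a batch where the labels cut across the clusters, which is the behavior the objective is meant to encourage.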
DeepTecher commented 4 years ago

Have you found an implementation?

morioka commented 3 years ago

not yet.