princeton-nlp / SimCSE

[EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings https://arxiv.org/abs/2104.08821
MIT License

Question about used models #230

Closed MLKoz closed 1 year ago

MLKoz commented 1 year ago

Hello, I would like to know why you conducted experiments with BERT and RoBERTa instead of XLNet or DeBERTa. I have read that XLNet outperforms BERT/RoBERTa on many NLP tasks, and I am considering testing XLNet with SimCSE. Do you know of any disadvantages? Thanks.

gaotianyu1350 commented 1 year ago

Hi,

Our method is model-agnostic and should adapt to any pre-trained model. We chose BERT/RoBERTa because they are more commonly used in the community. I also believe that RoBERTa performs better than XLNet on many tasks.
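The model-agnostic part is essentially the training objective: unsupervised SimCSE encodes each sentence twice (dropout yields two slightly different views) and applies an InfoNCE loss with in-batch negatives. A minimal PyTorch sketch of that loss, independent of the encoder, so the same code would apply to XLNet or DeBERTa embeddings (the temperature 0.05 matches the paper; the toy inputs are illustrative):

```python
import torch
import torch.nn.functional as F


def simcse_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE loss with in-batch negatives.

    z1, z2: (batch, dim) embeddings of the same sentences obtained from two
    dropout-perturbed forward passes of any encoder (BERT, XLNet, DeBERTa, ...).
    """
    # Cosine similarity between every pair of views across the batch.
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / temperature
    # The positive for sentence i is its own second view, i.e. the diagonal.
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, labels)


# Toy usage: random tensors stand in for encoder outputs.
torch.manual_seed(0)
z1 = torch.randn(8, 768)
z2 = z1 + 0.01 * torch.randn(8, 768)  # nearly identical views, as with dropout noise
loss = simcse_loss(z1, z2)
```

Since the loss only consumes embedding tensors, swapping the backbone is a matter of changing the encoder that produces `z1` and `z2`.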
