facebookresearch / XLM

PyTorch original implementation of Cross-lingual Language Model Pretraining.

[Question] Does XLM-R follows RoBERTa or XLM for MLM? #351

Open mani-rai opened 2 years ago

mani-rai commented 2 years ago

Hugging Face states that:

It is based on Facebook’s RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data.

While XLM-R paper states:

We follow the XLM approach as closely as possible, only introducing changes that improve performance at scale.

The confusion is that RoBERTa uses dynamic masking whereas XLM uses a static one. Can somebody explain to me what exactly XLM-R is doing for MLM?
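For context, here is a minimal sketch of the distinction the question is about; the function and constant names (`mask_tokens`, `MASK_ID`, `MLM_PROB`) are hypothetical and not taken from the XLM codebase, and the masking rule is simplified (the full BERT/RoBERTa recipe also keeps some selected tokens unchanged and replaces some with random tokens):

```python
import torch

# Hypothetical constants for illustration only; not from the XLM repo.
VOCAB_SIZE = 30000
MASK_ID = 4
MLM_PROB = 0.15

def mask_tokens(input_ids: torch.Tensor,
                mask_id: int = MASK_ID,
                mlm_prob: float = MLM_PROB):
    """Pick ~15% of positions at random and replace them with [MASK].
    Simplified: ignores the 80/10/10 keep/random-token split."""
    labels = input_ids.clone()
    masked = torch.bernoulli(torch.full(input_ids.shape, mlm_prob)).bool()
    labels[~masked] = -100                 # only masked positions contribute to the loss
    corrupted = input_ids.clone()
    corrupted[masked] = mask_id
    return corrupted, labels

sentence = torch.randint(5, VOCAB_SIZE, (1, 16))   # a toy tokenized sentence

# Static masking (BERT-style preprocessing): masks are sampled once,
# offline, so every epoch trains on the identical corrupted copy.
static_inputs, static_labels = mask_tokens(sentence)
for epoch in range(3):
    pass  # model(static_inputs, labels=static_labels)  -- same mask each epoch

# Dynamic masking (RoBERTa-style): masks are re-sampled inside the
# training loop, so each epoch sees a freshly corrupted copy.
for epoch in range(3):
    dyn_inputs, dyn_labels = mask_tokens(sentence)
    # model(dyn_inputs, labels=dyn_labels)  -- new mask each epoch
```

The only difference between the two regimes is where the mask sampling happens (offline at preprocessing time versus inside the training loop), which is exactly what the question is asking about for XLM-R.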