facebookresearch / XLM

PyTorch original implementation of Cross-lingual Language Model Pretraining.

How is sentence piece model trained in XLM-R? #350

Open mani-rai opened 2 years ago

mani-rai commented 2 years ago

I understand how the SentencePiece model is trained in the monolingual case, but the multilingual case is not clear to me, because dataset sizes vary greatly across languages. I think this leads to a biased shared vocabulary.

  1. Is a sampling technique also used while training the SentencePiece model?
  2. If yes, how many times is sampling performed?
  3. Wouldn't it be better to build the sub-word vocabulary from all the text in the dataset rather than from samples?
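For context on the sampling being asked about: the XLM paper describes drawing sentences according to a multinomial distribution whose probabilities are exponentially smoothed, q_i ∝ p_i^α with α < 1, which up-weights low-resource languages relative to their raw corpus share. Below is a minimal sketch of that smoothing (the function name and example sizes are illustrative, not from the XLM codebase):

```python
import numpy as np

def language_sampling_probs(sizes, alpha=0.5):
    """Exponentially smoothed multinomial over languages.

    q_i = p_i**alpha / sum_j(p_j**alpha), where p_i is the fraction of
    the corpus in language i. With alpha < 1, low-resource languages are
    sampled more often than their raw share of the data would suggest.
    """
    p = np.asarray(sizes, dtype=float)
    p = p / p.sum()           # raw corpus shares p_i
    q = p ** alpha            # exponential smoothing
    return q / q.sum()        # renormalize to a distribution

# Hypothetical example: one high-resource and two low-resource languages.
sizes = [1_000_000, 10_000, 10_000]
print(language_sampling_probs(sizes, alpha=0.5))
```

With α = 0.5 the high-resource language drops from ~98% of raw data to ~83% of sampled sentences, while each low-resource language rises from ~1% to ~8%, which is the mechanism that reduces (though does not eliminate) the vocabulary bias the question raises.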