facebookresearch / MLQA


Fine-tuning the XLM-R on the dev set of each language #12

Open nooralahzadeh opened 4 years ago

nooralahzadeh commented 4 years ago

Hi, have you tried fine-tuning the XLM-R model (after pre-training on English) on the dev set of each other language (few-shot learning) and then evaluating on that language's test set? The strange thing is that XLM-R's performance is lower in the few-shot setting compared to the zero-shot setting.
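For clarity, the comparison I mean can be sketched as below. This is a minimal sketch of the protocol only: `fine_tune` and `evaluate` are hypothetical placeholders standing in for a real XLM-R training/evaluation pipeline (e.g. a HuggingFace `Trainer` setup), and they just record which splits a model has seen.

```python
# Sketch: zero-shot vs few-shot evaluation over the MLQA languages.
# fine_tune() and evaluate() are placeholders, not a real pipeline.

MLQA_LANGUAGES = ["en", "de", "es", "ar", "hi", "vi", "zh"]

def fine_tune(model, split):
    # Placeholder: a "model" is the list of splits it was trained on.
    return model + [split]

def evaluate(model, split):
    # Placeholder: report training history and the evaluation split.
    return {"trained_on": list(model), "evaluated_on": split}

def run_protocol():
    results = {}
    # Step 1: fine-tune the pretrained model on English QA data.
    base = fine_tune([], "english-train")
    for lang in MLQA_LANGUAGES:
        # Zero-shot: evaluate the English-tuned model directly.
        zero_shot = evaluate(base, f"mlqa-test-{lang}")
        # Few-shot: additionally fine-tune on that language's dev set,
        # then evaluate on the same test set.
        few_shot_model = fine_tune(base, f"mlqa-dev-{lang}")
        few_shot = evaluate(few_shot_model, f"mlqa-test-{lang}")
        results[lang] = {"zero_shot": zero_shot, "few_shot": few_shot}
    return results
```

In this setup one would expect the few-shot model to do at least as well as the zero-shot one, since it has seen in-language data; the question is why the opposite happens.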

Thanks