Closed: potamides closed this pull request 3 years ago
merged. thanks very much, @potamides! The results look nice overall.
btw, do you have any idea why the original XMoverScore performs so badly on MLQE-PE (en-de, en-zh and si-en)?
I think the bad performance on the high-resource language pairs en-de and en-zh is caused by a lack of variability in the assigned scores. This is already discussed in the MLQE-PE and WMT 2020 Shared Task on Quality Estimation papers. Relevant excerpt:
MT quality for the high-resource language pairs, in particular English-German, was the most challenging to predict. As discussed in Fomicheva et al. (2020a), the MT outputs for this language pair have little variability in terms of perceived MT quality. The vast majority of translations were assigned high scores during DA evaluation, which makes it difficult to capture any meaningful variation between the DA scores.
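As a side note, this effect is easy to reproduce numerically. Below is a small toy illustration of my own (not taken from the papers): a hypothetical metric with a fixed error level correlates strongly with gold scores when those scores are spread out, but the Pearson correlation drops sharply once the gold scores cluster in a narrow band, as the DA annotations for en-de do.

```python
# Toy illustration: shrinking the variance of the gold scores while keeping
# the metric's error fixed lowers the Pearson correlation, because the
# signal-to-noise ratio shrinks.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 0.1, size=1000)  # fixed error of a hypothetical metric

for spread in (1.0, 0.1):  # wide vs. narrow distribution of gold DA scores
    gold = rng.normal(0.8, spread, size=1000)
    metric = gold + noise  # metric output: gold signal plus fixed noise
    r, _ = pearsonr(gold, metric)
    print(f"gold std={spread:.1f} -> Pearson r={r:.3f}")
```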
The bad performance on si-en is less obvious to me, however. It could be due to the comparatively poor performance of mBERT on low-resource languages.
Thanks! That makes sense to me. The biased datasets (en-de and en-zh) could simply be too hard to score properly, not just for metrics but even for bilingual experts.
btw, I found that Sinhala is not covered in mBERT, and that could lead to the bad results on si-en. But it's nice to see that re-mapping techniques help a bit.
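For anyone who wants to check the coverage themselves, a quick tokenizer test is one way to do it (this assumes the HuggingFace `transformers` package; the Sinhala sample sentence is just an arbitrary example). Since Sinhala is not among mBERT's 104 pre-training languages, words in that script should largely be mapped to the `[UNK]` token:

```python
# Quick coverage check: tokenize a Sinhala phrase with mBERT's vocabulary.
# Words containing characters absent from the WordPiece vocabulary come
# back as [UNK], so most of the input should not survive tokenization.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
tokens = tok.tokenize("සිංහල භාෂාව")  # "Sinhala language"
print(tokens)
unk = sum(t == tok.unk_token for t in tokens)
print(f"{unk}/{len(tokens)} tokens are {tok.unk_token}")
```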
Ah that makes sense. Thanks for looking into that! I wasn't aware of that.
Hello there, I wanted to evaluate XMoverScore on the WMT16 and MLQE-PE datasets. Since remapping matrices did not exist for all language pairs, I had to compute some myself. With this pull request I want to contribute these remapping matrices to the XMoverScore project. I provide new CLP and UMD projection tensors extracted from both the 8th and 12th mBERT layer for the following language directions: en-de, ro-en, en-zh, en-ru, ne-en and si-en.
For English-German and Romanian-English I used the Europarl v7 corpus, for the non-European language pairs English-Chinese and English-Russian I used the UN v1 corpus, and for the low-resource language pairs Nepali-English and Sinhala-English I used the FLoRes v1 corpus. This is also reflected in the file names. The following two tables summarize the Pearson correlations achieved on both datasets (a sketch of how the remapping itself works follows after the tables). New language pairs are highlighted:
[Tables: Pearson correlations on WMT16 (en-ru, ro-en) and on MLQE-PE (en-de, en-zh, ro-en, ne-en, si-en); correlation values omitted.]
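For context, here is a minimal numpy sketch of the two remapping methods as I understand them. This illustrates the general idea rather than the exact code used to produce the tensors; `X` and `Y` are assumed to be row-aligned mBERT word embeddings extracted from the parallel corpora via word alignment:

```python
# Sketch of the two remapping methods: CLP learns an orthogonal projection
# between the two embedding spaces via the Procrustes solution, while UMD
# estimates a single language-mismatch direction to be removed.
import numpy as np

def clp(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Orthogonal map W minimizing ||X @ W - Y||_F (Procrustes solution)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def umd(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Unit vector along the mean difference of aligned embedding pairs."""
    v = (X - Y).mean(axis=0)
    return v / np.linalg.norm(v)

# Applying the remappings to a new source-side embedding e:
#   CLP: e @ W
#   UMD: e - (e @ v) * v
```

The two methods make different trade-offs: CLP constrains the map to be orthogonal, which preserves distances within the embedding space, while UMD only removes one bias direction and leaves everything else untouched.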