Open eu9ene opened 1 month ago
Translation by the teacher model: "Language environment and native teachers. Non-obvious moments".
I don't see any made-up words in the full text either, even though the fluency is far from perfect. Only a couple of informal abbreviations were translated as is, for example "дз" (homework) as "DZ" and "выпускники инязов" (graduates of universities that specialize in foreign languages) as "graduates of inaz". This text is written in a very informal style, so I guess a lower overall translation quality is expected.
This means the made-up-words issue is specific to the student model.
I don't see any made-up words in translations of news articles, where more formal language is used.
I wonder if this could be caused by the decoder being too shallow or not big enough. It would be good to experiment with this and also to compare the performance of the resulting models.
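For the depth experiment, Marian exposes encoder and decoder depth as training options. A minimal config fragment (option names as in Marian's documentation; the values here are illustrative placeholders, not a recommendation):

```yaml
# Illustrative Marian options; values are placeholders for the experiment.
enc-depth: 6   # number of encoder layers
dec-depth: 2   # student decoders are often kept shallow for speed;
               # increasing this is one variable to test against the issue
```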
Maybe lexical shortlisting could also be affecting this?
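One way a shortlist could produce made-up words: it restricts the decoder's output vocabulary to the top target pieces aligned with the source tokens, and if a subword needed to complete a real word is pruned, the decoder can only combine the surviving pieces. A minimal sketch of this failure mode (the vocabulary, alignment table, and probabilities below are invented for illustration, not the real shortlist format):

```python
# Sketch: aggressive shortlist pruning can drop a subword piece that is
# required to form a real word. All data here is hypothetical.

# Target subword pieces; "▁" marks a word start (SentencePiece-style).
# Hypothetical lexical table: source token -> {target piece: alignment prob}.
LEX_TABLE = {
    "преподаватели": {"▁teachers": 0.6, "▁native": 0.2, "-s": 0.15, "document": 0.05},
    "носители":      {"▁native": 0.5, "-s": 0.3, "document": 0.15, "-speaking": 0.05},
}

def build_shortlist(source_tokens, k):
    """Union of the top-k target pieces for each source token."""
    allowed = set()
    for tok in source_tokens:
        ranked = sorted(LEX_TABLE[tok].items(), key=lambda kv: -kv[1])
        allowed.update(piece for piece, _ in ranked[:k])
    return allowed

src = ["преподаватели", "носители"]
generous = build_shortlist(src, k=4)
pruned = build_shortlist(src, k=2)

# With k=4 the piece "-speaking" survives, so "▁native" + "-speaking" is
# reachable. With k=2 it is pruned, and the decoder can only fall back to
# nearby surviving pieces like "-s" + "document" -> "native-sdocument",
# a made-up word of the kind seen in the title translation.
print("-speaking" in generous)  # True
print("-speaking" in pruned)    # False
```

If this is the mechanism, decoding the same inputs with the shortlist disabled (or with a larger k) should make the made-up words disappear, which would be a cheap first experiment.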
We recently got a report about made up words in Turkish.
I also tested the new ru-en model and noticed a lot of non-existent words there as well.
For example: https://habr.com/ru/articles/842924/
The title is translated as "Language environment and teacher-sdocument. Non-evous moments". It should be "Language environment and native-speaking teachers. Non-obvious points".
We should investigate why this happens. It might have something to do with the recent robustness fixes or using a shortlist.