The provided pre-trained model produces strange results when translating single or short words, for example:
hi --> 1 。 Hi ,你好 ("1. Hi, hello")
hey --> 嘿,嘿,嘿,嘿 ("hey, hey, hey, hey")
dog --> 1 。一条经过训练的狗 ("1. a trained dog")
apple --> 如 : (1) 苹果。 ("e.g.: (1) apple.")
If the model fails to translate such common words accurately, it is hard to use in practice. Is there a way to eliminate this problem through training or fine-tuning?
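For context, one mitigation I have been considering is to fine-tune on parallel data that includes enough single-word and short-phrase pairs, since pre-training corpora are usually dominated by full sentences. Below is a minimal, hypothetical sketch of the data-preparation step: filtering a parallel corpus down to short source segments before fine-tuning. The helper name and whitespace tokenization are my own assumptions, not part of this project.

```python
# Hypothetical helper: keep only (source, target) pairs whose source side
# is short, so a fine-tuning pass sees enough single-word examples.
# Assumes simple whitespace tokenization on the source side.

def filter_short_pairs(pairs, max_src_tokens=3):
    """Return pairs whose source has at most max_src_tokens tokens."""
    return [(src, tgt) for src, tgt in pairs if len(src.split()) <= max_src_tokens]

corpus = [
    ("hi", "你好"),
    ("dog", "狗"),
    ("the quick brown fox jumps", "敏捷的棕色狐狸跳跃"),  # too long, filtered out
    ("apple", "苹果"),
]

short_pairs = filter_short_pairs(corpus)
print(short_pairs)
```

Mixing such short pairs into the fine-tuning data might teach the model to handle isolated words, but I am not sure whether this is the recommended approach for this codebase.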