j0ma / ancestral-decipherment

Yiddish-to-German "decipherment" #3

Open j0ma opened 1 year ago

j0ma commented 1 year ago

After training Nada's model for 20 epochs (as in her paper), this is what we get.

When we take a random German sentence from the dev set, the model does quite well:

$ vipe < random_german.txt |
    while read line; do
        # print the original line, then its model translation
        echo "$line"
        echo "$line" | bash translate_from_stdin.sh 2>/dev/null
    done

> aber was konnte ich weiter thun die bequemlichkeit des dienstes in
< aber was kommte ich weiter thum die bequenlichkeit des diemstes i

Clearly there are some mistakes, like the s/n/m/g confusions (konnte -> kommte, thun -> thum), but overall the performance is remarkably good.
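
One quick way to eyeball these character-level confusions is to diff the reference and the hypothesis character by character. This is just a throwaway shell check, not part of any script in the repo:

$ # fold -w1 puts one character per line, so diff shows single-character substitutions
$ diff <(fold -w1 <<< "aber was konnte ich weiter thun die bequemlichkeit des dienstes in") \
       <(fold -w1 <<< "aber was kommte ich weiter thum die bequenlichkeit des diemstes i")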

The model is also able to handle just a little chunk of the sentence:

> aber was konnte ich weiter thun
< aber was kommte ich weiter thu

However, the model is quite brittle. If we make up a random sentence that is not taken from the development data but is nevertheless similar to the sentence above, we get pseudo-Swedish output:

> sag mir was kann ich tun
< nar som han fatt ock du
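
Presumably this kind of out-of-domain check can be reproduced by piping a made-up sentence straight into the translation script, e.g.:

$ echo "sag mir was kann ich tun" | bash translate_from_stdin.sh 2>/dev/null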

When we use a sentence whose syntax is a bit closer to the dev sentence, the model fares better. At least the output is now pseudo-German.

> aber sag mir was konnte ich weiter tun
< aber mag fur dam wollte uns deuter th

When we modify the sentence to be a bit more "Yiddish-like" (konnte is not valid Yiddish), the output degrades substantially:

> ober zog mir vos konnte ikh vayter tun
< aber was fur daz hattie uhl domier in

What's cool is that the model does pick up on the (aber, ober) and (was, vos) correspondences.

j0ma commented 1 year ago

Posting this here as well (originally from #2):

Both models were trained for 20 epochs on a single V100 GPU with update_freq=4, i.e. delayed gradient updates simulating 4 GPUs.
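
For context, update_freq is fairseq's delayed-update (gradient accumulation) flag. Below is a rough sketch of the kind of fairseq-train invocation this implies; the data directory, architecture name, and hyperparameters are placeholders, not the actual config from this repo:

# hypothetical invocation -- paths and hyperparameters are guesses
# --update-freq 4 accumulates gradients over 4 batches, simulating 4 GPUs
$ fairseq-train data-bin/yiddish-german \
      --arch transformer \
      --optimizer adam --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
      --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
      --max-tokens 4096 --update-freq 4 --max-epoch 20 \
      --save-dir checkpoints/transformer-paper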

Transformer model from paper

SER     Accuracy    Language
0.163   27.983      multi

Halved learning rate

Halving the learning rate seems to bring gains:

SER     Accuracy    Language
0.139   31.326      multi