openai / finetune-transformer-lm

Code and model for the paper "Improving Language Understanding by Generative Pre-Training"
https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf
MIT License

Why are the "wrong" sentences learned during training via the LM? #45

Open fabiang7 opened 5 years ago

fabiang7 commented 5 years ago

Maybe I'm not interpreting the model(...) function correctly, but here is what I see:

During training, both the correct and the wrong ROCStories endings are fed into the decoder together. They both go through the embedding + decoder stack and then into the sparse_softmax_cross_entropy function.
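
To make the question concrete, here is a minimal sketch of the pattern I mean. This is my own simplification, not the exact repo code: I collapse the combined token/position dimension, and the shapes are assumptions based on how I read the code.

```python
import tensorflow as tf  # TF 1.x, as used in this repo


def lm_loss(h, we, X, M, n_vocab):
    """LM loss over ALL sequences in the batch -- correct and wrong endings alike.

    Assumed shapes (my simplification of the repo's layout):
      h:  [batch*2, n_ctx, n_embd]  decoder outputs for both candidate endings
      we: [vocab_total, n_embd]     shared input/output embedding matrix
      X:  [batch*2, n_ctx]          token ids (correct AND wrong endings)
      M:  [batch*2, n_ctx]          mask (1.0 for real tokens, 0.0 for padding)
    """
    n_embd = tf.shape(h)[-1]
    lm_h = tf.reshape(h[:, :-1], [-1, n_embd])
    # project back onto the vocabulary with the tied embedding matrix
    lm_logits = tf.matmul(lm_h, we[:n_vocab], transpose_b=True)
    losses = tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=lm_logits, labels=tf.reshape(X[:, 1:], [-1]))
    losses = tf.reshape(losses, [tf.shape(X)[0], -1])
    # the mask only removes padding, so the wrong endings still contribute
    return tf.reduce_sum(losses * M[:, 1:], 1) / tf.reduce_sum(M[:, 1:], 1)
```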

This means, though, that the model also learns to generate wrong sentences, or am I missing something?

My intuition would be to set the LM-loss masks to 0 for the wrong sentences (see the sketch below). Is that the right fix?
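
Hypothetical sketch of what I had in mind: zero out only the wrong endings' contribution to the LM loss. I'm assuming lm_losses is [batch*2] (one value per candidate ending, the two endings of a story adjacent) and Y is [batch] holding the index (0 or 1) of the correct ending, as for the classification head; the names mirror the repo, but the exact shapes are my guess.

```python
correct = tf.reshape(tf.one_hot(Y, 2), [-1])  # 1.0 = correct ending, 0.0 = wrong
# average the LM loss over the correct endings only
lm_losses = tf.reduce_sum(lm_losses * correct) / tf.reduce_sum(correct)
```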

Thanks and regards