codertimo / BERT-pytorch

Google AI 2018 BERT pytorch implementation
Apache License 2.0

Question about the loss of Masked LM #49

Open zhezhaoa opened 5 years ago

zhezhaoa commented 5 years ago

Thank you very much for this great contribution. I found that the masked LM loss stops decreasing once it reaches a value of around 7, whereas in the official TensorFlow implementation the MLM loss easily decreases to 1. I think something went wrong in this implementation. In addition, I found the code cannot predict the next sentence correctly. I think the reason is: self.criterion = nn.NLLLoss(ignore_index=0). It cannot be used as the criterion for next-sentence prediction, because the sentence label is 1 or 0, so every NotNext example (label 0) is silently ignored. We should remove ignore_index=0 for sentence prediction. I am looking forward to your reply~
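
A minimal sketch of the fix being proposed here, assuming PyTorch and log-probability model outputs (the variable names are illustrative, not the repo's exact code):

```python
import torch.nn as nn

# Keep ignore_index=0 only for the masked-LM loss, where index 0 marks
# padding/unmasked positions, and use a plain NLLLoss for next-sentence
# prediction, where 0 is a real label (NotNext) that must be scored.
mlm_criterion = nn.NLLLoss(ignore_index=0)  # skips padded/unmasked positions
nsp_criterion = nn.NLLLoss()                # label 0 contributes to the loss
```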

tanaka-jp commented 5 years ago

> I think the reason is: self.criterion = nn.NLLLoss(ignore_index=0). It cannot be used as the criterion for next-sentence prediction, because the sentence label is 1 or 0.

I think you are right. My next-sentence loss is very low, but the accuracy of next_correct is always near 50%.
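
A toy demonstration of why that combination occurs (numbers invented for illustration): with ignore_index=0, every NotNext row is dropped from the loss, so a model that always predicts IsNext gets a low loss but only chance-level accuracy on balanced data.

```python
import torch
import torch.nn as nn

# Two samples that both confidently predict class 1 ("IsNext").
logp = torch.log_softmax(torch.tensor([[0.1, 2.0],
                                       [0.1, 2.0]]), dim=-1)
labels = torch.tensor([1, 0])  # one IsNext, one NotNext

print(nn.NLLLoss(ignore_index=0)(logp, labels))  # ~0.14: NotNext row is ignored
print(nn.NLLLoss()(logp, labels))                # ~1.09: NotNext row is penalized
```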

raulpuric commented 5 years ago

I've been trying to reproduce BERT's pretraining results from scratch in my own time, and I have been unable to train beyond a masked LM loss of 5.4. So if anyone is able to get past this point, I'd love to learn what you did.

codertimo commented 5 years ago

Sorry for the late update, and I think your point is right too. I'll fix it up ASAP.

itamargol commented 5 years ago

What is the verdict here regarding the next-sentence task? Should we use two different loss functions, removing ignore_index=0 for sentence prediction?

And what about the MLM loss? Has anyone found a solution? Mine also can't drop below 6-7...
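
For the next-sentence part at least, the split-criterion fix sketched earlier would slot into a training step roughly like this (all shapes and names here are assumed for illustration, not taken from the repo):

```python
import torch
import torch.nn as nn

batch, seq_len, vocab_size = 4, 16, 100

mlm_criterion = nn.NLLLoss(ignore_index=0)  # 0 = padding/unmasked label
nsp_criterion = nn.NLLLoss()                # 0 = NotNext, a real label

# Dummy model outputs: log-probabilities, as NLLLoss expects.
mask_lm_output = torch.log_softmax(torch.randn(batch, seq_len, vocab_size), dim=-1)
next_sent_output = torch.log_softmax(torch.randn(batch, 2), dim=-1)
mlm_labels = torch.randint(0, vocab_size, (batch, seq_len))
is_next_label = torch.randint(0, 2, (batch,))

# NLLLoss over a sequence wants (N, C, L), hence the transpose.
mask_loss = mlm_criterion(mask_lm_output.transpose(1, 2), mlm_labels)
next_loss = nsp_criterion(next_sent_output, is_next_label)
loss = mask_loss + next_loss
```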

tanqiao2 commented 1 year ago

I have the same problem...