monologg / R-BERT

PyTorch implementation of R-BERT: "Enriching Pre-trained Language Model with Entity Information for Relation Classification"
Apache License 2.0

Question about F1 results #1

Closed heslowen closed 4 years ago

heslowen commented 4 years ago

Hello, thanks for your work. I got a final F1 score of 82.0% after 5 epochs of training, while the paper reports 89.25%. What score did you get?

monologg commented 4 years ago

Hi:) As I wrote on the README page, the score shown during training is not the official score of SemEval 2010 Task 8. There is an official Perl script for evaluation, and I made a wrapper Python file (official_eval.py) that you can run instead:) In my case, 88.92% is the maximum F1 score I got. (Sadly, I can't remember which random seed I used for that score...)
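
For reference, a minimal sketch of how such a wrapper could invoke the official Perl scorer from Python is shown below. The scorer path and the proposed-answers / answer-key file names are illustrative assumptions, not necessarily the exact ones used in official_eval.py.

```python
# Minimal sketch: call the official SemEval-2010 Task 8 Perl scorer and
# parse the macro-averaged F1 it prints. Paths are hypothetical examples.
import re
import subprocess


def official_macro_f1(scorer_path, proposed_answers, answer_key):
    """Run the official Perl scorer and return the macro-averaged F1 it reports."""
    output = subprocess.run(
        ["perl", scorer_path, proposed_answers, answer_key],
        capture_output=True, text=True, check=True,
    ).stdout
    # The scorer prints a summary line ending in something like
    # "... macro-averaged F1 = 88.92% >>>"
    match = re.search(r"macro-averaged F1 = (\d+\.\d+)%", output)
    return float(match.group(1)) if match else None


# Example usage (hypothetical file locations):
# f1 = official_macro_f1("eval/semeval2010_task8_scorer-v1.2.pl",
#                        "eval/proposed_answers.txt",
#                        "eval/answer_keys.txt")
# print(f"Official macro-averaged F1: {f1:.2f}%")
```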

If your 82% comes from the official script, I recommend changing the random seed and trying again.
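
A minimal way to fix the seed across Python, NumPy, and PyTorch might look like the sketch below (a generic helper, not necessarily how this repo's training script handles seeding):

```python
# Minimal sketch: set the random seed before training for a more reproducible run.
import random

import numpy as np
import torch


def set_seed(seed):
    """Seed Python, NumPy, and PyTorch (CPU and all GPUs)."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)


# Trying a different seed, as suggested above:
# set_seed(77)
```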

heslowen commented 4 years ago

aha, thank you so much! @monologg