liuwei1206 / LEBERT

Code for the ACL2021 paper "Lexicon Enhanced Chinese Sequence Labelling Using BERT Adapter"
336 stars · 60 forks

Results all "O" on weibo #38

Closed zzysh12345 closed 2 years ago

zzysh12345 commented 2 years ago

Your work is really amazing! I am currently learning your code. When I try to train LEBERT on the Weibo dataset, I find that the predicted results are all "O", even though I haven't made any changes to your code. However, using the checkpoint you provide does give good results. What could be the reason for this? How can I train on my own to reproduce the checkpoints you provide? I would really appreciate it if you could help me!

liuwei1206 commented 2 years ago

When training, did you use the script for Weibo included in the Weibo checkpoint?

zzysh12345 commented 2 years ago

> When training, did you use the script for Weibo included in the Weibo checkpoint?

Yes, except that I changed the model path to "pytorch_model.bin" and added "do_train", since I want to train on my own. Moreover, when I try to train on Resume, the score reaches 90, which is lower than the 95 reported. How can I reproduce the checkpoint you provided?

liuwei1206 commented 2 years ago

That's strange. If you use my scripts, you should be able to reproduce the results. The running environment may be the reason. As mentioned in a previous issue, someone found that the CUDA version and other aspects of the environment can have a big influence on performance.
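When chasing this kind of environment-dependent gap, one common first step is to fix every random seed before training so that runs are at least comparable on the same machine. The sketch below illustrates the idea with the stdlib RNG only; in an actual PyTorch run one would additionally call `torch.manual_seed`, `torch.cuda.manual_seed_all`, and set the CuDNN deterministic flags (that PyTorch setup is an assumption here, not something shown in this thread):

```python
import random

def seed_everything(seed: int) -> None:
    """Fix random state for reproducible runs.

    Stdlib-only sketch; a PyTorch training script would also do:
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    """
    random.seed(seed)

# Two runs with the same seed should produce identical draws.
seed_everything(42)
run_a = [random.random() for _ in range(3)]

seed_everything(42)
run_b = [random.random() for _ in range(3)]

assert run_a == run_b  # identical seeds -> identical sequences
```

Note that even with full seeding, results can still differ across CUDA/CuDNN versions, because some GPU kernels are non-deterministic or change between releases, which is consistent with the explanation above.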