allenai / scibert

A BERT model for scientific text.
https://arxiv.org/abs/1903.10676
Apache License 2.0

JNLPBA Results #94

Open leonweber opened 4 years ago

leonweber commented 4 years ago

Hey,

thanks for your awesome work!

In the paper you write that you use macro F1-scores. On JNLPBA, however, the micro average is typically ~4 pp lower than the macro average, and in the logs in your GitHub repository (https://github.com/allenai/scibert/tree/master/results) SciBERT seems to achieve micro F1-scores around 77%. I'm therefore wondering whether "macro" in the paper might be a typo?

shizhediao commented 3 years ago

I was wondering how to produce macro average F1 from the code? Thanks!

kuldeep7688 commented 3 years ago

Hi, I am using seqeval to calculate the statistics. Should I keep using seqeval or calculate them manually? Has anyone else tried using seqeval?