jefferyYu / UMT

Preprocessed Datasets for our Multimodal NER paper

About number of entities in dataset #13

Open gagaein opened 3 years ago

gagaein commented 3 years ago

First, thank you for your excellent work! When I run your model on Twitter2015, I noticed the eval result is as below:

                  precision    recall  f1-score   support

         LOC     0.7721    0.8471    0.8079      1720
        MISC     0.3599    0.4072    0.3821       754
         ORG     0.6380    0.5860    0.6109       860
         PER     0.8363    0.8783    0.8568      1873
           _     0.0000    0.0000    0.0000         0

Please note the support column: the number of entities does not match the description of the Twitter2015 dataset. For instance, the number of PER entities reported here is 1873 on the dev set, while the dataset description says the dev set contains 1816 PER entities. I cannot understand how the eval result can report more entities than the dataset contains, and I would sincerely appreciate your help. Thanks again :)
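For reference, here is a minimal sketch I used to count gold entities per type directly from the data, assuming the usual CoNLL-style layout (one token per line with the BIO tag in the last column, blank lines between sentences); the file path is hypothetical:

```python
from collections import Counter

def count_strict_entities(path):
    """Count gold entities per type, treating only B-<type> tags as entity starts."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # blank line = sentence boundary
            tag = line.split()[-1]  # BIO tag assumed to be the last column
            if tag.startswith("B-"):
                counts[tag[2:]] += 1
    return counts

# e.g. count_strict_entities("twitter2015/valid.txt")  # hypothetical path
```

Counting this way reproduces the paper's per-type statistics, but the eval result above still shows larger support values.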

Miss-Ming commented 3 years ago

I also found this issue when running the code. Based on my observation, it is because the annotation quality of Twitter-2015 is not very high: many entities start with an 'I-PER' or 'I-LOC' tag. For example, on line 2257 "Seuss" is labeled 'I-PER', but its preceding token is labeled 'O'. The evaluation script also counts these as entities, whereas the paper counts only entities that start with a 'B-type' tag (see the sketch below). In contrast, the annotation quality of Twitter-2017 is relatively higher, and it does not have this issue.
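To make the two conventions concrete, here is a small illustration of my own (not the repo's code): conlleval/seqeval-style evaluation treats an 'I-type' tag whose previous tag is 'O' or a different type as the start of a new entity, while the paper's statistics count only 'B-type' starts. The tag sequence is made up:

```python
def count_entities(tags, strict=True):
    """Count entities in a BIO tag sequence.

    strict=True  -> only B-<type> starts an entity (paper's convention)
    strict=False -> an orphan I-<type> after O or a different type also
                    starts one (conlleval/seqeval-style evaluation)
    """
    n, prev = 0, "O"
    for tag in tags:
        if tag.startswith("B-"):
            n += 1
        elif tag.startswith("I-") and not strict:
            # orphan I- tag: previous tag is O or belongs to another type
            if prev == "O" or prev[2:] != tag[2:]:
                n += 1
        prev = tag
    return n

tags = ["O", "I-PER", "O", "B-PER", "I-PER"]  # 'Seuss'-style orphan I-PER
print(count_entities(tags, strict=True))   # 1 (paper's count)
print(count_entities(tags, strict=False))  # 2 (eval script's count)
```

This is why the support column in the classification report is larger than the dataset description: the orphan I- entities are counted by the evaluation but not by the paper.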

gagaein commented 3 years ago

Thank you for your reply! I see the annotation problem in Twitter-2015 now. That is really strange :( I will try to follow your counting convention to get the correct performance scores. Thanks again for taking your valuable time!