hipe-eval / HIPE-scorer

A Python module for evaluating NERC and NEL system performance as defined in the HIPE shared tasks (formerly CLEF-HIPE-2020-scorer).
https://hipe-eval.github.io
MIT License

ToDo CLEF scorer #1

Closed aflueckiger closed 4 years ago

aflueckiger commented 4 years ago

decisions

--> compare F1-score with the CoNLL script as a sanity check

programming

e-maud commented 4 years ago

@simon-clematide @aflueckiger @mromanello I was just wondering:

Columns are treated separately to compute elements, which is normal, but for the 'fine-grained' setting, wouldn't it make sense to have a score that considers the fine, components and nested columns together? Or at least fine + components together, to capture the fine case? Then there would be the question of how to do fuzzy vs exact, and it might multiply the eval cases too much...

simon-clematide commented 4 years ago

That's worth thinking about. For the components case, it makes sense to take into account what something is a component of. One way to think about it could be that the components always carry the fine-grained type attached to them. As if

# segment_iiif_link = _
M   B-pers  O   B-pers.ind  O   B-comp.title    O   Q2853810    _   NoSpaceAfter
.   I-pers  O   I-pers.ind  O   I-comp.title    O   Q2853810    _   _
Thibaudeau  I-pers  O   I-pers.ind  O   B-comp.name O   Q2853810    _   _

would actually be

# segment_iiif_link = _
M   B-pers  O   B-pers.ind  O   B-pers.ind.comp.title   O   Q2853810    _   NoSpaceAfter
.   I-pers  O   I-pers.ind  O   I-pers.ind.comp.title   O   Q2853810    _   _
Thibaudeau  I-pers  O   I-pers.ind  O   B-pers.ind.comp.name    O   Q2853810    _   _

Meaning: the components always have the fine-grained type attached to them.
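A minimal sketch of what such a rewrite could look like, assuming the column layout of the example above (fine-grained literal tag in the fourth column, component tag in the sixth); the helper names are illustrative, not part of the scorer:

# Sketch: attach the fine-grained type to component labels before scoring.
# Assumes 0-indexed columns 3 (fine-grained literal) and 5 (component),
# matching the example above; both the indices and the helpers are
# illustrative assumptions.

def attach_fine_type(fine_tag: str, comp_tag: str) -> str:
    """Rewrite e.g. ('B-pers.ind', 'B-comp.title') -> 'B-pers.ind.comp.title'."""
    if comp_tag == "O" or fine_tag == "O":
        return comp_tag
    comp_prefix, comp_type = comp_tag.split("-", 1)  # 'B', 'comp.title'
    _, fine_type = fine_tag.split("-", 1)            # 'pers.ind'
    return f"{comp_prefix}-{fine_type}.{comp_type}"

def rewrite_line(tsv_line: str) -> str:
    """Rewrite the component column of one TSV line; pass comments through."""
    if not tsv_line.strip() or tsv_line.startswith("#"):
        return tsv_line
    cols = tsv_line.rstrip("\n").split("\t")
    cols[5] = attach_fine_type(cols[3], cols[5])
    return "\t".join(cols)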

e-maud commented 4 years ago

Super convenient; that would definitely simplify the eval script. I think it makes sense to consider fine type + components for the NERC-fine task.

Summarizing eval settings for NERC, there would be:

In terms of concrete output (but you might have discussed/checked it already), we need to think of how to communicate results, since all this has to fit neatly somewhere. What about 1 table (csv) per bundle and per team? Happy to sketch it if needed.

aflueckiger commented 4 years ago

Evaluation sample

The script produces the following TSV output when evaluating the coarse format, covering all regimes. What do you think @e-maud @mromanello @simon-clematide? I also computed the type-based macro scores. Should I include them here as well, even though this bloats the file even further?

Evaluation                 Label  P  R                  F1                 F1_std  P_std  R_std  TP  FP  FN
NE_COARSE_LIT-micro-fuzzy  LOC    1  0.987012987012987  0.993464052287582                        76  0   1

shortened
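As a quick sanity check (plain Python, not the scorer's own code), the micro precision, recall, and F1 in the sample row follow directly from the TP/FP/FN counts:

# Micro scores from the counts in the sample row above.
tp, fp, fn = 76, 0, 1
precision = tp / (tp + fp)                          # 1.0
recall = tp / (tp + fn)                             # 0.987012987...
f1 = 2 * precision * recall / (precision + recall)  # 0.993464052...
print(precision, recall, f1)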

aflueckiger commented 4 years ago

@e-maud @mromanello @simon-clematide I am currently writing the unit tests for the evaluation. I want to ask a quick question to make sure we are all on the same page concerning FP/FN. Consider the artificial example:

TOKEN PRED GOLD
Winthertur B-loc.adm.town B-loc.adm.town
Test I-loc.adm.town O

Following the definition of FP/FN from Batista's blog post, the example would result in 1 FN and 1 FP. Unfortunately, the strict scenario severely punishes wrong boundaries. In effect, we reward systems that miss entities over systems that predict the wrong boundaries. Do we really want to follow this?

source: http://www.davidsbatista.net/blog/2018/05/09/Named_Entity_Evaluation/

Moreover, I am glad that we do this as there are severe miscalculations in the original code. :roll_eyes:
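To make the strict counting concrete, here is a small sketch of exact-boundary matching over BIO tags, applied to the artificial example above; the span-extraction helper is illustrative, not the scorer's API:

# Strict matching sketch: an entity is a TP only if label and boundaries
# match exactly; otherwise the prediction counts as FP and the gold
# entity as FN (the double punishment discussed here).

def extract_spans(tags):
    """Collect (label, start, end) spans from a BIO tag sequence."""
    spans, label, start = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last span
        boundary = tag == "O" or tag.startswith("B-") or (tag.startswith("I-") and tag[2:] != label)
        if boundary and label is not None:
            spans.append((label, start, i - 1))
            label, start = None, None
        if tag != "O" and label is None:
            label, start = tag[2:], i
    return spans

gold = extract_spans(["B-loc.adm.town", "O"])               # [('loc.adm.town', 0, 0)]
pred = extract_spans(["B-loc.adm.town", "I-loc.adm.town"])  # [('loc.adm.town', 0, 1)]
tp = len(set(gold) & set(pred))  # 0
fp = len(set(pred) - set(gold))  # 1
fn = len(set(gold) - set(pred))  # 1
print(tp, fp, fn)  # 0 1 1 -> the wrong boundary is punished twice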

simon-clematide commented 4 years ago

Can you check what the old CoNLL eval does here?

Sent from my iPhone

On 10.02.2020, at 15:41, aflueckiger notifications@github.com wrote:

 @e-maud @mromanello @simon-clematide I am currently writing the unit tests for the evaluation. I want to ask a quick question to make sure we are all on the same page concerning FP/FN. Consider the artificial example:

TOKEN PRED GOLD
. B-pers.ind O
Herr I-pers.ind B-pers.ind
Pasitsch B-pers.ind I-pers.ind
er I-pers.ind O

In the fuzzy scenario, this would lead to a single FP, as one predicted entity overlaps with gold and the other is spurious (over-generated). In the strict scenario, this would lead to a threefold error: both predictions are wrong, resulting in two FP, and the correct one is missing, yielding another FN. In short, systems that over-generate entities while missing the correct ones are severely punished. Moreover, I am glad that we do this as there are severe miscalculations in the original code. 🙄

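For the fuzzy regime described in the quoted example, here is a simplified sketch of overlap-based, one-to-one matching (type must match and token ranges must overlap); this illustrates the counting logic only and is not the scorer's actual implementation:

# Fuzzy counting sketch for the quoted example: a prediction is correct if
# its type matches a gold span and their token ranges overlap; each gold
# span is matched at most once (a simplification for illustration).

def overlaps(pred_span, gold_span):
    """True if (label, start, end) spans share the label and at least one token."""
    return (pred_span[0] == gold_span[0]
            and pred_span[1] <= gold_span[2]
            and gold_span[1] <= pred_span[2])

# Tokens: .  Herr  Pasitsch  er  (indices 0..3)
gold = [("pers.ind", 1, 2)]                      # "Herr Pasitsch"
pred = [("pers.ind", 0, 1), ("pers.ind", 2, 3)]  # ". Herr" and "Pasitsch er"

matched, fn = set(), 0
for g in gold:
    hit = next((i for i, p in enumerate(pred) if i not in matched and overlaps(p, g)), None)
    if hit is None:
        fn += 1
    else:
        matched.add(hit)
fp = len(pred) - len(matched)
print(fp, fn)  # 1 0 -> a single spurious prediction, as described above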

aflueckiger commented 4 years ago

I think the CoNLL 2003 shared task evaluated similarly:

“precision is the percentage of named entities found by the learning system that are correct. Recall is the percentage of named entities present in the corpus that are found by the system. A named entity is correct only if it is an exact match of the corresponding entity in the data file.”

paper: https://www.aclweb.org/anthology/W03-0419.pdf

I could not find the evaluation script, so I am not entirely sure.

aflueckiger commented 4 years ago

@e-maud @simon-clematide @mromanello

Ok, we are getting to the bottom of our dive for fishy metrics. :laughing:

CoNLL2000 also punishes wrong boundaries twice: one FP and one FN. Source: https://www.clips.uantwerpen.be/conll2000/chunking/output.html

I think this is suboptimal, as conservative systems that predict nothing are better off than systems that predict entities even in cases of unclear boundaries. Nevertheless, I suggest following this standard. Still, we need to keep this in mind when evaluating the systems.

We could also draw participants' attention to this peculiarity in the README of the scorer. What's your take?

PS: our numbers are in line with the CoNLL2000 standard.

e-maud commented 4 years ago

Many thanks @aflueckiger for this dive! I would also vote for aligning ourselves with CoNLL (or are we fostering evaluation-script error propagation through the years? In any case, it is nice to be able to compare, even at a high level). Regarding warning the participants, at first I thought that by doing so we might encourage them not to predict when they are unsure, and therefore that it would be better not to emphasize this point. However, systems behaving like that would have bad fuzzy scores, so participants might not tune their systems in this direction. And in any case, it is also a matter of fairness, so I think it is good if we mention it.

aflueckiger commented 4 years ago

Simon also shares this view. Thus, we keep the double punishment.

mromanello commented 4 years ago

can be closed