This pull request solves the problems discussed in issue #13 by using the boundaries defined in the NER task to set the boundaries of the EL task.
This makes the scorer follow the approach used in other evaluators such as Gerbil or NelEval, where the boundaries are defined by named entities.
Furthermore, it prevents incorrect evaluation of consecutive entities linked to NIL.
EL is evaluated according to the fuzzy and strict scenarios:
In the fuzzy scenario, a system response is counted as correct if it shares the link label and at least one overlapping token span with the gold standard.
In the strict scenario, both the link and the boundaries must match exactly to be counted as correct.
The strict scenario is especially useful when the EL system only disambiguates gold standard named entities (if the system works as it should, the fuzzy and strict values will be the same).
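The two matching modes can be sketched as follows. This is a minimal illustration with hypothetical data structures (an annotation as a `(start, end, link)` tuple), not the scorer's actual API:

```python
# An annotation is modeled as (start, end, link), where start/end are
# character offsets and link is a knowledge-base identifier.

def strict_match(gold, pred):
    # Strict: link AND boundaries must match exactly.
    return gold == pred

def fuzzy_match(gold, pred):
    # Fuzzy: same link, and the two spans overlap by at least one position.
    (g_start, g_end, g_link) = gold
    (p_start, p_end, p_link) = pred
    return g_link == p_link and p_start < g_end and g_start < p_end

gold = (0, 6, "Q64")       # e.g. "Berlin" linked to a KB entry (illustrative)
exact = (0, 6, "Q64")      # identical boundaries and link
partial = (0, 3, "Q64")    # truncated boundaries, same link

print(strict_match(gold, exact))    # True
print(strict_match(gold, partial))  # False: boundaries differ
print(fuzzy_match(gold, partial))   # True: link matches, spans overlap
```

With gold-standard mentions as input, every predicted boundary coincides with a gold boundary, so fuzzy and strict scores converge, which is why the strict scenario serves as a sanity check in that setting.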