Closed wricketts closed 1 year ago
Use of the neleval package was deemed not feasible (broken links in the documentation make it hard to follow, and it requires a data format that our data is not in).
Instead, the plan is to modify evaluate.py
following the description used in this paper.
Evaluation treats each linking annotation as a tuple (span, type, kbid).
Metrics are calculated over these tuples as described in the paper.
The DBpedia Spotlight app does not appear to perform NIL recognition (marking entities that have no node in the KB), so this will not be evaluated.
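A minimal sketch of what the tuple-based scoring could look like in evaluate.py. The `Annotation` type and `prf` helper here are illustrative assumptions, not existing code, and they use strict exact-match on all three fields; the paper's matching criterion may be more lenient (e.g. partial span overlap):

```python
from typing import NamedTuple, Set, Tuple

class Annotation(NamedTuple):
    """One linking annotation: character span, entity type, KB identifier."""
    start: int
    end: int
    type: str
    kbid: str

def prf(gold: Set[Annotation], pred: Set[Annotation]) -> Tuple[float, float, float]:
    """Micro-averaged precision, recall, and F1 over exact tuple matches."""
    tp = len(gold & pred)  # a prediction counts only if span, type, and kbid all match
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold = {Annotation(0, 5, "PER", "Q1"), Annotation(10, 15, "LOC", "Q2")}
pred = {Annotation(0, 5, "PER", "Q1"), Annotation(20, 25, "ORG", "Q3")}
print(prf(gold, pred))  # one of two predictions correct, one of two gold found
```

Because annotations are hashable tuples, set intersection gives the true-positive count directly, which keeps the scoring logic to a few lines.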
New Feature Summary
Update nel/evaluate.py to make use of this package (if feasible).
Related
No response
Alternatives
No response
Additional context
GitHub page