ad-freiburg / elevant

Entity linking evaluation and analysis tool
https://elevant.cs.uni-freiburg.de/
Apache License 2.0

Candidate set #17

Closed MikeDean2367 closed 2 weeks ago

MikeDean2367 commented 3 months ago

Hi, this is great work!

I would like to know if it is necessary to ensure that the ground truth is in the candidate set when evaluating Entity Linking, particularly for the REL method.

I look forward to your response.

flackbash commented 3 months ago

Hi, I'm glad you like our tool :)

If you mean candidate set in the sense of "all entities that can potentially be linked by a system", i.e., the (sub)set of entities from the KB used by the system: I would say it depends on what exactly you want to evaluate.

If you want to know how well an entity linker performs on a given benchmark out of the box, then its candidate set should certainly be included in the evaluation. This means ground truth entities that are not in the candidate set should not simply be ignored. If you want to evaluate how well different disambiguation methods work on a given benchmark, independent of the candidate set, then a comparison can be made fair by using the same candidate set for all methods (as is often done in the literature) or potentially by filtering out ground truth entities that are not in the candidate set (although this is probably not the preferred method, since it ignores a system's coverage).
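A minimal sketch of what such a filtering step could look like (the data structures and names here are made up for illustration and are not ELEVANT's API):

```python
# Hypothetical sketch: restrict a benchmark to those ground truth mentions whose
# entity is in a system's set of linkable entities. All names are illustrative.
from dataclasses import dataclass

@dataclass
class GroundTruthMention:
    span: tuple       # (start, end) character offsets of the mention
    entity_id: str    # ground truth entity, e.g. a Wikidata QID

def filter_benchmark(mentions: list[GroundTruthMention],
                     linkable_entities: set[str]) -> list[GroundTruthMention]:
    """Keep only mentions whose ground truth entity the system could link at all."""
    return [m for m in mentions if m.entity_id in linkable_entities]
```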

In ELEVANT, we have two error categories related to the candidate set: "Wrong Candidates" and "Multiple Candidates". The meaning of "candidate set" here is slightly different from the one above (but closely related): The candidates are those entities which were considered for the disambiguation of a particular mention. Typically this candidate set is made up of all entities from the system's KB that have a name or alias which matches the mention.
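For illustration, a toy version of such alias-based candidate generation might look like this (the alias dictionary and the second "Paris" entity ID are made up, not taken from ELEVANT or Wikidata):

```python
# Toy alias dictionary: maps lowercased names/aliases to sets of KB entity IDs.
alias_to_entities = {
    "paris": {"Q90", "Q_paris_mythology"},  # city vs. mythological figure
    "france": {"Q142"},
}

def get_candidates(mention_text: str) -> set[str]:
    """Return the candidate entities for a mention (empty set if nothing matches)."""
    return alias_to_entities.get(mention_text.lower(), set())

get_candidates("Paris")   # two candidates -> disambiguation needed
get_candidates("Athens")  # empty set -> the true entity cannot be among the candidates
```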

The error category "Wrong Candidates" shows the percentage of disambiguation errors (not counting NER errors) where the true entity was not in the candidate set. The error category "Multiple Candidates" shows the percentage of disambiguation errors (not counting NER errors) where the candidate set consisted of more than one entity and the true entity was one of them, i.e., cases where the system chose a wrong entity even though the correct one was among the candidates. However, not all systems return information about these candidate sets. REL, for example, does not (as far as I know). For such linkers, these error rates cannot be computed.
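Assuming a system does expose its candidate sets, the two error shares could be computed roughly like this (the function name and tuple layout are illustrative, not ELEVANT's internal code):

```python
# Rough sketch: disambiguation_errors is a list of (true_entity_id, candidate_set)
# pairs, one per disambiguation error (NER errors already excluded).
def candidate_error_shares(disambiguation_errors):
    total = len(disambiguation_errors)
    if total == 0:
        return 0.0, 0.0
    # "Wrong Candidates": the true entity was never among the candidates.
    wrong = sum(1 for true_id, cands in disambiguation_errors
                if true_id not in cands)
    # "Multiple Candidates": the true entity was a candidate, but the system
    # picked a wrong one from a set of more than one candidate.
    multiple = sum(1 for true_id, cands in disambiguation_errors
                   if true_id in cands and len(cands) > 1)
    return wrong / total, multiple / total
```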

I hope this helps!