MantisAI / nervaluate

Full named-entity (i.e., not tag/token) evaluation metrics based on SemEval’13
MIT License

More information about output? #68

Closed abhibha1807 closed 7 months ago

abhibha1807 commented 1 year ago

Is there a way to find out which instances were marked as 'correct', 'incorrect', 'spurious', etc. during evaluation, for a particular evaluation schema?
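
For context, a minimal sketch of typical nervaluate usage, with the span format and the `Evaluator` / `evaluate()` call as shown in the project README at the time of this issue (the entities themselves are illustrative). The returned results are aggregate counts per schema, with no record of which entity contributed to which count:

```python
from nervaluate import Evaluator

# Illustrative gold and predicted spans in the list-of-dicts format.
true = [[{"label": "PER", "start": 2, "end": 4}]]
pred = [[{"label": "PER", "start": 2, "end": 4}]]

evaluator = Evaluator(true, pred, tags=["PER"])
results, results_per_tag = evaluator.evaluate()

# `results` holds only aggregate counts (correct, incorrect, partial, missed,
# spurious) and derived metrics per schema; it does not say which entity
# fell into which bucket.
print(results["strict"])
```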

vmenger commented 1 year ago

I would also be interested in this, so false positives/negatives can be analyzed further. At first glance, this seems straightforward to implement by modifying compute_metrics to append the entity to a separate list (e.g. correct_ents, incorrect_ents, spurious_ents) any time it increments a counter; see the rough sketch after this paragraph.
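
A hypothetical sketch of that idea, kept separate from nervaluate's internals: the result container records the entity itself alongside each counter increment, so the instances behind each count can be inspected afterwards. The class and method names here (`SchemaResult`, `add`) are illustrative, not part of nervaluate's API.

```python
from dataclasses import dataclass, field


@dataclass
class SchemaResult:
    # Aggregate counters, mirroring the categories used by the evaluation.
    correct: int = 0
    incorrect: int = 0
    spurious: int = 0
    missed: int = 0
    # Parallel lists recording the entities behind each counter.
    correct_ents: list = field(default_factory=list)
    incorrect_ents: list = field(default_factory=list)
    spurious_ents: list = field(default_factory=list)
    missed_ents: list = field(default_factory=list)

    def add(self, outcome: str, entity: dict) -> None:
        # Increment the counter and keep the entity for later inspection.
        setattr(self, outcome, getattr(self, outcome) + 1)
        getattr(self, f"{outcome}_ents").append(entity)


# Example: a spurious prediction is counted and retained for analysis.
result = SchemaResult()
result.add("spurious", {"label": "PER", "start": 2, "end": 4})
print(result.spurious, result.spurious_ents)
```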

To the maintainers: is this something you would consider adding? I can do a PR.

ivyleavedtoadflax commented 1 year ago

Hi @abhibha1807 @vmenger, thanks for your comments. Please feel free to open a PR @vmenger and we will review :pray:

jackboyla commented 9 months ago

Hey all, I've submitted a PR that I think adds this functionality. Let me know what you think! 😃 @abhibha1807 @vmenger @ivyleavedtoadflax