olesar opened this issue 4 years ago
Hi, in our case, we used to run the good old parsing evaluation script such as conll07.pl (stripping all CoNLL-U extensions) between annotators and the gold standard, and now we use the regular CoNLL-U eval script. I have never run kappa myself, but I know that some people have a script for it.
Best,
Djamé
On 23 March 2020 at 22:40, Olga Lyashevskaya notifications@github.com wrote:
I am wondering whether there are any current practices to assess inter-annotator agreement with respect to unlabeled trees (UAS) based on UD annotations? Both metrics and scripts would be greatly appreciated.