studio-ousia / luke

LUKE -- Language Understanding with Knowledge-based Embeddings
Apache License 2.0

What is the version of seqeval I should use? #183

Open Michael-Evergreen opened 7 months ago

Michael-Evergreen commented 7 months ago

Hello, thanks for the work! I'm having trouble reproducing the checkpoint result with your example, using this command:

    python examples/ner/evaluate_transformers_checkpoint.py data/ner_conll/en/test.txt studio-ousia/luke-large-finetuned-conll-2003 --cuda-device 0

It gave me this error:

File "/usr/local/lib/python3.10/dist-packages/seqeval/scheme.py", line 55, in __init__
    self.prefix = Prefixes[token[-1]] if suffix else Prefixes[token[0]]
KeyError: 'r'

I wonder what's wrong here? Possibly my version of seqeval doesn't match yours? It's not listed in your requirements.txt. Also, could you give a fine-tuning example using Hugging Face? I'm aware of the example they give here: https://github.com/huggingface/transformers/tree/main/examples/research_projects/luke but it performs quite badly on CoNLL-2003 (0.5 F1 score).
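For context on the traceback: seqeval's scheme parser looks up the first character of each tag string in a table of valid prefixes (I, O, B, ...), so a label that doesn't follow the `B-PER` / `I-PER` / `O` convention fails on its first letter. A minimal sketch of that idea (the names `VALID_PREFIXES` and `parse_prefix` are illustrative, not seqeval's actual API):

```python
# Sketch of seqeval-style prefix parsing: the first character of each
# tag must be one of the scheme prefixes, otherwise a KeyError is raised.
VALID_PREFIXES = {"I", "O", "B", "E", "S"}

def parse_prefix(tag: str) -> str:
    """Return the scheme prefix of a tag like 'B-PER' or 'O'."""
    prefix = tag[0]
    if prefix not in VALID_PREFIXES:
        # seqeval raises KeyError here; a tag that doesn't start with
        # a scheme prefix (e.g. a raw label) fails on its first letter,
        # which would explain an error like KeyError: 'r'.
        raise KeyError(prefix)
    return prefix

print(parse_prefix("B-PER"))  # B
print(parse_prefix("O"))      # O
try:
    parse_prefix("relation")
except KeyError as e:
    print("bad tag prefix:", e)
```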

ryokan0123 commented 7 months ago

You could try version 1.2.2, as listed in pyproject.toml: https://github.com/studio-ousia/luke/blob/251654548a15b9a8ddc708a53b715bdf3a112102/pyproject.toml#L21
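If you're installing with pip rather than poetry, pinning the version explicitly should do it (assuming a standard pip environment):

```shell
pip install seqeval==1.2.2
```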

Michael-Evergreen commented 7 months ago

Hi ryokan, I see it now, thanks for answering. As for the second part of my question: do you have any idea why the notebook only achieves an F1 of 0.5?

ryokan0123 commented 7 months ago

I am not familiar with the implementation, but I think some format issue is causing the bad performance.

A common pitfall is that there are multiple tagging formats for NER, such as BIO, IOB1, IOB2... If the model outputs, the evaluation data, and the evaluation script disagree on the format, it can lead to unexpected results.
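To make the difference concrete, here is a small sketch (my own code, not from this repo) that converts IOB1 tags to IOB2. In IOB1, B- only marks the boundary between two adjacent entities of the same type; in IOB2, every entity starts with B-. A strict evaluator expecting one format will mis-score spans produced in the other:

```python
def iob1_to_iob2(tags):
    """Convert a sequence of IOB1 tags to IOB2.

    IOB1 uses I- for the first token of an entity unless it directly
    follows an entity of the same type; IOB2 always starts with B-.
    """
    out = []
    prev = "O"
    for tag in tags:
        if tag.startswith("I-"):
            # Promote I- to B- when it actually starts a new entity.
            if prev == "O" or prev[2:] != tag[2:]:
                tag = "B-" + tag[2:]
        out.append(tag)
        prev = tag
    return out

# Two adjacent ORG entities: IOB1 only needs B- on the second one.
print(iob1_to_iob2(["I-PER", "I-PER", "O", "I-ORG", "B-ORG", "I-ORG"]))
# ['B-PER', 'I-PER', 'O', 'B-ORG', 'B-ORG', 'I-ORG']
```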

For example, our script assumes the IOB1 format by default. https://github.com/studio-ousia/luke/blob/251654548a15b9a8ddc708a53b715bdf3a112102/examples/ner/evaluate_transformers_checkpoint.py#L39