Michael-Evergreen opened this issue 7 months ago
Could you try version 1.2.2, as listed in pyproject.toml?
https://github.com/studio-ousia/luke/blob/251654548a15b9a8ddc708a53b715bdf3a112102/pyproject.toml#L21
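For reference, pinning the dependency would look something like this (assuming a pip-based setup; adjust to your environment):

```
pip install seqeval==1.2.2
```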
Hi ryokan, I see it now, thanks for answering. As for the second part of my question: do you have any idea why the notebook only achieves an F1 of 0.5?
I am not familiar with that implementation, but I suspect a format issue is causing the poor performance.
A common pitfall is that there are multiple tagging formats for NER, such as BIO, IOB1, and IOB2. If the model outputs, the evaluation data, and the evaluation script disagree on the format, the results can be unexpectedly bad.
For example, our script assumes the IOB1 format by default. https://github.com/studio-ousia/luke/blob/251654548a15b9a8ddc708a53b715bdf3a112102/examples/ner/evaluate_transformers_checkpoint.py#L39
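To make the failure mode concrete, here is a minimal, self-contained sketch (not the repository's evaluation code; the tag sequence and decoders below are illustrative) showing how the very same label sequence decodes into different entity spans under IOB1 and IOB2:

```python
# Toy illustration of how the same labels decode into different
# entity spans under IOB1 vs. IOB2 (hypothetical example, not LUKE code).

def decode_iob2(tags):
    """IOB2/BIO: every entity must start with B-; I- only continues it."""
    entities, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # pad with O to flush the last entity
        if tag.startswith("I-") and start is not None and tag[2:] == label:
            continue  # entity continues
        if start is not None:  # anything else closes the open entity
            entities.append((label, start, i))
            start, label = None, None
        if tag.startswith("B-"):
            start, label = i, tag[2:]
        # a bare I- with no open entity is invalid in IOB2 and is dropped
    return entities

def decode_iob1(tags):
    """IOB1: entities start with I-; B- only separates two adjacent
    entities of the same type."""
    entities, start, label, prev = [], None, None, "O"
    for i, tag in enumerate(tags + ["O"]):
        if tag.startswith("I-") and start is not None and tag[2:] == label:
            prev = tag
            continue  # entity continues
        if start is not None:
            entities.append((label, start, i))
            start, label = None, None
        if tag.startswith("I-"):
            start, label = i, tag[2:]
        elif tag.startswith("B-") and prev != "O" and prev[2:] == tag[2:]:
            start, label = i, tag[2:]  # boundary between adjacent entities
        prev = tag
    return entities

tags = ["I-PER", "I-PER", "B-PER", "O", "I-LOC"]
print(decode_iob1(tags))  # [('PER', 0, 2), ('PER', 2, 3), ('LOC', 4, 5)]
print(decode_iob2(tags))  # [('PER', 2, 3)] -- the I-initial entities are dropped
```

If the gold labels are read under one scheme and the predictions under another, the evaluator compares mismatched span sets and F1 drops even when the model is right. As far as I know, seqeval's strict mode exposes the same distinction through its `scheme` argument (`IOB1` vs `IOB2` from `seqeval.scheme`), which is another reason the installed seqeval version and its settings matter here.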
Hello, thanks for the work! I'm having trouble running your example to reproduce the checkpoint result using this command:
python examples/ner/evaluate_transformers_checkpoint.py data/ner_conll/en/test.txt studio-ousia/luke-large-finetuned-conll-2003 --cuda-device 0
It gave me this error:

I wonder what's wrong here? Possibly my version of seqeval doesn't match yours? It isn't listed in your requirements.txt. Also, could you give a fine-tuning example using Hugging Face? I'm aware of the example they give here: https://github.com/huggingface/transformers/tree/main/examples/research_projects/luke but it performs quite badly on CoNLL-2003 (0.5 F1 score).