huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

XLM-RoBERTa model for QA does not seem to work properly #7774

Closed: antoniolanza1996 closed this issue 3 years ago

antoniolanza1996 commented 4 years ago

Environment info

Who can help

albert, bert, GPT2, XLM: @LysandreJik

Information

Model I am using (Bert, XLNet ...): deepset/xlm-roberta-large-squad2

The problem arises when using:

The tasks I am working on are:

To reproduce

Steps to reproduce the behavior:

! wget https://raw.githubusercontent.com/rajpurkar/SQuAD-explorer/master/dataset/dev-v2.0.json
! python transformers/examples/question-answering/run_squad.py \
  --model_type xlm-roberta \
  --model_name_or_path 'deepset/xlm-roberta-large-squad2' \
  --do_eval \
  --do_lower_case \
  --predict_file 'dev-v2.0.json' \
  --output_dir 'output' \
  --overwrite_output_dir \
  --version_2_with_negative
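The `--predict_file` passed above follows the standard SQuAD v2.0 JSON schema (articles containing paragraphs, each with a list of `qas` entries carrying an `is_impossible` flag). A minimal sketch of walking that structure, using a small inline sample in place of the real `dev-v2.0.json` (the sample text and the helper name `count_questions` are illustrative, not from the original report):

```python
import json

# Inline sample mirroring the SQuAD v2.0 schema used by dev-v2.0.json:
# {"data": [{"paragraphs": [{"context": ..., "qas": [...]}]}]}
sample = {
    "data": [
        {
            "title": "Example",
            "paragraphs": [
                {
                    "context": "XLM-RoBERTa is a multilingual model.",
                    "qas": [
                        {
                            "id": "q1",
                            "question": "What kind of model is XLM-RoBERTa?",
                            "answers": [
                                {"text": "a multilingual model", "answer_start": 16}
                            ],
                            "is_impossible": False,
                        },
                        {
                            # SQuAD v2 adds unanswerable questions:
                            # empty answers list, is_impossible=True.
                            "id": "q2",
                            "question": "Who invented the telephone?",
                            "answers": [],
                            "is_impossible": True,
                        },
                    ],
                }
            ],
        }
    ]
}

def count_questions(squad_dict):
    """Return (answerable, unanswerable) counts for a SQuAD v2-style dict."""
    answerable = unanswerable = 0
    for article in squad_dict["data"]:
        for paragraph in article["paragraphs"]:
            for qa in paragraph["qas"]:
                if qa.get("is_impossible", False):
                    unanswerable += 1
                else:
                    answerable += 1
    return answerable, unanswerable

print(count_questions(sample))  # (1, 1)
```

With the real file you would replace `sample` with `json.load(open("dev-v2.0.json"))`; the `--version_2_with_negative` flag tells `run_squad.py` to expect exactly these unanswerable entries.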

Expected behavior

There are some value mismatches between:

  1. values reported in the model card here
  2. values obtained when Transformers is installed using pip install transformers
  3. values obtained when Transformers is installed from master

In particular:
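The mismatched numbers (not preserved in this capture) are SQuAD-style exact-match and F1 scores. For reference when comparing such values, a minimal sketch of how those two metrics are computed per prediction, following the standard SQuAD evaluation logic (answer normalization, then token-overlap F1; the function names here are illustrative):

```python
import re
import string
from collections import Counter

def normalize_answer(s):
    """Lowercase, strip punctuation and articles, collapse whitespace
    (the standard SQuAD answer normalization)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, ground_truth):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(ground_truth))

def f1_score(prediction, ground_truth):
    """Token-level F1 between normalized prediction and ground truth."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))  # 1.0
print(f1_score("the Eiffel Tower in Paris", "Eiffel Tower"))  # ~0.667
```

Because normalization and negative-answer thresholding differ slightly between evaluation scripts, small EM/F1 gaps between a model card and a local run do not necessarily indicate a model bug; identical inputs through the same script are needed for a fair comparison.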

LysandreJik commented 4 years ago

Thanks for reporting, will investigate.

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

Forutanrad commented 2 years ago

Hello, this option, `--model_type xlm-roberta`, does not work for me. Can you help me, please?