Closed: paniabhisek closed this issue 3 years ago
I'm not able to reproduce the issue. I went to this page, then clicked on "Open in colab" on the top right (chose PyTorch), and then ran the question-answering tutorial, and it's working fine for me.
Hi @paniabhisek, for QA you could use the official run_qa.py example script, which now supports Trainer and datasets. You can find it here: https://github.com/huggingface/transformers/tree/master/examples/question-answering
@NielsRogge I ran the code in colab and it's working for me too, but not in my conda environment.
@patil-suraj does the example script only support SQuAD 1.1, or does it support SQuAD 2.0 as well?
It supports SQuAD v1 and v2. For v2, just add the flag --version2_with_negative (on top of --dataset_name squad_v2).
If you try to call train_dataset[137], it returns an error ([136] and [138] both work properly). It is because end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'])) and end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] + 1)) do not find the correct token; end_positions[-1] is None. The code before #9378 should work.
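For reference, here is a minimal sketch of the character-to-token alignment step being discussed, assuming the encodings/answers structures from the custom-dataset QA tutorial; the function name and the None fallback are illustrative, not the exact tutorial code:

```python
# Hypothetical helper: map character-level answer spans to token positions.
# `encodings` is a BatchEncoding from a fast tokenizer; `answers` is a list of
# dicts with 'answer_start' / 'answer_end' character offsets.
def add_token_positions(encodings, answers, tokenizer):
    start_positions, end_positions = [], []
    for i in range(len(answers)):
        start_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))
        # treat 'answer_end' as exclusive and look up the last answer character
        end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1))
        # char_to_token returns None when the character was truncated away or
        # does not map to any token; fall back to a sentinel so indexing the
        # dataset (e.g. train_dataset[137]) does not fail
        if start_positions[-1] is None:
            start_positions[-1] = tokenizer.model_max_length
        if end_positions[-1] is None:
            end_positions[-1] = tokenizer.model_max_length
    encodings.update({'start_positions': start_positions, 'end_positions': end_positions})
```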
The comment in #9378 worked for me. I was wondering how to do this with a short snippet rather than an unfamiliar script, so I can use my own language model. Thanks @kevinthwu.
Btw thanks @sgugger, I can use SQuAD 2.0 with the option --version2_with_negative.
I'm not closing as the docs are not updated yet.
> It supports SQuAD v1 and v2. For v2, just add the flag --version2_with_negative (on top of --dataset_name squad_v2).
The argument name is 'version_2_with_negative' (line 444 of run_qa.py).
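For anyone landing here, a minimal invocation sketch for SQuAD v2 with the corrected flag (the model name and output directory are placeholders, not taken from this thread):

```bash
python run_qa.py \
  --model_name_or_path bert-base-uncased \
  --dataset_name squad_v2 \
  --version_2_with_negative \
  --do_train \
  --do_eval \
  --output_dir ./qa_squad2_output
```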
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Environment info: transformers version 4.2.1
Expected behavior: should have run without the error.