UBC-NLP / araT5

AraT5: Text-to-Text Transformers for Arabic Language Understanding

How to finetune your pretrained model for QA task? #1

Closed mellahysf closed 1 year ago

mellahysf commented 2 years ago

Hi,

Thank you for sharing your great work!

Could you please tell me how to fine-tune one of your pretrained LMs for a question-answering (QA) task?

As input, I have a question and a context. As output, one or multiple answers.

It's very urgent, please!

Thank you so much.

mellahysf commented 2 years ago

@Nagoudi @elmadany @mageed any suggestions about this please?

Nagoudi commented 2 years ago

Hi @mellahysf, thank you for your interest in our models. You can simply follow this Google Colab. The only change you need to make is to load the AraT5 model and tokenizer instead of T5.
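A minimal sketch of that swap, assuming the Hugging Face `transformers` library and the `UBC-NLP/AraT5-base` checkpoint name (not the authors' official script; the Colab linked above is the authoritative recipe). QA is cast as text-to-text by packing the question and context into one input string:

```python
# Sketch: fine-tuning an AraT5 checkpoint for QA with Hugging Face transformers.
# Assumption: the checkpoint name "UBC-NLP/AraT5-base" and the "question:/context:"
# prefix format are illustrative, not prescribed by the repo.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def format_qa_example(question: str, context: str) -> str:
    # T5-style models take a single text input; concatenate question and
    # context with task prefixes so the model can tell them apart.
    return f"question: {question} context: {context}"

if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/AraT5-base")
    model = AutoModelForSeq2SeqLM.from_pretrained("UBC-NLP/AraT5-base")

    # One training example: tokenize the packed input and the target answer.
    inputs = tokenizer(
        format_qa_example("question text", "context paragraph"),
        return_tensors="pt", truncation=True,
    )
    labels = tokenizer("answer text", return_tensors="pt").input_ids

    # The seq2seq loss to minimize during fine-tuning (e.g. via Trainer
    # or a plain optimizer loop over your QA dataset).
    loss = model(**inputs, labels=labels).loss
```

For multiple answers per question, one common option is to join them into a single target string (e.g. separated by a delimiter) and split the generated output at inference time.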