Closed cberrioa closed 2 years ago

Hi, I have a question regarding the finetuned model "asahi417/question-generation-squad-t5-large". As far as I know, this model was trained with a multitask loss, and question answering is one of those tasks. My question is: is there a specific input format to pass to the model so that it answers a question given a context? Thank you in advance!

Hi, sorry for the late reply! The prefixes we use for T5 finetuning can be found here: https://github.com/asahi417/t5-question-generation/blob/master/t5qg/lm_t5.py#L19
Also, just to note that this project is still in progress, so you'll see those model checkpoints updated often.
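For anyone landing here: the exact prefix strings are defined in the `lm_t5.py` file linked above, so check that line for the authoritative values. As a hedged sketch only, T5 multitask inputs are typically built by prepending a short task prefix to the serialized question/context pair; the `"qa"` prefix and the `question:` / `context:` field markers below are assumptions for illustration, not confirmed values from this repo:

```python
# Hypothetical sketch of T5-style multitask input formatting.
# The real prefix strings live in t5qg/lm_t5.py (see the link above);
# "qa", "question:", and "context:" here are illustrative assumptions.
def build_qa_input(question: str, context: str, prefix: str = "qa") -> str:
    """Serialize a question/context pair into a single T5 input string."""
    return f"{prefix}: question: {question} context: {context}"

text = build_qa_input(
    "Who wrote Hamlet?",
    "Hamlet is a tragedy written by William Shakespeare.",
)
print(text)
# The resulting string would then be tokenized and passed to
# model.generate() via the usual transformers seq2seq pipeline.
```

If the repo uses different markers, substituting them into `build_qa_input` is the only change needed; the tokenize-then-generate step is unaffected.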