Closed: dkamalakar closed this issue 1 year ago.
Hey @dkamalakar ,
As of now, we don't support T5. We found RAG more promising for generative QA and therefore implemented it in Haystack (see docs). However, I think T5 could be an interesting addition, and we could have a T5Generator alongside our RAGenerator. Do you have a particular use case in mind where T5 would be better than RAG?
If you are interested in creating a PR, let me know. Otherwise, I will put this into our backlog and we will tackle it from our side.
@tholor I'm also looking for T5 support, because even an untrained T5 gave me outstanding results in question answering when a context was given. No other model came close. However, I couldn't try RAG yet; I was just looking for a quick way to try it before I set up a proper pipeline, but there wasn't any online demo available.
Have you already tested T5 against RAG for QA with a given context, and what were your results?
No, we haven't tested T5 vs. RAG yet, but we will do so, especially before we start implementing a T5Generator in Haystack.
If you should benchmark both methods yourself in the meantime, please post your results here.
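For anyone who wants to try T5 on context-grounded QA quickly before setting up a full pipeline, a minimal sketch using the Hugging Face transformers library might look like the following. The checkpoint name `t5-small` and the `question: ... context: ...` prompt template are illustrative assumptions, not something this thread prescribes:

```python
# Minimal sketch: trying a T5 model on QA with a given context via the
# Hugging Face transformers "text2text-generation" pipeline.

def build_prompt(question: str, context: str) -> str:
    # T5 is a text-to-text model, so the QA task is encoded directly in
    # the input string; this template is a common convention, not the
    # only possible one.
    return f"question: {question} context: {context}"

def answer(question: str, context: str, model_name: str = "t5-small") -> str:
    # Lazy import so build_prompt stays usable without transformers installed.
    from transformers import pipeline
    generator = pipeline("text2text-generation", model=model_name)
    result = generator(build_prompt(question, context), max_length=50)
    return result[0]["generated_text"]
```

Note that the first call to `answer` downloads the model checkpoint, so for a real benchmark against RAG you would want to load the pipeline once and reuse it.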
Closing this issue because Haystack supports T5 models in TransformersSummarizer
https://github.com/deepset-ai/haystack/blob/d157e41c1fac351a01fe9348ecb2d9f7b84e29e7/haystack/nodes/summarizer/transformers.py#L23
and also in PromptNode:
https://github.com/deepset-ai/haystack/blob/d2bba4935b2ccfa7ef875815a4a1bf98afcedbc1/haystack/nodes/prompt/prompt_node.py#L237
and in Seq2SeqGenerator
for generative question answering:
https://github.com/deepset-ai/haystack/blob/d2bba4935b2ccfa7ef875815a4a1bf98afcedbc1/haystack/nodes/answer_generator/transformers.py#L356
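To make the PromptNode route concrete, here is a hedged sketch of running a T5-family model through Haystack's PromptNode (Haystack v1.x API). The model name `google/flan-t5-base` and the prompt template are illustrative assumptions:

```python
# Hedged sketch: generative QA with a T5-family model via Haystack's
# PromptNode. Assumes Haystack v1.x is installed.

def format_qa_prompt(question: str, documents: list) -> str:
    # Concatenate retrieved document texts into one context block, then
    # append the question in a T5-friendly template (an assumption here,
    # not a format mandated by Haystack).
    context = " ".join(documents)
    return f"question: {question} context: {context}"

def generate_answer(question: str, documents: list,
                    model_name: str = "google/flan-t5-base") -> str:
    # Lazy import so the prompt helper works without Haystack installed.
    from haystack.nodes import PromptNode
    node = PromptNode(model_name_or_path=model_name, max_length=64)
    # Invoking the node returns a list of generated strings; take the first.
    return node(format_qa_prompt(question, documents))[0]
```

For summarization or other tasks, the same PromptNode can be pointed at a different prompt template rather than a different class.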
Question: Is there any workaround to use T5?
Additional context: I am guessing T5 is currently not supported because of the text-to-text nature of the model's architecture. Any recommendation on how I can use T5?