google-research / text-to-text-transfer-transformer

Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
https://arxiv.org/abs/1910.10683
Apache License 2.0

How does T5 handle MRC multiple-choice tasks like RACE? #28

Closed · desperadoola closed 4 years ago

desperadoola commented 4 years ago

Hi there, I notice that in SuperGLUE, T5 handles ReCoRD by concatenating all candidate answers after the question and before the passage, and letting the model generate the correct answer. But when the candidates become longer and more numerous, as in RACE, this may not be the best way to train and predict (I think), since the concatenated input sequence can easily exceed 512 tokens and decoding may also become harder. Another example is passage re-ranking, where we might need a score for each answer.

A basic idea is to concatenate the passage with each candidate, get the logits/perplexity of the model decoding a true or false token, and rank the candidates to get the final prediction. The question is: is there an easy way to get the intermediate logits in the current T5 code, or is there a better solution for such tasks?

craffel commented 4 years ago

Are you sure that the concatenated (tokenized) input length will substantially exceed 512 for RACE? If it is substantially longer than 512 tokens, you can always fine-tune with a longer input length. T5 is trained with relative position encodings, so it works fine on sequences longer than 512. For example, we tried fine-tuning on MultiRC with a sequence length of 1024 and saw no gains. Also note that decoding would not be expensive; you would be predicting a single token corresponding to the answer index (e.g. "A", "B", "C", or "D").
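For concreteness, here is a minimal sketch of that kind of preprocessing. The example structure (`article`, `question`, `options`, `answer`) and the prompt layout are assumptions for illustration; the t5 library does not ship a RACE preprocessor, so you would write your own along these lines.

```python
def race_to_text_to_text(example):
  """Turn a RACE-style example into a text-to-text pair.

  `example` is assumed to look like:
    {"article": ..., "question": ..., "options": [...], "answer": "B"}
  The field names and prompt layout are illustrative, not an official
  preprocessor from the t5 library.
  """
  letters = ["A", "B", "C", "D"]
  options = " ".join(
      "(%s) %s" % (letter, option)
      for letter, option in zip(letters, example["options"]))
  inputs = "race question: %s options: %s article: %s" % (
      example["question"], options, example["article"])
  # The target is a single answer-index token, so decoding stays cheap
  # no matter how long the options are.
  return {"inputs": inputs, "targets": example["answer"]}
```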

I'm not sure I understand how a ranking-based loss/eval would help alleviate any sequence-length issue. I also feel that ranking-based metrics overly complicate things when the basic text-to-text/maximum-likelihood framework seems to work well (for example, we achieved SoTA on WNLI/WSC without a ranking-based loss, which previous work required to get better-than-chance accuracy). But, to answer your question, you can use the "perplexity_eval" mode to get the perplexity: https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/utils.py#L1697
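If you do want per-candidate scores, the quantity that "perplexity_eval" reports can also be illustrated outside Mesh TensorFlow. The sketch below uses the Hugging Face `transformers` port of T5 rather than the Mesh TensorFlow path linked above, purely to show the idea: compute the log-likelihood of each candidate given the context and rank by it. The prompt string and candidate list are placeholders.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base").eval()

def candidate_log_likelihood(context, candidate):
  """Total log-likelihood of `candidate` given `context` under T5."""
  input_ids = tokenizer(context, return_tensors="pt").input_ids
  labels = tokenizer(candidate, return_tensors="pt").input_ids
  with torch.no_grad():
    # `loss` is the mean per-token cross-entropy over the target tokens.
    loss = model(input_ids=input_ids, labels=labels).loss
  return -loss.item() * labels.shape[-1]

# Placeholder inputs: rank candidates by likelihood and pick the best.
context = "question: ... passage: ..."
candidates = ["candidate answer 1", "candidate answer 2"]
best = max(candidates, key=lambda c: candidate_log_likelihood(context, c))
```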

desperadoola commented 4 years ago

Thanks for your answer, that's very helpful. Now I know how to solve RACE :P But I'm still wondering how T5 could be used to handle the case where we might have over 100 queries, as in passage retrieval.

craffel commented 4 years ago

Correct me if I'm wrong, but in the single-query case, isn't passage retrieval loosely equivalent to a span-based QA task? If so, with 100 queries, couldn't you feed in one query at a time along with the document? This would not be the most efficient way to do things but would likely work.
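A rough sketch of what "one query at a time" looks like as data, assuming you have a list of (query, answer) pairs over the same document; the prompt format here is made up for illustration:

```python
def retrieval_examples(document, query_answer_pairs):
  """Yield one text-to-text example per query for the same document.

  With ~100 queries this produces ~100 independent examples, so the
  document gets re-encoded once per query -- inefficient, but it keeps
  everything inside the plain text-to-text framework.
  """
  for query, answer in query_answer_pairs:
    yield {
        "inputs": "query: %s document: %s" % (query, document),
        "targets": answer,
    }
```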

shamanez commented 4 years ago

@craffel

Since T5 uses relative attention, is it possible to use sequence lengths longer than 512 with a pretrained T5, without fine-tuning it?

The answer to my issue says we can use any sequence length as input, where the only constraint is memory.

craffel commented 4 years ago

Hi, yes, you can use any sequence length you want. Any relative position difference greater than 128 is mapped to the same ("very far away") bucket. We have gone up to 2048 internally.
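For anyone curious why this works, here is a rough re-implementation of the bucketing scheme (defaults follow `_relative_position_bucket` in mesh_tensorflow: 32 buckets, max_distance=128). It is a simplified, scalar sketch for illustration, not the library code itself:

```python
import math

def relative_position_bucket(relative_position, bidirectional=True,
                             num_buckets=32, max_distance=128):
  """Map a relative position to a bucket index.

  Simplified, scalar version of the bucketing used for T5's relative
  attention bias. Half the buckets cover small exact offsets, the rest
  cover logarithmically larger offsets, and anything beyond
  `max_distance` falls into the last ("very far away") bucket -- which
  is why sequence lengths beyond those seen in training still work.
  """
  ret = 0
  n = -relative_position
  if bidirectional:
    num_buckets //= 2
    if n < 0:
      ret += num_buckets
      n = -n
  else:
    n = max(n, 0)
  max_exact = num_buckets // 2
  if n < max_exact:
    val = n
  else:
    val = max_exact + int(
        math.log(n / max_exact) / math.log(max_distance / max_exact)
        * (num_buckets - max_exact))
    val = min(val, num_buckets - 1)
  return ret + val

# Offsets of 200, 2000, and 20000 all land in the same bucket:
print(relative_position_bucket(200),
      relative_position_bucket(2000),
      relative_position_bucket(20000))  # -> 31 31 31
```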