Open ghost opened 4 years ago
The pre-trained models might have learned some implicit multi-hop reasoning. The multi-hop paths can also be used as input to guide the generation. Moreover, the techniques proposed to improve multi-hop QA could be integrated into the inverse problem.
Thanks for the reply. I can see it would be a big enhancement for question generation. Since the model might have learned some multi-hop reasoning, is it advisable to train it on inverse HotpotQA and inverse SQuAD together? Also, when adding a multi-hop context, would it be better to provide only the supporting facts as context, or the entire paragraphs? (https://hotpotqa.github.io/explorer.html) I feel providing the entire paragraphs would improve its multi-hop reasoning, but might also run into sequence length issues. Let me know.
It depends on whether we could extract accurate supporting facts. If the extracted facts are noisy, it would be better to feed the whole paragraph. Looking forward to your results.
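For concreteness, here is a minimal sketch of the two context options (gold supporting facts only vs. all paragraphs), assuming the standard HotpotQA JSON fields (`context`, `supporting_facts`, `question`, `answer`); the file path and the whitespace-level token count are just placeholders for a quick length check, not a real tokenizer:

```python
import json

def build_context(example, supporting_facts_only=True):
    """Build the input context for an inverted HotpotQA example.

    With supporting_facts_only=True, keep only the gold supporting
    sentences (short, but relies on the annotations being accurate).
    Otherwise concatenate all paragraphs (noisier and much longer,
    which can overflow the model's maximum sequence length).
    """
    paragraphs = {title: sentences for title, sentences in example["context"]}
    if supporting_facts_only:
        # Keep the gold sentences in the order listed in supporting_facts,
        # which roughly preserves the reasoning-hop order.
        sents = []
        for title, sent_id in example["supporting_facts"]:
            if title in paragraphs and sent_id < len(paragraphs[title]):
                sents.append(paragraphs[title][sent_id])
        return " ".join(sents)
    # Full paragraphs: everything, including the distractor paragraphs.
    return " ".join(s for sentences in paragraphs.values() for s in sentences)

if __name__ == "__main__":
    with open("hotpot_train_v1.1.json") as f:  # placeholder path
        data = json.load(f)
    ex = data[0]
    short = build_context(ex, supporting_facts_only=True)
    full = build_context(ex, supporting_facts_only=False)
    # Rough length check (whitespace tokens, not WordPiece) to see how often
    # the full-paragraph variant would exceed a 512-token budget.
    print(len(short.split()), len(full.split()))
```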
Describe Model I am using (UniLM, MiniLM, LayoutLM ...): MiniLM
Is it a good idea to use inverse HotpotQA (i.e., generating the question from the context and answer) to train the MiniLM checkpoint with the s2s toolkit for the NQG task? Since HotpotQA questions require multi-hop reasoning, would it make sense? I was hoping to get more complex questions.
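Not an official recipe, but a rough sketch of the inversion itself, assuming the fine-tuning data can be written as JSON lines with `src`/`tgt` fields (check the toolkit's README for the exact keys it expects); the `[SEP]` separator, word-level truncation, and file names are all assumptions:

```python
import json

def invert_hotpotqa(in_path, out_path, max_src_words=480):
    """Turn HotpotQA (context + question -> answer) into a question
    generation set (context + answer -> question), written as JSON lines
    with "src"/"tgt" fields."""
    with open(in_path) as f:
        data = json.load(f)
    with open(out_path, "w") as out:
        for ex in data:
            context = " ".join(
                s for _, sentences in ex["context"] for s in sentences
            )
            # Source: answer followed by (truncated) context; target: question.
            src = ex["answer"] + " [SEP] " + " ".join(context.split()[:max_src_words])
            out.write(json.dumps({"src": src, "tgt": ex["question"]}) + "\n")

# An inverse SQuAD set written to the same format can simply be
# concatenated with this file (and shuffled) to train on both together.
invert_hotpotqa("hotpot_train_v1.1.json", "inverse_hotpotqa_train.json")
```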