Open zorazrw opened 2 years ago
Is there a way to enable analysis for open-domain question answering datasets? Or, at least on the Reading Comprehension (RC) side, given different contexts retrieved by multiple retrieval models, to use/submit different versions of the context dataset for the same RC task?

Thanks @zorazrw ! Could you give a slightly more detailed example of what this would look like, for completeness? I think the "new tasks" part is covered by https://github.com/neulab/ExplainaBoard/issues/54, but you also want new functionality for handling retrieved contexts in retrieval-based QA systems, which is an interesting problem.
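To make the request concrete, here is one minimal sketch of what per-retriever context variants for the same RC task could look like. This is a purely hypothetical data layout, not an existing ExplainaBoard format: the field names, retriever names, and helper function are all invented for illustration.

```python
# Hypothetical sketch: a shared set of RC questions/answers, plus per-retriever
# context records keyed by question id, joined into one dataset variant per
# retrieval model so each variant can be analyzed separately.
from collections import defaultdict

questions = [
    {"id": "q1", "question": "Who wrote Hamlet?", "answer": "William Shakespeare"},
]

# Each retrieval model contributes its own context for the same question ids.
retrieved_contexts = [
    {"id": "q1", "retriever": "BM25", "context": "Hamlet is a tragedy by William Shakespeare."},
    {"id": "q1", "retriever": "DPR", "context": "William Shakespeare wrote Hamlet around 1600."},
]

def build_variants(questions, contexts):
    """Group contexts by retriever and join them onto the shared questions,
    yielding one RC dataset variant per retrieval model."""
    by_retriever = defaultdict(dict)
    for c in contexts:
        by_retriever[c["retriever"]][c["id"]] = c["context"]
    variants = {}
    for name, ctx in by_retriever.items():
        variants[name] = [
            {**q, "context": ctx[q["id"]]} for q in questions if q["id"] in ctx
        ]
    return variants

variants = build_variants(questions, retrieved_contexts)
print(sorted(variants))  # one RC dataset variant per retriever: ['BM25', 'DPR']
```

Under this layout, the questions and gold answers are submitted once, and each retrieval model's output is a lightweight overlay, which is one possible way to support "different versions of the context dataset for the same RC task".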