google-research / bert

TensorFlow code and pre-trained models for BERT
https://arxiv.org/abs/1810.04805
Apache License 2.0

QA system architecture (inference) #440

Open tommykoctur opened 5 years ago

tommykoctur commented 5 years ago

Hello,

I would like to ask what the best or recommended architecture is for building (and deploying) BERT fine-tuned on SQuAD as a custom QA system, i.e. answering a user's question against a collection of text data (multiple paragraphs/contexts).

As I understand it, BERT fine-tuned on SQuAD takes a question and a context and returns the start and end positions of the answer within that context (if an answer is present). In a real use case the user won't provide a context; they only ask a question and expect an answer, so the system needs to hold a database of contexts internally.

What is the most efficient way to do that?

Should I iteratively run BERT SQuAD on the user's question paired with every context in the database (all combinations of the question with contexts 1...n)? Is there another solution? Is it possible to "pre-encode" all contexts to reduce the computational load? A rough sketch of the brute-force approach is below.
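For illustration, here is a minimal sketch of that brute-force approach. The `answer_question` helper is hypothetical; it stands in for a `run_squad.py`-style predict call (or a served model) that returns an answer span and a score for one (question, context) pair.

```python
# Hypothetical sketch of the brute-force "run the reader over every context" approach.
# `answer_question` is a placeholder, not part of this repo's API.

from typing import List, Tuple


def answer_question(question: str, context: str) -> Tuple[str, float]:
    """Placeholder for a BERT-SQuAD inference call.

    Expected to return (answer_text, score), where score could be e.g.
    start_logit + end_logit of the predicted span.
    """
    raise NotImplementedError("wire this up to run_squad.py or a served model")


def best_answer(question: str, contexts: List[str]) -> Tuple[str, str, float]:
    """Run the reader over every stored context and keep the highest-scoring span."""
    best = ("", "", float("-inf"))  # (answer, context, score)
    for context in contexts:
        answer, score = answer_question(question, context)
        if score > best[2]:
            best = (answer, context, score)
    return best
```

Note that the SQuAD-style reader encodes the question and context jointly, so caching per-context encodings alone saves little; the dominant cost is one full forward pass per (question, context) pair, which is why reducing the number of reader calls matters most.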

Thank you in advance!

dsindex commented 5 years ago

I think the paper "End-to-End Open-Domain Question Answering with BERTserini" is highly related to what you want: https://arxiv.org/pdf/1902.01718v1.pdf
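That paper follows a retrieve-then-read pattern: a cheap lexical retriever (Anserini/BM25 in the paper) narrows the context database to the top-k paragraphs, and the BERT reader is run only on those. A minimal sketch of the same idea is below, using a TF-IDF retriever as a stand-in for BM25 and reusing the hypothetical `best_answer` reader call from the earlier sketch.

```python
# Retrieve-then-read sketch: pre-encode all contexts once with a cheap lexical
# retriever, then run the expensive BERT reader only on the top-k retrieved hits.
# TF-IDF here is a stand-in for the BM25/Anserini retriever used in BERTserini.

from typing import List, Tuple

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


class Retriever:
    def __init__(self, contexts: List[str]):
        self.contexts = contexts
        self.vectorizer = TfidfVectorizer()
        # All contexts are encoded once, up front.
        self.doc_matrix = self.vectorizer.fit_transform(contexts)

    def top_k(self, question: str, k: int = 5) -> List[str]:
        """Return the k contexts most similar to the question."""
        q_vec = self.vectorizer.transform([question])
        scores = cosine_similarity(q_vec, self.doc_matrix)[0]
        ranked = scores.argsort()[::-1][:k]
        return [self.contexts[i] for i in ranked]


def open_domain_answer(question: str, retriever: Retriever, k: int = 5) -> Tuple[str, str, float]:
    # The reader is invoked only k times instead of once per stored context.
    candidates = retriever.top_k(question, k)
    return best_answer(question, candidates)
```

With this layout the retrieval step is fast and precomputed, and the per-query cost of the BERT reader becomes proportional to k rather than to the size of the context database.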