Hello,
I would like to ask what the best or recommended architecture is for building (and deploying) BERT SQuAD as a custom QA system, i.e. one that answers user questions based on text data (multiple paragraphs/contexts).
As I understand it, BERT SQuAD takes a question and a context and returns the start and end positions of the answer within that context (if an answer is there). In a real use case, the user won't provide a context: the user asks a question and expects an answer, so the system needs its own database of contexts.
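To make sure I understand the mechanism correctly, here is a minimal sketch of how the start/end logits of an extractive QA head are turned into an answer span. The logits below are made-up numbers standing in for a real model's output (e.g. a `BertForQuestionAnswering` head), so only the span-selection logic is real:

```python
import numpy as np

def best_span(start_logits, end_logits, max_answer_len=15):
    """Pick the (start, end) pair maximizing start_logit + end_logit,
    subject to start <= end and a maximum span length."""
    best = (0, 0)
    best_score = -np.inf
    for s in range(len(start_logits)):
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

context_tokens = ["BERT", "was", "released", "by", "Google", "in", "2018", "."]
# Pretend logits: the model is confident the answer span is "2018".
start_logits = np.array([0.1, 0.0, 0.2, 0.0, 0.3, 0.1, 5.0, 0.0])
end_logits   = np.array([0.0, 0.1, 0.0, 0.2, 0.1, 0.0, 4.8, 0.1])

s, e = best_span(start_logits, end_logits)
answer = " ".join(context_tokens[s:e + 1])
print(answer)  # -> 2018
```

The point is that the model only ever scores spans *inside the context it was given*, which is why the choice of context matters so much in deployment.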
What is the most efficient way to do that? Iteratively run BERT SQuAD on the user's question against every context in the database (all combinations of the question with Context(1…n))? Is there another solution? Is it possible to "pre-encode" all contexts to reduce the computing load?
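For the "pre-encode" idea, here is a toy sketch of what I have in mind: embed every context once offline, then at query time embed only the question and rank contexts by cosine similarity, so the expensive reader model only runs on the top-k contexts. The `embed` function below is a hashing bag-of-words stand-in, NOT a real encoder (a real system would use something like sentence-transformers or DPR, plus an index such as FAISS):

```python
import hashlib
import numpy as np

DIM = 64  # toy embedding dimension

def embed(text):
    """Toy hashing bag-of-words embedding (stand-in for a real sentence encoder)."""
    v = np.zeros(DIM)
    for word in text.lower().split():
        word = word.strip(".,?!")
        idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM
        v[idx] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

contexts = [
    "BERT was released by Google in 2018.",
    "Paris is the capital of France.",
    "The Eiffel Tower is located in Paris.",
]

# Offline: encode all contexts once and store the matrix.
context_matrix = np.stack([embed(c) for c in contexts])

def retrieve(question, k=1):
    """Online: encode only the question, rank contexts by cosine similarity."""
    q = embed(question)
    scores = context_matrix @ q  # dot product = cosine (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [contexts[i] for i in top]

print(retrieve("What is the capital of France?"))
```

With this retriever-reader split, the per-query cost is one question embedding plus k reader passes, instead of n reader passes over the whole database.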
Thank you in advance!