Hi @kruthikakr, could you please use the following format for QA_input and let me know whether it solves your problem?
QA_input = [ { "questions": ["Why is model conversion important?"], "text": "Model conversion lets people easily switch between frameworks." }]
Thank you for the response. Yes, it worked.
and the output of `result` is:

```python
[{'task': 'qa',
  'predictions': [{'question': 'How did the gross margin change?',
                   'id': '0-0',
                   'ground_truth': [],
                   'answers': [{'score': 13.446313858032227,
                                'probability': None,
                                'answer': 'largely stable quarter-to-quarter',
                                'offset_answer_start': 2103,
                                'offset_answer_end': 2136,
                                'context': 'egment. The gross margin remained largely stable quarter-to-quarter, falling from 39.8 percent to 39',
                                'offset_context_start': 2069,
                                'offset_context_end': 2169,
                                'document_id': '0-0'},
                               {'score': 12.707515716552734,
                                'probability': None,
                                'answer': 'largely stable quarter-to-quarter, falling from 39.8 percent to 39.5 percent',
                                'offset_answer_start': 2103,
                                'offset_answer_end': 2179,
                                'context': 'in remained largely stable quarter-to-quarter, falling from 39.8 percent to 39.5 percent. Included t',
                                'offset_context_start': 2091,
                                'offset_context_end': 2191,
                                'document_id': '0-0'},
                               {'score': 12.551894187927246,
                                'probability': None,
                                'answer': 'The gross margin remained largely stable quarter-to-quarter',
                                'offset_answer_start': 2077,
                                'offset_answer_end': 2136,
                                'context': 'tions (DSS) segment. The gross margin remained largely stable quarter-to-quarter, falling from 39.8 ',
                                'offset_context_start': 2056,
                                'offset_context_end': 2156,
                                'document_id': '0-0'},
                               {'score': 11.987415313720703,
                                'probability': None,
                                'answer': 'falling from 39.8 percent to 39.5 percent',
                                'offset_answer_start': 2138,
                                'offset_answer_end': 2179,
                                'context': 'ly stable quarter-to-quarter, falling from 39.8 percent to 39.5 percent. Included therein are acquis',
                                'offset_context_start': 2108,
                                'offset_context_end': 2208,
                                'document_id': '0-0'},
                               {'score': 11.898091316223145,
                                'probability': None,
                                'answer': 'largely stable quarter-to-quarter, falling from 39.8 percent to 39.5 percent.',
                                'offset_answer_start': 2103,
                                'offset_answer_end': 2180,
                                'context': 'in remained largely stable quarter-to-quarter, falling from 39.8 percent to 39.5 percent. Included t',
                                'offset_context_start': 2091,
                                'offset_context_end': 2191,
                                'document_id': '0-0'}],
                   'no_ans_gap': 7.642787456512451}]}]
```
Should I call the aggregator here to get the final answer?
Thanks for trying that, and happy to hear that it works now. I will make sure that the tutorial is updated, and I will close this issue after updating it with regard to the format of QA_input.

Aggregation is only needed to combine predictions across different samples. The document here is relatively small, and it seems that all the predictions come from the same sample. In that case, simply take the top answer, i.e. the one with the highest score.

Regarding the aggregation of predictions, I recommend having a look at this line of code: https://github.com/deepset-ai/FARM/blob/c7e5d1a694e5593cf1c0f6cf3c2916293bab55bb/farm/infer.py#L429 It shows that how predictions are aggregated depends on the model.
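As a minimal sketch of the "take the top answer" step, assuming the `result` structure shown above (one document, one question):

```python
# Predictions for the first (and only) document and question.
prediction = result[0]["predictions"][0]

# In the output above the answers are already sorted by score, so this is
# equivalent to taking prediction["answers"][0]; max() just makes the intent explicit.
best_answer = max(prediction["answers"], key=lambda a: a["score"])
print(best_answer["answer"], best_answer["score"])
```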
Please provide me with some more information: in the SQuAD question answering demo there is a limit of 15,000 words. Which parameter decides that limit? And where is the line of code that outputs the top answer when the context is small, and when should aggregation be used?
Hi, I am not sure if this is a bug, but I am trying to follow the steps in the tutorial and ran into an issue. I am using a Transformers model, and the model downloads fine:
```python
from farm.infer import *

nlp = Inferencer.load(
    "bert-large-uncased-whole-word-masking-finetuned-squad",
    task_type="question_answering",
)
```

Run predictions:

```python
QA_input = [
    {
        "qas": ["Why is model conversion important?"],
        "context": "Model conversion lets people easily switch between frameworks.",
    }
]
result = nlp.inference_from_dicts(dicts=QA_input)
```
But I am getting the following error:

```
    question_text = q["question"]
TypeError: string indices must be integers
```