Pucadopr opened this issue 4 years ago
Hi Pelumi,
Did you install the DrQA dependencies and Spacy? Did you download the Spacy en model?
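(As an aside, a quick way to verify the en model is actually installed — a minimal sketch, assuming the Spacy 1.x/2.x versions DrQA was built against; if the load fails, `python -m spacy download en` fetches the model:)

```python
import spacy

# Quick check that the Spacy 'en' model is installed;
# spacy.load raises an error if the model is missing.
nlp = spacy.load('en')
print([token.text for token in nlp(u'testing the en model')])
```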
Hi Zaghaghi, thanks for the reply. Yes, I have the DrQA dependencies installed and the Spacy en model as well. Do I need to remove the default Classpath?
File "/Users/pelumioladokun/Documents/API/DrQA/WebUI/drqa-webui/index.py", line 14, in query answers = process(question=data['question']) File "/Users/pelumioladokun/Documents/API/DrQA/WebUI/drqa-webui/services/__init__.py", line 55, in process question, candidates, top_n, n_docs, return_context=True File "/Users/pelumioladokun/Documents/API/DrQA/DrQA/drqa/pipeline/drqa.py", line 190, in process top_n, n_docs, return_context File "/Users/pelumioladokun/Documents/API/DrQA/DrQA/drqa/pipeline/drqa.py", line 203, in process_batch ranked = [self.ranker.closest_docs(queries[0], k=n_docs)] File "/Users/pelumioladokun/Documents/API/DrQA/DrQA/drqa/retriever/tfidf_doc_ranker.py", line 59, in closest_docs spvec = self.text2spvec(query) File "/Users/pelumioladokun/Documents/API/DrQA/DrQA/drqa/retriever/tfidf_doc_ranker.py", line 93, in text2spvec words = self.parse(utils.normalize(query)) File "/Users/pelumioladokun/Documents/API/DrQA/DrQA/drqa/retriever/tfidf_doc_ranker.py", line 83, in parse tokens = self.tokenizer.tokenize(query) TypeError: tokenize() missing 1 required positional argument: 'text'
I really don't know what I am doing wrong at this point. Could you please help with this error?
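For what it's worth, this particular TypeError usually means tokenize() is being called on the tokenizer class itself rather than on an instance of it, so the query string gets bound to `self` and the `text` argument comes up missing. A minimal sketch of that failure mode (the class here is illustrative, not DrQA's actual SpacyTokenizer):

```python
class Tokenizer:
    def tokenize(self, text):
        return text.split()

# Correct: tokenize() is called on an instance, so `self` is bound automatically.
print(Tokenizer().tokenize('who wrote hamlet'))  # ['who', 'wrote', 'hamlet']

# Bug: tokenize() is called on the class, so the query string is bound to
# `self` and Python reports the `text` argument as missing:
# TypeError: tokenize() missing 1 required positional argument: 'text'
Tokenizer.tokenize('who wrote hamlet')
```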
Hi Zaghaghi, please help with this error message; I really don't know what is not being passed.
File "/Users/pelumioladokun/Documents/API/DrQA/WebUI/drqa-webui/index.py", line 5, in <module> from services import DrQA, process File "/Users/pelumioladokun/Documents/API/DrQA/WebUI/drqa-webui/services/__init__.py", line 48, in <module> tokenizer=config['tokenizer'] File "/Users/pelumioladokun/Documents/API/DrQA/DrQA/drqa/pipeline/drqa.py", line 109, in __init__ self.ranker = ranker_class(**ranker_opts) File "/Users/pelumioladokun/Documents/API/DrQA/DrQA/drqa/retriever/tfidf_doc_ranker.py", line 41, in __init__ self.tokenizer = tokenizers.get_class(metadata['tokenizer'])() TypeError: 'NoneType' object is not callable
I can't seem to get around this issue. Please help; I am using the Spacy tokenizer.
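For the record, in the second traceback `tokenizers.get_class(metadata['tokenizer'])` is evidently returning None, so calling the result fails. That typically means the tokenizer name stored in the TF-IDF model's metadata isn't registered (for example, because its import failed when the tokenizers module was loaded). A minimal sketch of that failure mode, with illustrative names:

```python
class SpacyTokenizer:
    def tokenize(self, text):
        return text.split()

def get_class(name):
    # Illustrative stand-in for the retriever's tokenizer lookup:
    # returns None when the requested tokenizer is not registered.
    return {'spacy': SpacyTokenizer}.get(name)

tokenizer_class = get_class('corenlp')  # None: 'corenlp' is not in the table
tokenizer = tokenizer_class()           # TypeError: 'NoneType' object is not callable
```

Printing `metadata['tokenizer']` just before the failing line should show which tokenizer name the ranker is actually asking for.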