facebookresearch / DrQA

Reading Wikipedia to Answer Open-Domain Questions
Other
4.48k stars 896 forks

error while executing scripts/pipeline/interactive.py #228

Open Ravikiran2611 opened 5 years ago

Ravikiran2611 commented 5 years ago

When I run the command python scripts/pipeline/interactive.py, I get the following error:

08/13/2019 11:49:58 AM: [ CUDA enabled (GPU 1) ]
08/13/2019 11:49:58 AM: [ Initializing pipeline... ]
08/13/2019 11:49:58 AM: [ Initializing document ranker... ]
08/13/2019 11:49:58 AM: [ Loading /home/zlabs-nlp/ravi/DrQA/data/wikipedia/docs-tfidf-ngram=2-hash=16777216-tokenizer=simple.npz ]
Traceback (most recent call last):
  File "scripts/pipeline/interactive.py", line 70, in <module>
    tokenizer=args.tokenizer
  File "/home/zlabs-nlp/ravi/DrQA/drqa/pipeline/drqa.py", line 109, in __init__
    self.ranker = ranker_class(**ranker_opts)
  File "/home/zlabs-nlp/ravi/DrQA/drqa/retriever/tfidf_doc_ranker.py", line 37, in __init__
    matrix, metadata = utils.load_sparse_csr(tfidf_path)
  File "/home/zlabs-nlp/ravi/DrQA/drqa/retriever/utils.py", line 36, in load_sparse_csr
    return matrix, loader['metadata'].item(0) if 'metadata' in loader else None
  File "/home/zlabs-nlp/miniconda3/envs/drqa/lib/python3.6/_collections_abc.py", line 666, in __contains__
    self[key]
  File "/home/zlabs-nlp/miniconda3/envs/drqa/lib/python3.6/site-packages/numpy/lib/npyio.py", line 262, in __getitem__
    pickle_kwargs=self.pickle_kwargs)
  File "/home/zlabs-nlp/miniconda3/envs/drqa/lib/python3.6/site-packages/numpy/lib/format.py", line 696, in read_array
    raise ValueError("Object arrays cannot be loaded when "
ValueError: Object arrays cannot be loaded when allow_pickle=False

Can anyone help me figure out what this error is? Thanks in advance!

pangyouzhen commented 5 years ago

I also encountered this problem.

pangyouzhen commented 5 years ago

In the load_sparse_csr function in utils.py, change np.load(filename) to np.load(filename, allow_pickle=True).
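The flag is needed because the .npz stores its metadata entry as a pickled object array, which NumPy refuses to load by default in newer releases. A minimal self-contained sketch of the failure and the one-line fix (the metadata contents here are made up for illustration):

```python
import tempfile

import numpy as np

# Saving a dict into an .npz forces NumPy to pickle it into an object
# array; NumPy >= 1.16.3 refuses to unpickle it on load by default.
with tempfile.NamedTemporaryFile(suffix='.npz', delete=False) as f:
    path = f.name
np.savez(path, metadata={'tokenizer': 'simple', 'ngram': 2})

try:
    np.load(path)['metadata']  # default allow_pickle=False -> ValueError
except ValueError as e:
    print(e)

loader = np.load(path, allow_pickle=True)   # the one-line fix
print(loader['metadata'].item(0))
```

The same change applied inside load_sparse_csr lets the TF-IDF .npz load its matrix and metadata as before.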

yifanli78 commented 5 years ago

I seem to have solved the problem by downgrading numpy to 1.16.1.
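Downgrading works because NumPy only switched np.load's default to allow_pickle=False in 1.16.3, so any earlier release loads the .npz unchanged. A small sketch (the helper name is illustrative, not part of DrQA) to check whether an installed NumPy is affected:

```python
import numpy as np

def numpy_needs_allow_pickle():
    """True if this NumPy defaults np.load to allow_pickle=False (>= 1.16.3)."""
    parts = []
    for token in np.__version__.split('.')[:3]:
        digits = ''.join(ch for ch in token if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    while len(parts) < 3:
        parts.append(0)
    return tuple(parts) >= (1, 16, 3)

print(np.__version__, numpy_needs_allow_pickle())
```

If this prints True, either pin numpy below 1.16.3 or apply the allow_pickle=True patch above; both address the same change.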

LxinG-YY commented 4 years ago

Installing numpy==1.16.2 also works.

However, another problem occurs for me: while [ Initializing tokenizers and document retrievers... ], BlockingIOError: [Errno 11] Resource temporarily unavailable is raised.

12/05/2019 10:36:27 PM: [ CUDA enabled (GPU 2) ]
12/05/2019 10:36:27 PM: [ Initializing pipeline... ]
12/05/2019 10:36:27 PM: [ Initializing document ranker... ]
12/05/2019 10:36:27 PM: [ Loading /home/liangxin/codes/DrQA_mac/data/wikipedia/docs-tfidf-ngram=2-hash=16777216-tokenizer=simple.npz ]
12/05/2019 10:37:09 PM: [ Initializing document reader... ]
12/05/2019 10:37:09 PM: [ Loading model /home/liangxin/codes/DrQA_mac/data/reader/multitask.mdl ]
12/05/2019 10:37:18 PM: [ Initializing tokenizers and document retrievers... ]
Traceback (most recent call last):
  File "scripts/pipeline/interactive.py", line 70, in <module>
    tokenizer=args.tokenizer
  File "/home/liangxin/codes/DrQA_mac/drqa/pipeline/drqa.py", line 146, in __init__
    initargs=(tok_class, tok_opts, db_class, db_opts, fixed_candidates)
  File "/usr/lib64/python3.6/multiprocessing/context.py", line 119, in Pool
    context=self.get_context())
  File "/usr/lib64/python3.6/multiprocessing/pool.py", line 174, in __init__
    self._repopulate_pool()
  File "/usr/lib64/python3.6/multiprocessing/pool.py", line 239, in _repopulate_pool
    w.start()
  File "/usr/lib64/python3.6/multiprocessing/process.py", line 105, in start
    self._popen = self._Popen(self)
  File "/usr/lib64/python3.6/multiprocessing/context.py", line 277, in _Popen
    return Popen(process_obj)
  File "/usr/lib64/python3.6/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/usr/lib64/python3.6/multiprocessing/popen_fork.py", line 66, in _launch
    self.pid = os.fork()
BlockingIOError: [Errno 11] Resource temporarily unavailable
Process ForkPoolWorker-43:
Traceback (most recent call last):
  File "/usr/lib64/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib64/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python3.6/multiprocessing/pool.py", line 103, in worker
    initializer(*initargs)
  File "/home/liangxin/codes/DrQA_mac/drqa/pipeline/drqa.py", line 39, in init
    PROCESS_TOK = tokenizer_class(**tokenizer_opts)
  File "/home/liangxin/codes/DrQA_mac/drqa/tokenizers/corenlp_tokenizer.py", line 33, in __init__
    self._launch()
  File "/home/liangxin/codes/DrQA_mac/drqa/tokenizers/corenlp_tokenizer.py", line 55, in _launch
    self.corenlp = pexpect.spawn('/bin/bash', maxread=100000, timeout=60)
  File "/home/liangxin/drqa_venv/lib64/python3.6/site-packages/pexpect/pty_spawn.py", line 197, in __init__
    self._spawn(command, args, preexec_fn, dimensions)
  File "/home/liangxin/drqa_venv/lib64/python3.6/site-packages/pexpect/pty_spawn.py", line 297, in _spawn
    cwd=self.cwd, **kwargs)
  File "/home/liangxin/drqa_venv/lib64/python3.6/site-packages/pexpect/pty_spawn.py", line 308, in _spawn_pty
    return ptyprocess.PtyProcess.spawn(args, **kwargs)
  File "/home/liangxin/drqa_venv/lib64/python3.6/site-packages/ptyprocess/ptyprocess.py", line 226, in spawn
    pid, fd = pty.fork()
  File "/usr/lib64/python3.6/pty.py", line 97, in fork
    pid = os.fork()
BlockingIOError: [Errno 11] Resource temporarily unavailable
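BlockingIOError [Errno 11] from os.fork() is EAGAIN: the OS refused to create another process, typically because the per-user process/thread limit was exhausted (each tokenizer worker here also spawns a CoreNLP process through pexpect). A hedged sketch, on Unix, for inspecting and raising the soft limit from Python (the same limit as ulimit -u in the shell; reducing the pipeline's worker count is the other lever):

```python
import resource

# RLIMIT_NPROC caps how many processes/threads this user may own;
# hitting it makes os.fork() fail with EAGAIN (Errno 11).
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("max user processes: soft=%s hard=%s" % (soft, hard))

# An unprivileged process may raise its soft limit up to the hard
# limit before launching the pipeline; raising the hard limit needs root.
resource.setrlimit(resource.RLIMIT_NPROC, (hard, hard))
```

If the limit is already generous, also check that stale CoreNLP Java processes from earlier runs are not still holding slots.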

anushkmittal commented 4 years ago

#230 should solve it.

impulsecorp commented 4 years ago

In the load_sparse_csr function in utils.py, change np.load(filename) to np.load(filename, allow_pickle=True).

This fixed it for me.