facebookresearch / DrQA

Reading Wikipedia to Answer Open-Domain Questions

Timeouts #2

Closed elbamos closed 7 years ago

elbamos commented 7 years ago

I'm getting timeouts when I try to run the pipeline:

[amos:~/projects/DrQA] [pytorch] master* ± python ./scripts/pipeline/interactive.py --no-cuda --tokenizer=corenlp
07/26/2017 08:27:04 PM: [ Running on CPU only. ]
07/26/2017 08:27:04 PM: [ Initializing pipeline... ]
07/26/2017 08:27:04 PM: [ Initializing document ranker... ]
07/26/2017 08:27:04 PM: [ Loading /Users/amos/projects/DrQA/data/wikipedia/docs-tfidf-ngram=2-hash=16777216-tokenizer=simple.npz ]
07/26/2017 08:27:49 PM: [ Initializing model... ]
07/26/2017 08:27:49 PM: [ Loading model /Users/amos/projects/DrQA/data/reader/multitask.mdl ]
07/26/2017 08:27:52 PM: [ Initializing tokenizers and document retrievers... ]

Interactive DrQA
>> process(question, candidates=None, top_n=1, n_docs=5)
>> usage()

>>> process("How many states in the United States?")
07/26/2017 08:28:25 PM: [ Processing 1 queries... ]
07/26/2017 08:28:25 PM: [ Retrieving top 5 docs... ]
Process ForkPoolWorker-1:
Process ForkPoolWorker-2:
Process ForkPoolWorker-3:
Process ForkPoolWorker-4:
(each worker raised the same traceback; one copy shown)
Traceback (most recent call last):
  File "/usr/local/anaconda3/envs/pytorch/lib/python3.5/site-packages/pexpect/expect.py", line 99, in expect_loop
    incoming = spawn.read_nonblocking(spawn.maxread, timeout)
  File "/usr/local/anaconda3/envs/pytorch/lib/python3.5/site-packages/pexpect/pty_spawn.py", line 462, in read_nonblocking
    raise TIMEOUT('Timeout exceeded.')
pexpect.exceptions.TIMEOUT: Timeout exceeded.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/anaconda3/envs/pytorch/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
  File "/usr/local/anaconda3/envs/pytorch/lib/python3.5/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/anaconda3/envs/pytorch/lib/python3.5/multiprocessing/pool.py", line 103, in worker
    initializer(*initargs)
  File "/Users/amos/projects/DrQA/drqa/pipeline/drqa.py", line 39, in init
    PROCESS_TOK = tokenizer_class(**tokenizer_opts)
  File "/Users/amos/projects/DrQA/drqa/tokenizers/corenlp_tokenizer.py", line 33, in __init__
    self._launch()
  File "/Users/amos/projects/DrQA/drqa/tokenizers/corenlp_tokenizer.py", line 61, in _launch
    self.corenlp.expect_exact('NLP>', searchwindowsize=100)
  File "/usr/local/anaconda3/envs/pytorch/lib/python3.5/site-packages/pexpect/spawnbase.py", line 390, in expect_exact
    return exp.expect_loop(timeout)
  File "/usr/local/anaconda3/envs/pytorch/lib/python3.5/site-packages/pexpect/expect.py", line 107, in expect_loop
    return self.timeout(e)
  File "/usr/local/anaconda3/envs/pytorch/lib/python3.5/site-packages/pexpect/expect.py", line 70, in timeout
    raise TIMEOUT(msg)
pexpect.exceptions.TIMEOUT: Timeout exceeded.
<pexpect.pty_spawn.spawn object at 0x425d10c18>
command: /bin/bash
args: ['/bin/bash']
buffer (last 100 chars): b'm~/projects/DrQA\x1b(B\x1b[m] [\x1b[36mpytorch\x1b(B\x1b[m] \x1b[32mmaster\x1b[31m*\x1b[31m\x1b(B\x1b[m \x1b[35m1\x1b(B\x1b[m \x1b[1m\xc2\xb1\x1b(B\x1b[m '
before (last 100 chars): b'm~/projects/DrQA\x1b(B\x1b[m] [\x1b[36mpytorch\x1b(B\x1b[m] \x1b[32mmaster\x1b[31m*\x1b[31m\x1b(B\x1b[m \x1b[35m1\x1b(B\x1b[m \x1b[1m\xc2\xb1\x1b(B\x1b[m '
after: <class 'pexpect.exceptions.TIMEOUT'>
match: None
match_index: None
exitstatus: None
flag_eof: False
pid: 41203
child_fd: 16
closed: False
timeout: 60
delimiter: <class 'pexpect.exceptions.EOF'>
logfile: None
logfile_read: None
logfile_send: None
maxread: 100000
ignorecase: False
searchwindowsize: None
delaybeforesend: 0
delayafterclose: 0.1
delayafterterminate: 0.1
searcher: searcher_string:
    0: "b'NLP>'"
pexpect.exceptions.TIMEOUT: Timeout exceeded.

Is this a configuration issue? Any suggestions?

ajfisch commented 7 years ago

Hi @elbamos, this looks like you are having an issue with the CoreNLPTokenizer.

  • Do you have the CoreNLP jars in your CLASSPATH? (echo $CLASSPATH)
  • Are you using Java 8? (java -version)
  • Does the following work (it should tokenize immediately)?

from drqa.tokenizers import CoreNLPTokenizer
tok = CoreNLPTokenizer()
tok.tokenize('hello world')

In the meantime, you can get around this by using the flag --tokenizer regexp. This won't guarantee the exact same performance numbers as reported in the README, but it should work just fine.
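The regexp fallback sidesteps the problem because a regex tokenizer runs entirely in-process: there is no external Java server to spawn and hence nothing to time out waiting for an `NLP>` prompt. As a rough illustration of the idea (this is a minimal sketch, not DrQA's actual RegexpTokenizer):

```python
import re

# Minimal word/punctuation splitter in the spirit of a regexp tokenizer:
# runs of word characters, or any single non-space, non-word character.
TOKEN_RE = re.compile(r"\w+|[^\w\s]", re.UNICODE)

def regexp_tokenize(text):
    """Split text into word tokens and single-character punctuation tokens."""
    return TOKEN_RE.findall(text)

print(regexp_tokenize("How many states in the United States?"))
# → ['How', 'many', 'states', 'in', 'the', 'United', 'States', '?']
```

Nothing is spawned, so the worst case is a slightly cruder tokenization, not a hang.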

ajfisch commented 7 years ago

pip install spacy && python -m spacy download en will also satisfy the requirements for the spaCy tokenizer, which you can use instead. It's a bit faster than the regexp one.

elbamos commented 7 years ago

Yes, that did it, and spacy's a better tokenizer anyway, thanks.

Very impressive results!

Have you experimented much beyond the training datasets? I wonder how representative they are of the distribution of questions in natural language.

ajfisch commented 7 years ago

We evaluated on the four different datasets reported in the paper: SQuAD, CuratedTREC, WebQuestions, and WikiMovies. Certainly each of these datasets has its own peculiarities and domains; though together I think they cover a fairly broad spectrum of (factoid) natural language questions.

Still, the distribution of possible questions is very large, and I'm sure we are not hitting parts of it. Multitasking on more domains will likely help (indeed, multitasking on the four reported datasets already helps significantly).

jppgks commented 7 years ago

If, like me, your classpath wasn't set because zsh tried to expand the wildcard * as a filename glob, escape the wildcard character:

export CLASSPATH=$CLASSPATH:/path/to/stanford-corenlp-full-2016-10-31/\*

Needless to say, this fixed the timeouts ⌛️
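One way to confirm the jars actually ended up on the classpath is to expand any wildcard entries the way the JVM would and look for a CoreNLP jar. A small diagnostic sketch (the function name and paths are hypothetical, for illustration only):

```python
import glob
import os

def classpath_has_corenlp():
    """Return True if some jar reachable via CLASSPATH looks like CoreNLP.

    A wildcard entry such as '/path/to/corenlp/*' is expanded here roughly
    the way the JVM expands it: all .jar files in that directory.
    """
    for entry in os.environ.get("CLASSPATH", "").split(os.pathsep):
        if entry.endswith("*"):
            jars = glob.glob(os.path.join(entry[:-1], "*.jar"))
        else:
            jars = [entry] if entry.endswith(".jar") else []
        if any("corenlp" in os.path.basename(j).lower() for j in jars):
            return True
    return False

print("CoreNLP on CLASSPATH:", classpath_has_corenlp())
```

If this prints False even though you exported CLASSPATH, the shell (zsh in my case) has likely mangled the wildcard entry.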

hansd410 commented 3 years ago

> Hi @elbamos, this looks like you are having an issue with the CoreNLPTokenizer.
>
>   • Do you have the corenlp jars in your CLASSPATH? (echo $CLASSPATH)
>   • Are you using Java 8? (java -version)
>   • Does the following work (should tokenize immediately)?
>
> from drqa.tokenizers import CoreNLPTokenizer
> tok = CoreNLPTokenizer()
> tok.tokenize('hello world')
>
> In the meantime, you can get around this by using the flag --tokenizer regexp. This won't guarantee the exact same performance numbers as reported in the README, but should work just fine.

Thank you for the guide. By the way, how did you know the problem came from the Java classpath or version? I'm really curious about that.