wellcometrust / deep_reference_parser

A deep learning model for extracting references from text
MIT License

Increase maximum length in nlp #36

Open · lizgzil opened this issue 4 years ago

lizgzil commented 4 years ago

When using the deep reference parser in Reach, we got the following error. [Note: this is from the deep_reference_parser-2019.12.1-py3-none-any.whl version, but I think the issue still stands in the current DRP version.]

...
  File "/usr/local/lib/python3.6/site-packages/deep_reference_parser/split_section.py", line 78, in split
    doc = nlp(text)
  File "/usr/local/lib/python3.6/site-packages/spacy/language.py", line 392, in __call__
    Errors.E088.format(length=len(text), max_length=self.max_length)
ValueError: [E088] Text of length 1154040 exceeds maximum of 1000000. The v2.x parser and NER models require roughly 1GB of temporary memory per 100,000 characters in the input. This means long texts may cause memory allocation errors. If you're not using the parser or NER, it's probably safe to increase the `nlp.max_length` limit. The limit is in number of characters, so you can check whether your inputs are too long by checking `len(text)`.
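
As the error message suggests, you can confirm the problem by comparing the input length against the pipeline's limit before calling it. A quick diagnostic sketch (the file name is hypothetical; the comparison mirrors the check spaCy makes before raising E088):

import en_core_web_sm

nlp = en_core_web_sm.load()
text = open("scraped_document.txt").read()  # hypothetical long input
# spaCy raises E088 when len(text) > nlp.max_length (default 1000000)
if len(text) > nlp.max_length:
    print(f"Too long: {len(text)} > {nlp.max_length}")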

From looking at https://datascience.stackexchange.com/questions/38745/increasing-spacy-max-nlp-limit, I think that for split.py, split_parse.py and parse.py we just need to change the lines

nlp = en_core_web_sm.load()
doc = nlp(text)

to

nlp = en_core_web_sm.load()
nlp.max_length = len(text)  # lift the limit to fit this input
doc = nlp(text, disable=['ner', 'parser'])  # skip the memory-hungry components
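
To make the proposal concrete, here is a minimal, self-contained sketch of the change (the parse_long_text helper is my own wrapper around the three lines above, not something in the DRP codebase):

import en_core_web_sm


def parse_long_text(text):
    """Process text longer than spaCy's default 1000000-character limit.

    Sketch only: per the E088 message, raising nlp.max_length should be
    safe as long as the memory-hungry components (parser, NER) are unused.
    """
    nlp = en_core_web_sm.load()
    # E088 fires when len(text) > nlp.max_length, so matching the input
    # length is enough to avoid it
    nlp.max_length = len(text)
    # disable is a keyword accepted by spaCy's Language.__call__
    return nlp(text, disable=['ner', 'parser'])

One caveat: with the parser disabled, anything downstream that relies on parsed sentence boundaries (e.g. doc.sents) would stop working, so this is only safe if split.py, split_parse.py and parse.py use the pipeline purely for tokenisation.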