Open JWittmeyer opened 10 months ago
nlp.max_length is not a hard internal constraint, but rather a kind of clunky way to protect users from confusing OOM errors. It was set with the "core" pipelines and a not-especially-new consumer laptop in mind. If you're not actually running out of memory on your system, you can increase it with no worries, especially for simpler tasks like tokenization only.
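For example, a minimal sketch of raising the limit for a tokenization-only workload (the blank English pipeline and the 5,000,000 limit are just placeholder choices, not recommendations):

    import spacy

    # Tokenization-only pipeline: no parser/ner components, so memory use stays
    # low and the character limit can be raised safely.
    nlp = spacy.blank("en")
    nlp.max_length = 5_000_000  # default is 1_000_000 characters

    very_long_text = "some text " * 400_000  # 4,000,000 characters
    doc = nlp(very_long_text)  # no max_length error
    print(len(doc))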
On the other hand, none of the components in a core pipeline benefit from very long contexts (typically a section or a page or even a paragraph is sufficient), so splitting up texts is often the best way to go anyway. Very long texts can use a lot of RAM, especially for parser or ner.
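A rough sketch of that splitting approach (the blank-line paragraph split is only one possible heuristic, and en_core_web_sm is assumed to be installed):

    import spacy

    nlp = spacy.load("en_core_web_sm")

    def docs_from_long_text(text: str):
        # Parser/ner rarely need more context than a paragraph, so split on
        # blank lines and process the pieces as a stream via nlp.pipe.
        paragraphs = [p for p in text.split("\n\n") if p.strip()]
        yield from nlp.pipe(paragraphs)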
This limit for Japanese is completely separate from nlp.max_length and comes directly from sudachipy. (I actually hadn't encountered it before.) Their error message seems fine (much better than an OOM message with a confusing traceback from the middle of the parser), so I don't know if it makes sense for us to add another check in the spaCy Japanese tokenizer, which might then get out of sync with the upstream sudachipy constraints in the future.
But you're right that nlp.max_length isn't going to help directly with limiting the length in bytes, unless you set it much lower. But again, a lower limit would probably be fine in practice.
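A sketch of that idea (the byte limit below is a placeholder, not the documented sudachipy value; since a UTF-8 character is at most 4 bytes, a character limit of byte_limit // 4 can never exceed byte_limit bytes):

    import spacy

    nlp = spacy.blank("ja")  # requires sudachipy + sudachidict_core

    ASSUMED_BYTE_LIMIT = 49_000  # placeholder; check sudachipy for the real limit
    nlp.max_length = ASSUMED_BYTE_LIMIT // 4  # worst case: 4 bytes per character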
We'll look at adding this to the documentation!
Thanks for the explanation, that helped clear up the confusion on my end, and I know how to proceed for my use case.
In case anyone ever stumbles upon this, here is the code I went with for byte splitting (though it probably still has a lot of optimization potential):
    # helper: byte length of a string in UTF-8
    def __utf8len(s: str) -> int:
        return len(s.encode("utf-8"))


    # doesn't split after exactly x bytes, but ensures that at most x bytes are
    # used without destroying the final character
    def __chunk_text_on_bytes(text: str, max_chunk_size: int = 1_000_000):
        # ratio of characters to bytes, used for the first guess of the cut position
        factor = len(text) / __utf8len(text)
        increase_by = int(max(min(max_chunk_size * .1, 10), 1))
        initial_size_guess = int(max(max_chunk_size * factor - 10, 1))
        final_list = []
        remaining = text
        while len(remaining):
            part = remaining[:initial_size_guess]
            if __utf8len(part) > max_chunk_size:
                # guess was too large in bytes -- shrink it and retry
                initial_size_guess = int(max(initial_size_guess - min(max_chunk_size * .001, 10), 1))
                continue
            cut_after = initial_size_guess
            # grow the chunk until the byte limit or the end of the text is reached
            while __utf8len(part) < max_chunk_size and part != remaining:
                cut_after = min(len(remaining), cut_after + increase_by)
                part = remaining[:cut_after]
            if __utf8len(part) > max_chunk_size:
                # last growth step overshot the byte limit -- step back
                cut_after -= increase_by
            final_list.append(remaining[:cut_after])
            remaining = remaining[cut_after:]
        return final_list
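For reference, one possible way to use it (ja_core_news_sm is assumed to be installed, and the 40,000-byte limit is just an arbitrary value below sudachipy's constraint):

    import spacy

    nlp = spacy.load("ja_core_news_sm")
    long_japanese_text = "私は犬が好きです。" * 10_000

    chunks = __chunk_text_on_bytes(long_japanese_text, max_chunk_size=40_000)
    for doc in nlp.pipe(chunks):
        print(len(doc))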
""" ... max_length (int): The maximum allowed length of text for processing. ... """
""" ... max_length (int): The maximum allowed length of text for processing. The behavior of max_length may vary for different languages. Please refer to the language-specific documentation for more details. ...
Thanks for the suggestion! I think this description is slightly confusing for users, since nlp.max_length itself behaves the same way for all languages. What we need to highlight is that some individual tokenizers or components, especially those that wrap third-party libraries, may have their own internal length restrictions.
Not sure if this is meant to happen or is a misunderstanding on my part. I'm assuming a misunderstanding, so I'm going with a documentation report.
The Language (nlp) class has a max_length parameter that seems to work differently for e.g. Japanese. I'm currently trying to chunk texts that are too long by considering max_length and splitting based on that. For e.g. English texts this seems to work without any issues.
Basic approach code:
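(The original snippet isn't reproduced here; a rough illustration of such character-based chunking, with the helper name and sample text made up for the example:)

    import spacy

    nlp = spacy.load("en_core_web_sm")
    long_text = "I like dogs. " * 200_000

    def chunk_by_characters(text: str, max_len: int):
        # Naive fixed-size character windows kept below nlp.max_length.
        return [text[i:i + max_len] for i in range(0, len(text), max_len)]

    for doc in nlp.pipe(chunk_by_characters(long_text, nlp.max_length - 1)):
        print(len(doc))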
However, for the config string ja_core_news_sm this doesn't work. After a bit of analysis I noticed that not the character length but the byte count needs to be considered. However, even with the byte approach I run into an error that looks like it's max_length related, but maybe not really?
Slightly reduced Error trace:
I also double-checked the values for max_length (1000000), string length (63876) and byte length (63960). Setting max_length by hand to 1100000 didn't change the error message, so I'm assuming something else (maybe Sudachi itself?) defines the "Input is too long" error message.
An explanation of what the actual issue is and how to solve it (or where to look up the size limits) would be great for the documentation.
Which page or section is this issue related to?
Not sure where to add it, since I'm not sure if it's directly Japanese-related. However, a note might be interesting at https://spacy.io/models/ja or https://spacy.io/usage/models#japanese.
Further, the note for max_length in general might need an extension (if my assumption is correct), maybe something like: the relevant character length isn't the classic Python len(<string>) value but the byte size (e.g. the letter "I": len 1, 1 byte; the kanji "私": len 1, 3 bytes).
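A quick check of that difference in a Python shell:

    >>> len("I"), len("I".encode("utf-8"))
    (1, 1)
    >>> len("私"), len("私".encode("utf-8"))
    (1, 3)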