hplt-project / sacremoses

Python port of Moses tokenizer, truecaser and normalizer
MIT License

first call to MosesTokenizer.tokenize is very slow #61

Closed johnfarina closed 4 years ago

johnfarina commented 5 years ago

As in, it takes several minutes. This seems to happen independent of the specified lang.

In [1]: from sacremoses import MosesTokenizer

In [2]: mt = MosesTokenizer(lang='ko')

In [3]: %time mt.tokenize("세계 에서 가장 강력한")
CPU times: user 3min 3s, sys: 1.75 s, total: 3min 5s
Wall time: 3min 11s
Out[3]: ['세계', '에서', '가장', '강력한']

Subsequent calls perform as expected:

In [4]: %time mt.tokenize("세계 에서 가장 강력한")
CPU times: user 819 µs, sys: 0 ns, total: 819 µs
Wall time: 823 µs
Out[4]: ['세계', '에서', '가장', '강력한']

This is with the latest version of sacremoses (0.0.22). Is this a problem for anyone else?

johnfarina commented 5 years ago

With English, it actually seems to hang forever (I Ctrl-c'd the process after waiting for 15 minutes).

I think it's getting hung compiling regular expressions somewhere.

In [2]: mt = MosesTokenizer(lang='en')

In [3]: mt.tokenize("Hello, Mr. Smith!")

alvations commented 5 years ago

The first behavior, slowness on the first use of the tokenizer, seems reasonable. The regexes are compiled and cached on first use, and in the case of the new expanded perluniprop files they're huge, so the first-call cost makes sense.
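The compile-once-then-cache behavior can be reproduced in isolation. A minimal sketch (illustrative only, not sacremoses code; the CJK range here just stands in for the much larger perluniprop-derived character classes):

```python
import re
import time

# Compiling a very large character class is paid once; reusing the
# compiled pattern afterwards is cheap.
big_class = "[" + "".join(chr(c) for c in range(0x4E00, 0x9FFF)) + "]"

t0 = time.perf_counter()
pattern = re.compile(big_class)   # one-time compilation cost
compile_time = time.perf_counter() - t0

t0 = time.perf_counter()
match = pattern.match("中")       # compiled pattern: microseconds
match_time = time.perf_counter() - t0

print(f"compile: {compile_time:.4f}s, match: {match_time:.6f}s")
```

The gap between the two timings grows with the size of the character class, which is why the expanded perluniprop files made the first call so much slower.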

The second behavior, English hanging, shouldn't happen:

[in]:

%time 
mt.tokenize("Hello, Mr. Smith!")

[out]:

CPU times: user 3 µs, sys: 1e+03 ns, total: 4 µs
Wall time: 7.15 µs

Might be some caching of the perluniprop files or a system-specific problem.

Which OS are you using? Which Python version?

alvations commented 5 years ago

But it does look like the new version with the full perluniprop is slower =( I'll run some benchmarks.

johnfarina commented 5 years ago

I'm on MacOS 10.14.1, python 3.7.1 (via anaconda). I'll run some more tests on my end with English later today on some different OSs and python installs too to see if I can isolate the problem.

333aleix333 commented 5 years ago

I'm having the same issue, Ubuntu 18.04 and Python 3.7.3.

333aleix333 commented 5 years ago

It works with Python 3.6.8.

myleott commented 5 years ago

Same issue here. Seems like something is wrong with re on Python 3.7.

alvations commented 5 years ago

@myleott @johnfarina could you try upgrading sacremoses? The current version should be 0.0.24.

~The reason behind the slowness doesn't seem to be the Python distribution. If anything, upgrading Python should speed up regexes, https://docs.python.org/3/whatsnew/3.7.html (esp. the regex compilation changes).~

It's probably because of the unichars -au inclusion of unnamed characters from Perluniprops to resolve the CJK tokenization issues from https://github.com/alvations/sacremoses/issues/42. That caused the list of characters in IsAlpha to grow from 21674 to 476052 bytes, and IsAlnum from 22414 to 478372 bytes.

That was too much of a performance cost for perfect accuracy on all possible characters, so the new version falls back to unichars without -au and statically adds the needed CJK characters instead of including the entire universe of alphanumeric characters every time.
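The change can be sketched roughly like this (illustrative only; the names and the exact ranges are assumptions, not the actual sacremoses patch). Instead of enumerating hundreds of thousands of individual characters in the pattern source, the pattern stays small by appending a few static CJK *ranges*:

```python
import re

# Hypothetical sketch: a compact alphanumeric class extended with static
# CJK ranges, rather than an exhaustive per-character enumeration.
CJK_RANGES = "\u4e00-\u9fff\u3040-\u30ff\uac00-\ud7af"  # CJK, Kana, Hangul (illustrative)
is_alnum = re.compile(r"[0-9A-Za-z" + CJK_RANGES + r"]+")

print(is_alnum.findall("세계 에서 Hello 123 记者"))
# → ['세계', '에서', 'Hello', '123', '记者']
```

Character ranges compile to interval checks, so the pattern source stays a few dozen bytes regardless of how many code points the ranges cover.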


P.S.: Weird that the PR auto-closes the issue...

johnfarina commented 5 years ago

Substantial improvement for Korean with version 0.0.24!

In [1]: from sacremoses import MosesTokenizer
In [2]: mt = MosesTokenizer(lang="ko")
In [3]: %time mt.tokenize("세계 에서 가장 강력한")
CPU times: user 5.84 s, sys: 41.7 ms, total: 5.88 s
Wall time: 6.04 s
Out[3]: ['세계', '에서', '가장', '강력한']

English is slower, weirdly:

In [1]: from sacremoses import MosesTokenizer
In [2]: mt = MosesTokenizer(lang="en")
In [3]: %time mt.tokenize("Hello, World!")
CPU times: user 11.6 s, sys: 89.9 ms, total: 11.7 s
Wall time: 11.9 s
Out[3]: ['Hello', ',', 'World', '!']

and Chinese takes almost 2 minutes on my machine, which is still a bit painful:

In [1]: from sacremoses import MosesTokenizer
In [2]: mt = MosesTokenizer(lang="zh")
In [3]: %time mt.tokenize("记者 应谦 美国")
CPU times: user 1min 54s, sys: 878 ms, total: 1min 55s
Wall time: 1min 56s
Out[3]: ['记者', '应谦', '美国']

alvations commented 5 years ago

@johnfarina Which Python version are you using for the above benchmark?

yannvgn commented 5 years ago

I have the same issue. It looks like it is indeed related to Python 3.7 🤔:

With Python 3.6.1 (Amazon Linux), sacremoses 0.0.24:

In [1]: from sacremoses import MosesTokenizer
In [2]: mt = MosesTokenizer(lang="en")
In [3]: %time mt.tokenize("Hello, World!")
CPU times: user 220 ms, sys: 0 ns, total: 220 ms
Wall time: 220 ms
Out[3]: ['Hello', ',', 'World', '!']

With Python 3.7.3 (Amazon Linux), sacremoses 0.0.24:

In [1]: from sacremoses import MosesTokenizer
In [2]: mt = MosesTokenizer(lang="en")
In [3]: %time mt.tokenize("Hello, World!")
CPU times: user 21.1 s, sys: 10 ms, total: 21.1 s
Wall time: 21.1 s
Out[3]: ['Hello', ',', 'World', '!']

johnfarina commented 5 years ago

@johnfarina Which Python version are you using for the above benchmark?

This was python 3.7.3 (via anaconda) on Mac OS 10.14.1. I tried the same with 3.7.1 on Mac and Ubuntu 16.04 too with similar results.

yannvgn commented 5 years ago

After some profiling with cProfile, I found that the issue is indeed caused by a regression on Python >= 3.7, more precisely in the sre_parse._uniq function, which did not exist on Python <= 3.6.
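For anyone wanting to reproduce this kind of analysis, a self-contained sketch of the cProfile approach (profiling a stand-in large character class rather than sacremoses itself, so it runs without the library installed; on Python 3.7 the expensive frames show up under sre_parse):

```python
import cProfile
import io
import pstats
import re

# Compile a large character class, standing in for the perluniprop-derived
# patterns, and profile where the time goes.
def compile_big_pattern():
    return re.compile("[" + "".join(chr(c) for c in range(0x4E00, 0x6000)) + "]")

profiler = cProfile.Profile()
profiler.enable()
compile_big_pattern()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())  # top 5 functions by cumulative time
```

Sorting by cumulative time makes the regex-parsing call chain stand out at the top of the report.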

I've created a PR on the Python repo which fixes the issue we have here. See: https://github.com/python/cpython/pull/15030

I came up with a very dirty quick fix (to be run before importing sacremoses, and only on Python >= 3.7):

import sys
import sre_parse

# Order-preserving de-duplication, replacing the slow implementation.
if sys.version_info >= (3, 7):
    sre_parse._uniq = lambda x: list(dict.fromkeys(x))

alvations commented 4 years ago

Thanks @yannvgn!! Great to see this resolved!