This adds an `NLTKWordTokenizer`, which implements the default word tokenizer from NLTK. The NLTK test suite passes, except for one edge case that we do not handle because the `regex` crate lacks lookahead support. I don't think it's worth adding another regex library as a dependency just for that case; documenting it as a known limitation seems like a reasonable workaround for now. The affected regexp is an enhancement NLTK added on top of the classical Penn Treebank word tokenizer.
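For illustration, here is a minimal sketch (not the actual code from this PR) of the kind of workaround available when a pattern wants lookahead: instead of a regex like `,(?=\S)` (a comma followed by a non-space), one can peek at the next character by hand. The function name and pattern are hypothetical.

```rust
// Hypothetical example: emulate the lookahead `,(?=\S)` (match a comma
// only when the next character is not whitespace) without regex
// lookahead, by peeking at the next character manually.
fn pad_comma(text: &str) -> String {
    let mut out = String::new();
    let mut chars = text.chars().peekable();
    while let Some(c) = chars.next() {
        out.push(c);
        if c == ',' {
            // peek() does not consume the character, so it is still
            // emitted on the next loop iteration.
            if matches!(chars.peek(), Some(next) if !next.is_whitespace()) {
                out.push(' ');
            }
        }
    }
    out
}

fn main() {
    assert_eq!(pad_comma("a,b"), "a, b");
    assert_eq!(pad_comma("a, b"), "a, b");
}
```

This scales poorly once several such patterns interact, which is why handling the remaining edge case without a lookahead-capable engine is awkward.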
Currently this returns a `Vec<String>`; so far I have struggled to make it return an iterator because of lifetime issues.
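As a sketch of what the owned-return shape looks like (the struct name and stand-in logic here are illustrative, not the PR's implementation): returning `Vec<String>` means every token is an owned copy, so the result carries no borrow of the input, which sidesteps the lifetimes a borrowing iterator would need.

```rust
// Illustrative sketch only -- the real tokenizer applies a series of
// regex substitutions; here a crude split stands in for it.
struct NLTKWordTokenizer;

impl NLTKWordTokenizer {
    // Owned Strings are copied out of `text`, so the returned Vec
    // does not borrow the input and needs no lifetime parameter.
    fn tokenize(&self, text: &str) -> Vec<String> {
        let mut tokens = Vec::new();
        for word in text.split_whitespace() {
            // Split trailing ASCII punctuation off each word, roughly
            // in the spirit of the Penn Treebank tokenizer.
            let trimmed = word.trim_end_matches(|c: char| c.is_ascii_punctuation());
            let tail = &word[trimmed.len()..];
            if !trimmed.is_empty() {
                tokens.push(trimmed.to_string());
            }
            for c in tail.chars() {
                tokens.push(c.to_string());
            }
        }
        tokens
    }
}

fn main() {
    let tok = NLTKWordTokenizer;
    assert_eq!(tok.tokenize("Hello, world."), vec!["Hello", ",", "world", "."]);
}
```

An iterator version would have to borrow `text` for the lifetime of the iteration (`fn tokenize<'a>(&self, text: &'a str) -> impl Iterator<Item = &'a str> + 'a` or similar), which is where the lifetime trouble comes in.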
It's around 3x faster than the NLTK version in Python. The tokenizer is very English-specific and should probably not be used for other languages.
TODO: