aseifert opened this issue 6 years ago
Another way, based on https://stackoverflow.com/a/2998550:

```python
import unicodedata

def is_word_char(c, _categories=frozenset({'Ll', 'Lu', 'Lt', 'Lo', 'Lm', 'Nd', 'Pc'})):
    """True if c is a letter, a decimal digit, or connector punctuation."""
    return unicodedata.category(c) in _categories
```
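For a quick illustration of what that check accepts (a hypothetical REPL session, not from the original post):

```python
>>> is_word_char('р')  # CYRILLIC SMALL LETTER ER, category 'Ll'
True
>>> is_word_char('_')  # category 'Pc' (connector punctuation)
True
>>> is_word_char(' ')  # category 'Zs', not a word character
False
```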
Another way to do it:

```python
from functools import lru_cache

from flashtext import KeywordProcessor


class NonWordBoundaries:
    """Set-like wrapper: a character counts as part of a word if any predicate matches."""

    def __init__(self, *predicates):
        self.predicates = predicates

    @lru_cache(maxsize=128)
    def __contains__(self, ch):
        for predicate in self.predicates:
            if predicate(ch):
                return True
        return False


def main():
    words_to_search = ["рок"]
    keyword_processor = KeywordProcessor()
    # Treat any alphabetic or numeric character as part of a word,
    # so Cyrillic letters no longer act as word boundaries.
    keyword_processor.set_non_word_boundaries(NonWordBoundaries(str.isalpha, str.isdigit))
    keyword_processor.add_keywords_from_list(words_to_search)
    keywords_found = keyword_processor.extract_keywords('рок порок роковой')
    print(keywords_found)  # ['рок'] (the substrings inside 'порок' and 'роковой' are not matched)


if __name__ == '__main__':
    main()
```
I'm not sure about performance, but at least the behaviour is easy to modify.
The benchmarks vs. regex cover only the English character set. Does widening the non-word boundaries like this affect flashtext performance in any significant way?
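One rough way to measure this (my own sketch, reusing the `NonWordBoundaries` class from the comment above): time `extract_keywords` on the same text with the default boundaries and with the predicate-based ones.

```python
import timeit

from flashtext import KeywordProcessor

text = 'рок порок роковой ' * 10_000

def build(boundaries=None):
    kp = KeywordProcessor()
    if boundaries is not None:
        kp.set_non_word_boundaries(boundaries)
    kp.add_keywords_from_list(['рок'])
    return kp

default_kp = build()                                            # built-in set of word characters
custom_kp = build(NonWordBoundaries(str.isalpha, str.isdigit))  # predicate-based lookup

print(timeit.timeit(lambda: default_kp.extract_keywords(text), number=10))
print(timeit.timeit(lambda: custom_kp.extract_keywords(text), number=10))
```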
Hi there,
I think the only safe way to deal with issue #48 would be to test against the `\W` class [1]. Judging from the benchmarks linked from https://github.com/vi3k6i5/flashtext#why-not-regex, this seems to run slower by a factor of 1-2, though.

Best,
Alex
[1] Quoting the Python docs:
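For what it's worth, here is a minimal sketch of that `\W`-based test (my illustration, not code from the issue):

```python
import re

# str patterns are Unicode-aware in Python 3, so \w / \W already
# cover letters and digits from any script, not just ASCII.
_non_word = re.compile(r'\W')

def is_word_char(ch: str) -> bool:
    # A character is a word character exactly when it does not match \W.
    return _non_word.match(ch) is None

print(is_word_char('р'))  # True: Cyrillic letters count as \w
print(is_word_char('-'))  # False
```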