@cessen in https://github.com/unicode-rs/unicode-segmentation/pull/77 and https://github.com/unicode-rs/unicode-segmentation/pull/79 implemented a complementary pair of optimizations for grapheme segmentation: one which optimizes the binary search to stay in the found region, and one which handles ASCII cases. They work well together, even for non-Latin text, because the caching is much more efficient when common punctuation/spaces are handled directly and do not invalidate the cache.
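Roughly how I picture the two optimizations composing, as a sketch only (the range table and the names `GraphemeCat`, `GRAPHEME_TABLE`, and `CachedLookup` are made up here, not the crate's actual internals):

```rust
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
enum GraphemeCat {
    Other,
    Extend,
    // ... the real tables have many more categories
}

// (range start, range end inclusive, category) -- illustrative entries only.
const GRAPHEME_TABLE: &[(u32, u32, GraphemeCat)] = &[
    (0x0300, 0x036F, GraphemeCat::Extend), // combining diacritical marks
    (0x0900, 0x097F, GraphemeCat::Other),  // placeholder, not real property data
];

struct CachedLookup {
    // Last range a binary search landed in.
    cached: Option<(u32, u32, GraphemeCat)>,
}

impl CachedLookup {
    fn new() -> Self {
        CachedLookup { cached: None }
    }

    fn category(&mut self, c: char) -> GraphemeCat {
        let cp = c as u32;

        // ASCII fast path: answer directly and, crucially, leave the cache
        // untouched so interleaved spaces/punctuation don't evict it.
        if cp < 0x80 {
            return GraphemeCat::Other;
        }

        // Cache hit: the previous character's range also covers this one.
        if let Some((lo, hi, cat)) = self.cached {
            if lo <= cp && cp <= hi {
                return cat;
            }
        }

        // Cache miss: binary search, then remember the range that was found.
        match GRAPHEME_TABLE.binary_search_by(|&(lo, hi, _)| {
            if hi < cp {
                std::cmp::Ordering::Less
            } else if lo > cp {
                std::cmp::Ordering::Greater
            } else {
                std::cmp::Ordering::Equal
            }
        }) {
            Ok(idx) => {
                let entry = GRAPHEME_TABLE[idx];
                self.cached = Some(entry);
                entry.2
            }
            Err(_) => GraphemeCat::Other,
        }
    }
}

fn main() {
    let mut lookup = CachedLookup::new();
    for c in "a\u{0301} b\u{0301}".chars() {
        println!("{:?} -> {:?}", c, lookup.category(c));
    }
}
```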
It might be worth doing the same thing for word, line, and sentence breaking. In some of those cases a table lookup may work out better than a chain of if branches for the ASCII fast path, though I expect the branches will typically be faster.
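For concreteness, here are the two ASCII fast-path styles side by side, using a made-up three-way classification (the real word-break property tables are much richer than this):

```rust
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
enum AsciiWordCat {
    Letter,
    Numeric,
    Other,
}

// Branch-based classification: usually compiles to a few compare/jump pairs.
fn classify_branches(b: u8) -> AsciiWordCat {
    if b.is_ascii_alphabetic() {
        AsciiWordCat::Letter
    } else if b.is_ascii_digit() {
        AsciiWordCat::Numeric
    } else {
        AsciiWordCat::Other
    }
}

// Table-based classification: a single load from a 128-entry const table.
const ASCII_WORD_TABLE: [AsciiWordCat; 128] = {
    let mut t = [AsciiWordCat::Other; 128];
    let mut i = 0usize;
    while i < 128 {
        let b = i as u8;
        if b.is_ascii_alphabetic() {
            t[i] = AsciiWordCat::Letter;
        } else if b.is_ascii_digit() {
            t[i] = AsciiWordCat::Numeric;
        }
        i += 1;
    }
    t
};

fn classify_table(b: u8) -> AsciiWordCat {
    ASCII_WORD_TABLE[b as usize]
}

fn main() {
    for b in b"a9 ,".iter().copied() {
        assert_eq!(classify_branches(b), classify_table(b));
        println!("{:?} -> {:?}", b as char, classify_table(b));
    }
}
```

Which one wins probably depends on how many categories the branches have to distinguish; it would need benchmarking either way.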
It may also be worth tweaking the binary search so that even when a character misses the cached range, it searches nearby entries first. This would help with e.g. Indic scripts, where the category changes rapidly from character to character but the code points are all close together.
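A sketch of what "search nearby first" could look like: on a cache miss, probe a small window of table entries around the last hit before falling back to the full binary search. The table contents and the probe width of 4 are made up for illustration:

```rust
// (range start, range end inclusive, category id) -- illustrative entries only.
const TABLE: &[(u32, u32, u8)] = &[
    (0x0900, 0x0903, 1),
    (0x0904, 0x0939, 0),
    (0x093A, 0x093C, 1),
    (0x093D, 0x093D, 0),
    (0x093E, 0x094F, 1),
];

fn in_range(cp: u32, entry: (u32, u32, u8)) -> bool {
    entry.0 <= cp && cp <= entry.1
}

fn lookup(cp: u32, last_idx: &mut usize) -> Option<u8> {
    // 1. Exact cache hit.
    if in_range(cp, TABLE[*last_idx]) {
        return Some(TABLE[*last_idx].2);
    }

    // 2. Probe a few neighbors of the cached index; consecutive characters in
    //    scripts like Devanagari tend to fall in nearby table entries.
    let lo = last_idx.saturating_sub(4);
    let hi = (*last_idx + 4).min(TABLE.len() - 1);
    for i in lo..=hi {
        if in_range(cp, TABLE[i]) {
            *last_idx = i;
            return Some(TABLE[i].2);
        }
    }

    // 3. Full binary search as the fallback.
    TABLE
        .binary_search_by(|&(a, b, _)| {
            if b < cp {
                std::cmp::Ordering::Less
            } else if a > cp {
                std::cmp::Ordering::Greater
            } else {
                std::cmp::Ordering::Equal
            }
        })
        .ok()
        .map(|i| {
            *last_idx = i;
            TABLE[i].2
        })
}

fn main() {
    let mut last = 0usize;
    for &cp in &[0x0915u32, 0x093E, 0x0930, 0x094D] {
        println!("U+{:04X} -> {:?}", cp, lookup(cp, &mut last));
    }
}
```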