walling / unorm

JavaScript Unicode 8.0 Normalization - NFC, NFD, NFKC, NFKD.
http://git.io/unorm

Ideas for optimizations #15


walling commented 10 years ago

Just tossing some ideas out there.

The technical report outlines the Quick_Check algorithm for fast verification of whether a given string is in a given normalization form: Detecting Normalization Forms.

These are YES / NO / MAYBE answers, so in theory it should be possible to implement this as two regexps (see the sketch after the list):

  1. Whitelist regexp: If this regexp matches the entire string, the answer is YES; otherwise continue.
  2. Blacklist regexp: If this regexp matches anywhere in the string, the answer is NO; otherwise MAYBE.
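
A rough sketch of the control flow, assuming the two regexps have been generated from the Quick_Check data (the character classes below are only placeholders, not the real generated patterns):

```js
// Placeholder patterns -- the real ones would be generated from the
// NFC_Quick_Check property in the Unicode Character Database.
var NFC_QC_YES = /^[\u0000-\u02FF]*$/;          // only characters with NFC_QC = Yes
var NFC_QC_NO  = /[\u0340\u0341\u0343\u0344]/;  // any character with NFC_QC = No

function quickCheckNFC(str) {
  if (NFC_QC_YES.test(str)) return 'yes';   // definitely already NFC
  if (NFC_QC_NO.test(str))  return 'no';    // definitely not NFC
  return 'maybe';                           // needs the full normalization check
}
```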

The regexps should be automatically generated. It is complicated by the fact that JavaScript regexps only support UCS-2 and not UTF-16, so we have to manually calculate surrogate pairs (and nested matching groups). See punycode.js.
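
For the UCS-2 issue, the generator would have to expand any astral code point into its surrogate pair before emitting it into a character class or group. A minimal sketch of that conversion (the helper name is made up; punycode.js uses similar arithmetic in its ucs2 encoding helper):

```js
// Hypothetical helper for the regexp generator: turn an astral code point
// into the two UCS-2 code units an ES5 JavaScript regexp can actually match.
function toSurrogatePair(codePoint) {
  var offset = codePoint - 0x10000;
  var high = 0xD800 + (offset >> 10);    // lead (high) surrogate
  var low  = 0xDC00 + (offset & 0x3FF);  // trail (low) surrogate
  return '\\u' + high.toString(16).toUpperCase() +
         '\\u' + low.toString(16).toUpperCase();
}

// For example, for U+1D160 this produces the regexp source text "\uD834\uDD60".
```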

If implemented, we could add test functions, e.g. 'foo'.isNormalization('NFC'). Internally they could be used to speed up any given normalization. Something like this: use the whitelist regexp to match the longest prefix that is already in the target normalization form, cut that out, and normalize the rest recursively. We only need to normalize the parts that are not already normalized, but we have to be careful about the boundary between the normalized and non-normalized parts. Some more strategies are outlined in the technical report: Optimization Strategies.
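
A very rough sketch of that strategy, assuming the placeholder whitelist from above and using unorm's existing nfc as the fallback (the boundary handling here is deliberately naive; a real implementation would have to back up to a safe boundary, e.g. the last starter):

```js
var unorm = require('unorm');

// Placeholder whitelist, anchored so it matches the longest NFC prefix.
var NFC_QC_YES_PREFIX = /^[\u0000-\u02FF]*/;

function nfcFast(str) {
  var prefix = NFC_QC_YES_PREFIX.exec(str)[0];
  if (prefix.length === str.length) return str;  // whole string is already NFC

  // Naive boundary handling: hand the last prefix character back to the full
  // normalizer, so e.g. "a" + U+0301 still composes across the boundary.
  // A real implementation would back up to the last safe (starter) position.
  var safe = Math.max(prefix.length - 1, 0);
  return str.slice(0, safe) + unorm.nfc(str.slice(safe));
}
```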

First and foremost, we should build a benchmark test suite, to actually find out whether these optimizations give a speed boost for long strings. It would also be nice to know how much they add to the size of the library.
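
As a starting point, even something as simple as timing unorm.nfc on precomposed vs. decomposed input of growing length would tell us where a fast path could pay off (a throwaway sketch, not the suggested suite itself):

```js
var unorm = require('unorm');

var cases = {
  precomposed: 'caf\u00E9 ',   // already NFC
  decomposed:  'cafe\u0301 '   // needs composition
};

Object.keys(cases).forEach(function (name) {
  [1e3, 1e4, 1e5, 1e6].forEach(function (n) {
    var input = new Array(n + 1).join(cases[name]);  // repeat the chunk n times
    var start = Date.now();
    unorm.nfc(input);
    console.log(name, input.length + ' chars:', (Date.now() - start) + ' ms');
  });
});
```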

walling commented 10 years ago

@phadej, any comments? Do you feel this is necessary, nice-to-have, or not important? I'm not sure how many browsers are going to support native ES6 normalization any time soon.


phadej commented 10 years ago

The benchmark suite is definitely something we should start with.

I'm not sure how fast or slow unorm is in the first place. It might be that those kinds of optimisations only make sense on large inputs (>1MB), which might be rare.

walling commented 10 years ago

True, a benchmark suite would be nice.

termi commented 10 years ago

> The regexps should be automatically generated. It is complicated by the fact that JavaScript regexps only support UCS-2 and not UTF-16, so we have to manually calculate surrogate pairs (and nested matching groups).

Can this project help you?

phadej commented 10 years ago

I work from time to time on https://github.com/phadej/tarzan, but it can't go beyond UCS-2 for now. I forgot that this issue was here (and tarzan could be used to generate the regexps). I'll try to find time to implement surrogate pairs in tarzan.