jtauber / pyuca

a Python implementation of the Unicode Collation Algorithm
MIT License

rethinking the Trie #14

Open jtauber opened 6 years ago

jtauber commented 6 years ago

The key to any speedup and/or reduction in memory consumption is likely to involve replacing the Trie.

The Trie only exists because we need to be able to do longest-prefix retrieval on a dictionary.

We could use a regular Python dictionary if incoming strings were tokenised by collation key.
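To illustrate what "longest-prefix retrieval" means here: given a table whose keys can be several characters long, the lookup at each position must prefer the longest matching key. A minimal sketch, using a made-up toy table (not the real collation data), of doing this naively against a plain dictionary:

```python
def longest_prefix(table, s, start=0):
    """Return the longest key of `table` that is a prefix of s[start:], or None.

    This is the operation the Trie makes efficient; here it is done
    by brute force, trying candidate prefixes longest-first.
    """
    max_len = max(map(len, table))
    for n in range(min(max_len, len(s) - start), 0, -1):
        key = s[start:start + n]
        if key in table:
            return key
    return None

# Toy table: "ch" is a multi-character key (a contraction).
toy_table = {"ch": (100,), "c": (10,), "h": (20,), "a": (1,)}
```

With this table, `longest_prefix(toy_table, "cha")` returns `"ch"` rather than `"c"`, which is exactly the behaviour a per-character dictionary lookup cannot give you directly.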

What I'm basically thinking is:

  1. build a regular Python dictionary for the collation table (some keys of which will consist of multiple characters)
  2. build a regex using the keys in the collation table
  3. use that regex to tokenise incoming strings (this has the effect of a Trie)
  4. look the tokens up (some of which will be multiple characters) in the Python dictionary
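The four steps above can be sketched roughly like this. The collation table here is a hypothetical toy stand-in, not the real DUCET data; the key trick is that Python's `re` alternation tries alternatives left to right, so sorting the keys longest-first makes the regex prefer the longest prefix at each position, which is what the Trie was providing:

```python
import re

# Step 1: a regular Python dictionary for the (toy) collation table.
# Some keys span multiple characters; values stand in for collation elements.
table = {
    "ch": (100,),   # a contraction sorts as a single element
    "c": (10,),
    "h": (20,),
    "a": (1,),
}

# Step 2: build a regex from the keys, longest first, so the
# alternation mimics longest-prefix matching.
pattern = re.compile(
    "|".join(re.escape(k) for k in sorted(table, key=len, reverse=True))
)

def collation_elements(s):
    # Steps 3 and 4: tokenise the incoming string with the regex,
    # then look each token (possibly multi-character) up in the dict.
    return [table[m.group(0)] for m in pattern.finditer(s)]
```

So `collation_elements("cha")` yields the element for `"ch"` followed by the element for `"a"`, matching what a Trie-based longest-prefix walk would produce. (A real implementation would also need a fallback for characters not in the table at all, which `finditer` silently skips here.)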

It will be worth having some sort of timed test to see if this approach actually makes a (positive) difference but I suspect it might.

jtauber commented 5 years ago

I'm not actually sure this approach is feasible in the general case: the algorithm actually involves writing back to the string being tokenised, so it can't be "pre-tokenised".