Currently the tokenizer repeats the token matching work over and over. Caching
is tricky because lookAhead() calls occur inside calls to nextToken(), so a
naive cache can be corrupted by that re-entrancy. The previous caching
implementation had bugs for exactly this reason, but the optimization is still
worth getting in.
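One way to sidestep the re-entrancy problem is to key the cache on the absolute input offset rather than on any tokenizer cursor state, so lookAhead() and nextToken() share the same memo without interfering. The sketch below is purely illustrative (the class, token pattern, and method names are assumptions, not the project's actual API):

```python
# Hypothetical sketch: a position-keyed token cache that stays correct
# even when lookahead happens inside normal token consumption.
import re

# Toy token pattern: integers, identifiers, or single punctuation chars.
TOKEN_RE = re.compile(r'\s*(\d+|[A-Za-z_]\w*|\S)')

class Tokenizer:
    def __init__(self, text):
        self.text = text
        self.pos = 0
        self.cache = {}   # start offset -> (token, end offset)
        self.matches = 0  # counts real regex matches, to show caching works

    def _token_at(self, pos):
        # Memoize by absolute offset; safe under re-entrant lookahead
        # because the result never depends on self.pos.
        hit = self.cache.get(pos)
        if hit is not None:
            return hit
        m = TOKEN_RE.match(self.text, pos)
        self.matches += 1
        result = (m.group(1), m.end()) if m else (None, pos)
        self.cache[pos] = result
        return result

    def look_ahead(self, n=1):
        # Peek n tokens forward without moving the cursor.
        pos = self.pos
        tok = None
        for _ in range(n):
            tok, pos = self._token_at(pos)
            if tok is None:
                break
        return tok

    def next_token(self):
        tok, end = self._token_at(self.pos)
        self.pos = end
        return tok
```

With this layout, a lookAhead() issued while a token is being consumed only reads and populates the offset-keyed memo; it never mutates the cursor, so each input position is matched at most once.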
Original issue reported on code.google.com by peterj...@gmail.com on 15 Apr 2007 at 11:52