Try a 50 KB minified CSS file and the parser will choke. This comes down to `tokenize()` in tokenize.js: it repeatedly shortens `str` via `str = str.substr(match.length)`, which copies the remaining input on every token, making tokenization quadratic in file size. Possible fixes: start the patterns at a given offset instead of slicing, work on buffers, or re-engineer the whole matching algorithm to work entirely in JavaScript using a character-by-character approach.
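As a rough illustration of the offset-based idea, here is a minimal sketch (the token names and patterns are hypothetical, not the parser's real grammar) using sticky (`y`) regexes, which match only at `lastIndex`, so each pattern is tried at the current position without ever copying the string:

```js
// Hypothetical sketch: offset-based tokenizing with sticky regexes.
// Each substr() call copies the remainder of the input, so a file with
// n tokens costs O(n^2) in string copies; advancing an index is O(n).

const TOKEN_PATTERNS = [
  { type: 'whitespace', re: /\s+/y },
  { type: 'ident',      re: /[-\w]+/y },
  { type: 'punct',      re: /[{}:;,()]/y },
  { type: 'other',      re: /[\s\S]/y }, // fallback: consume one character
];

function tokenize(str) {
  const tokens = [];
  let pos = 0;
  while (pos < str.length) {
    for (const { type, re } of TOKEN_PATTERNS) {
      re.lastIndex = pos;           // start matching at the current offset
      const match = re.exec(str);   // sticky flag: match only at lastIndex
      if (match) {
        tokens.push({ type, value: match[0] });
        pos = re.lastIndex;         // advance without copying the string
        break;
      }
    }
  }
  return tokens;
}

console.log(tokenize('a{color:red}'));
```

In engines without sticky-flag support, the same effect can be approximated with the `g` flag by setting `lastIndex` before each `exec()` and checking that `match.index === pos`, since a `g`-flag match may begin past the starting offset.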