The lexer is now faster and uses less memory: with the OpenCL backend, tokenizing a 1 GiB file with a LISP lexer dropped from 1.4 seconds to 0.3 seconds.
Optimizations:
Replaced a needlessly expensive scheme that resolved tokens by reconstructing a path through the DFA; tokens are now resolved from the last DFA state alone.
Precomputes every composition of the DFA's transition functions and enumerates them into a lookup table.
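A rough sketch of how these two optimizations fit together, under assumptions not stated in the notes: a toy two-token DFA is used here, and all names (`compose`, `lex_one`, the state set) are illustrative, not the project's actual API. Each input symbol induces a transition function state -> state; since composition of such functions is associative, runs of symbols can be combined in any order, and precomputing the distinct compositions turns composition itself into a table lookup. The token is then read off the final state, with no path reconstruction.

```python
# Hypothetical two-token DFA: 'a' forms identifiers, '0' forms numbers.
DEAD = 0
START, IDENT, NUM = 1, 2, 3
STATES = (DEAD, START, IDENT, NUM)

# delta[symbol] is that symbol's transition function, as a tuple indexed by state.
delta = {
    'a': (DEAD, IDENT, IDENT, DEAD),
    '0': (DEAD, NUM, DEAD, NUM),
}

def compose(f, g):
    """Return the function 'apply f, then g' as a tuple."""
    return tuple(g[f[s]] for s in STATES)

# Enumerate every distinct composition reachable from the per-symbol functions,
# giving each a small integer id, and tabulate composition over those ids.
funcs = list(delta.values())
ids = {f: i for i, f in enumerate(funcs)}
table = {}
changed = True
while changed:                       # fixed point: stop when no new functions appear
    changed = False
    for i in range(len(funcs)):
        for j in range(len(funcs)):
            if (i, j) in table:
                continue
            h = compose(funcs[i], funcs[j])
            if h not in ids:
                ids[h] = len(funcs)
                funcs.append(h)
                changed = True
            table[(i, j)] = ids[h]

TOKEN_OF = {IDENT: 'IDENT', NUM: 'NUM'}

def lex_one(lexeme):
    """Combine per-symbol function ids via table lookups only,
    then read the token class from the last state."""
    fid = ids[delta[lexeme[0]]]
    for c in lexeme[1:]:
        fid = table[(fid, ids[delta[c]])]
    return TOKEN_OF.get(funcs[fid][START], 'ERROR')
```

Because the table lookups are associative, a parallel backend (such as OpenCL) could combine chunks of input independently and merge them afterwards; this sketch only shows the sequential shape of the idea.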