The idea is to first analyze the input string and turn it into an array of ANSI codes and characters. Slicing is then done by operating on that array. This effectively rewrites the entire library, but yields higher performance (+149% throughput, and more when reusing some of the work for slicing the same string multiple times). For my methodology on testing the performance, see #37. Some of the improvements are also taken from there.
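A minimal sketch of the tokenize-then-slice idea. All names here (`Token`, `tokenize`, `sliceTokens`) are illustrative, not this PR's actual API, and the tokenizer only handles SGR-style `\x1b[...m` sequences:

```typescript
// Illustrative tokenizer: split the input into ANSI SGR escape sequences
// and individual plain characters. Names and shapes are hypothetical.
type Token =
  | { type: "ansi"; code: string }
  | { type: "char"; value: string };

// Matches a single SGR sequence like "\x1b[31m" at the current position.
const ANSI_RE = /^\x1b\[[0-9;]*m/;

function tokenize(input: string): Token[] {
  const tokens: Token[] = [];
  let i = 0;
  while (i < input.length) {
    const match = ANSI_RE.exec(input.slice(i));
    if (match) {
      tokens.push({ type: "ansi", code: match[0] });
      i += match[0].length;
    } else {
      tokens.push({ type: "char", value: input[i] });
      i += 1;
    }
  }
  return tokens;
}

// Slice by counting only visible characters; escape codes are passed
// through so styling survives (a real implementation would also drop
// codes that no longer affect the kept range).
function sliceTokens(tokens: Token[], begin: number, end: number): string {
  let out = "";
  let visible = 0;
  for (const token of tokens) {
    if (token.type === "ansi") {
      out += token.code;
    } else {
      if (visible >= begin && visible < end) {
        out += token.value;
      }
      visible += 1;
    }
  }
  return out;
}
```

For example, `sliceTokens(tokenize("\x1b[31mabc\x1b[39m"), 0, 2)` returns `"\x1b[31mab\x1b[39m"`, keeping the color and reset codes around the sliced characters.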
I had to change two tests, though: one unnecessarily used two separate foreground colors where the second overwrote the first. And in another, the expected string had the end codes in the same order as the start codes, while all other tests had them in the opposite order. I saw no difference when printing those strings.
original: 20 822 ops/s, ±1.70%
(snip)
other PR: 41 278 ops/s, ±4.02%
this PR: 51 791 ops/s, ±2.93%
And if the result of the tokenization is memoized (export a tokenize function, and add a copy of the sliceAnsi function that operates on a token array instead of a string), repeated slices of the same input (as in the test case) are much faster (+300% throughput):
this PR, with reusing work: 84 850 ops/s, ±3.21%
FYI, I have TypeScript code for this change. Let me know if that is preferred.
This PR is an alternative to https://github.com/chalk/slice-ansi/pull/37
closes: #37