ThePrimeagen / ts-rust-zig-deez


Elixir lexer now with 300% more blazingly fast and 300% less memory usage #255

Open madlep opened 1 year ago

madlep commented 1 year ago

Was thinking about this, and worked in a bunch of optimisations to make things ✨ blazingly faster

TL;DR - 3x faster, 3x less memory usage

Apologies to @ryanwinchester . Was gonna make just a couple of tweaks, but ended up touching pretty much all the original code. The same logic is still there. It's just all being called a bit differently.
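The actual diff isn't reproduced here, but the change being described is the classic one for Elixir lexers: instead of a helper that returns an intermediate `{token, rest}` tuple for the caller to destructure, each function clause matches the binary directly and tail-recurses with the token prepended to an accumulator. A hedged sketch of that style (module and function names are illustrative, not the code in this PR):

```elixir
defmodule Lexer do
  # Public entry point: lex the whole input into a token list.
  def tokenize(input), do: tokenize(input, [])

  # One clause per token. Each clause consumes bytes and recurses
  # directly -- no intermediate {token, rest} tuple is allocated.
  defp tokenize(<<>>, acc), do: Enum.reverse([:eof | acc])

  defp tokenize(<<c, rest::binary>>, acc) when c in [?\s, ?\t, ?\n, ?\r],
    do: tokenize(rest, acc)

  # Longer operators must be matched before their prefixes.
  defp tokenize(<<"==", rest::binary>>, acc), do: tokenize(rest, [:eq | acc])
  defp tokenize(<<"=", rest::binary>>, acc), do: tokenize(rest, [:assign | acc])
  defp tokenize(<<"+", rest::binary>>, acc), do: tokenize(rest, [:plus | acc])

  # Anything unrecognised becomes an :illegal token.
  defp tokenize(<<_, rest::binary>>, acc), do: tokenize(rest, [:illegal | acc])
end

# Lexer.tokenize("= +") #=> [:assign, :plus, :eof]
```

Because the recursion is a tail call and tokens are prepended to a list that is reversed once at the end, the hot loop allocates only the cons cells for the token list, which is consistent with the memory and reduction-count drops in the numbers below.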

```
Operating System: macOS
CPU Information: Apple M1 Pro
Number of Available Cores: 8
Available memory: 16 GB
Elixir 1.14.5
Erlang 26.0.1

Benchmark suite executing with the following configuration:
warmup: 20 s
time: 20 s
memory time: 5 s
reduction time: 5 s
parallel: 1
inputs: none specified
Estimated total run time: 1.67 min
```
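For anyone wanting to reproduce numbers like these, the configuration above corresponds to a Benchee call along these lines (a sketch assuming the `benchee` dependency is installed; `Lexer`, `OldLexer`, and `input` are stand-ins for whatever is being compared):

```elixir
# Hypothetical harness matching the configuration printed above.
input = File.read!("bench/fixture.monkey")

Benchee.run(
  %{
    "Lexer" => fn -> Lexer.tokenize(input) end,
    "OldLexer" => fn -> OldLexer.tokenize(input) end
  },
  warmup: 20,          # seconds of warmup before measuring
  time: 20,            # seconds of runtime measurement
  memory_time: 5,      # seconds of memory measurement
  reduction_time: 5,   # seconds of BEAM reduction counting
  parallel: 1
)
```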

```
Benchmarking Lexer ...
Benchmarking OldLexer ...

Name               ips        average  deviation         median         99th %
Lexer         515.43 K        1.94 μs  ±1439.50%        1.83 μs        2.13 μs
OldLexer      180.10 K        5.55 μs   ±316.21%        5.29 μs        9.79 μs

Comparison:
Lexer         515.43 K
OldLexer      180.10 K - 2.86x slower +3.61 μs
```

Memory usage statistics:

```
Name        Memory usage
Lexer            7.95 KB
OldLexer        24.13 KB - 3.04x memory usage +16.19 KB
```

**All measurements for memory usage were the same**

Reduction count statistics:

```
Name     Reduction count
Lexer                394
OldLexer             950 - 2.41x reduction count +556
```

**All measurements for reduction count were the same**

ryanwinchester commented 1 year ago

I 100% had the intermediary step with variable assignment (returning {token, rest}) because I knew it was going to be reviewed on-stream and I wanted it to be easier to read and focus on the sweetness of binary pattern-matching.

They did the on-stream review, so cool with optimizing it.
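For context, the intermediate-step style being described looks roughly like this (a hedged sketch, not the actual repository code): a `next_token/1` helper returns a `{token, rest}` tuple, and the driver destructures it on each step, which keeps each token's pattern match visually separate from the loop.

```elixir
defmodule ReadableLexer do
  # Driver: pull one token at a time and destructure the
  # intermediate {token, rest} tuple -- easy to follow on-stream.
  def tokenize(input) do
    {token, rest} = next_token(input)

    case token do
      :eof -> [:eof]
      token -> [token | tokenize(rest)]
    end
  end

  # One binary pattern match per token; longest match first.
  defp next_token(<<"==", rest::binary>>), do: {:eq, rest}
  defp next_token(<<"=", rest::binary>>), do: {:assign, rest}
  defp next_token(<<"+", rest::binary>>), do: {:plus, rest}
  defp next_token(<<_, rest::binary>>), do: {:illegal, rest}
  defp next_token(<<>>), do: {:eof, <<>>}
end

# ReadableLexer.tokenize("==+") #=> [:eq, :plus, :eof]
```

The tuple allocation per token and the non-tail-recursive driver are exactly what the optimised version trades away for the ~3x speedup measured above.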

Fa-C-Shus commented 1 year ago

I think your work and review matter, but I'm no collaborator either.

madlep commented 1 year ago

> I 100% had the intermediary step with variable assignment (returning {token, rest}) because I knew it was going to be reviewed on-stream and I wanted it to be easier to read and focus on the sweetness of binary pattern-matching.
>
> They did the on-stream review, so cool with optimizing it.

Yeah, I watched that; it was good to follow along with. The previous version is much more readable than this one. It's definitely sacrificing readability for performance here.