Closed franciscod closed 8 months ago
...maybe it makes sense to generate the expressions just once and then hardcode them? this would save ~1.8s on each run, and benchmarking array manipulation and string concatenation isn't the focus here...
Yes, I agree it's better to make benchmarks as specific as possible so we can better isolate performance differences. Can you make this change? I can merge this PR afterwards.
I've split out the generation to a helper python script, and pasted the resulting strings into the benchmark.
Also, I noticed that parsing alone was running slower than parse+execute. Suspecting caching effects, I added an initial "cold" run followed by a second parse run (identical to the first), and the results now look more reasonable.
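A minimal sketch of that cold/warm measurement pattern (the `parse` stand-in and expression string here are hypothetical, just to illustrate the idea of discarding the first timing):

```python
import time

def bench(label, fn, *args):
    # Time a single call and print the elapsed wall-clock time.
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.6f}s")
    return result

expr = "(x0 + x1) * (x2 - x3)"
parse = compile  # hypothetical stand-in for the real parser

# First run warms up caches; the second, identical run is the one to trust.
bench("parse (cold)", parse, expr, "<expr>", "eval")
bench("parse (warm)", parse, expr, "<expr>", "eval")
```

The point is only that the first and second runs are exactly the same call, so any difference between them is cache/warm-up overhead rather than parser work.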
There are three phases: string generation, parsing, and executing.
The string generation phase is as follows:
Only the four basic arithmetic operations (two-argument, i.e. binary) are used. Generation starts with a pool of nodes (initially each node is just a variable) and repeatedly combines two randomly chosen nodes until only one node remains.
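The combining scheme above can be sketched roughly like this (variable naming and the fixed seed are assumptions for illustration, not the actual helper script):

```python
import random

def generate_expression(num_vars=10, seed=0):
    # Fixed seed so the generated strings are reproducible and can be hardcoded.
    rng = random.Random(seed)
    # The pool starts as bare variables; each step removes two nodes and
    # pushes back one combined binary-operation node.
    pool = [f"x{i}" for i in range(num_vars)]
    ops = ["+", "-", "*", "/"]
    while len(pool) > 1:
        a = pool.pop(rng.randrange(len(pool)))
        b = pool.pop(rng.randrange(len(pool)))
        pool.append(f"({a} {rng.choice(ops)} {b})")
    return pool[0]

print(generate_expression(4))
```

Since every step replaces two nodes with one, a pool of n variables always yields exactly n - 1 binary operations.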
Parsing and executing are straightforward uses of the Expression class.
Reference results on M2 mac mini: