niklas-heer opened this issue 1 year ago
I believe setting rounds to 0 should be about equivalent.
@francescoalemanno that is a brilliant idea. Would at least make things way easier to implement.
@niklas-heer
I reimplemented the benchmark for C++, Java, Golang, Python, and JavaScript: https://github.com/Glavo/leibniz-benchmark
I ran twenty rounds of benchmarking and averaged the time spent over the last ten rounds. Here are the results I got:
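The scheme described above (run twenty rounds, treat the early rounds as warm-up, and average only the last ten) could be sketched like this. This is just an illustration, not code from the linked repo; `run_once` stands in for whatever invokes one round of a language's benchmark:

```python
import statistics

def average_of_last(run_once, rounds=20, keep=10):
    """Run `rounds` rounds of a benchmark and average the last `keep`,
    discarding the earlier rounds as warm-up (JIT compilation, caches, etc.)."""
    times = [run_once() for _ in range(rounds)]
    return statistics.mean(times[-keep:])
```

Averaging only the tail means one-time effects like JIT warm-up in Java or JavaScript do not skew the reported number.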
I think the "clean" result is the one that reflects the language's real performance: the main factor affecting the raw results right now is startup and loading time, not the language itself.
As suggested in #51 by @HenrikBengtsson, "cleaner" data for the calculation of pi could be gathered by measuring each language's performance with and without calculating pi, and then subtracting the one from the other.
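The subtraction idea can be sketched as follows, assuming a simple Leibniz-series implementation (the function names here are illustrative, not part of `scbench`). Running the same program with zero terms gives a baseline that captures startup and IO, and the difference isolates the computation:

```python
import time

def leibniz_pi(terms):
    """Approximate pi with the Leibniz series: 4 * sum((-1)^k / (2k+1))."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

def timed(fn, *args):
    """Return the wall-clock seconds taken by one call to fn(*args)."""
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

# "Real world" time: startup/IO plus the pi calculation.
full = timed(leibniz_pi, 1_000_000)
# Baseline: the same run with the calculation effectively skipped
# (this is the "rounds = 0" idea from the comment above).
baseline = timed(leibniz_pi, 0)
# "Clean" time: the difference isolates the computation itself.
clean = full - baseline
```

For compiled languages the baseline run would be a separate binary that skips the loop, but the principle is the same.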
I think it would be best to keep both sets of data: "real world" data with startup and IO, and "clean" data for just calculating pi. I would keep both in the CSV, but I'm not sure which one to favour for the image creation. Probably the "clean" data 🤔
In terms of implementation, I can see two approaches:
Obviously, both would require adjustments to `scbench` and the analysis step.