Closed: oscardssmith closed this 1 year ago
Thank you for your contribution @oscardssmith. 🙏 Also thank you @giordano. 👍
Thanks for merging this. Any chance you can update the readme?
@oscardssmith I'm currently working on a new version using earthly and a new custom Go application to be able to isolate test runs better. Having a big Docker image doesn't work very well. The new version will be able to use the best Docker image for the language.
I'll keep you updated.
If you want to track progress: https://github.com/niklas-heer/speed-comparison/tree/next
thanks!
@oscardssmith thank you for your patience. I'm done reworking. The results are very different. But it has also been over 4 years since the last run, so it's probably also due to changes in the languages themselves. Here are the results:
@niklas-heer how are you running the benchmark? In particular, are you avoiding including compilation time by running the function twice? On my 7-year-old laptop this function takes about 4 ms once compilation latency is removed:
```julia
julia> @time f(rounds) # first run, it includes compilation
  0.013784 seconds (31.09 k allocations: 1.708 MiB, 72.31% compilation time)
3.1415916535917745

julia> @time f(rounds) # second run, it doesn't include compilation
  0.004539 seconds (1 allocation: 16 bytes)
3.1415916535917745

julia> using BenchmarkTools # use package for more accurate benchmarking

julia> @btime f($rounds) # interpolate `rounds` to avoid accessing a non-const global variable
  3.793 ms (0 allocations: 0 bytes)
3.1415916535917745
```
Are you including whole Julia startup time by any chance? That'd completely dwarf run time of such quick functions, making the benchmark pretty meaningless.
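For context, the `f` being timed approximates π with the Leibniz series. A minimal sketch of such a function (written in Python here purely for illustration; the actual `leibniz.jl` in the repository may differ):

```python
def leibniz(rounds):
    """Approximate pi via the Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
    acc = 0.0
    for i in range(rounds):
        acc += (-1) ** i / (2 * i + 1)
    return 4 * acc

# The error shrinks roughly like 1/rounds, so around a million
# iterations are needed for ~6 correct digits.
```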
@giordano I'm running it in Docker containers on my MacBook. I wouldn't look too much at the absolute value of the results, but more on the relative performance.
> Are you including whole Julia startup time by any chance?
Yes, the Go application measures the time it takes a given command to be executed. Here is an example:
```shell
./scbench "julia leibniz.jl" -i $iterations -l "julia --version" --export json --lang "Julia"
```
This means `julia leibniz.jl` is measured as a whole. There is not really a way around it if it should be automated.
What do you think?
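For illustration, the whole-process measurement described above can be sketched as follows (a Python sketch of the idea only; `scbench` itself is a Go program whose internals are not shown in this thread):

```python
import subprocess
import time

def time_command(cmd, iterations=3):
    """Measure the wall time of a full command, as scbench does:
    interpreter startup and any JIT compilation are included in
    every measurement, because a fresh process is launched each time."""
    times = []
    for _ in range(iterations):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        times.append(time.perf_counter() - start)
    return min(times)

# e.g. time_command(["julia", "leibniz.jl"])
```

This is exactly why the numbers differ so much from an in-process benchmark: the per-process startup cost is paid on every iteration.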
The BenchmarkTools.jl package (you need to install it first with `using Pkg; Pkg.add("BenchmarkTools")`) does the looping business under the hood. For example:
```console
$ julia --startup-file=no -L leibniz.jl -E 'print("\033[2K\033[1G"); using BenchmarkTools; @belapsed(f($rounds))'
0.003792468
```

The printing is there to erase the value of pi printed by the script.
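The warm-up-then-repeat pattern that BenchmarkTools.jl automates can be sketched generically (Python for illustration; the helper name `benchmark` is my own, not part of any of the tools discussed here):

```python
import time

def benchmark(f, *args, repeats=10):
    """Call f once to absorb any one-time cost (JIT compilation,
    caching), then return the minimum wall time over repeated calls."""
    f(*args)  # warm-up run, deliberately excluded from measurement
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        f(*args)
        best = min(best, time.perf_counter() - start)
    return best
```

Taking the minimum over several runs filters out scheduler noise, which is also roughly what `@btime` reports.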
> This means `julia leibniz.jl` is measured as a whole. There is not really a way around it if it should be automated. What do you think?
I think this goes against what you said you do on the README, which states
> Are the compile times included in the measurements?
> No they are not included, because when running the program in the real world this would also be done before.
You are including Julia's internal compilation / startup time, as well as the JIT compilation of the actual function in question.
Putting the code in a function means that you get type stability, dropping the time down to 145 ms on my computer.