JuliaCI / BenchmarkTools.jl

A benchmarking framework for the Julia language

`tune!` on benchmarkable with evals set. #328

Closed by vchuravy 9 months ago

vchuravy commented 10 months ago

The docs state:

Note that the setup and teardown phases are executed for each sample, not each evaluation. Thus, the sorting example above wouldn't produce the intended results if evals/sample > 1 (it'd suffer from the same problem of benchmarking against an already sorted vector).

So looking at the example:

julia> using BenchmarkTools

julia> x = rand(100000);

julia> b = @benchmarkable (@assert !issorted(y); sort!(y)) setup=(y = copy($x)) evals=1
Benchmark(evals=1, seconds=5.0, samples=10000)

julia> run(b) # Succeeds since evals is set to 1 
# ...

julia> tune!(b)
ERROR: AssertionError: !(issorted(y))
Stacktrace:
  [1] var"##core#228"(y::Vector{Float64})
    @ Main ~/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:489
  [2] var"##sample#229"(::Tuple{}, __params::BenchmarkTools.Parameters)
    @ Main ~/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:497
  [3] _lineartrial(b::BenchmarkTools.Benchmark, p::BenchmarkTools.Parameters; maxevals::Int64, kwargs::@Kwargs{})
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:161
  [4] _lineartrial(b::BenchmarkTools.Benchmark, p::BenchmarkTools.Parameters)
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:152
  [5] #invokelatest#2
    @ BenchmarkTools ./essentials.jl:887 [inlined]
  [6] invokelatest
    @ BenchmarkTools ./essentials.jl:884 [inlined]
  [7] lineartrial
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:35 [inlined]
  [8] tune!(b::BenchmarkTools.Benchmark, p::BenchmarkTools.Parameters; progressid::Nothing, nleaves::Float64, ndone::Float64, verbose::Bool, pad::String, kwargs::@Kwargs{})
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:251
  [9] tune!
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:247 [inlined]
 [10] tune!(b::BenchmarkTools.Benchmark)
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:247
 [11] top-level scope
    @ REPL[7]:1
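For context: `tune!` searches for a good `evals` by re-running the core expression with increasing evaluation counts, overriding the `evals=1` set above. Since `setup` runs once per sample, every evaluation after the first sees an already-sorted vector. A minimal reproduction without `tune!`, using `run`'s keyword overrides:

julia> run(b, evals=2)  # should throw the same AssertionError: the 2nd evaluation reuses the sorted y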

The docs also state:

evals: The number of evaluations per sample. For best results, this should be kept consistent between trials. A good guess for this value can be automatically set on a benchmark via tune!, but using tune! can be less consistent than setting evals manually (which bypasses tuning). Defaults to BenchmarkTools.DEFAULT_PARAMETERS.evals = 1. If the function you study mutates its input, it is probably a good idea to set evals=1 manually.

So why is this a problem? PkgBenchmark.jl calls `tune!` on every benchmarkable, which changes `evals` and leads to incorrect results when benchmarking mutating functions.

Not sure who maintains PkgBenchmark these days, but cc: @KristofferC @shashi @DilumAluthge

A potential solution would be to add a new parameter `tuneable` that skips tuning in all cases.
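Note that a harness can't just compare `b.params.evals` to `BenchmarkTools.DEFAULT_PARAMETERS.evals` to detect a manual setting, because the default is already `1`. A hypothetical sketch of how such a parameter might look (`tuneable` does not exist in BenchmarkTools today):

julia> b = @benchmarkable sort!(y) setup=(y = copy($x)) evals=1 tuneable=false

julia> tune!(b)  # would return immediately, leaving evals=1 untouched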

gdalle commented 10 months ago

@vchuravy I think that's precisely what I addressed with #318, and also the reason why I think BenchmarkTools.jl v2.0 is long overdue. But I would appreciate a discussion around it, because that would mean a breaking release for a very central package.
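For reference, my rough reading of the behavior after #318: `Parameters` gains an `evals_set` flag recording whether `evals` was passed explicitly, and `tune!` leaves such benchmarks alone.

julia> b = @benchmarkable sort!(y) setup=(y = copy(rand(1000))) evals=1;

julia> b.params.evals_set  # true, because evals was given explicitly; tune!(b) will not override it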

gdalle commented 10 months ago

https://github.com/JuliaCI/BenchmarkTools.jl/milestone/1

gdalle commented 10 months ago

@vchuravy can you confirm that the fix works for you?

gdalle commented 9 months ago

Closing since the example works on master.
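For reference, the original example on a version with the fix (assuming the `evals_set` guard from #318 is in place):

julia> x = rand(100000);

julia> b = @benchmarkable (@assert !issorted(y); sort!(y)) setup=(y = copy($x)) evals=1;

julia> tune!(b);  # completes without the AssertionError; evals stays pinned at 1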