Closed: aviatesk closed this 2 years ago
The failure on Julia nightly is because this added benchmark suite isn't tuned yet, and thus it gets tuned at run time with something like `evals = 2`: https://github.com/JuliaCI/BaseBenchmarks.jl/runs/4379154336?check_suite_focus=true#step:5:7094
That effectively disables the `setup` settings, and this causes the failure.
I confirmed this benchmark suite works correctly on my machine.
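For context on the failure mode: BenchmarkTools runs `setup` once per sample while the benchmark body is evaluated `evals` times per sample, so with `evals = 2` the second evaluation reuses whatever state the setup created. A minimal sketch of that interaction, with a placeholder body standing in for the actual inference benchmarks:

```julia
using BenchmarkTools

# `consume!` is a stand-in for a benchmark body that mutates single-use state,
# much like the inference benchmarks do with a freshly constructed inference frame.
consume!(state) = (isempty(state) && error("state was already consumed"); empty!(state))

b = @benchmarkable consume!(state) setup = (state = [1])
run(b; evals = 1)  # fine: `setup` provides a fresh `state` for every evaluation
run(b; evals = 2)  # fails: the 2nd evaluation in each sample sees the consumed `state`
```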
I think you need to specify `evals=1` to `@benchmarkable`.
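Presumably something like this (a hedged sketch; the body is a placeholder, not the code added in this PR):

```julia
using BenchmarkTools

# Pinning `evals = 1` on the benchmark itself makes `setup` run before
# every single evaluation of the body.
b = @benchmarkable sum(xs) setup = (xs = rand(100)) evals = 1
run(b)
```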
Even though I set it manually here? https://github.com/JuliaCI/BaseBenchmarks.jl/blob/d3861653b3f88125aa5db5506650a352f0b1dece/src/inference/InferenceBenchmarks.jl#L173
That will work, assuming no other code later calls `tune` on it.
Ah, `evals = 2` is specified for our test case: https://github.com/JuliaCI/BaseBenchmarks.jl/blob/02548823de3a56da5ed9e5d79fef845c2f16d93b/test/runtests.jl#L10-L14
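That would explain it: keyword arguments passed to `run` override the parameters stored on the benchmark itself, so an `evals = 1` set via `@benchmarkable` does not survive such a call. A hedged illustration (not the actual test code):

```julia
using BenchmarkTools

b = @benchmarkable sum(xs) setup = (xs = rand(100)) evals = 1
run(b)             # respects the `evals = 1` configured on the benchmark
run(b; evals = 2)  # the run-time keyword wins: two evaluations per `setup`
```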
This commit sets up a basic infrastructure for benchmarking the Julia-level compilation pipeline.

`InferenceBenchmarks` is based on `InferenceBenchmarker <: AbstractInterpreter`, which maintains its own global inference cache, and so it allows us to run the compilation pipeline multiple times while avoiding reuse of the caches generated by previous compilations.

I set up a top-level benchmark group named "inference" (`InferenceBenchmarks`), which is composed of the following subgroups (a sketch of the layout follows the list):
- "inference": just benchmarks the overall Julia-level compilation pipeline
- "abstract interpretation": benchmarks only abstract interpretation, i.e. without optimization
- "optimization": benchmarks only optimization
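A minimal sketch of that layout using the standard BenchmarkTools group API (assumed structure, not the exact code in src/inference/InferenceBenchmarks.jl):

```julia
using BenchmarkTools

const SUITE = BenchmarkGroup()
g = SUITE["inference"] = BenchmarkGroup()
g["inference"]               = BenchmarkGroup()  # whole Julia-level compilation pipeline
g["abstract interpretation"] = BenchmarkGroup()  # abstract interpretation only (no optimization)
g["optimization"]            = BenchmarkGroup()  # optimization only
```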
Here is an example of a benchmark result obtained by comparing these two commits of JuliaLang/julia, 5c357e9 and d515f05:

This result is very satisfying because the refactor added in d515f05 certainly improved Julia-level compilation performance by avoiding domtree construction in the SROA pass in many cases.