JuliaCI / BenchmarkTools.jl

A benchmarking framework for the Julia language

Feature Idea: Custom Benchmarking Metric #176

Open JonasIsensee opened 4 years ago

JonasIsensee commented 4 years ago

Hi,

in my research we're developing an adaptive solver for agent-based simulations, and we have built a benchmark suite with PkgBenchmark. This already helps a lot, but the results can sometimes be quite confusing because the runtime of the full solving process depends strongly on the adaptive solver and its heuristics. One thing that would improve our benchmarks significantly would be the ability to include an additional custom metric in our benchmark pipeline (in this case, that would simply be the number of iterations).

We already have one working but inefficient way of doing this:

@benchmarkable sleep(iterations/1000) setup=(iterations=solve(...))

which adds an entry to the PkgBenchmark result/judge files that hints at the number of iterations, but this is of course very inefficient.
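
For context, here is a minimal, self-contained sketch of this workaround as it would look in a PkgBenchmark-style suite; `fake_solve` is a hypothetical stand-in for our actual solver and is not part of any package:

```julia
using BenchmarkTools

# Hypothetical stand-in for the adaptive solver: returns the solution
# together with the number of iterations the heuristics needed.
function fake_solve(n)
    iterations = rand(50:150)
    return sum(rand(n)), iterations
end

SUITE = BenchmarkGroup()

# The real benchmark: how long does solving take?
SUITE["solve"] = @benchmarkable fake_solve(1000)

# The workaround: encode the iteration count as a sleep duration so it shows
# up as an extra "time" entry in the PkgBenchmark result/judge files.
SUITE["iterations"] = @benchmarkable sleep(iters / 1000) setup=(iters=fake_solve(1000)[2])
```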

I also tried implementing a macro for this myself by essentially duplicating the code for @benchmarkable, but so far this lacks generality, and it is not clear how it should fit into the rest of the logic. (https://github.com/JonasIsensee/BenchmarkTools.jl/tree/mytrial)

What are your thoughts? Would this be useful to others as well? Could this be done in a slightly more general way?

gdalle commented 1 year ago

So, more generally, this could mean including the output of the benchmarked function in the results?

JonasIsensee commented 1 year ago

Yes, essentially that. One option would be to require simple floating-point output (so that it can be automatically reduced to a mean/std); a much more flexible option would be to accept a user-provided metric function that takes the time and the return value and produces a numerical metric.
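
To make the shape of that idea concrete, here is a rough, hypothetical sketch; none of this is existing BenchmarkTools API, and `run_with_metric` and `fake_solve` are made up purely for illustration:

```julia
using Statistics

# Hypothetical interface: run a workload several times and reduce a
# user-provided metric of (elapsed time, return value) to mean/std.
function run_with_metric(f, metric; samples = 100)
    values = Float64[]
    for _ in 1:samples
        t0 = time_ns()
        ret = f()
        elapsed = (time_ns() - t0) / 1e9   # elapsed time in seconds
        push!(values, metric(elapsed, ret))
    end
    return (mean = mean(values), std = std(values))
end

# Stand-in "solver" that returns (result, iteration count).
fake_solve(n) = (sum(rand(n)), rand(50:150))

# Example: the metric ignores the time and reports the iteration count.
stats = run_with_metric(() -> fake_solve(1000), (t, ret) -> ret[2])
```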

gdalle commented 1 year ago

See #314