jakebolewski opened this issue 8 years ago
I think this is because we removed the `@noinline` from here: https://github.com/johnmyleswhite/Benchmarks.jl/blob/f5ef97e2ca1adf80c47064f8616e77b4b76b3b8c/src/benchmarkable.jl#L68
At some point, we need to pick a set of N examples that we want to get right and see whether that's actually achievable at all. As is, we seem to be oscillating because it's not clear there is a single correct solution.
cc @mbauman
Working as intended (but needs to be documented).
The latest swing of the pendulum has `@benchmark` working with the same semantics as the `@code_*` macros: it evaluates all arguments and only benchmarks the outermost function.
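A minimal Base Julia sketch of those semantics (hand-rolled timing, not the actual macro internals; the loop count is arbitrary): the argument expression is evaluated once up front, and only the outermost call sits inside the timed loop.

```julia
# Time only the outermost function on a precomputed argument, mirroring
# the evaluate-arguments-first semantics described above.
function time_outer(x, n)
    acc = 0.0            # accumulate so the call can't be optimized away
    for _ in 1:n
        acc += sin(x)
    end
    acc
end

x = sin(3.33)            # the inner call runs once, before any timing
elapsed = @elapsed time_outer(x, 1_000_000)
```

Under these semantics, `@benchmark sin(sin(3.33))` times `sin` applied to the precomputed inner result, not the full nested expression.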
Oh, wait, I missed the factor-of-two difference between `@benchmark sin(3.33)` and `@benchmark sin(sin(3.33))`. That's definitely wonky.
Edit: Aha, that's because `sin`'s performance is dependent upon its inputs. I imagine that it's doing extra work to bring 3.33 into the domain it works in. `@benchmark sin(-.18)` has comparable performance to `sin(sin(3.33))`.
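That edit checks out numerically in plain Base Julia: 3.33 lies well outside the small reduced range where a typical `sin` kernel operates, while `sin(3.33)` itself is a small value needing no reduction (the `π/4` threshold below is the usual libm kernel range, an assumption about the implementation, not something stated in this thread):

```julia
x = sin(3.33)
# Computing sin(3.33) requires argument (range) reduction, since 3.33
# is well outside the kernel's range; its result is tiny and needs none.
@assert isapprox(x, -0.18729466354290317)
@assert abs(x) < pi / 4   # already inside the typical fast-path domain
```

So `@benchmark sin(sin(3.33))` ends up timing `sin` on an argument that skips the reduction step, which is why it looks closer to `@benchmark sin(-.18)` than to `@benchmark sin(3.33)`.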
What does `sin(-0.18729466354290317)` produce? I assume there's no real speed difference, but it's worth making sure.
(Yeah, see my edit.)
Ok. Do you mind documenting this, Matt? You clearly have a better understanding of how we ended up with the current approach.
Sure, I can throw something together this weekend.
Many, many thanks.
Ok, that makes sense. Thanks for the clarification.