dart-lang / benchmark_harness

The official benchmark harness for Dart
https://pub.dev/packages/benchmark_harness
BSD 3-Clause "New" or "Revised" License

Stats for the measurements? #53

Open filiph opened 4 years ago

filiph commented 4 years ago

For the benchmark measurements to be useful when comparing two or more versions of some code, we need to know the margin of error (MoE). Otherwise, we can't know whether an optimization is actually, significantly better than the base.

Here's what I mean:

| Commit   | Mean  |
|----------|-------|
| e11fe3f0 | 14.91 |
| bab88227 | 14.64 |

Without MoE, this looks good. We made the code almost 2% faster with the second commit, right? No:

| Commit   | Mean  | MoE  |
|----------|-------|------|
| e11fe3f0 | 14.91 | 0.17 |
| bab88227 | 14.64 | 0.14 |

We actually have no idea whether the new code is faster: the two confidence intervals overlap. Without the MoE column we wouldn't know this, and we might prematurely settle on the wrong version.
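To make the overlap concrete, here is a minimal sketch (not part of benchmark_harness) that treats mean ± MoE as an interval and checks whether two results overlap, using the numbers from the tables above:

```dart
// Sketch: treat mean ± MoE as a confidence interval and check whether
// two benchmark results overlap. If they do, the comparison is inconclusive.
bool intervalsOverlap(double meanA, double moeA, double meanB, double moeB) {
  final lowA = meanA - moeA, highA = meanA + moeA;
  final lowB = meanB - moeB, highB = meanB + moeB;
  return lowA <= highB && lowB <= highA;
}

void main() {
  // e11fe3f0: 14.91 ± 0.17 -> [14.74, 15.08]
  // bab88227: 14.64 ± 0.14 -> [14.50, 14.78]
  print(intervalsOverlap(14.91, 0.17, 14.64, 0.14)); // prints: true
}
```

Since the intervals share the range [14.74, 14.78], the ~2% "improvement" could be noise.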

Right now benchmark_harness only gives a single number. I often resort to running the benchmark many times, in order to ascertain the variance of measurements. This is slow and wasteful, because it's basically computing a mean of means. A measurement that could last ~2 seconds takes X * ~2 seconds, where X is always >10 and sometimes ~100.

I'm not sure this is in scope of this package, seeing as this one seems to be focused on really tight loops (e.g. forEach vs addAll) and long-term tracking of the SDK itself. Maybe it should be a completely separate package?

I'm proposing something like:

  1. Create a list for individual measurements (e.g. `List.generate(n * batchIterations, (_) => -1)`)
  2. Warmup
  3. Execute n batches, each with batchIterations of the actual measured code, and put the measured time into the list.
  4. Tear down
  5. Compute the mean and the margin of error. Optionally, print all the measurements or provide an object with all the statistics. (I'm personally using my own t_stats package but there are many others on pub, including @kevmoo's stats, plus this is simple enough to be simply implemented without any external dependency.)
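The five steps above could be sketched roughly as follows. This is an illustration of the proposal, not an existing benchmark_harness API; the names `n`, `batchIterations`, and `Stats` are hypothetical, and the 1.96 z-value is a simplification (a t-value would be more precise for small `n`):

```dart
import 'dart:math' as math;

/// Hypothetical result object carrying the statistics from step 5.
class Stats {
  final double mean, marginOfError;
  Stats(this.mean, this.marginOfError);
}

Stats measure(void Function() run, {int n = 50, int batchIterations = 10}) {
  // Step 1: a list for individual measurements.
  final samples = List<double>.filled(n, -1);
  // Step 2: warmup.
  run();
  // Step 3: n batches of batchIterations each, recording per-iteration time.
  final sw = Stopwatch();
  for (var i = 0; i < n; i++) {
    sw
      ..reset()
      ..start();
    for (var j = 0; j < batchIterations; j++) {
      run();
    }
    sw.stop();
    samples[i] = sw.elapsedMicroseconds / batchIterations;
  }
  // (Step 4, tear down, would go here.)
  // Step 5: mean and margin of error at ~95% confidence.
  final mean = samples.reduce((a, b) => a + b) / n;
  final variance = samples
          .map((x) => (x - mean) * (x - mean))
          .reduce((a, b) => a + b) /
      (n - 1);
  final moe = 1.96 * math.sqrt(variance / n);
  return Stats(mean, moe);
}
```

A single run of `measure` yields both the mean and its MoE, avoiding the mean-of-means workaround described above.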

PROs:

CONs:

I know this package is in flux now. Even a simple "no, not here" response is valuable for me.

kevmoo commented 4 years ago

It's easy enough to copy-paste the needed code here.

filiph commented 4 years ago

I went down a rabbit hole of research on how best to present variance in benchmarks (there's a lot of prior art). I have a lot of notes. The gist is that even with MoE / standard deviation, comparing averages is too crude and leads to confusion. I'll investigate further.

My question above still stands: is this in scope for this package?

MelbourneDeveloper commented 2 years ago

The other useful numbers would be standard deviation, median, min, and max.
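For completeness, those extra statistics are simple to compute from the raw per-batch samples; a minimal sketch (not an existing benchmark_harness API):

```dart
import 'dart:math' as math;

/// Sketch: summary statistics over a list of raw measurements.
Map<String, double> summarize(List<double> samples) {
  final sorted = [...samples]..sort();
  final n = sorted.length;
  final mean = sorted.reduce((a, b) => a + b) / n;
  // Sample variance (n - 1 denominator).
  final variance = sorted
          .map((x) => (x - mean) * (x - mean))
          .reduce((a, b) => a + b) /
      (n - 1);
  final median = n.isOdd
      ? sorted[n ~/ 2]
      : (sorted[n ~/ 2 - 1] + sorted[n ~/ 2]) / 2;
  return {
    'mean': mean,
    'stdDev': math.sqrt(variance),
    'median': median,
    'min': sorted.first,
    'max': sorted.last,
  };
}

void main() {
  print(summarize([1, 2, 3, 4, 5]));
  // mean: 3.0, median: 3.0, min: 1.0, max: 5.0
}
```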