danielmenezesbr opened this issue 5 years ago
Hey @danielmenezesbr, do you mean vertically centered? The error bar goes from the min to the max score (see https://github.com/jzillmann/jmh-visualizer/issues/11), so that's why it's not symmetric!
Thanks @jzillmann
Perhaps, to make this clearer, it should be documented that the error bar uses the min/max score rather than the confidence interval.
I agree this isn't the expected or (in my case) desired behavior. I expect to see the 99.9% confidence interval, in decimal format (not rounded to integers). Here is a concrete example. I just benchmarked two methods:
Result "org.bitbucket.cowwoc.requirements.benchmark.JavaWithoutAssertsTest.requirementsAssertThat":
8.267 ±(99.9%) 0.647 ns/op [Average]
(min, avg, max) = (7.281, 8.267, 12.175), stdev = 1.908
CI (99.9%): [7.620, 8.914] (assumes normal distribution)
Result "org.bitbucket.cowwoc.requirements.benchmark.JavaWithoutAssertsTest.requirementsAssertThatWithAssertsDisabled":
10.113 ±(99.9%) 0.904 ns/op [Average]
(min, avg, max) = (6.826, 10.113, 12.389), stdev = 2.667
CI (99.9%): [9.208, 11.017] (assumes normal distribution)
As you can see, the confidence intervals do not overlap: the second method is slower than the first with very high (99.9%) confidence. Yet when I view this data in jmh-visualizer, the min/max error bars overlap, so that distinction is lost.
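For reference, the ± margin in that output is not min/max: JMH computes it from the sample standard deviation using a Student's t quantile (via commons-math3, which jmh-core depends on). A minimal sketch of that computation, assuming the per-iteration scores are available; the sample values below are placeholders, not the real measurements:

```java
import org.apache.commons.math3.distribution.TDistribution;

// Recomputes a JMH-style 99.9% confidence interval from per-iteration scores.
// The samples below are illustrative placeholders, not the actual measurements.
public class ConfidenceInterval {
    public static void main(String[] args) {
        double[] scores = {7.9, 8.1, 8.3, 8.6, 8.0, 8.4, 8.2, 8.5, 7.8, 8.9};
        int n = scores.length;

        double mean = 0;
        for (double s : scores) mean += s;
        mean /= n;

        double variance = 0;
        for (double s : scores) variance += (s - mean) * (s - mean);
        double stdev = Math.sqrt(variance / (n - 1));  // sample standard deviation

        // Two-sided 99.9% interval: t quantile at 1 - 0.001/2, with n-1 degrees of freedom.
        double t = new TDistribution(n - 1).inverseCumulativeProbability(1 - 0.001 / 2);
        double margin = t * stdev / Math.sqrt(n);

        System.out.printf("%.3f ±(99.9%%) %.3f ns/op%n", mean, margin);
        System.out.printf("CI (99.9%%): [%.3f, %.3f]%n", mean - margin, mean + margin);
    }
}
```

With few samples the t quantile is large (roughly 4.78 for 9 degrees of freedom at 99.9%), so short runs yield very wide intervals.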
We used the confidence interval for a long time, and I hesitated to change it; see https://github.com/jzillmann/jmh-visualizer/issues/11.
However, I myself and others have run into a lot of cases where it wasn't that meaningful. I guess which is more useful depends on the benchmark and its iteration count. If you execute a benchmark only a few times (like 10 iterations and 2 forks), the 99th percentile / confidence interval normally has a big skew...
Maybe it makes sense to make this configurable?
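Such a setting would only switch which pair of numbers feeds the error bar. A hypothetical sketch of the idea; none of these names exist in jmh-visualizer, they are made up for illustration:

```java
import java.util.Arrays;

// Hypothetical sketch of the two error-bar modes under discussion.
enum ErrorBarMode { MIN_MAX, CONFIDENCE_INTERVAL }

class ErrorBars {
    /** Returns {lower, upper} for the error bar in the given mode. */
    static double[] extent(double min, double max,
                           double ciLower, double ciUpper, ErrorBarMode mode) {
        return mode == ErrorBarMode.MIN_MAX
                ? new double[] { min, max }
                : new double[] { ciLower, ciUpper };
    }

    public static void main(String[] args) {
        // Numbers taken from the requirementsAssertThat result quoted above.
        System.out.println(Arrays.toString(
                extent(7.281, 12.175, 7.620, 8.914, ErrorBarMode.MIN_MAX)));            // [7.281, 12.175]
        System.out.println(Arrays.toString(
                extent(7.281, 12.175, 7.620, 8.914, ErrorBarMode.CONFIDENCE_INTERVAL))); // [7.62, 8.914]
    }
}
```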
How are you guys using jmh-visualizer ? The web version ? Gradle ? Or Jenkins ?
Web version in my case.
Making this configurable seems like a good idea.
Web version too.