mknyszek opened 2 years ago
> Note that the average number of GC cycles per op would not work, because in many scenarios where this matters, it wouldn't properly flag an issue (the average would just be zero, or very, very close to zero).
Is this actually the case? It sounds mostly like an issue of ensuring we include enough significant figures. As long as there are enough significant figures, we should be able to tell that a doubling of 0.00001 GC/op to 0.00002 GC/op is problematic.
That's true. It looks a little weird, but I think you're right that as long as there are enough significant figures it's fine. I think even with a float64 we're fine on that front.
How would this behave when a mark phase is active at the start, or at the end, of the benchmark loop? (Possible to wait for the cycle to finish before entering the benchmark, but maybe that's not possible at the end. Maybe that's something more to report so a post-processor could choose to discard those results?)
How does it interact with B.{Stop,Start,Reset}Timer, especially around an in-progress mark phase?
Thanks @rhysh. I think I'd lean toward not trying to do anything about in-progress mark phases and just measuring completed GC cycles between {Start,Reset}Timer and StopTimer. It's noisy, but ultimately what we're trying to flag is that noise. If we blocked until the end of a mark phase, that would create a much more stable condition for the number of GC cycles, which might mask the problem. If the mark phases are short (true most of the time), then masking isn't too much of a problem, so I don't feel particularly strongly about this if there are other good reasons to block.
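As a rough sketch of what that measurement looks like in practice, the snippet below counts completed GC cycles around the timed region using runtime/metrics and reports the average with ReportMetric. The helper name gcCycles, the benchmark name, and the GCs/op unit string are only illustrative; the proposed testing-package support would presumably hook into {Start,Reset,Stop}Timer itself rather than requiring this boilerplate.

```go
package example

import (
	"runtime/metrics"
	"testing"
)

// gcCycles reads the total number of completed GC cycles so far.
func gcCycles() uint64 {
	s := []metrics.Sample{{Name: "/gc/cycles/total:gc-cycles"}}
	metrics.Read(s)
	return s[0].Value.Uint64()
}

func BenchmarkWithGCCount(b *testing.B) {
	b.ReportAllocs()
	start := gcCycles()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = make([]byte, 64<<10) // stand-in for the work under test
	}
	b.StopTimer()
	// Average completed GC cycles per iteration; an in-progress mark
	// phase at either end is simply not counted, matching the
	// "completed cycles only" approach described above.
	b.ReportMetric(float64(gcCycles()-start)/float64(b.N), "GCs/op")
}
```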
Turning this on by default seems like a lot of noise, unless we believe that the majority of benchmarks are invalidated by GC issues. I don't think that is true.
Is there some way we can tell when to turn it on versus not?
Or failing that, is it enough to turn it on with b.ReportAllocs?
I think the problem with deciding whether to turn it on or not is that you don't really know until you've done at least one iteration. The first iteration would be missing the signal, but maybe that's fine? As opposed to a benchmark metric, we could also just emit a warning, or a benchmark key ("gc-noise: yes" or something), in which case that doesn't really matter.
I think grouping it with ReportAllocs would also make sense.
> I think the problem with deciding whether to turn it on or not is that you don't really know until you've done at least one iteration. The first iteration would be missing the signal, but maybe that's fine? As opposed to a benchmark metric, we could also just emit a warning, or a benchmark key ("gc-noise: yes" or something), in which case that doesn't really matter.
I don't think there's a 'turn it on' problem. Surely asking for the GC count is just an atomic load. The decision about reporting can be done at the end of the benchmark. To me, a bigger problem is that you really care about whether the GC/op is different between two different benchmark runs, so there's no global view available to make the decision.
It sounds like you are saying maybe we can do something with the individual iterations we get during the determination of the right b.N. That would be great if so, but then it seems like we'd want to report some kind of GC/op variance. And we expect variance when b.N is too small, so it's not clear whether deciding anything from it would be meaningful.
For all those reasons I'm not sure we can do better than "report GC/op as part of b.ReportAllocs".
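Purely for illustration (the numbers are placeholders, not real measurements), a result line with GCs/op grouped into the ReportAllocs output might look something like:

```
BenchmarkEncode-8   500000   2450 ns/op   512 B/op   3 allocs/op   0.000012 GCs/op
```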
This proposal has been added to the active column of the proposals project and will now be reviewed at the weekly proposal review meetings. - rsc for the proposal review group
> To me, a bigger problem is that you really care about whether the GC/op is different between two different benchmark runs, so there's no global view available to make the decision.
Sorry, this is what I thought you meant by the "turn it on" problem. In other words, retroactively reporting it. I think you're right though that this probably can't be better than just reporting it as part of ReportAllocs due to variance.
Retitled. Does anyone object to this?
As a bit of a strawman, GOGC (and GOMEMLIMIT) will likely have a large impact on GC/op. Would it make sense to include those as benchfmt keys so that they are captured as part of the benchmark, just as we capture CPU type and GOMAXPROCS (in the name rather than a key), which impact ns/op?
I think we'd need to add those keys universally since they come before we know if b.ReportAllocs will be used. That feels like overkill to me, so I'd lean against including them, but figured I'd mention it.
Maybe only include GOGC if it's not the default 100?
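For reference, benchmark configuration lines are simple "key: value" pairs printed ahead of the result lines, so a hypothetical GOGC key (perhaps emitted only when it differs from the default) could look like the sketch below. The exact key spelling, and whether it satisfies the benchmark format's rules for configuration keys, is left open here.

```
goos: linux
goarch: amd64
pkg: example.com/mypkg
GOGC: 200
BenchmarkEncode-8   ...
```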
Based on the discussion above, this proposal seems like a likely accept. - rsc for the proposal review group
No change in consensus, so accepted. This issue now tracks the work of implementing the proposal. - rsc for the proposal review group
As someone who looks at benchmark results a lot, I didn't immediately find this useful: in general it will be a linear multiple of bytes/op, and the multiple can vary a lot depending on what ran before this benchmark and pushed the heap larger.
However, cases where it is a very small number do indeed flag potential noise in results. Perhaps benchstat could flag these.
Currently benchmarks defined in the testing package only show ns/op by default, while optionally reporting average allocation count and rate (allocs/op, bytes/op). Users may also export a custom metric via testing.(*B).ReportMetric.

A common issue with benchmarks defined by the testing package, especially microbenchmarks, is that they might execute only a handful of GC cycles. If, due to internal or external noise, like scheduling, an extra GC cycle gets executed, it could cause a significant shift in the results (typically ns/op). As a result, it's more difficult to trust the results from such benchmarks; they need either more iterations to be useful, or a change in how they're defined (say, with a bigger GOGC value).

I propose exposing the ~total number of GC cycles~ average number of GC cycles that passed during a given benchmark run to help expose such issues. One issue with making the total a benchmark metric is that it depends on the benchmark time, or the number of benchmark iterations, so this number is not comparable if those are different between runs. I think this is OK, because this metric is just supposed to flag an issue, not to be used in actual performance comparisons. Tooling such as benchstat could simply understand this, or we could export some benchmark unit metadata that makes this explicit.

I don't have strong feelings about the details, such as the metric name (maybe ~total-GC-cycles~ GCs/op?). While this metric potentially adds noise to benchmark output, I believe that cost is outweighed by the value of being able to identify this kind of noise early.

Alternatively, we could make it the job of the testing package to flag issues around such performance cliffs, but I suspect it's cleaner to just export the raw data and have it be interpreted by downstream tooling.
One might think that the testing package could just add iterations to adjust for these cliffs, but this is very difficult to do in the general case. It would involve a lot of heuristics that will generally be fragile and depend on the GC implementation. On the other hand, it's straightforward for tools to find issues after the fact, by identifying outliers and correlating those outliers with relatively large (by %) differences in GC cycle count.
~Note that the average number of GC cycles per op would not work, because in many scenarios where this matters, it wouldn't properly flag an issue (the average would just be zero, or very, very close to zero).~
EDIT: Switched to average number of GC cycles.
CC @aclements @prattmic @dr2chase