When using the benchmark package, we sometimes run into performance improvements and/or regressions caused by an update to the toolchain. Is there a way we could include the toolchain used in the benchmark results? That way it would be easier to explain certain results, or to rule the toolchain out as a variable when there's a performance change.
One approach is to do what e.g. SwiftNIO does and keep threshold results separated per toolchain that they were built with; that's probably the most robust approach in general, WDYT? A rough sketch of what that could look like is below.
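For illustration only, here is a minimal sketch of the per-toolchain separation idea, assuming thresholds are stored in a plain directory tree keyed by the output of `swift --version`. The helper name and directory layout are hypothetical and not part of the benchmark package's actual API:

```swift
import Foundation

// Hypothetical helper: derive a filesystem-friendly identifier for the active
// Swift toolchain so threshold files from different toolchains are never
// compared against each other.
func currentToolchainIdentifier() throws -> String {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/env")
    process.arguments = ["swift", "--version"]
    let pipe = Pipe()
    process.standardOutput = pipe
    try process.run()
    process.waitUntilExit()

    let output = String(decoding: pipe.fileHandleForReading.readDataToEndOfFile(), as: UTF8.self)
    // The first line typically looks like "Swift version 6.0 (swift-6.0-RELEASE)".
    let firstLine = output.split(separator: "\n").first.map(String.init) ?? "unknown-toolchain"
    // Replace anything that isn't alphanumeric or a dot so it can be used as a directory name.
    return String(firstLine.lowercased().map { $0.isLetter || $0.isNumber || $0 == "." ? $0 : "-" })
}

// Usage sketch: read/write thresholds under a per-toolchain subdirectory,
// e.g. "Thresholds/swift-version-6.0--swift-6.0-release-/".
let thresholdsDir = URL(fileURLWithPath: "Thresholds")
    .appendingPathComponent(try currentToolchainIdentifier())
print("Using thresholds at \(thresholdsDir.path)")
```

With a layout like this, a toolchain bump would simply start a fresh set of thresholds rather than triggering spurious regressions against results produced by the old compiler.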