chfast opened this issue 1 month ago (status: Open)
I think that would be something that would need to be contributed. But I wouldn't add it to compare.py; I'd create a new aggregate.py or something.
But how would the metrics be aggregated? When you say "repetitions", I don't think you mean it in the same sense it's used in the library. Are these JSON files different versions of the code, or just reruns of the same code?
By repetitions I mean --benchmark_repetitions=N. Let's say I have one result with 10 repetitions recorded and one result with 20 repetitions recorded. The aggregated result should have 30 repetitions.
If the result files also contain stats computed over the repetitions, those would need to be recomputed or discarded.
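A minimal sketch of what such an aggregate.py might do, assuming the JSON files use the library's standard output layout (a "benchmarks" list where each repetition has "run_type": "iteration" and precomputed stats have "run_type": "aggregate"); the function name and output shape here are hypothetical, not an existing tool:

```python
import json
import statistics

def aggregate_runs(paths):
    """Merge per-repetition entries from several benchmark JSON result
    files and recompute summary stats over the combined repetitions."""
    merged = {}  # benchmark name -> list of real_time samples
    for path in paths:
        with open(path) as f:
            data = json.load(f)
        for b in data["benchmarks"]:
            # Drop precomputed aggregates (mean/median/stddev) from each
            # file, as discussed above: they must be recomputed over the
            # combined set of repetitions.
            if b.get("run_type") == "aggregate":
                continue
            name = b.get("run_name", b["name"])
            merged.setdefault(name, []).append(b["real_time"])

    out = []
    for name, times in sorted(merged.items()):
        out.append({
            "name": name,
            "repetitions": len(times),
            "real_time_mean": statistics.mean(times),
            "real_time_median": statistics.median(times),
            "real_time_stddev": statistics.stdev(times) if len(times) > 1 else 0.0,
        })
    return out
```

So 10 repetitions from one file plus 20 from another would yield a single entry with 30 repetitions and freshly computed stats.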
I plan to run the same code over a long period of time, possibly interleaved with some other operations.
Ah, I see. That doesn't seem unreasonable, but it doesn't exist today, AFAIK.
Is your feature request related to a problem? Please describe.
I run a benchmark and store the results in JSON format. I'd like an option to aggregate the repetitions of the same benchmark across multiple JSON files.
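For context, the per-run files being discussed would come from invocations like the following (the binary name and file names are placeholders; the flags are the library's standard output options):

```shell
# Two separate runs of the same benchmark binary, each writing its
# repetitions to its own JSON file; the feature request is to merge
# run1.json and run2.json as if they were one 30-repetition run.
./my_benchmark --benchmark_repetitions=10 \
    --benchmark_out=run1.json --benchmark_out_format=json
./my_benchmark --benchmark_repetitions=20 \
    --benchmark_out=run2.json --benchmark_out_format=json
```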
Describe the solution you'd like
Describe alternatives you've considered
I can write such a script myself, but maybe such a solution already exists and I just can't find it.