celerity / celerity-runtime

High-level C++ for Accelerator Clusters
https://celerity.github.io
MIT License

Improve CSV reporting of benchmarks, add stencil system benchmark #221

Closed PeterTh closed 11 months ago

PeterTh commented 11 months ago

This PR is in preparation for a follow-up which will use tags in the evaluation for categorization. This needs to be done in two steps because the CI benchmark workflow depends on the repository state.

Preparatory changes:

The last point might seem a bit unrelated. I added it because otherwise our "full system" benchmark category would contain only the many-task benchmark, which is currently quite far from Celerity's common use cases.
We always wanted to extend this set anyway, so this PR takes a first step with a very simple iterative stencil benchmark that should better represent some common usage scenarios in terms of their runtime system performance impact.
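
For illustration, a minimal sketch of such an iterative stencil pattern in Celerity might look like the following. This assumes the `distr_queue` API and a simple 5-point stencil; the buffer size, step count, and names are placeholders and not the actual benchmark code from this PR, and the exact range-mapper/accessor syntax may vary between Celerity versions.

```cpp
#include <utility>
#include <celerity.h>

constexpr size_t N = 512; // illustrative grid size, not the benchmark's actual size

int main() {
	celerity::distr_queue q;
	celerity::buffer<float, 2> in{celerity::range<2>(N, N)};
	celerity::buffer<float, 2> out{celerity::range<2>(N, N)};

	for(int step = 0; step < 50; ++step) {
		q.submit([=](celerity::handler& cgh) {
			// Each work item reads a 3x3 neighborhood and writes a single cell.
			celerity::accessor read{in, cgh, celerity::access::neighborhood{1, 1}, celerity::read_only};
			celerity::accessor write{out, cgh, celerity::access::one_to_one{}, celerity::write_only, celerity::no_init};
			cgh.parallel_for<class stencil_step>(celerity::range<2>(N, N), [=](celerity::item<2> it) {
				const auto y = it[0], x = it[1];
				if(y == 0 || y == N - 1 || x == 0 || x == N - 1) {
					write[it] = read[it]; // keep boundary values fixed
					return;
				}
				// simple 5-point averaging stencil
				write[it] = 0.2f * (read[{y, x}] + read[{y - 1, x}] + read[{y + 1, x}]
				                    + read[{y, x - 1}] + read[{y, x + 1}]);
			});
		});
		std::swap(in, out); // ping-pong buffers between time steps
	}
}
```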

The goal is to be able to finish https://github.com/celerity/meta/issues/47 in the follow-up.

github-actions[bot] commented 11 months ago

Check-perf-impact results: (e3a4351784531e1861bff0cb14c92210)

:warning: Significant slowdown in some microbenchmark results: 16 individual benchmarks affected
:rocket: Significant speedup in some microbenchmark results: 16 individual benchmarks affected
:heavy_plus_sign: Added microbenchmark(s): benchmark stencil pattern with N time steps - 50 / iterations, benchmark stencil pattern with N time steps - 1000 / iterations

Overall relative execution time: 1.01x (mean of relative medians)

PeterTh commented 11 months ago

This goes to show that the "significance" threshold also needs to be tweaked in the follow-up PR that reworks the reporting script.
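
For reference, the overall figure reported above is the mean, across benchmarks, of each benchmark's median execution time relative to the baseline. A rough sketch of that computation, together with a naive fixed-threshold significance check of the kind being questioned here, is shown below. This is purely illustrative; it is not the actual check-perf-impact script, whose language and logic are not shown in this thread.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Relative median for one benchmark: median runtime of the new revision
// divided by the median runtime of the baseline (simplified upper median).
double relative_median(std::vector<double> baseline, std::vector<double> current) {
	const auto median = [](std::vector<double> v) {
		std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end());
		return v[v.size() / 2];
	};
	return median(std::move(current)) / median(std::move(baseline));
}

// "Overall relative execution time": mean of the per-benchmark relative
// medians, where 1.0 means no change overall.
double overall_relative_time(const std::vector<double>& relative_medians) {
	return std::accumulate(relative_medians.begin(), relative_medians.end(), 0.0)
	       / static_cast<double>(relative_medians.size());
}

// Naive fixed threshold: flag any benchmark whose relative median deviates
// from 1.0 by more than `threshold` (e.g. 10%). A single fixed value like
// this tends to over-report noisy microbenchmarks in both directions.
bool is_significant(double relative_median, double threshold = 0.1) {
	return relative_median > 1.0 + threshold || relative_median < 1.0 - threshold;
}
```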