Issue closed by learning-chip 7 months ago
The Ginkgo linear algebra framework performs continuous benchmarking on GitLab CI, as described in their PASC paper. The SpMV benchmarks presented in the paper correspond to this setting.
A `make perftests` build target has existed since v0.1; it runs extensive performance tests.
Presently, the results are written in a machine-readable format to `<build dir>/tests/performance/output/benchmarks`. Internally, we have Jenkins scripts that parse that output and produce performance graphs against the commit history. The tests take 1-2 days to complete, however, which is why they are not enabled here.
(Distributed-memory parallel tests are also not enabled here; I will create a new issue for that.)
Just to update: still looking for ideas on how best to enable perftests externally, rather than on internal resources only. It would also be great if the results could be published automatically.
Just to update: @byjtew will bring this online for our internal CI. Assigning to him.
Considerable effort has been spent manually benchmarking different algorithms (CG, HPCG, PageRank), kernels (SpMV, SpMM), and backends (blocking/nonblocking) for each commit and version.
It would be beneficial to set up continuous benchmarking using a framework like Google Benchmark or Catch2, further automated with GitHub Actions. A nice example is the pandas benchmark page.