scientific-python / faster-scientific-python-ideas

Brainstorm how to make the scientific Python ecosystem faster
BSD 3-Clause "New" or "Revised" License

Easy run time benchmarking in GitHub Actions #2

Open itamarst opened 3 months ago

itamarst commented 3 months ago

As the maintainer of a project on GitHub Actions, I would like to be able to have benchmarks run automatically on PRs, telling me when a PR slows things down, or perhaps speeds things up.

(A broader use case is tracking performance over time; supporting that rules out some of the solutions that work for the narrower use case, and it's not quite as important.)

This is harder than it sounds, because cloud CI runners use whatever random cloud VM you get assigned, which means inconsistent hardware. Inconsistent hardware means inconsistent results for normal benchmarking, so results are hard to compare. See e.g. https://bheisler.github.io/post/benchmarking-in-the-cloud/ for experiments demonstrating this noise. Traditionally people get around this by running the benchmarks on a dedicated, fixed-hardware machine, which you can hook up to CI as a self-hosted runner (GitHub Actions supports this), but this is brittle and doesn't scale well across teams.
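
For reference, targeting such a dedicated machine from a workflow is mostly a matter of runner labels. A minimal sketch, assuming the machine was registered with a `benchmarks` label and that the project uses pytest-benchmark with a `benchmarks/` directory (both assumptions, not from this issue):

```yaml
# Minimal sketch: run benchmarks on a dedicated, self-hosted machine that has
# been registered as a GitHub Actions runner. The "benchmarks" runner label,
# the benchmarks/ directory, and pytest-benchmark are all assumptions.
name: benchmarks
on: pull_request

jobs:
  bench:
    # Runner labels select the self-hosted machine instead of a cloud VM
    # image such as ubuntu-latest.
    runs-on: [self-hosted, benchmarks]
    steps:
      - uses: actions/checkout@v4
      - run: pip install -e . pytest pytest-benchmark
      - run: pytest benchmarks/ --benchmark-only
```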

itamarst commented 3 months ago

Proposed solution: codspeed.io

This is an online service that does benchmarking, with free accounts for open source projects.

It uses cachegrind/callgrind or something similar (see here for a writeup on the idea). Basically, it counts the CPU instructions executed, using a CPU simulator, instead of measuring wall-clock time.
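
Integration-wise this is roughly a single workflow step; a rough sketch of what a CodSpeed-style setup looks like (the action name, version, and inputs, plus the pytest-codspeed plugin and the `tests/` path, should be checked against their current docs rather than taken from here):

```yaml
# Rough sketch of a CodSpeed-style integration; check codspeed.io's docs for
# the exact action name, version, and inputs. The tests/ path is an assumption.
name: codspeed-benchmarks
on: pull_request

jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install -e . pytest pytest-codspeed
      # The action runs the benchmarks under instruction-counting
      # instrumentation and uploads the results to the codspeed.io service.
      - uses: CodSpeedHQ/action@v2
        with:
          token: ${{ secrets.CODSPEED_TOKEN }}
          run: pytest tests/ --codspeed
```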

Benefits:

Downsides:

I feel like failing to notice a 50% slowdown or a 2× speedup is sufficient to rule out codspeed for anything using compiled code.

itamarst commented 3 months ago

Proposed solution: Double benchmark run, main vs PR

This is used by e.g. the pyca/cryptography project.

Benchmarks run on a normal CI runner in the cloud, with inconsistent hardware.

On every PR, within a single CI job:

- check out and run the benchmarks on the target (main) branch;
- check out and run the benchmarks on the PR branch;
- compare the two sets of results and report any regression.

The idea here is that by running the main branch too, you get a baseline on the same hardware as the benchmarks for the PR. So the comparison is meaningful.

This would be a lot better if packaged as a pre-written GitHub Actions action, so that integrating it into a project is straightforward.
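
To make the approach concrete, here is a minimal sketch of such a single job, using pytest-benchmark's save/compare options (this is not pyca/cryptography's actual workflow; the `benchmarks/` path and the 25% regression threshold are assumptions):

```yaml
# Sketch of the "double run" idea as a single job. The benchmarks/ path and
# the 25% failure threshold are assumptions.
name: benchmark-pr
on: pull_request

jobs:
  bench:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # we also need the base branch's history
      - run: pip install pytest pytest-benchmark

      # Baseline: build and benchmark the target branch on this exact runner.
      - run: git checkout ${{ github.event.pull_request.base.sha }}
      - run: pip install -e .
      - run: pytest benchmarks/ --benchmark-only --benchmark-save=baseline

      # Candidate: build and benchmark the PR code on the same hardware.
      - run: git checkout ${{ github.sha }}
      - run: pip install -e .
      # Compare against the saved baseline; fail if the mean time of any
      # benchmark regresses by more than 25%.
      - run: >
          pytest benchmarks/ --benchmark-only
          --benchmark-compare --benchmark-compare-fail=mean:25%
```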

Benefits:

Downsides:

thomasjpfan commented 3 months ago

Quansight Labs has a blog post about their benchmarking experience on GitHub's CI with scikit-image: https://labs.quansight.org/blog/github-actions-benchmarks The post starts by measuring how consistent CI hardware is over multiple days. In the "Run it on demand!" section, it goes into running benchmarks on PRs and comparing them against main.

Their GitHub Actions workflow for triggering benchmarks is defined here: https://github.com/scikit-image/scikit-image/blob/main/.github/workflows/benchmarks.yml
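
The "on demand" part generally amounts to gating the job on a PR label (or a comment) so benchmarks only run when someone asks for them. A rough sketch, assuming an asv-based benchmark suite and a hypothetical `run-benchmark` label (this is not scikit-image's actual workflow, which is linked above):

```yaml
# Sketch of an "on demand" benchmark job gated on a PR label. The
# "run-benchmark" label name and the 1.1 regression factor are assumptions.
name: benchmarks-on-demand
on:
  pull_request:
    types: [labeled]

jobs:
  bench:
    # Only run when a maintainer has added the benchmark label to the PR.
    if: contains(github.event.pull_request.labels.*.name, 'run-benchmark')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # asv compares against the target branch's history
      - run: pip install asv virtualenv
      - run: asv machine --yes
      # Benchmark the PR head against main and report anything that changed
      # by more than the given factor.
      - run: asv continuous --factor 1.1 origin/main HEAD
```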