sympy / sympy_benchmarks

Some benchmarks of SymPy

Migrate CI to GitHub Actions from Travis CI #80

Closed · brocksam closed this 1 year ago

brocksam commented 1 year ago

This PR addresses issue #77, which requests that CI be moved to GitHub Actions now that Travis CI no longer offers free usage. In this PR:

In the Travis CI job, the benchmarks were run against SymPy master, 1.4, 1.3, and 1.2. In the new GitHub Actions workflow, the benchmarks are run only against SymPy master and 1.11, mirroring what the workflow in the main SymPy repo does.
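For context, a two-entry version matrix like the one described might look roughly like this (an illustrative sketch only; the job names, action versions, and install steps in the actual PR may differ):

```yaml
# Hypothetical sketch of a GitHub Actions workflow running the
# benchmarks against SymPy master and a pinned release.
name: Benchmarks
on: [push, pull_request]

jobs:
  benchmarks:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        sympy-version: ['master', '1.11']
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      - name: Install SymPy ${{ matrix.sympy-version }}
        run: |
          if [ "${{ matrix.sympy-version }}" = "master" ]; then
            pip install git+https://github.com/sympy/sympy.git
          else
            pip install "sympy==${{ matrix.sympy-version }}"
          fi
      - name: Run benchmarks
        run: |
          pip install asv
          asv machine --yes
          asv dev
```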

moorepants commented 1 year ago

I guess we have to merge this to trigger the first actions run?

moorepants commented 1 year ago

I'm going to do that and you can tidy up things in a new PR if needed.

asmeurer commented 1 year ago

Ideally we should run this repo against older SymPy versions that we care about. The idea is that the benchmarks should work against older versions so that we can generate asv plots going back. Really we only need to run the benchmarks once against the old versions, just to make sure they run without error.

asmeurer commented 1 year ago

In fact, they only need to be run once against any given version. The point of this CI is just to make sure the benchmarks work; the benchmarks CI in the main SymPy repo is what actually runs them to check for performance regressions. A true benchmarks run would be someone running the full `asv run` on their personal computer and uploading the results somewhere. I have a computer I have done this on in the past: https://www.asmeurer.com/sympy_benchmarks/. If you ever need me to update those results, just let me know.
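The "run once per version" workflow described above might look something like the following locally (a sketch under assumptions: the tag names and the repo's asv configuration are illustrative, so check `git tag` in the SymPy repo for the real ones):

```shell
# One-off validation run against a few historical SymPy tags.
pip install asv
asv machine --yes

# The "^!" suffix tells asv to benchmark only that single commit
# rather than a range of commits.
asv run "sympy-1.2^!"
asv run "sympy-1.3^!"

# Build the static HTML plots from the collected results and
# preview them locally before uploading anywhere.
asv publish
asv preview
```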

brocksam commented 1 year ago

> The point of this CI is just to make sure the benchmarks work.

If that's the case (and I agree with this statement), then the CI on this repo is currently doing what it should. It looks like `asv run` has a `--strict` argument that causes the command to exit with a non-zero return code if any benchmark fails. If I add this in a new PR and reduce the CI to run against only one SymPy version, that should be suitable. I think we should run against the most recent release rather than master, since we should be able to guarantee that the latest release works correctly with the benchmarks. Do you agree?
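The proposed single-version check could be sketched like this (assumptions: the exact pin and flags are illustrative, and `--strict` availability depends on the asv version installed):

```shell
# Hypothetical CI check: run every benchmark once against the most
# recent SymPy release and fail the job if any benchmark errors.
pip install asv "sympy==1.11"   # most recent release at the time
asv machine --yes

# --strict : exit non-zero if any benchmark fails
# --quick  : run each benchmark only once (enough to catch errors)
# --python=same : benchmark in the current environment
asv run --strict --quick --python=same
```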

asmeurer commented 1 year ago

master is important too. If someone adds a new benchmark here, we need to make sure it will work with the next version of SymPy as well. Ideally we would run every benchmark against every commit, the same as `asv run` does, but that's too much work, so just checking some tags is good enough.