scipp / copier_template

Copier template for Scipp projects
BSD 3-Clause "New" or "Revised" License

Automated regression tests #187

Open jl-wynen opened 3 weeks ago

jl-wynen commented 3 weeks ago

We need a way to test whether our workflows still produce the accepted 'correct' results after we make changes, e.g. in https://github.com/scipp/esssans/pull/135 and https://github.com/scipp/esssans/pull/143. However, some changes are supposed to change the results, such as adding a new correction or tuning a parameter.

In Mantid, the accepted results are written to file and loaded to compare against the results from a new version of the code. This requires extra infrastructure to store and provide the files, and extra work to update them. Here is a potential alternative.

Have a test script that performs the following procedure on each PR:

  1. Check out main.
  2. Run tests with a specific mark and save results of those tests to a folder results_main.
  3. Check out the head of the PR branch.
  4. Run tests with the same mark and save results to results_branch.
  5. For each file that exists in both results_main and results_branch, load the file and compare with sc.testing.assert_identical and sc.testing.assert_allclose.
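
A minimal sketch of the comparison in step 5, under a few assumptions: the marked tests save their results as scipp HDF5 files into the two folders named above, and the fallback from a strict to a tolerance-based comparison is just one possible policy, not an agreed-on rule.

```python
# Sketch of step 5: compare result files that exist in both folders.
from pathlib import Path

import scipp as sc


def compare_results(main_dir: Path, branch_dir: Path) -> None:
    """Compare every result file that exists in both folders."""
    for main_file in sorted(main_dir.glob('*.h5')):
        branch_file = branch_dir / main_file.name
        if not branch_file.exists():
            continue  # only files present in both result sets are compared
        expected = sc.io.load_hdf5(main_file)
        actual = sc.io.load_hdf5(branch_file)
        # Policy choice (assumption): require exact equality first and fall
        # back to a tolerance-based comparison if that turns out too strict.
        try:
            sc.testing.assert_identical(actual, expected)
        except AssertionError:
            sc.testing.assert_allclose(actual, expected)


if __name__ == '__main__':
    compare_results(Path('results_main'), Path('results_branch'))
```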

The tests run this way can contain assertions to, e.g., make sure that the result has the expected shape. But the main purpose of these tests is writing data. That data can be any scipp object, e.g., the result of running a workflow.
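
For illustration, a hypothetical example of such a test. The mark name `regression`, the `REGRESSION_RESULTS_DIR` environment variable, and the toy workflow are placeholders; only the idea of writing a scipp object into the results folder comes from the proposal above.

```python
# Hypothetical regression test; the mark, fixture, and toy workflow are
# placeholders for illustration only.
import os
from pathlib import Path

import pytest
import scipp as sc


def run_my_workflow() -> sc.DataArray:
    # Placeholder standing in for a real reduction workflow.
    return sc.DataArray(
        sc.array(dims=['Q'], values=[1.0, 2.0, 3.0], unit='counts'),
        coords={'Q': sc.array(dims=['Q'], values=[0.1, 0.2, 0.3], unit='1/angstrom')},
    )


@pytest.fixture
def results_dir() -> Path:
    # Steps 2 and 4 above would point this at results_main or results_branch
    # (via a placeholder environment variable).
    path = Path(os.environ.get('REGRESSION_RESULTS_DIR', 'results_branch'))
    path.mkdir(parents=True, exist_ok=True)
    return path


@pytest.mark.regression  # placeholder mark used to select these tests
def test_sans_workflow(results_dir: Path) -> None:
    result = run_my_workflow()
    # Optional sanity checks on the result itself:
    assert result.dims == ('Q',)
    # The main output: save the object for the cross-branch comparison.
    result.save_hdf5(results_dir / 'sans_workflow.h5')
```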

This procedure would run regression tests against main, which we assume contains the accepted 'correct' code. But it does not require storing result files in a public location.

What do you think? Does this make sense?

SimonHeybrock commented 3 weeks ago

Basically something similar to what ASV does (though ASV compares timings rather than results)?

But in general I am not sure this is a viable approach:

Should this be moved to https://github.com/scipp/ess_template, as it is ESS-specific?

nvaytet commented 3 weeks ago

> I expect this to break a lot, from minor changes in workflows, leading to too many false positives?

If minor changes to a workflow affect the results, I think we should know about it. See, for example, https://github.com/scipp/scippneutron/pull/514.

But yes, if we often make changes that modify the results, we do need a mechanism where we can easily ignore failed tests (or say "I accept this as the new reference solution").

MridulS commented 3 weeks ago

Sounds similar to something like https://github.com/matplotlib/pytest-mpl, which adds baseline tests to compare matplotlib plots?
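
For context, a pytest-mpl baseline test looks roughly like this (a minimal sketch with made-up data):

```python
import matplotlib.pyplot as plt
import pytest


# Baselines are generated once with `pytest --mpl-generate-path=baseline`
# and later runs compare new plots against them with `pytest --mpl`.
@pytest.mark.mpl_image_compare
def test_plot():
    fig, ax = plt.subplots()
    ax.plot([1, 2, 3], [1, 4, 9])
    return fig
```

The baseline images live as files alongside the code and have to be regenerated whenever an intended change alters the plots.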

YooSunYoung commented 2 weeks ago

> Sounds similar to something like https://github.com/matplotlib/pytest-mpl, which adds baseline tests to compare matplotlib plots?

But this one seems to save plots as a baseline and compare the new results with the existing files. What JL suggested is more like regression tests between branches so that we don't have to keep those baseline results as files I think...? And I would like to avoid keeping results as files if possible.

jl-wynen commented 2 weeks ago

> What JL suggested is more like regression tests between branches so that we don't have to keep those baseline results as files I think...?

Correct