Closed dstansby closed 4 weeks ago
Are we sure we want this? Every time there's a new change, this test will have to be updated.
See https://github.com/UCL-ARC/python-tooling/issues/329 for context and motivation.
Definitely appreciate that this adds a bit more work when changing anything in the template, but I think it's worth it to catch potential errors such as in https://github.com/UCL-ARC/python-tooling/pull/309, which slipped past both the PR author and the reviewer.
I've tried to make the loop here as simple as possible:
Consider me won over, I'd forgotten about #309
Sigh, putting this back as draft until I can fix the test
The tests are failing again @dstansby; main had been updated.
This adds a regression test for generated package data. The test folder now contains a data folder holding a reference copy of the generated package. At test time this is compared to a newly generated package, and any differences are shown after the failing test with a diff that looks like:
To update the test files with an intentional change: when the test fails, the test data is updated with the changes, which can then be easily committed (if desired).
Fixes https://github.com/UCL-ARC/python-tooling/issues/329
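The core of such a regression test is a recursive comparison of the reference tree against a freshly generated one. A minimal sketch of how that comparison could work, assuming plain-text files (the function name and structure here are illustrative, not the actual implementation in this PR):

```python
import difflib
from pathlib import Path


def compare_trees(reference: Path, generated: Path) -> list[str]:
    """Return human-readable differences between two directory trees.

    Reports files missing from or unexpected in the generated tree,
    plus a unified diff for any file whose contents differ.
    """
    diffs: list[str] = []
    ref_files = {p.relative_to(reference) for p in reference.rglob("*") if p.is_file()}
    gen_files = {p.relative_to(generated) for p in generated.rglob("*") if p.is_file()}

    for missing in sorted(str(p) for p in ref_files - gen_files):
        diffs.append(f"missing from generated output: {missing}")
    for extra in sorted(str(p) for p in gen_files - ref_files):
        diffs.append(f"unexpected in generated output: {extra}")

    # Diff the contents of files present in both trees
    for rel in sorted(ref_files & gen_files):
        ref_lines = (reference / rel).read_text().splitlines(keepends=True)
        gen_lines = (generated / rel).read_text().splitlines(keepends=True)
        diffs.extend(
            difflib.unified_diff(
                ref_lines,
                gen_lines,
                fromfile=f"reference/{rel}",
                tofile=f"generated/{rel}",
            )
        )
    return diffs
```

A test would then assert that `compare_trees(...)` returns an empty list, so the full diff appears in the failure output. The "update on failure" behaviour described above could be as simple as copying the generated tree over the reference data (e.g. with `shutil.copytree(..., dirs_exist_ok=True)`) when the comparison fails, ready to be committed.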