CABLE-LSM / benchcab

Tool for evaluation of CABLE land surface model
https://benchcab.readthedocs.io/en/latest/
Apache License 2.0

Add regression test to benchcab #32

Closed ccarouge closed 1 year ago

ccarouge commented 1 year ago

Regression testing is important to ensure we don't inadvertently break functionality. At the moment, all analyses are done in me.org. The analysis script performs a scientific evaluation of the control and development outputs. Although it will clearly show whether the outputs are identical, it is more complex and heavier to run than a simple regression test.

We should implement a bitwise comparison of the outputs after the tasks have run. This can be done easily with `cdo diff`.
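As a sketch of the comparison step, the snippet below shells out to `cdo diff` and looks for a differing-records summary line in its output. The exact summary format ("N of M records differ") is an assumption that should be checked against the installed CDO version, and the function names are illustrative, not part of benchcab:

```python
import re
import subprocess


def parse_diff_count(cdo_output: str) -> int:
    """Extract the number of differing records from cdo diff output.

    Assumes cdo prints a summary line like "3 of 12 records differ"
    when files are not bitwise identical, and no such line otherwise.
    """
    match = re.search(r"(\d+)\s+of\s+\d+\s+records differ", cdo_output)
    return int(match.group(1)) if match else 0


def outputs_differ(file_a: str, file_b: str) -> bool:
    """Return True if `cdo diff` reports any differing records."""
    result = subprocess.run(
        ["cdo", "diff", file_a, file_b],
        capture_output=True, text=True, check=False,
    )
    return parse_diff_count(result.stdout) > 0
```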

For optimisation, is it worth coding this so that the regression test for each experiment runs as soon as the outputs from both branches are written out? Or is it good enough to run the regression tests once all the outputs have been created? Since we can run the regression tests in parallel at the end, it probably makes little difference to wait until all the outputs are created.
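Running all comparisons at the end parallelises trivially. A minimal sketch, where `tasks` and `compare_task` are placeholders (any iterable of task identifiers and a callable returning True when the control and development outputs match); threads are used because the work is I/O-bound subprocess calls:

```python
from concurrent.futures import ThreadPoolExecutor


def run_regression_tests(tasks, compare_task, max_workers=4):
    """Run the bitwise comparison for every task in parallel.

    Returns the list of tasks whose outputs differ, in input order.
    `compare_task` is hypothetical: it should return True when the
    two branches produced identical output for that task.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = dict(zip(tasks, pool.map(compare_task, tasks)))
    return [task for task, ok in results.items() if not ok]
```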

SeanBryan51 commented 1 year ago

@ccarouge should we implement the bitwise comparison as a separate job? That is, have another subcommand, `benchcab fluxnet-run-regression-test`, which by default submits a PBS job that does only the regression test (or the regression test can be run on the login node with the `--no-submit` option). Alternatively, we could lump the comparison step into the same PBS job that runs CABLE.

Pros/cons to using a separate job for the comparison:

What do you think?
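For concreteness, the separate-job option could generate and submit a small PBS script. A sketch only: the PBS resource directives, project code, and helper name here are illustrative, and the subcommand/flag names are the ones proposed above rather than an existing interface:

```python
def make_regression_pbs_script(project: str, walltime: str = "00:30:00") -> str:
    """Build a minimal PBS job script that runs only the regression
    test step. All directives and values are placeholders for
    illustration, not benchcab's actual job configuration.
    """
    return "\n".join([
        "#!/bin/bash",
        f"#PBS -P {project}",
        f"#PBS -l walltime={walltime}",
        "#PBS -j oe",
        "",
        # Run the comparison in-process on the allocated node.
        "benchcab fluxnet-run-regression-test --no-submit",
        "",
    ])
```

The generated script could then be handed to `qsub`, mirroring how the run job is submitted.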