Closed: ccarouge closed this issue 1 year ago
@ccarouge should we implement the bitwise comparison as a separate job? That is, have another subcommand, `benchcab fluxnet-run-regression-test`, which by default submits a PBS job that does solely the regression test (or the regression test can be run on the login node with the `--no-submit` option). Alternatively, we could lump the comparison step into the same PBS job that runs CABLE.
Pros/cons of using a separate job for the comparison:

- The `qsub` command supports jobs waiting on other jobs through the `-W depend=...` option (see the NCI user guide or the man page for `qsub`).
- We can check the state of the CABLE run job with `qstat <job_id>`: if it is still queued or running, we pass the `-W depend=...` argument to `qsub`, else we run `qsub` as per usual.

What do you think?
Regression testing is important to ensure we don't inadvertently break functionality. For the moment, all analyses are done on me.org. The analysis script performs a scientific evaluation of the control and development outputs. Although it will clearly show when the outputs are identical, it is more complicated and heavier to use than a simple regression test.
We should implement a bitwise comparison of the outputs after running the tasks. It can be done easily with `cdo diff`.

For optimisation, is it worth coding this so that the regression test for each experiment runs as soon as the outputs from both branches are written out? Or is it good enough to run the regression tests once all the output is created? Since we can run the regression tests in parallel at the end, it probably makes little difference to run them all once all the outputs are created.
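As an illustration of what the bitwise check amounts to, here is a minimal sketch that compares two output files byte for byte. Note this is stricter than `cdo diff`, which compares the data values in the NetCDF files rather than the raw bytes; the function name is hypothetical:

```python
import hashlib
from pathlib import Path


def outputs_identical(control: Path, dev: Path) -> bool:
    """Return True if the two output files are bitwise identical.

    Hashes each file in chunks so large NetCDF outputs are not read
    into memory all at once.
    """
    def sha256(path: Path) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    return sha256(control) == sha256(dev)
```

A real implementation would more likely shell out to `cdo diff control.nc dev.nc` and check its output, so that differences confined to metadata (e.g. history attributes) do not cause spurious failures.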