This work is beyond the OpenCDSS Phase 2 scope but could be done in the future to streamline testing.
The testing framework for datasets is working well and provides features to quickly set up dataset tests, run them, and compare executable variants.
However, the number of combinations of datasets, executables, scenarios, and comparisons still requires time to "drive" the process via the interactive menu. It is possible to enhance the framework to automate the process, but this will require the following (a sketch follows the list):

* A configuration file or other input to indicate how to constrain the combinations.
* Additional logic in the test script to automate the process.
* Possibly some checks on disk space to ensure that the resulting tests fit on the computer.
* A way to summarize the results.
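As a rough illustration, the following is a minimal sketch of how a small configuration file could constrain the combinations and how added test-script logic might expand and check them before running. The file name `dataset-test-config.json`, its keys (`datasets`, `executables`, `scenarios`, `exclude`), and the per-test size estimate are hypothetical and are not part of the current framework.

```python
# Sketch only - file name, keys, and size estimate below are hypothetical.
import json
import shutil
from itertools import product
from pathlib import Path


def load_test_matrix(config_path: Path) -> list[dict]:
    """Read a hypothetical JSON configuration that constrains the combinations
    of datasets, executables, and scenarios, and expand it into test cases."""
    config = json.loads(config_path.read_text())
    cases = []
    for dataset, executable, scenario in product(
        config["datasets"], config["executables"], config["scenarios"]
    ):
        # Skip combinations that the configuration explicitly excludes.
        if [dataset, executable, scenario] in config.get("exclude", []):
            continue
        cases.append(
            {"dataset": dataset, "executable": executable, "scenario": scenario}
        )
    return cases


def check_disk_space(test_count: int, estimated_mb_per_test: int, path: str = ".") -> bool:
    """Rough check that the expanded test matrix fits on the computer."""
    free_mb = shutil.disk_usage(path).free // (1024 * 1024)
    return test_count * estimated_mb_per_test < free_mb


if __name__ == "__main__":
    cases = load_test_matrix(Path("dataset-test-config.json"))
    if not check_disk_space(len(cases), estimated_mb_per_test=500):
        raise SystemExit("Not enough disk space for the requested test combinations.")
    for case in cases:
        # Placeholder - this is where the automated runner would invoke
        # the dataset test for one dataset/executable/scenario combination.
        print(f"Would run: {case}")
```

A results summary could then be generated from the same expanded list of cases, for example by writing one status line per combination to a summary file.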
It would be possible to take an approach similar to that used for the examples, as follows:
1. Create a `datasets-suite` folder and create `create` and `run` folders similar to the examples.
2. Create a template TSTool command file, similar to the ones that are used to do the comparison and heatmap, with a name starting with `test-`. This command file would automate running a dataset test. A hypothetical folder layout is sketched below.
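For example, the suite might be laid out as follows. The dataset name, command file names, and file extension are placeholders only and would need to match the conventions already used by the examples.

```
datasets-suite/
  create/                          # Command files that set up dataset test variants (placeholder names).
    test-create-cm2015.tstool
  run/                             # Command files that run the tests and comparisons (placeholder names).
    test-run-cm2015-compare.tstool
    test-run-cm2015-heatmap.tstool
```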
This process would still need some additional up-front setup automation to handle creating new test variants and similar tasks, because the default of running all possible combinations would probably be overkill.