Closed by parnumeric 2 years ago
After a quick implementation and testing, it turned out that continuing to narrow down the search ranges is not desirable. The reason can be seen in the resulting plot `mean_correlations.png`: the ranges for some parameters are narrowed so strongly that there is no variation left in them, which shows up as vertical patterns in the plot (see, for example, `maths_ticks_sd`).
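The "no variation left" symptom can be checked numerically rather than by eye: a (near-)zero standard deviation across the sampled values of a parameter corresponds to a vertical pattern in the plot. A minimal sketch, assuming one list of sampled values per parameter (the column names here are made up for illustration):

```python
from statistics import pstdev

# Hypothetical sampled values per parameter; in practice these would be
# read from the LHS parameter CSVs produced by each iteration.
samples = {
    "maths_ticks_sd": [2.00, 2.00, 2.00],   # collapsed: no variation left
    "teacher_quality": [0.3, 0.7, 0.5],     # still varying
}

# Flag parameters whose sampled values no longer vary.
collapsed = [name for name, vals in samples.items() if pstdev(vals) < 1e-9]
print(collapsed)  # → ['maths_ticks_sd']
```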
The recommendation is instead to run the parameterisation tests for as many iterations as needed in a single run. Restarting/resuming those simulations from a previously saved parameter set means the search restarts within ranges of the initial sizes (as hardcoded in `automation.py`, see `CUSTOM_LIMITS`), only starting from the resulting values of the previous search rather than from random values (presumably, better than random ones).
`$PARAMETER_FILE` is a CSV file used as the starting point for parameterisation tests run from `$SCRIPT_DIR/reframe_parameterisation_infrastructure/reframe_tests` (as described in https://github.com/DurhamARC/classroom-abm/blob/master/hamilton/README.md#deploying-automated-parameterisation-tests-via-reframe). The parameterisation tests are executed by the `parameterisation.sh` script in that directory. Every iteration of tests yields a new set of LHS parameters, which is saved in `next_lhs_params_$START_DATETIME.csv` and used as the starting point for the next iteration. The issue is that when we resume the parameterisation tests from where the previous run stopped, the range searched for each parameter starts again at its initial size and is then narrowed down with every iteration. This search range (the lower and upper bounds) is determined in `hamilton/parameter_analysis/automation.py` by the iteration number, which always starts from 0 when the simulation is restarted. It would be useful to resume from the search range already narrowed down by the previous runs rather than from the initial range again.
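One way to get that behaviour would be to persist the iteration number alongside the saved parameter set, so a resumed run can pick up the narrowing where it left off. A minimal sketch, assuming a small JSON state file next to the CSVs; the file name and keys are made up and are not part of the existing scripts:

```python
import json
from pathlib import Path

# Hypothetical state file recording how far the narrowing had progressed.
STATE_FILE = Path("parameterisation_state.json")

def save_state(iteration: int, params_csv: str) -> None:
    """Record the last completed iteration and its output CSV."""
    STATE_FILE.write_text(
        json.dumps({"iteration": iteration, "params_csv": params_csv})
    )

def load_state() -> int:
    """Return the iteration to resume from (0 on a fresh start)."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())["iteration"]
    return 0

save_state(3, "next_lhs_params_20220101.csv")
print(load_state())  # resumes at iteration 3 instead of 0
```

The resumed run would then pass the loaded iteration number into the range calculation instead of starting from 0, so the bounds continue from their already-narrowed widths.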