For reproducibility, I added the ability to pass a seed into SuStaIn; this seed is then used throughout, so results are consistent for a given seed (and parameters).
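The pattern is roughly the following (a minimal sketch, not pySuStaIn's actual API — the function name and the use of `numpy.random.default_rng` are illustrative assumptions): a single generator is created from the seed and every stochastic step draws from it, so the whole run is a deterministic function of the seed.

```python
import numpy as np

def run_pipeline(n_points, seed=None):
    """Hypothetical stand-in for a SuStaIn run: all randomness
    flows from one generator created from the passed-in seed."""
    rng = np.random.default_rng(seed)
    # every stochastic step draws from the same generator
    data = rng.normal(size=n_points)
    order = rng.permutation(n_points)
    return data[order]

# Same seed (and parameters) -> identical output arrays.
a = run_pipeline(10, seed=42)
b = run_pipeline(10, seed=42)
assert np.array_equal(a, b)
```

This is what makes benchmark-based validation possible: a stored output only stays comparable across runs if the seed pins down every random draw.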
This then allows for a full functional test of SuStaIn. I've added two scripts to check the output from SuStaIn (in the "tests" subfolder): `create_validation.py` creates new validation benchmarks, and `validation.py` checks that results are consistent with these benchmarks.
A full test (`-f` command line flag for `validation.py`) uses every class that inherits from `AbstractSustain`. The relevant test functions are `@abstractmethod`s, so this should scale with future additions/subclasses. This also means that the simulator functions are now part of the class they pertain to (though I haven't removed the "sim" subfolder for now).
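The scaling property comes from the standard `abc` pattern: the base class declares the test/simulator hooks abstract, so every concrete subclass must implement them, and a runner can discover subclasses generically. The class and method names below are hypothetical stand-ins, not the real pySuStaIn classes:

```python
from abc import ABC, abstractmethod
import numpy as np

class AbstractSustainLike(ABC):
    """Hypothetical base class mirroring the pattern described above:
    each subclass must supply its own simulator hook, so a test
    runner can iterate over all subclasses without special-casing."""

    @abstractmethod
    def _generate_sim_data(self, rng):
        """Subclass-specific simulator (the kind of function that used
        to live in the 'sim' subfolder)."""

    def run_functional_test(self, rng):
        # shared driver: delegates to whatever the subclass provides
        return self._generate_sim_data(rng)

class ZscoreVariant(AbstractSustainLike):
    def _generate_sim_data(self, rng):
        return rng.normal(size=4)

# A "full test" can enumerate every concrete subclass automatically,
# so new variants are picked up with no changes to the test script:
for cls in AbstractSustainLike.__subclasses__():
    result = cls().run_functional_test(np.random.default_rng(0))
```

Because `_generate_sim_data` is abstract, forgetting to implement it in a future subclass raises a `TypeError` at instantiation rather than failing silently mid-test.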