eloisabentivegna opened this issue 1 year ago
I think I can directly implement an automated test as a flag in script_runmam.sh (e.g. "sh script_runmam.sh --test") that compares the best fit found by the code with the expected one. To account for the stochastic nature of the process, I can set a tolerance on the value of the likelihood (i.e. if the difference between the best-fit log-likelihoods from the Monte Carlo analysis is less than 1, the test is considered passed, and failed otherwise). I will develop the test script as soon as possible.
Dear @eloisabentivegna, I added a new part of the script "script_runmam.sh" with the option "-ts", which performs the basic smoke test described in the README. The "PASSED"/"FAIL" verdict is determined by comparing the best fits (the last two lines in "MaxLik.dat") with the expected values. The test is considered passed if the difference in the likelihood is less than 1. The test requires the additional script "test.py".
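For illustration, here is a minimal sketch of what such a check in "test.py" could look like. The layout assumed for "MaxLik.dat" (whitespace-separated columns with the log-likelihood in the last column, and the expected and newly computed best fits on the last two lines) is an assumption for this sketch and may differ from the actual file format:

```python
import sys

TOLERANCE = 1.0  # maximum allowed difference in the best-fit log-likelihoods


def read_loglike(line):
    # Assumed layout: whitespace-separated columns, log-likelihood last.
    return float(line.split()[-1])


def main(maxlik_path="MaxLik.dat"):
    with open(maxlik_path) as f:
        lines = [ln for ln in f if ln.strip()]
    # Assumed layout: the last two lines hold the expected and the
    # newly computed best fit, respectively.
    expected, found = (read_loglike(ln) for ln in lines[-2:])
    diff = abs(found - expected)
    if diff < TOLERANCE:
        print(f"PASSED (|delta logL| = {diff:.3f} < {TOLERANCE})")
        return 0
    print(f"FAIL (|delta logL| = {diff:.3f} >= {TOLERANCE})")
    return 1


if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:]))
```

Returning a nonzero exit code on failure would also let the "-ts" branch of "script_runmam.sh" propagate the result, e.g. to a CI system, rather than relying on the printed verdict alone.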
At the moment, the test provided with the code is not guaranteed to yield results identical to the data provided in test/, due to the MCMC component. This is understandable but not ideal, as a user won't in general know whether the differences are due to the stochastic nature of the process or to a problem with the installation. Would it be possible to design a deterministic test, so users can be sure they have a functioning copy of the code? Or at least to specify what tolerance the current test results should satisfy? Automating the verification process would also be beneficial, e.g. by providing a diff action in script/script_runmam.sh which checks that the results are correct and issues a "Passed"/"Failed" statement (as opposed to requiring the user to compare data files manually).
(Part of the JOSS review https://github.com/openjournals/joss-reviews/issues/4800)