In order to know how much of the model the various tests are covering, it will be helpful to implement code coverage reporting tools that let developers see how much of the code is exercised by the tests, and whether there are any sections where coverage is missing.
This should be relatively straightforward for any Python files/tests, but could get more complicated for other languages (e.g. Fortran). So for now this issue will cover just the Python scripts, with future issues hopefully being added for any other languages used in the model.
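As a rough illustration of what line-coverage collection looks like, here is a minimal sketch using Python's standard-library `trace` module. The `add` function is a hypothetical stand-in for model code; an actual setup for this issue would more likely use coverage.py or pytest-cov, which produce richer per-file reports of covered and missing lines.

```python
import trace

# Hypothetical stand-in for a function in the model's Python code.
def add(a, b):
    return a + b

# Count which lines execute, without printing a line-by-line trace.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(add, 2, 3)

# results.counts maps (filename, lineno) -> number of times that line ran,
# which is the raw data a coverage report is built from.
results = tracer.results()
```

In practice, a tool like coverage.py wraps this kind of data into per-file percentages and lists of missed lines, which is the report developers would check for gaps.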