`np.allclose()`. Although this can easily start to grow into something like unit tests, it can also serve as a basis for writing unit tests, so no harm done and no effort wasted. Those with question marks I'd regard as secondary, but this can always be expanded on as well, so not everything needs to be there from the word go.
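A minimal sketch of the kind of check meant here (the array values are just placeholders):

```python
import numpy as np

# Hypothetical stand-ins for a freshly computed result and a stored reference
sim_result = np.array([0.1, 0.2, 0.30000001])
reference = np.array([0.1, 0.2, 0.3])

# np.allclose() tolerates small floating-point differences between runs/platforms
if not np.allclose(sim_result, reference, rtol=1e-5, atol=1e-8):
    raise AssertionError("simulation output deviates from the reference data")
```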
Yes, not packaged, and eventually to be migrated to unit tests. Verifying with pickles can be problematic, as binary pickles are Python- and OS-specific, while text pickles are not so efficient. I would just store them as dictionaries in a Python file/module or as JSON. The nose unit tests cover the basic algorithms (steady-state and continuation), but not comparing integration algorithms, SBML translations, events, scans, etc. Even some "real world" tests with larger models could be put in here.
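For illustration only, a sketch of storing reference values as JSON and comparing against them (the file name and values are made up):

```python
import json
import numpy as np

# Hypothetical reference data; in practice generated once and committed to the repo
reference = {"model": "example_model", "steady_state": [0.5, 1.25, 3.0]}

with open("reference.json", "w") as f:      # text-based, portable format
    json.dump(reference, f, indent=2)

with open("reference.json") as f:           # later, in a test script
    ref = json.load(f)

fresh = np.array([0.5, 1.25, 3.0000001])    # stand-in for a new simulation result
assert np.allclose(fresh, ref["steady_state"])
```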
On a separate note, I would prefer if we try to use Python scripts here and keep notebooks in a new subfolder (of devtest) ... as they don't lend themselves to automated testing :-)
I agree about using pure Python scripts for the test.
As regards the pickles, my experience is different if you are restricting yourself to NumPy arrays. Then they are cross-platform compatible, either using `np.save()` and `np.load()`, or also with `pickle.dump()` and `pickle.load()`, as long as you dump to and load from files opened in binary mode. Tested between Python versions (3.8 and 3.9) and on all 3 platforms. So I think it is quite feasible to add simulation results as long as we only add the actual NumPy array with the numerical data. I would suggest going for `np.save()` and `np.load()`.

(In fact, in my latest papers https://doi.org/10.1093/insilicoplants/diab013 and https://doi.org/10.1093/insilicoplants/diab014 I have submitted `*.npy` files as supplementary data.)
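A small sketch of the two approaches (file names are arbitrary):

```python
import pickle
import numpy as np

data = np.linspace(0.0, 10.0, 101)      # stand-in for a simulation result

# Option 1: NumPy's .npy binary format, portable across OSs and Python versions
np.save("result.npy", data)
assert np.allclose(data, np.load("result.npy"))

# Option 2: pickle, provided the files are opened in binary mode ('wb'/'rb')
with open("result.pkl", "wb") as f:
    pickle.dump(data, f)
with open("result.pkl", "rb") as f:
    assert np.allclose(data, pickle.load(f))
```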
Good to know it works with NumPy arrays. I was pickling complex custom classes/instances, so that is a slightly different use case. Great papers 🥇
We have the directories in place and can add examples as we go.
Were you thinking of adding these examples for the 1.0 release already (i.e. internal testing now) or only later? I.e. is this still a milestone for 1.0? IOW should I be working on adding testing examples or is there more urgent stuff to do prior to the release?
The issue was to create a framework for this to use for future releases, and that is done ... no further action is needed for this release, although all tests are always welcome.
I have added additional tests in devtest. There is also a main file `run_tests.py` which can just be called; if it completes without error, then all the tests pass. The tests check the simulated data against a saved set of reference data which is stored in pickles. Reference images are also stored in `devtest/out`; these end in `-ref.png`. They have been added manually with `git add -f` since the directory is gitignored. The other files in `out` will be overwritten by the tests, but the reference ones are not written out by the tests, so they are persistent. This makes it quite handy to open an image viewer in the directory and scroll between the images (the fresh one and the ref one) to see if they align nicely.
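The comparison pattern described above is roughly the following (function and file names here are placeholders, not the actual contents of `run_tests.py`):

```python
import pickle
import numpy as np

def check_against_reference(sim_data, ref_pickle):
    """Compare freshly simulated data against a stored reference pickle."""
    with open(ref_pickle, "rb") as f:     # binary mode, as discussed above
        ref_data = pickle.load(f)
    if not np.allclose(sim_data, ref_data):
        raise AssertionError(f"results deviate from reference in {ref_pickle}")

# e.g. check_against_reference(fresh_result, "devtest/some_model-ref.pkl")
```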
Tested on Linux (Py 3.9) and Windows (Py 3.9). Test data generated on Linux. The pickles load perfectly under Windows and the tests pass.
@bgoli I would appreciate it if you could run the tests on your side. I've covered what I think are the most important areas, but feel free to add additional ones :wink:
We don't have time to write new unit tests but for now it would be useful to test various parts of PySCeS functionality to see if they run on different OS's and Python versions.
What about creating a new directory tree `/devtest`, `/devtest/models` and `/devtest/out`, where we have scripts in `devtest` that:

- load the test models (from `/devtest/models`)
- write their output to a per-model directory (`devtest/out/modelname`)

Then we have some quick scripts to run diverse functionality that can be quickly checked by eyeball; see the sketch below.
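A rough sketch of what one such quick script could look like, assuming the standard PySCeS calls `pysces.model()` and `doSimPlot()` (the model name and directory handling are illustrative only):

```python
import os
import pysces

MODEL = "modelname"                                  # placeholder model name
OUTDIR = os.path.join("devtest", "out", MODEL)
os.makedirs(OUTDIR, exist_ok=True)                   # per-model output directory

# Load the test model from devtest/models and run a quick time course
mod = pysces.model(MODEL, dir=os.path.join("devtest", "models"))
mod.doSimPlot(end=10.0, points=100)                  # eyeball check of the simulation

# ... further output for this model would then be written under OUTDIR
```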