Submitted by ceball. Date: 2012-04-02 23:54 GMT
(Oops, double posted. Have deleted https://sourceforge.net/tracker/?func=detail&aid=3513638&group_id=53602&atid=470932. Adapted from there...)
We have our own, custom test framework.
For instance, topo/tests/__init__.py has various functions for finding, importing, and running unit tests. Or topo/tests/test_script.py has code for functional tests. There's also topo/tests/test_map_measurement.py. And we have doctests...but not in docstrings. There are probably other things, too! Some of this is documented in topo/tests/README.txt, and some in topo/tests/__init__.py - but it's very difficult just to see what we have!
After the hurdle of figuring out what we have, it turns out that adding new unit tests must be done in such a particular, complex way that nobody does it. On the other hand, adding new functional tests is so simple that anyone can do it, and therefore everyone does it differently (or the same person has done it differently every time...).
There are now several mature testing frameworks for Python (e.g. nose, as mentioned in the initial request). For instance, numpy (which also has complex testing requirements) has produced its own test-running code, and that code integrates with nose.
We need to investigate the options to see whether we can replace our custom framework with a standard one; that would make it easier to add new tests and to understand existing ones, and should address many of the current framework's deficiencies.
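To make the comparison concrete, here is a minimal sketch of what a unit test could look like under nose; the file name and test body are hypothetical, but nose does discover plain functions like this by its default test_* naming convention, with no registration code:

```python
# Hypothetical file: topo/tests/test_example.py
# nose finds this automatically: the module and function names
# match its default test_* pattern, so no registration is needed.

import numpy as np
from numpy.testing import assert_array_almost_equal

def test_linspace_endpoints():
    # A plain function containing assertions is a complete nose test.
    a = np.linspace(0, 1, 5)
    assert_array_almost_equal(a, [0.0, 0.25, 0.5, 0.75, 1.0])
```

Running `nosetests topo/tests` would then collect and run every such test, reporting each pass or failure separately rather than stopping at the first exception.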
Converted from SourceForge issue 3513268, submitted by jbednar. Date: 2012-03-30 14:45 GMT
Our testing support is very patchy, and it would be great to replace it with something more comprehensive, probably using an external package such as nose. Some problems with the current setup:
- often an assertion won't indicate which particular item failed the test (e.g. which array did not match), and even when it does, the actual mismatching values of the two arrays are not always shown (see the array-comparison sketch after this list).
- usually the first test to fail in a given file raises an exception that aborts the whole run, so when there are problems we can't see how many there are, or whether several related failures might point to a common underlying cause.
- we presumably ought to be using doctests more consistently, both for simple unit tests of code and for testing examples in the manuals and tutorials (see the doctest sketch after this list).
- having the tests be completely separate from the code adds an activation barrier that discourages anyone from investigating how to add a test.
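On the first point, numpy's own testing helpers already produce much more informative failures than a bare assert; a minimal sketch (the arrays here are made up):

```python
import numpy as np
from numpy.testing import assert_array_equal

a = np.array([1, 2, 3])
b = np.array([1, 2, 4])

# A bare "assert (a == b).all()" only reports that the assertion failed.
# assert_array_equal instead raises an AssertionError that states the
# mismatch percentage and prints both arrays, so you can see exactly
# which values differ.
assert_array_equal(a, b)
```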
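And on the doctest point, a docstring example doubles as a test with no extra framework at all; a sketch using a hypothetical helper function:

```python
import numpy as np

def scale(arr, factor):
    """Multiply an array elementwise by a scalar.

    >>> scale(np.array([1, 2, 3]), 2)
    array([2, 4, 6])
    """
    return arr * factor

if __name__ == "__main__":
    # Run the examples in this module's docstrings as tests.
    import doctest
    doctest.testmod()
```

nose can collect such doctests as well (via its doctest plugin), so the same runner could cover unit tests, functional tests, and documentation examples.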
Time for a big project on this.