Closed: brianthelion closed this issue 11 years ago.
Excellent suggestions, Brian. I would be delighted for mindboggle to have a comprehensive testing framework! I will defer to Satra on how best to implement this. Satra: is there a particularly good model we should follow within nipy?
@brianthelion - use nose, and you will also want the helpers in numpy.testing - they do a bunch of things to improve testing for numerical code.
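A minimal sketch of what such a test might look like, assuming nose-style discovery plus the assertion helpers from numpy.testing; `normalize` here is a toy stand-in, not an actual mindboggle function.

```python
# Hypothetical nose-style test using numpy.testing helpers, which give
# clearer failure messages for numerical code than a bare assert.
import numpy as np
from numpy.testing import assert_allclose

def normalize(v):
    # Toy function standing in for a mindboggle utility.
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def test_normalize_unit_length():
    # nose discovers any function named test_*; no class or registration needed.
    result = normalize([3.0, 4.0])
    assert_allclose(np.linalg.norm(result), 1.0, rtol=1e-12)
    assert_allclose(result, [0.6, 0.8])
```

Tests written this way run under `nosetests` with no extra configuration.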
So is there a consensus about how we want to package the tests and "devtest" data?
I will defer to Satra.
For the unit tests, the data should be included with mindboggle and should be lightweight - it can't be an entire FreeSurfer brain, for example. The goal there is not to test all edge cases but to ensure code coverage.
For regression tests, take a look at:
https://github.com/neurodebian/testkraut
@hanke has been trying to figure out a standard for regression testing, where the test mechanism can take care of downloading data, checking for installed packages, and figuring out dependencies, without creating yet another testing framework. Michael: is there an example we can use where the data are downloaded from the intertubes - I know you created one?
So for the time being, instead of creating a data package, let's put the data somewhere accessible on the web.
The idea for testkraut is to support a variety of distribution scenarios. If the input data is fixed (gold-standard stuff), a hash can be specified and testkraut will try to locate the file on the local machine (should be quick), visit a URL (if given) to download the file, or ask any configured "hashpot" (a web service that delivers a file when queried with a hash). In any case, a located/downloaded file will be cached to speed up subsequent test runs. As a result, it should not matter much how the data is packaged up.
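The retrieval behavior described above can be sketched roughly as follows. This is an illustrative approximation of the idea (cache keyed by hash, fall back to a URL, verify before caching), not the actual testkraut API; the function name and cache layout are invented for this sketch.

```python
# Sketch of hash-keyed test-data retrieval: check a local cache first,
# otherwise download from a URL and verify the hash before caching.
import hashlib
import os
import urllib.request

def fetch_test_data(sha1, url=None, cache_dir=".testdata_cache"):
    os.makedirs(cache_dir, exist_ok=True)
    cached = os.path.join(cache_dir, sha1)
    # Fast path: a previously cached copy, stored under its own hash.
    if os.path.exists(cached):
        return cached
    if url is None:
        raise LookupError("no cached copy and no URL given for %s" % sha1)
    data = urllib.request.urlopen(url).read()
    if hashlib.sha1(data).hexdigest() != sha1:
        raise ValueError("downloaded file does not match expected hash")
    with open(cached, "wb") as f:
        f.write(data)
    return cached
```

Because files are addressed by content hash, it makes no difference whether the data originally came from a package, a URL, or a hashpot.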
I also want to +1 the idea of adding lightweight data for maximum unit-test coverage directly to the mindboggle sources -- this is what we need for distribution packaging -- and pretty much any user would want this to quickly check that all is running as expected.
Closing this issue for now with the current plan:
All,
I am now at a point in my efforts where I need to build unit tests. It would seem, though, that the mindboggle package does not implement a comprehensive testing framework. In issue #7 there is some peripheral discussion of this topic, but I thought we should consolidate it in a new thread. How would we like to proceed w.r.t. testing?
Questions:
Q1) Which testing framework should we use?
Q2) How should we organize the test code?
Q3) How do we want to store test data?
(Please add others...)
My answers:
A1) I am partial to nose.
A2) See A3.
A3) Here I want to make a clear distinction between test data for regression and unit testing, and test data for the edification of mindboggle end-users. For the moment I'll refer to these as "devtest data" and "dist" (as in "distributed") data, respectively. I completely agree with @satra that dist data should be delivered in its own packages and should use the sys.prefix/share standard. However, devtest data is structured much differently than dist data. For example, devtest data for unit tests will be stored on a per-unit basis. This makes it mostly worthless to our naive end-users, and therefore it probably doesn't need to be packaged and delivered to them. My impulse is to either (a) have a separate repo for tests and devtest data or (b) put the tests in the mindboggle repo while having the testing framework download the devtest data from an FTP server.
Cheers!