ContextLab / hypertools

A Python toolbox for gaining geometric insights into high-dimensional data
http://hypertools.readthedocs.io/en/latest/
MIT License

add unit tests #18

Closed andrewheusser closed 7 years ago

jeremymanning commented 7 years ago

the plot creation functions are difficult to test-- we can check whether they crash, and/or have a person verify that a plot was created and looks reasonable, but it's hard to fully automate that kind of check.

ideas for tests:

any others?
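
even a bare-bones smoke test would catch crashes automatically-- here's a rough pytest sketch (the `show` keyword and exact call signature are guesses about our API, not verified):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the test can run on a CI machine

import numpy as np
import hypertools as hyp

def test_plot_does_not_crash():
    # 20 observations in 4 dimensions; just assert that plotting completes
    data = np.random.rand(20, 4)
    hyp.plot(data, show=False)  # 'show' kwarg is an assumption about the API
```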

andrewheusser commented 7 years ago

this kind of testing would be good for our own purposes, but would we test this by hand, or automate it?

I think it would also be useful to verify that we're getting the right data into and out of our critical functions. For example, plot returns a fig handle, which contains the data, so we could have an automated test that inputs a known dataset and expects a known dataset in return. That way, as we change the guts/organization of the code, we can be more confident that bugs aren't being introduced. In the same way, we could test reduce and align, and any other critically important functions. I believe Travis CI can run these tests automatically on each push to master and tell us whether they all passed. A setup like this would be useful as we start getting pull requests from outside the lab
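
roughly like this, assuming plot hands back a matplotlib figure as described (keep in mind plot may normalize or reduce the input by default, so the expected values would need to account for that; the `ndims`/`show` kwargs are guesses):

```python
import numpy as np
import hypertools as hyp

def test_plot_round_trip():
    # a known 2D dataset, so the plotted line's xy-data should match it
    data = np.arange(20, dtype=float).reshape(10, 2)
    fig = hyp.plot(data, ndims=2, show=False)  # kwargs are assumptions
    line = fig.gca().get_lines()[0]            # first plotted Line2D object
    np.testing.assert_allclose(line.get_xydata(), data)
```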

jeremymanning commented 7 years ago

if we can automate the tests (e.g. using known data) then let's do that. i agree that we want something like this in place before accepting outside contributions. i started to make a list above that tries to encapsulate the range of functionality of the toolbox, but i'm probably missing stuff and what i've described isn't automated.

a "quick and dirty" way of testing would be to generate a sequence of plots that we'd check by hand with each update. that'd be easy to implement, but harder to maintain (e.g. if we get many pull requests...although we probably still want to carefully manage what functionality we accept into the toolbox)

a more stable solution would be to set up a fully automated approach. we'd just need to verify periodically that failure modes we hadn't thought of (ones that would be obvious from looking at a plot) aren't cropping up. e.g. if the lines we're plotting became invisible or hidden behind some other graphics object, we'd get bad plots without necessarily triggering an error of the sort you're describing...
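
one cheap guard against that particular failure mode, assuming we can get at the figure handle, would be a matplotlib-level check like this sketch:

```python
def assert_lines_visible(fig):
    """fail if any plotted line is hidden or fully transparent."""
    for ax in fig.get_axes():
        for line in ax.get_lines():
            assert line.get_visible()
            # matplotlib uses alpha=None to mean fully opaque
            alpha = line.get_alpha()
            assert alpha is None or alpha > 0
```

it wouldn't catch a line occluded by some other graphics object, but it costs almost nothing to run alongside the data tests.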

andrewheusser commented 7 years ago

here's an example of how seaborn does their testing with 'nose': https://github.com/mwaskom/seaborn/blob/master/seaborn/tests/test_algorithms.py

andrewheusser commented 7 years ago

^ i think if we all spent a full day together writing unit tests, we could knock this out very quickly

jeremymanning commented 7 years ago

It'd be great to have these tests run automatically with each commit, or before opening a pull request

ljchang commented 7 years ago

I think most people are switching to py.test these days. Hook it up to Travis CI and Coveralls once you get started.

-luke

andrewheusser commented 7 years ago

thanks for the info, Luke! will look into py.test

andrewheusser commented 7 years ago

implemented tests in #47. I'll look into Travis CI and Coveralls now :)
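
for reference, my rough plan for the Travis config is something like this (untested sketch; the package names and the Coveralls hookup are just my reading of their docs):

```yaml
language: python
python:
  - "2.7"
  - "3.5"
install:
  - pip install -r requirements.txt
  - pip install pytest pytest-cov coveralls
script:
  - py.test --cov=hypertools
after_success:
  - coveralls
```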

ljchang commented 7 years ago

Awesome! Not sure if you're doing documentation too, but Sphinx is what most people use. We're using integration with Read the Docs to host our documentation, and it rebuilds after every commit. There are a couple of gotchas; the biggest thing I just discovered is their new anaconda option, which makes adding C-based libraries way easier. I also added a bunch of tutorials this week using sphinx-gallery, which is pretty cool. I think it was built by the scikit-learn/nilearn folks for their documentation. You basically write reStructuredText and it auto-runs your code and renders it as HTML and a Jupyter notebook. Feel free to check out how we set ours up: https://github.com/ljchang/nltools. You can also check out our yaml files if you want to see how we're integrating with Travis and RTD.
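
the sphinx-gallery piece is mostly just a stanza in your docs/conf.py, something like this (directory names here are placeholders, pick whatever fits your layout):

```python
# in docs/conf.py -- directory names below are placeholders
extensions = [
    'sphinx.ext.autodoc',
    'sphinx_gallery.gen_gallery',
]

sphinx_gallery_conf = {
    'examples_dirs': '../examples',   # where the example .py scripts live
    'gallery_dirs': 'auto_examples',  # where generated pages are written
}
```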

andrewheusser commented 7 years ago

Great, thank you. Re: docs, I'm not sure... I've documented the API pretty extensively in the README. @jeremymanning, what do you think?

ljchang commented 7 years ago

oh yeah, the README.md looks great!

andrewheusser commented 7 years ago

closing this issue, and creating a new issue for Travis CI integration