Closed by karimbahgat 2 years ago
We'd probably want to rely on pytest as a "backend" for the test runner/reporter. I won't be able to find time for this in the near future, though; maybe later at some point.
I'd be happy to help with this. I'm no guru but I think I can get us started down the right path.
Please do @jmoujaes, that would be very helpful! Not sure how one usually writes tests, but going through the features as described in the README step by step might be a good place to start, maybe reusing some of the existing examples and adding variations for the more difficult cases. I'd prefer sticking to the builtin unittest module if possible, but otherwise I'm fine with an optional requirement just for the testing part.
Awesome, thanks for the thumbs up @megies @karimbahgat !
I'm happy with sticking to the unittest module until there's a need to rely on a third-party library.
We will probably want a code coverage tool like coverage.py to measure how much and which code is tested/untested.
Tox may also be useful for running our tests on all the different versions of Python we support. That way we know the library works on all of them every time we build.
> I'm happy with sticking to the unittest module until there's a need to rely on a third-party library.
unittest has some overhead when writing tests, you should definitely have a look at https://docs.python-guide.org/writing/tests/#py-test as well. At obspy/obspy we're using unittest but if we'd start from scratch nowadays we would go with pytest.
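To make the overhead difference concrete, here is a minimal sketch of the same test written both ways; `read_bbox` is a hypothetical stand-in for whatever pyshp function is under test, not an actual pyshp API:

```python
import unittest

def read_bbox():
    # Hypothetical function under test (placeholder, not pyshp's API).
    return [0.0, 0.0, 10.0, 10.0]

# unittest style: a TestCase subclass and camelCase assert methods.
class TestBBox(unittest.TestCase):
    def test_bbox_has_four_values(self):
        self.assertEqual(len(read_bbox()), 4)

# pytest style: a plain function and a bare assert; pytest discovers
# any function named test_* and rewrites the assert for rich reporting.
def test_bbox_has_four_values():
    assert len(read_bbox()) == 4
```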
> We will probably want a code coverage tool like coverage.py to measure how much and which code is tested/untested.
Yep. coverage done in CI like Travis together with codecov is a solid choice, IMHO.
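As a rough sketch, running coverage.py locally looks something like this (assuming `coverage` is pip-installed and the test suite is discoverable by pytest); a CI service would then upload the resulting data to codecov:

```console
$ pip install coverage
$ coverage run -m pytest
$ coverage report -m
```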
> Tox may also be useful for running our tests on all the different versions of Python we support. That way we know the library works on all of them every time we build.
Never used tox so far, but I've heard about it. For CI it's probably easier to use multiple builds with specific Python setups, though?
@megies
> unittest has some overhead when writing tests, you should definitely have a look at https://docs.python-guide.org/writing/tests/#py-test as well. At obspy/obspy we're using unittest but if we'd start from scratch nowadays we would go with pytest.
I took a look at the documentation. It looks helpful! I like the idea of test fixtures, and the builtin one for temporary directories. Curious about mocking, I assume that's done using the built-in monkeypatching test fixture? The documentation seems a little light. Have you had any experience with this compared to unittest's mock module?
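For illustration, a sketch of how pytest's built-in `tmp_path` and `monkeypatch` fixtures are used (assumes pytest runs this file; `save_stub` is an invented placeholder, not a pyshp function):

```python
import os

def save_stub(path):
    # Hypothetical function under test; writes a placeholder file.
    with open(path, "w") as f:
        f.write("stub")

def test_save_creates_file(tmp_path):
    # tmp_path is pytest's built-in fixture: a fresh pathlib.Path
    # temporary directory, cleaned up automatically after the test.
    target = tmp_path / "out.shp"
    save_stub(target)
    assert target.exists()

def test_reads_env_flag(monkeypatch):
    # monkeypatch is pytest's built-in fixture for safe patching;
    # every change is undone when the test finishes.
    monkeypatch.setenv("PYSHP_DEBUG", "1")
    assert os.environ["PYSHP_DEBUG"] == "1"
```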
> Never used tox so far but heard about it. For CI it's probably easier to use multiple builds with specific Python setups though?
I haven't configured tox before either. I think the benefit is that it lets you run the build matrix locally, whereas if you only specify it in the Travis config, you have to manually test all the variations locally.
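For reference, a minimal tox.ini for that kind of local matrix might look like this (the Python versions listed are just an illustration, not pyshp's actual support matrix):

```ini
[tox]
envlist = py27, py36, py37

[testenv]
deps = pytest
commands = pytest
```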
> Curious about mocking, I assume that's done using the built-in monkeypatching test fixture? The documentation seems a little light. Have you had any experience with this compared to unittest's mock module?
Nope, only used the builtin mock so far. But I think most of the testing in here should be pretty straightforward. Not sure if there's even much need for mocking?
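For completeness, the stdlib `unittest.mock` pattern looks roughly like this; `fetch_url` and its `opener` parameter are invented for illustration and are not part of pyshp:

```python
from unittest import mock

def fetch_url(url, opener):
    # Hypothetical helper that would normally hit the network;
    # the opener is injected so a test can substitute a mock.
    return opener(url).read()

def test_fetch_uses_opener():
    fake = mock.Mock()
    fake.return_value.read.return_value = b"shp-bytes"
    # The mock stands in for something like urllib's urlopen,
    # so no real network access is needed.
    assert fetch_url("http://example.com/x.shp", fake) == b"shp-bytes"
    fake.assert_called_once_with("http://example.com/x.shp")
```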
> it allows you to run the build matrix locally
Yeah, that was my impression as well: it's mostly for testing different setups locally. I just rely on CI most of the time, though; I check one Python setup locally and then leave it to CI to see if it's good for all setups.
Regarding Tox, yes we're already testing multiple versions in Travis CI. The only benefit would be testing it locally, I guess, which is not very critical in my opinion.
My thinking is that it can speed up local development. However, it's not critical, like you're saying. I've started on the tests. Would you prefer I make occasional pull requests, or leave it all for one pull request?
> Would you prefer I make occasional pull requests, or leave it all for one pull request?
Whatever you prefer, I would say; in any case it's a good idea to open a PR right away while you're working on it, so other people can see how you approach it.
Sounds good!
Thanks everyone who contributed to this! As a result of your help setting up the testing framework, pyshp now has a solid and expanding set of unittests based on pytest. These tests are now safely and seamlessly run at every commit through github actions, which makes development a whole lot easier, and has made me feel a whole lot more confident in the stability and reliability of pyshp!
Awesome, good to hear! :heart:
I think the issue of missing unit tests has been brought up a few times, and it would be a good addition at this point, alongside the existing doctests. Going through past and recent issues, it seems some of the rarer use-cases like m values, z values, and bbox special cases have contained bugs that did not get detected until recently. Recent commits have also resulted in some minor breaks that could have been avoided.
So if there are any unit test gurus out there willing to get it started, that would be a welcome addition! Perhaps @megies?