**Closed** — reelmatt closed this 4 years ago
Looks great! Make sure you omit the tests themselves in `.coveragerc` (unless this is done by default?), otherwise the results count the huge number of test lines in both the numerator and the denominator.

I've had a lot of success with `pytest` as a test runner over the `unittest` one. You don't have to change any of the tests: just add `pytest` and `pytest-cov` as dependencies, then configure a `pytest.ini` file to make sure the tests are detected correctly. I think we'd also remove the "run if `__main__`" clauses from the tests. Later today I'll track down an example of a configuration that works, if we're interested in using pytest. (The docs are pretty good too.)
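As a rough sketch of the kind of configuration I mean (the `tests` directory and `pyworkflow` package names here are assumptions; adjust to the actual repo layout):

```ini
; pytest.ini — tell pytest where the tests live and enable coverage
; (testpaths/package names are assumptions)
[pytest]
testpaths = tests
addopts = --cov=pyworkflow --cov-report=term-missing
```

```ini
; .coveragerc — omit the test files themselves so they don't inflate the totals
[run]
omit = tests/*
```

With `pytest-cov` installed, running plain `pytest` then reports coverage for the package only.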
Regarding test data files, I think the easiest way (without having to configure `setup.py` to package static files) is to abstract all the node JSON objects into a `.py` file that has an importable constant:
```python
TEST_DATA = {
    "good_node": {
        "node_id": ...,
        "node_key": ...
    },
    ...
}
```
then import it in all the tests. That removes the overhead of the setup methods.
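A minimal sketch of what consuming that constant could look like (the module name `test_data` and the concrete field values are assumptions for illustration):

```python
# test_data.py — shared test fixtures as an importable constant
# (field values here are hypothetical placeholders)
TEST_DATA = {
    "good_node": {"node_id": "node-1", "node_key": "io"},
}

# In a test file you would write `from test_data import TEST_DATA`
# and use the dict directly, with no per-test setup method:
def test_good_node_has_required_fields():
    node = TEST_DATA["good_node"]
    assert "node_id" in node
    assert "node_key" in node
```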
A few tweaks to the README to add status badges and a placeholder image of Pyworkflow.
Added ~20% coverage with new unit tests for the Pyworkflow package. The tests are now broken up between a few test files and organized slightly better. There's still a lot of duplicate code for setup/node config that I'm not sure how best to handle. If anyone has ideas/best practices on how to better set up a test rig and temp data files, please let me know!
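One common way to cut that kind of duplication, if we do adopt pytest, is a shared fixture file. This is only a sketch: the node fields and helper names are assumptions, not the actual Pyworkflow API. Fixtures in `conftest.py` are discovered automatically, and pytest's built-in `tmp_path` fixture handles temp data files and their cleanup:

```python
# conftest.py — shared fixtures picked up automatically by pytest
# (node fields and file name are hypothetical, not from the Pyworkflow API)
import json

import pytest


def make_node_config():
    # Single place to define the common node setup used across test files
    return {"node_id": "node-1", "node_key": "io"}


@pytest.fixture
def node_config():
    return make_node_config()


@pytest.fixture
def node_file(tmp_path, node_config):
    # Write the config to a per-test temp file; pytest removes it afterwards
    path = tmp_path / "node.json"
    path.write_text(json.dumps(node_config))
    return path
```

Any test that declares `node_config` or `node_file` as a parameter gets a fresh copy, so the duplicated setup methods (and manual temp-file handling) can go away.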