Open DentonGentry opened 4 years ago
Hi, Denton- is this something I could help with?
Sure!
If you are comfortable working with Selenium (or a similar testing toolset), I think the biggest area which could use better test coverage is the Jupyter notebook UI. https://github.com/ProjectDrawdown/solutions/issues/5 tracks that specifically.
If you'd prefer to focus on the non-UI portion of the codebase, I'd suggest looking in tools/*, as the coverage there is lower than in much of the rest.
One specific example: tools/world_data_xls_extract.py is a command-line utility we use to pull data out of the project's Excel files for use in Python. It has a unit test which imports world_data_xls_extract.py as a module, which means that the `__main__` routine is not being tested—and we've broken tools like this accidentally, so a test to catch that would be good.
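To illustrate why import-only tests miss the `__main__` path, here is a minimal sketch of the usual structure of such a command-line tool (the function names here are hypothetical, not the actual contents of world_data_xls_extract.py):

```python
# Hypothetical sketch of a command-line tool like world_data_xls_extract.py.
import argparse

def extract(filename):
    """Core logic: this is what unit tests exercise when they import the module."""
    return f"extracted {filename}"

def main():
    # Argument parsing and I/O live here.
    parser = argparse.ArgumentParser()
    parser.add_argument("--excelfile", default="data.xlsx")
    args = parser.parse_args([])  # empty argv just for this sketch
    return extract(args.excelfile)

if __name__ == "__main__":
    # This block never runs under `import`, so a test suite that only
    # imports the module leaves main() and argument handling uncovered.
    main()
```

A test that merely does `import world_data_xls_extract` will cover `extract()` if it calls it, but the `if __name__ == "__main__":` block and everything reached only through `main()` stay unexecuted.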
There are other places which do this in the tree and could be used as an example, like tools/tests/test_solution_xls_extract_land.sh which runs tools/solution_xls_extract.py and checks various aspects of its output.
tools/tests/test_solution_xls_extract_land.sh is run from tools/tests/test_solution_xls_extract.py. It is done this way so that pytest will see the execution of test_solution_xls_extract_land.sh and include it in its coverage report.
So a similar shell script could be constructed for tools/world_data_xls_extract.py, and run by adding another test case in tools/tests/test_world_data_xls_extract.py.
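The pattern described above—running the tool as a real command line from a pytest test so its `__main__` path executes—can be sketched roughly like this. The stand-in script written to a temp file here is hypothetical; in the real test you would invoke tools/world_data_xls_extract.py (or a wrapper shell script) instead:

```python
# Sketch: invoke a command-line tool as a subprocess from a pytest test,
# so the __main__ block actually runs. The tiny script generated below is
# a stand-in for tools/world_data_xls_extract.py.
import subprocess
import sys
import tempfile
import textwrap

def run_tool_as_subprocess():
    # A stand-in command-line tool with the usual __main__ guard.
    source = textwrap.dedent("""
        def main():
            print("extraction ok")

        if __name__ == "__main__":
            main()
    """)
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    # Running the file as its own process exercises the __main__ block,
    # which a plain `import` in a unit test never would.
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True)
    return result.returncode, result.stdout.strip()
```

Note that for the subprocess's execution to appear in the coverage report, coverage measurement has to be configured to follow subprocesses (e.g. coverage.py's subprocess support); the existing test_solution_xls_extract_land.sh arrangement presumably already handles this.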
Just an aside, and only my 2 cents, but I would highly recommend using Cypress over Selenium for any browser-based testing that takes place :)
Following up a few months later:
Test coverage is a key deliverable. We're aiming for a future where the research team works directly with the Python models and no longer updates the Excel models. The researchers will not be software engineers, they will be experts in their various fields with sufficient knowledge of Python to be able to work on it. Automated tests provide confidence that unintended breakage is not being introduced.
We track code coverage in codecov.io, currently at 93%. The UI code coverage is somewhat lower, backend code coverage is somewhat higher. UI code coverage is best tackled using a UI testing framework (see issue #5), but some additional tests for non-UI code would also be welcome.
Note that 100% coverage is not something we're aiming for, but getting incrementally closer to 100% with reasonable amounts of effort is a great goal.