Closed: malmans2 closed this issue 3 years ago
Looks like there are a few modules that have lower test coverage: https://codecov.io/gh/MillionConcepts/lhorizon/tree/main/lhorizon. Coverage % is not necessarily the best metric, though... Do you think the test suite is already complete and robust, or is there scope for improvement?
In particular, the `handlers` module is not tested at all. Is there a particular reason for that?
Probably laziness. The members of `handlers` are basically glue functions. They are easy to write white-box unit tests for, but white-box unit tests tell you very little about them; testing them in context requires a bunch of real or fake API calls, which is not necessarily difficult but is at least slow. That said, I can fairly straightforwardly turn the Mars solar angle example notebook into an end-to-end test that will hit most of this module, which is better than nothing.
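One way the notebook-to-end-to-end-test idea could look is sketched below. This is a hypothetical, stdlib-only stand-in, not lhorizon's actual API: `fetch_mars_solar_elevations` is an invented placeholder for the notebook's live Horizons query, and the `RUN_REMOTE_TESTS` environment flag is one common convention for opting into slow, network-dependent tests.

```python
import math
import os
import unittest

# Opt-in flag so slow, network-dependent tests are skipped by default,
# e.g.:  RUN_REMOTE_TESTS=1 python -m unittest
RUN_REMOTE = os.environ.get("RUN_REMOTE_TESTS") == "1"


def elevation_in_range(elevations):
    """Cheap invariant usable on any ephemeris-derived series:
    solar elevation angles must lie within [-90, 90] degrees."""
    return all(-90.0 <= e <= 90.0 for e in elevations)


class TestMarsSolarAngle(unittest.TestCase):
    @unittest.skipUnless(RUN_REMOTE, "needs a live Horizons API call")
    def test_end_to_end(self):
        # In the real test, this series would come from querying Horizons
        # through lhorizon's handlers, reproducing the example notebook.
        # fetch_mars_solar_elevations is a hypothetical placeholder.
        elevations = fetch_mars_solar_elevations()
        self.assertTrue(elevation_in_range(elevations))
```

The skip guard keeps the default test run fast while still letting CI (or a reviewer) exercise the live path deliberately.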
More broadly...there is always scope for improvement. The test suite engages with only a tiny fraction of Horizons' functionality and possible outputs. It is very possible that there are nasty surprises lurking somewhere. Unfortunately, Horizons does hundreds of things to hundreds of thousands of distinct objects, and its features and outputs (particularly as exposed at its CGI endpoint) are not exhaustively documented or totally consistent. I think that writing an actually-complete test suite for Horizons API functionality, which would ideally include some amount of black-box verification against other (non-Horizons-derived) ephemerides, would take about 8-10 times as many work hours as the rest of the development of this module to date. But I will think about major gaps I could rectify without going fully down that rabbit hole.
Other things...across the codebase, I think I could be testing dispatching and exception handling more rigorously.
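As a rough illustration of what testing dispatch and exception handling more rigorously might look like, here is a minimal stdlib-only sketch. `dispatch` is a hypothetical stand-in, not lhorizon's actual dispatcher; the point is asserting both the routing and the failure mode.

```python
import unittest


def dispatch(query_type, registry):
    """Hypothetical stand-in dispatcher: look up a handler by query type,
    converting a bare KeyError into an informative ValueError."""
    try:
        return registry[query_type]
    except KeyError:
        raise ValueError(f"unknown query type: {query_type!r}") from None


class TestDispatch(unittest.TestCase):
    def test_known_type_routes_to_handler(self):
        registry = {"observer": lambda: "observer table"}
        self.assertEqual(dispatch("observer", registry)(), "observer table")

    def test_unknown_type_raises_informatively(self):
        # Assert not just that it raises, but that the message is useful.
        with self.assertRaisesRegex(ValueError, "unknown query type"):
            dispatch("vectors", {})
```

Covering the error branch explicitly is what catches regressions where a refactor silently swallows or re-types an exception.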
> In particular, the `handlers` module is not tested at all. Is there a particular reason for that?

> Probably laziness. The members of `handlers` are basically glue functions...
Added reasonably meaningful tests for `handlers`: https://github.com/MillionConcepts/lhorizon/commit/981374968a3ff0bb8a9eeeb196a1c8cc72be4a80
Improved test coverage, edge handling, and cruft removal for `lhorizon_utils`, `target`, and `targeter_utils`: 41fdb19ae1f5a83373787058e81562d84619446f, 8864e37ff5063d8138d7159d57b08f8e69777d70
Thanks! See #13
I would also add instructions to the README on how to run the tests locally.
Done! Closing this for now.
https://github.com/openjournals/joss-reviews/issues/3495