Closed: steo85it closed this issue 3 years ago
Many of the tests check against saved CSV dumps of output from Horizons. However, all of the inputs to the package are also essentially live dumps of similar Horizons output, so this is less reassuring than it could be. To guard against the possibility that either the tests or the package as a whole are parsing Horizons output in a pervasively but subtly incorrect way that introduces a systematic numerical error, I also have a few black-box "reasonableness" checks, e.g.:
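A check of this kind might be sketched as follows. The data and bounds here are illustrative stand-ins (the real checks operate on tables parsed by lhorizon; `moon_dist_km` and `within_bounds` are hypothetical names, not part of the package):

```python
def within_bounds(values, lo, hi):
    """Black-box sanity check: every value falls in a physically plausible range."""
    return all(lo <= v <= hi for v in values)

# stand-in for geocentric lunar distances (km) parsed from an ephemeris table
moon_dist_km = [363300.0, 385000.0, 405500.0]

# lunar geocentric distance must lie between approximate perigee and apogee extremes
assert within_bounds(moon_dist_km, 356_000, 407_000)
```

A systematic parsing error (wrong column, wrong unit) would push values far outside such bounds even when the parsed table is internally self-consistent.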
Do you think I should have more of these? I have certainly considered adding calls to other data sources. Other purely analytical tests could also be added, like checking whether the position lhorizon reports for a particular planet falls within an expected margin of error of its approximate position as computed from Keplerian formulae.
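Such a Keplerian cross-check could look roughly like this. It is a two-body sketch only, and `reported_r` stands in for a value that would actually come from an lhorizon query (the function name and tolerance are illustrative assumptions):

```python
import math

def kepler_radius(a_au, ecc, mean_anomaly_rad, tol=1e-10):
    """Heliocentric distance from Keplerian elements: solve Kepler's equation
    M = E - e*sin(E) for E by Newton iteration, then r = a*(1 - e*cos(E))."""
    E = mean_anomaly_rad  # initial guess
    for _ in range(50):
        dE = (E - ecc * math.sin(E) - mean_anomaly_rad) / (1 - ecc * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return a_au * (1 - ecc * math.cos(E))

# hypothetical check: an ephemeris-reported Earth-Sun distance should agree
# with the two-body Keplerian value to within a generous margin
reported_r = 0.9833  # AU; stand-in for a parsed ephemeris value near perihelion
keplerian_r = kepler_radius(1.000, 0.0167, mean_anomaly_rad=0.0)
assert abs(reported_r - keplerian_r) < 0.01
```

The margin has to be generous, since two-body elements ignore perturbations, but it would still catch gross parsing or unit errors.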
In either case, I can add more documentation to lhorizon.tests.
Added some explanatory docstrings and comments to the lhorizon.tests submodules: 30a9140. Leaving this issue open to get your opinion on the broader verification question.
Thanks for the detailed explanation: I think the tests are more than adequate and that should now be clear from the docstrings.
As per the title: are the (very extensive and well-functioning) tests provided with the code only checking internal functionality and consistency, or are they also verifying that results from the package are consistent with output from the Horizons platform (or from spiceypy, astropy, or whatever else)? If yes, I would suggest documenting this in lhorizon/tests, as that's completely missing at present. If not, then:
https://github.com/openjournals/joss-reviews/issues/3495