Closed — fawda123 closed this issue 2 months ago
One idea for testing for specific output is to use snapshot testing.
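For instance, testthat (edition 3) provides `expect_snapshot()`, which records printed output the first time the suite runs and fails on later runs if that output changes. A minimal sketch, not tied to any particular SSN2 function:

```r
# Minimal sketch of a snapshot test with testthat (edition 3).
# expect_snapshot() saves the printed output under tests/testthat/_snaps/
# on the first run and fails on later runs if that output drifts.
library(testthat)

test_that("summary output is stable across runs", {
  x <- c(1.5, 2.5, 4.0)
  expect_snapshot(summary(x))
})
```

Run interactively, `expect_snapshot()` only previews the output; the snapshot file is written when the suite runs via `devtools::test()` or `R CMD check`.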
Thank you for the feedback @fawda123 and @k-doering-NOAA! I will respond in this post, which I will edit as I progress through adding testing functionality. Please reach out if you have any questions/comments/concerns!
We have updated our unit testing infrastructure to include many unit tests that verify output as intended. The following files in tests/testthat have been updated:
helper-data.R
test-dist_objects.R
test-extras.R
test-predict.R
test-ssn_glm.R
test-ssn_lm.R
test-ssn_simulate.R
test-ssn-object.R
test-Torgegram.R
test-utils.R
You've got a lot of tests, which is good, but I'm not sure they're very comprehensive. I'm definitely guilty of this myself, but it seems most of your tests just verify things like the output type. That really isn't what tests should do: they ought to verify that the output is what you intended. For example, you're not writing a function to return a list; rather, you're writing a function to model spatial relationships in stream networks. Your test should verify that the model output is as expected, not just that it's an `ssn_lm` object. You would have more confidence that your package is doing what it claims to do if you tested for specific output. How would you know something has gone awry if you're only verifying the output class? Testing for specific results can often solve this problem.

A simpler example: you've got a workflow in `R/coef.R` that returns model coefficients depending on the `type` argument. The last step of the `if/else` chain returns an error if the `type` argument is invalid. Does this function really return this error if an incorrect entry for `type` is used? An explicit test would look something like this:

https://github.com/openjournals/joss-reviews/issues/6389
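The code that followed appears not to have survived extraction. A sketch of what such an explicit test might look like, using a hypothetical stand-in for the `if/else` chain in `R/coef.R` (the function name, branches, and error message below are illustrative, not SSN2's actual code):

```r
library(testthat)

# Hypothetical stand-in for the if/else chain in R/coef.R: the final
# else branch throws an error when `type` is not recognized.
coef_by_type <- function(object, type = "fixed") {
  if (type == "fixed") {
    object$fixed
  } else if (type == "random") {
    object$random
  } else {
    stop("Invalid type argument: must be \"fixed\" or \"random\".")
  }
}

mod <- list(fixed = c(intercept = 1.2), random = c(sigma = 0.4))

test_that("an invalid type triggers the error branch", {
  # expect_error() passes only if the call errors, and the regexp
  # confirms it is the intended error rather than an unrelated failure.
  expect_error(coef_by_type(mod, type = "oops"), "Invalid type")
})

test_that("a valid type returns the expected values", {
  # Checking specific values, not just the class, is the point above.
  expect_equal(coef_by_type(mod, type = "fixed"), c(intercept = 1.2))
})
```

The same pattern applies to the model-output tests: compare fitted coefficients against known-good values with `expect_equal()` (and a tolerance) rather than asserting only on the object's class.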