Closed chrisvwn closed 6 years ago
Hi @chrisvwn. I just sent you email about how to get an answer to this. You could also post this question to our discussion forum, possibly in the Best Practices category https://discuss.ropensci.org/c/best-practices (though it might also fit in Questions or Packages).
Cheers, Stefanie
Thanks @stefaniebutland!
@chrisvwn the reason I think you should ask your question to the community is because I can't answer it but I'm sure someone will be able to. It's a very interesting question!
And by the way awesome that you started adding unit tests!
Hi @stefaniebutland @maelle
Thanks for the responses! Sorry for the delay, I was wrapping up some other work. I am joining the slack channel shortly and will post the question there. See you on the slack channel :)
Thanks. I did start adding unit tests as you advised before I got stuck.
@chrisvwn were you able to post your question? Happy end of the year!
@maelle Happy end of the year! I did post the question and got some help. The advice was to turn off the long-running tests on CRAN. Sorry for the delay, I had taken a bit of a break. I am now figuring out how to write the test functions as efficiently as possible. I am thinking along the lines of caching a pre-downloaded raster, but I'm not sure.
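For the record, here is a minimal sketch of both ideas: skipping the expensive tests on CRAN and caching a download so repeated test runs reuse the file. This is not the actual Rnightlights code; `cached_fetch` and `fake_download` are made-up names, and the real version would call `utils::download.file()` where the stand-in fetch function is used.

```r
# Cache helper: run the expensive fetch only on a cache miss.
# In a real package, `fetch` would wrap utils::download.file();
# here it is a stand-in so the sketch runs without network access.
cached_fetch <- function(key, fetch, cache_dir = tempdir()) {
  dest <- file.path(cache_dir, key)
  if (!file.exists(dest)) {
    fetch(dest)  # expensive step happens only once per cache dir
  }
  dest
}

# Simulate an "expensive download" and count how often it actually runs
downloads <- 0L
fake_download <- function(dest) {
  downloads <<- downloads + 1L
  writeLines("raster data placeholder", dest)
}

p1 <- cached_fetch("nightlights.tif", fake_download)
p2 <- cached_fetch("nightlights.tif", fake_download)  # cache hit, no re-fetch
```

Inside a testthat test, putting `testthat::skip_on_cran()` as the first line of the long-running tests makes CRAN skip them while they still run locally and on CI.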
@maelle Finally some progress. I have added some tests and am still thinking about what else to add. Could you have a look and let me know if what is there is sufficient?
Fantastic! I think we'll be able to start the actual review process. We can ask the reviewers to have an especially close look at the tests if you feel they're insufficient. Can you confirm your package is now stable, or do you plan other big changes before the review?
What's your code coverage? Install covr and use the covr::package_coverage() function to find out. It reports the proportion of code lines that are hit by tests. It doesn't need to be 100%, and even 100% doesn't mean everything useful has been tested, but it's a good indication.

You should also add covr integration with e.g. codecov.io to get a badge indicating test coverage. Take scrubr for instance: you can see the badge at https://github.com/ropensci/scrubr. In https://github.com/chrisvwn/Rnightlights/issues/2 you can see a line to install covr at the beginning and one to run it at the end. You'd then need to activate codecov.io for your repo by going to its website.
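To make that concrete, here is a hedged sketch of measuring coverage on a package checkout. It assumes you run it from the package root with covr installed; the codecov upload line is what a CI job would run after the repo is activated on codecov.io.

```r
# Run from the package root; requires install.packages("covr")
library(covr)

cov <- package_coverage()   # runs the package tests and tracks which lines are hit
print(cov)                  # per-file and overall coverage summary
percent_coverage(cov)       # overall coverage as a single percentage
zero_coverage(cov)          # lines never hit -- good targets for new tests

# From CI, after activating the repo on codecov.io:
# covr::codecov()
```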
Thanks @maelle ! Turns out I only have about 13% coverage. :-S I have some more work to do. I had put in a test of the main function, which pretty much goes through all the other functions, with the idea that given the same input I should get the same output. But I see how that can be problematic. Also, the function did end up being long-running. I think I understand now what I need to do. :)
The package is not yet stable. I am planning a few major changes, however, they may take some time to implement. Should I wait until it is stable before submitting to rOpensci? I hope to have a minimal impact on existing public interfaces to the package but nevertheless it is possible that some will get radically changed.
Yes, it is better for you to wait until the package is stable; this way the reviewers can look at the "final" code.
If you have questions (about testing, about design choices) you can, however, ask the community before the review process itself!
It's actually less problematic to need a lot of time before review than to take a long time acting on reviewers' feedback during the submission process, so take your time! The onboarding repo is not going anywhere, so we can re-launch the submission process in a few weeks or months, no problem. 😉
Okay. I think it will take a few months to get stable. Let me work on this and the test coverage and then I will reopen this thread. If I have any questions I will ask them on the forum.
Thanks for everything so far!
Ok so I'll close the submission issue in ropensci/onboarding but expect you to open a new submission issue in a few months! Happy programming until then!
Thanks @maelle !
Hi @maelle,
I have put in a couple of tests for the package, but since there aren't many exported functions that I can test without downloading large files, I am not sure how to proceed.
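One pattern I have been considering is testing the processing logic against a tiny local fixture instead of a real downloaded raster. A hedged sketch (`extract_stats` is a made-up stand-in for a package function that normally reads a big downloaded file):

```r
# Made-up processing function standing in for real package code that
# would normally operate on a large downloaded raster file.
extract_stats <- function(path) {
  vals <- as.numeric(readLines(path))
  list(n = length(vals), total = sum(vals))
}

# Create a tiny fixture; in a real package this file would live under
# tests/testthat/ so the test never touches the network.
fixture <- file.path(tempdir(), "tiny_raster.txt")
writeLines(as.character(1:5), fixture)

res <- extract_stats(fixture)
```

Does that seem like a reasonable direction, or is there a better-established approach for packages like this?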
Please advise.
Thanks,
Chris