MUCollective / multiverse

R package for creating explorable multiverse analysis
https://mucollective.github.io/multiverse/
GNU General Public License v3.0

Set up automated code coverage #22

Closed: mjskay closed this issue 5 years ago

mjskay commented 5 years ago

After setting up continuous integration with Travis (#20), set up covr to do code coverage so you can see how much of your codebase is being tested. It integrates well with Travis, so it should be straightforward. See here: https://github.com/r-lib/covr
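
For reference, here is a minimal sketch of the covr workflow (assuming the package already has a testthat suite; the exact CI hook depends on our .travis.yml):

```r
# Minimal sketch of the covr workflow (assumes covr is installed and the
# package has a testthat suite). Run from the package root.
library(covr)

cov <- package_coverage()    # run the tests with coverage tracking
cov                          # print per-file coverage percentages
# report(cov)                # optional: interactive HTML report
# codecov(coverage = cov)    # on Travis, upload the results to codecov.io
```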

abhsarma commented 5 years ago

The code (on the dev branch) passes on Travis, but covr::package_coverage() is throwing errors locally (it passed once, then kept failing after that):

```
Error: Failure in `/private/var/folders/64/jwsw0yc968n0v95w693g1wt80000gn/T/RtmpBVudsl/R_LIBS55156b99e69/multidy/multidy-tests/testthat.Rout.fail`
accessors.R#13)
3. Error: basic retrieval works with `parameters()` (@test-accessors.R#20)
Error: testthat unit tests failed
Execution halted
```

mjskay commented 5 years ago

Can you get code coverage running locally?

abhsarma commented 5 years ago

Nope. It worked once but it keeps failing now. That's the error I get locally.

mjskay commented 5 years ago

Okay, I've figured it out. covr works by inserting covr::count() calls into all code blocks ({ ... }) in order to track when different lines of code are hit. But it can't tell that the code inside our unevaluated code blocks isn't code it should be tracking, so it inserts covr::count() into those expressions as well, and they are then no longer equal to the reference code chunks in the tests.
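
Here's an illustrative sketch of the failure mode; the count() call below is just a placeholder, not covr's actual transformation:

```r
# Illustrative only: a test that compares a stored, unevaluated expression
# against a reference breaks once coverage instrumentation rewrites the
# stored copy. `count()` is a stand-in for whatever covr actually inserts.
reference <- quote({ x <- rnorm(100); y <- mean(x) })   # what the test expects
stored    <- quote({ x <- rnorm(100); y <- mean(x) })   # what the package keeps

identical(stored, reference)
#> TRUE

# after instrumentation, the stored expression has extra calls in it:
instrumented <- quote({ count(1); x <- rnorm(100); count(2); y <- mean(x) })

identical(instrumented, reference)
#> FALSE   (so the equality-based tests fail under covr)
```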

I opened an issue on covr about this: https://github.com/r-lib/covr/issues/381

The nice thing is this is currently only a problem with the way that $<-.multiverse is written; everything else seems to be fine. I figured out a workaround and will submit a pull request to fix it in a sec.

mjskay commented 5 years ago

It sounds like the recommended solution to this kind of thing in the future may be to just skip tests where covr gives different output, using testthat::skip_on_covr() (which will be in a future version of testthat), as mentioned in the response to r-lib/covr#381.
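
For future reference, using it would look something like this (a sketch of what the approach would be once testthat ships skip_on_covr(); stored_code and reference_code are hypothetical stand-ins for our actual test fixtures):

```r
# Sketch of the skip_on_covr() approach. The objects stored_code and
# reference_code are hypothetical stand-ins for the real fixtures in
# test-accessors.R.
library(testthat)

test_that("stored code matches the reference expression", {
  skip_on_covr()  # covr's instrumentation would rewrite the stored expression
  expect_equal(stored_code, reference_code)
})
```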

However, since we already have a relatively simple workaround that works (and still lets us test everything without skipping anything), we should stick with our current approach instead of using skip_on_covr().

Are there any other problems outstanding for this issue @abhsarma, or can we close it?

PS: Looking at Codecov, we're at about 96% coverage. Awesome, great work!

abhsarma commented 5 years ago

Okay, that sounds good. We can close this in that case.