yonicd / covrpage

Create a summary readme for the testthat subdirectory to communicate with potential users
https://yonicd.github.io/covrpage/

Building the report when a test fails #3

Closed llrs closed 6 years ago

llrs commented 6 years ago

I was trying the package on another package where I still have some failing tests (work in progress...). Would it be possible to generate the page even if a test fails?

I think it uses test_check (when using testthat), which has the stop_on_failure argument set to TRUE by default. Changing stop_on_failure to FALSE would solve the issue, but maybe there is a reason behind this.
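For reference, a minimal sketch of a typical tests/testthat.R with that argument written out explicitly (the package name here is just a placeholder):

# tests/testthat.R -- typical layout; "yourpkg" is a placeholder name
library(testthat)
library(yourpkg)

# stop_on_failure = TRUE (the default) makes test_check() error out when any
# test fails; FALSE would let the run finish and just report the failures
test_check("yourpkg", stop_on_failure = TRUE)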

yonicd commented 6 years ago

The script is using test_dir, which by default has stop_on_failure = FALSE. It may be the covr call.

yonicd commented 6 years ago

For the time being I would suggest moving to expect_error in the testthat call until you fix your code, and then, once it runs, switching to an affirmative test.
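Something along these lines, for example (a sketch reusing the keepGPP()/Info names that come up later in this thread):

# Temporary: assert the known failure explicitly so the suite still passes
test_that("keepGPP works", {
  expect_error(keepGPP(Info))
})

# Once the code is fixed, switch back to the affirmative expectation:
# test_that("keepGPP works", {
#   out <- keepGPP(Info)
#   expect_equal(ncol(out), 6L)
# })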

yonicd commented 6 years ago

I opened a new branch for this if you want to PR to it.

yonicd commented 6 years ago

@llrs this works now on the skip branch, you can test it out on a package you have. cc @dpastoor, @jimhester.

llrs commented 6 years ago

@yonicd thanks for the new branch. I tried it, and it comments out a single line (the one producing the data to be tested) but not the failing test (or the expect_equal line). Also, the final report is not generated; see below for details:


The error I get on the console:

> covrpage()

processing file: _covrpage.Rmd
  |....                                                             |   6%
   inline R code fragments

  |........                                                         |  12%
label: unnamed-chunk-1 (with options) 
List of 1
 $ include: logi FALSE

  |...........                                                      |  18%
  ordinary text without R code

  |...............                                                  |  24%
label: unnamed-chunk-2 (with options) 
List of 1
 $ include: logi FALSE

  |...................                                              |  29%
  ordinary text without R code

  |.......................                                          |  35%
label: unnamed-chunk-3 (with options) 
List of 1
 $ include: logi FALSE

  |...........................                                      |  41%
  ordinary text without R code

  |...............................                                  |  47%
label: unnamed-chunk-4 (with options) 
List of 1
 $ include: logi FALSE

  |..................................                               |  53%
  ordinary text without R code

  |......................................                           |  59%
label: unnamed-chunk-5 (with options) 
List of 1
 $ echo: logi FALSE

Quitting from lines 52-77 (_covrpage.Rmd) 
Error: Failure in `/tmp/RtmpNDFpvL/R_LIBS5d7c37dbca90/GSEAdv/GSEAdv-tests/testthat.Rout.fail`

In that file, after starting R and loading the package, the output ends with:

> 
> test_check("GSEAdv")
── 1. Error: keepGPP works (@test_keeps.R#9)  ──────────────────────────────────────────────────────────────────────────────────────────
object 'out' not found
1: expect_equal(ncol(out), 5L) at testthat/test_keeps.R:9
2: quasi_label(enquo(object), label)
3: eval_bare(get_expr(quo), get_env(quo))
4: ncol(out)

══ testthat results  ═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════
OK: 183 SKIPPED: 1 FAILED: 1
1. Error: keepGPP works (@test_keeps.R#9) 

Error: testthat unit tests failed
Execution halted

The relevant lines in that file are:

test_that("keepGPP works", {
#  out <- keepGPP(Info)
  expect_equal(ncol(out), 5L)
})

The commented line was introduced by covrpage, but then the expectation fails, as it can't find the out object.

yonicd commented 6 years ago

The intent was to have the expectation fail. You are missing the global variable 'out' there; it is crashing for a different reason, I think. Try replacing 5L with 6L.

llrs commented 6 years ago

I don't understand the comment. Why am I missing the global variable 'out' there? Initially it wasn't commented out in the test_that call, and after running covrpage it got commented out. (Doing rm(out); covrpage() resulted in the same output.)

I changed the expectation on purpose in order to fail the test; if I change it back to 6L it will pass, and it won't test whether the branch works when a test is failing.

yonicd commented 6 years ago

You misunderstood me. I meant to keep it a valid expectation test; I did not know it was 6 before you changed it. My suggestion is to keep the out object in the environment but have it check against something wrong on purpose. Removing out makes little sense practically in a normal test file. Usually you would have out created, but with wrong dimensions expected, and catch that failure.
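In other words, something like this (a sketch reusing the same keepGPP example; per this thread the real column count is 6):

test_that("keepGPP works", {
  out <- keepGPP(Info)        # keep the object in the test environment
  expect_equal(ncol(out), 5L) # deliberately wrong (real value is 6), so the
                              # test records a failure rather than an error
})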

llrs commented 6 years ago

That is what I did initially, and the covrpage function still failed to generate the report (despite using the skip branch).

yonicd commented 6 years ago

OK. Can you paste the Rout.fail lines so I can see why it is failing?

llrs commented 6 years ago

It is what I originally posted in the last code block of my first comment after you pinged me.

> 
> test_check("GSEAdv")
── 1. Error: keepGPP works (@test_keeps.R#9)  ──────────────────────────────────────────────────────────────────────────────────────────
object 'out' not found
1: expect_equal(ncol(out), 5L) at testthat/test_keeps.R:9
2: quasi_label(enquo(object), label)
3: eval_bare(get_expr(quo), get_env(quo))
4: ncol(out)

══ testthat results  ═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════
OK: 183 SKIPPED: 1 FAILED: 1
1. Error: keepGPP works (@test_keeps.R#9) 

Error: testthat unit tests failed
Execution halted
yonicd commented 6 years ago

That is saying out is missing. Can you rerun without commenting out that line in the test?

yonicd commented 6 years ago

test_that("keepGPP works", {
  out <- keepGPP(Info)
  expect_equal(ncol(out), 5L)
})

llrs commented 6 years ago

Yes, it says that it is missing because it gets commented out when I run covrpage.

When I cleaned the session and ran covrpage again, I got some other test commented out and then a failing test in another file. This branch comments out my tests at will, and some of them are tests that pass.

yonicd commented 6 years ago

Ah, OK. So the way this works is to:

1. run testthat
2. locate the failed tests
3. comment out the lines
4. run covr
5. uncomment the lines

The tests are mapped out using covrpage::map_testthat().

Try checking that the lines are being mapped correctly to that test (see the rough sketch below). It seemed to work OK on my end.
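A rough sketch of that sequence (not the actual internals; apart from covrpage::map_testthat(), the helper names here are purely illustrative):

res     <- testthat::test_dir("tests/testthat", stop_on_failure = FALSE)
mapping <- covrpage::map_testthat()        # maps tests to files/lines (per above)
failed  <- find_failed_tests(res, mapping) # hypothetical helper
comment_out(failed)                        # temporarily disable the failing tests
cov     <- covr::package_coverage()        # run covr without the failures
uncomment(failed)                          # restore the test files afterwards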

yonicd commented 6 years ago

OK, try again (re-pull). I noticed that I wrote map_test for files that have full namespacing for all function calls, i.e. testthat::test_that, but most of the time users will have just test_that, which creates slightly different logic in the parser. This is what caused the seemingly random commenting of your file.

Now the mapping should work for both types of input.

Thanks for being the tester for this.

llrs commented 6 years ago

I installed the latest version of the branch, but I got the same Error: Failure in ... error. In the file I found that the failing test fails and nothing else (as previously happened, but now without the random commenting):

══ testthat results  ════════════════════════════════════════════════════════════════════════
OK: 185 SKIPPED: 1 FAILED: 1
1. Failure: keepGPP works (@test_keeps.R#9) 

Error: testthat unit tests failed
Execution halted
yonicd commented 6 years ago

Errr... what is the GH URL of the repo you are testing? testthat should be failing, that is the point, but covr should be running OK, ignoring that test.

llrs commented 6 years ago

I am using:

covrpage      * 0.0.51     2018-08-26 Github (yonicd/covrpage@b4a47d2) 
testthat      * 2.0.0      2017-12-13 CRAN (R 3.5.0)           
yonicd commented 6 years ago

My mistake. I didn't return the mapping directory to the right place. I also added some functionality so that the function cleans up all of its commenting, even if it crashes during the covr build.

Try again (hopefully for the last time).

llrs commented 6 years ago

No, I get the same error with "yonicd/covrpage@111cb6b"; it only adds a new line at the end of one of my tests.

yonicd commented 6 years ago

Are you testing covrpage or your package?

yonicd commented 6 years ago

I have tried it on a few repos I checked out where I can manipulate the tests. For example, @jonocarroll hosts ggeasy; I changed line 43 in test_labs to throw an error.

I ran the new function in the repo root directory. It is acting as expected.

> covrpage::coverage_skip(test_path = 'tests/testthat')
✔ | OK F W S | Context
✖ |  3 1     | attr labs
──────────────────────────────────────────────────────────────────────────────────────────────
test-labs.R:43: failure: regular labs pass new labels through easy_labs 
`easy_res` not equal to `hard_res`.
Component "labels": Component "title": 1 string mismatch
──────────────────────────────────────────────────────────────────────────────────────────────
✔ |  9       | remove legend [0.2 s]
✔ |  9       | remove axes [0.2 s]

══ Results ═══════════════════════════════════════════════════════════════════════════════════
Duration: 0.5 s

OK:       21
Failed:   1
Warnings: 0
Skipped:  0

No-one is perfect!
ggeasy Coverage: 84.21%
R/axis.R: 55.88%
R/labs.R: 95.00%
R/legend.R: 100.00%
llrs commented 6 years ago

Oh, sorry, I was trying covrpage, not the coverage_skip function. When I try coverage_skip on my package, I see some errors: could not find function "condPerPathways" (those are S4 methods, not functions); then it outputs:

══ Terminating early ══════════════════════════════════════════════════
Too many failures

but the linked file has the same error:

══ testthat results  ═════════════════════════════════════════════════════════════════
OK: 149 SKIPPED: 9 FAILED: 1
1. Failure: keepGPP works (@test_keeps.R#9) 

Error: testthat unit tests failed
Execution halted
yonicd commented 6 years ago

No worries, I added coverage_skip in the last few commits. It can be used as a more direct way to test covr with failing tests. It could be that testthat aborts when it reaches a threshold of failing tests (see the note below).

A small caveat to coverage_skip is that it needs to run from the root directory of the package it is testing (unlike covrpage, which can be run with relative paths).

Do the tests work in a regular context without force-failing them?
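On the failure threshold: this is an assumption rather than anything covrpage does, but the "Terminating early / Too many failures" message looks like testthat's progress reporter giving up after a fixed number of failures; if that option exists in the installed testthat version, raising the cap before the run might let the suite finish:

# Assumption: the early stop comes from testthat's progress reporter, whose
# failure cap can (in recent testthat versions) be raised via this option
options(testthat.progress.max_fails = Inf)
covrpage::coverage_skip(test_path = "tests/testthat")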

ColinFay commented 6 years ago

Hey,

The function seems to fail if you simply do covrpage::coverage_skip(test_path = "tests/") but works when I run covrpage::coverage_skip(test_path = "tests/testthat/").

yonicd commented 6 years ago

I changed the default path of coverage_skip to tests/testthat to make it run more naturally by default. https://github.com/yonicd/covrpage/commit/05a5814d57071192eec2143fed019769e9ead0b7

llrs commented 6 years ago

@yonicd Yes. When I change back to the right value, devtools::test (using RStudio) works, as does covrpage::covrpage(), while covrpage::coverage_skip() fails.

Many thanks for your long and continued effort to solve this issue!

yonicd commented 6 years ago

What test path are you setting in the coverage_skip call, and what directory are you running it from?

llrs commented 6 years ago

I am using covrpage::coverage_skip(test_path = "tests/testthat/") on the root of the package.

I get

══ Terminating early ═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════
Too many failures
GSEAdv Coverage: 77.33%
R/completness.R: 0.00%
R/expand.R: 0.00%
R/summary.R: 0.00%
R/pathwaysPerGene.R: 14.29%
R/calc-nPathways.R: 33.33%
R/simulateGSC.R: 36.30%
R/check-GSC.R: 59.52%
R/keep.R: 72.00%
R/simulate2GSC.R: 78.38%
R/fromSizeGenes_sizePathways.R: 78.57%
R/as_GeneSetCollection.R: 83.33%
R/sizesPer.R: 86.11%
R/AllGenerics.R: 90.24%
R/genesPerGene.R: 90.32%
R/cond.R: 90.91%
R/fromSizePathways.R: 92.50%
R/add.R: 92.59%
R/pathway.R: 94.64%
R/remove.R: 95.65%
R/nested.R: 97.14%
R/duplicated.R: 100.00%
R/estimate_sizes.R: 100.00%
R/fromSizeGenes.R: 100.00%
R/gene.R: 100.00%
R/genesPerPathway.R: 100.00%
R/modify.R: 100.00%
R/n_functions.R: 100.00%
R/sizes.R: 100.00%
R/utilities.R: 100.00%
R/zzz.R: 100.00%

That is when the tests are set to pass; it fails when a test fails.

yonicd commented 6 years ago

is your package on a github repo that i can clone and test?

llrs commented 6 years ago

Yes, it is llrs/GSEAdv, although the specific test file is a new one I created and haven't pushed yet (I can't from my current network), and as it is on GitHub it doesn't pass R CMD check. I will be able to push changes by Friday.

Let me know if I have to do something.

yonicd commented 6 years ago

OK, I'll check back in on Friday to test the commit. I have had a few other people test out the current build of coverage_skip and it seems to work for them.

llrs commented 6 years ago

@yonicd I updated the package in llrs/GSEAdv; the file I changed to check covrpage is the test in tests/testthat/test_keeps.R. Let me know if you need the sessionInfo of my setup or anything else.

By the way, I would like to propose this package for checking Bioconductor's packages on their servers, but they only accept packages available on CRAN. Do you plan to submit it to CRAN?

yonicd commented 6 years ago

I will check why it is failing on that repo.

I am planning on submitting to CRAN when I get a chance.

Thanks!

yonicd commented 6 years ago
> biocLite("GSEAdv")
BioC_mirror: https://bioconductor.org
Using Bioconductor 3.7 (BiocInstaller 1.30.0), R 3.5.1 (2018-07-02).
Installing package(s) 'GSEAdv'
Old packages: 'digest', 'tinytex', 'batchtools', 'broom', 'callr', 'dbplyr',
  'devtools', 'energy', 'fansi', 'foreign', 'fs', 'ggplotify', 'ggtern', 'haven',
  'huxtable', 'later', 'leaflet', 'officer', 'openssl', 'pander', 'pkgconfig',
  'plotly', 'processx', 'RcppArmadillo', 'reticulate', 'rlang', 'roxygen2', 'scales',
  'sinew', 'streamR', 'subprocess', 'survival', 'sys', 'texPreview', 'usethis',
  'XML', 'xtable'
Update all/some/none? [a/s/n]: 
n
Warning message:
package 'GSEAdv' is not available (for R version 3.5.1) 
SessionInfo

```r
> devtools::session_info()
Session info ------------------------------------------------------------------------
 setting  value
 version  R version 3.5.1 (2018-07-02)
 system   x86_64, darwin15.6.0
 ui       RStudio (1.2.942)
 language (EN)
 collate  en_US.UTF-8
 tz       America/New_York
 date     2018-09-01

Packages ----------------------------------------------------------------------------
 package       * version date       source
 base          * 3.5.1   2018-07-05 local
 BiocInstaller * 1.30.0  2018-05-01 Bioconductor
 compiler        3.5.1   2018-07-05 local
 datasets      * 3.5.1   2018-07-05 local
 devtools        1.13.5  2018-02-18 CRAN (R 3.5.0)
 digest          0.6.15  2018-01-28 CRAN (R 3.5.0)
 graphics      * 3.5.1   2018-07-05 local
 grDevices     * 3.5.1   2018-07-05 local
 memoise         1.1.0   2017-04-21 CRAN (R 3.5.0)
 methods       * 3.5.1   2018-07-05 local
 packrat         0.4.9-3 2018-06-01 CRAN (R 3.5.0)
 stats         * 3.5.1   2018-07-05 local
 tools           3.5.1   2018-07-05 local
 utils         * 3.5.1   2018-07-05 local
 withr           2.1.2   2018-03-15 CRAN (R 3.5.0)
```
llrs commented 6 years ago

Yes, the GSEAdv package is not yet on Bioconductor; that's why that code block was commented out. Install it from GitHub using remotes or devtools if you want.
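For completeness, the GitHub install would look something like this (repo path as given in this thread):

# install.packages("remotes")   # if not already installed
remotes::install_github("llrs/GSEAdv")
# or, with devtools:
# devtools::install_github("llrs/GSEAdv")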

yonicd commented 6 years ago

Seems OK on my end.

test_keeps.R

```r
context("keep.R")

test_that("keepGPP works", {
})

test_that("keepGPP works", {
  out <- keepGPP(Info)
  expect_equal(ncol(out), 5L)
})
```
covrpage::coverage_skip() ```r > covrpage::coverage_skip() βœ” | OK F W S | Context βœ– | 0 3 | Testing adding method ────────────────────────────────────────────────────────────────────────────────────── test_add.R:4: error: gene as character unused arguments (gene = "2", pathway = "1430728") 1: .handleSimpleError(function (e) { handled <<- TRUE test_error <<- e options(expressions = expressions_opt_new) on.exit(options(expressions = expressions_opt), add = TRUE) e$expectation_calls <- frame_calls(11, 2) test_error <<- e register_expectation(e) e$handled <- TRUE test_error <<- e }, "unused arguments (gene = \"2\", pathway = \"1430728\")", quote(add(Info, gene = "2", pathway = "1430728"))) at tests/testthat/test_add.R:4 2: eval(code, test_env) test_add.R:15: error: pathway as character unused arguments (gene = c("2", "5", "7"), pathway = "156581") 1: .handleSimpleError(function (e) { handled <<- TRUE test_error <<- e options(expressions = expressions_opt_new) on.exit(options(expressions = expressions_opt), add = TRUE) e$expectation_calls <- frame_calls(11, 2) test_error <<- e register_expectation(e) e$handled <- TRUE test_error <<- e }, "unused arguments (gene = c(\"2\", \"5\", \"7\"), pathway = \"156581\")", quote(add(Info, gene = c("2", "5", "7"), pathway = "156581"))) at tests/testthat/test_add.R:15 2: eval(code, test_env) test_add.R:31: error: pathway as character unused arguments (gene = "2", pathway = c("156581", "211")) 1: expect_warning(gsc <- add(Info, gene = "2", pathway = c("156581", "211")), "Removing") at tests/testthat/test_add.R:31 2: quasi_capture(enquo(object), capture_warnings, label = label) 3: capture(act$val <- eval_bare(get_expr(quo), get_env(quo))) 4: withCallingHandlers(code, warning = function(condition) { out$push(condition) invokeRestart("muffleWarning") }) 5: eval_bare(get_expr(quo), get_env(quo)) ────────────────────────────────────────────────────────────────────────────────────── βœ” | 8 | Testing list to GeneSetCollection method βœ” | 1 | calc.nPathways βœ” | 1 | Testing check-GSC methods βœ” | 1 | Testing utilities method βœ” | 16 | Testing cond* method βœ” | 22 | Testing drop method [0.1 s] βœ” | 6 | Testing duplications βœ” | 2 | estimate.n* βœ” | 3 | Testing fromSizeGenes_sizePathways βœ” | 2 | Testing fromSizeGenes method βœ” | 2 | Testing fromSizePathways method βœ” | 7 | Testing gene method βœ” | 6 | Testing genePerGene βœ” | 5 | genesPerPathway βœ” | 2 | Estimating numbers βœ– | 0 1 1 | keep.R ────────────────────────────────────────────────────────────────────────────────────── test_keeps.R:3: skip: keepGPP works Empty test test_keeps.R:9: failure: keepGPP works ncol(out) not equal to 5. 
1/1 mismatches [1] 6 - 5 == 1 ────────────────────────────────────────────────────────────────────────────────────── βœ” | 12 | Testing modify method βœ” | 8 | Testing nested method βœ” | 7 | Testing pathway method βœ” | 13 | Testing simulations to create GeneSetCollections [0.3 s] βœ” | 22 | Testing sizePathways and sizeGenes βœ” | 1 | Testing summary methodnd sizeGenes βœ” | 21 | Testing utilities method ══ Results ═══════════════════════════════════════════════════════════════════════════ Duration: 1.1 s OK: 174 Failed: 4 Warnings: 0 Skipped: 1 GSEAdv Coverage: 78.44% R/expand.R: 0.00% R/summary.R: 0.00% R/pathwaysPerGene.R: 14.29% R/calc-nPathways.R: 33.33% R/simulateGSC.R: 39.26% R/check-GSC.R: 71.43% R/keep.R: 72.00% R/simulate2GSC.R: 78.38% R/fromSizeGenes_sizePathways.R: 78.57% R/as_GeneSetCollection.R: 83.33% R/sizesPer.R: 86.11% R/AllGenerics.R: 86.67% R/genesPerGene.R: 90.32% R/cond.R: 90.91% R/fromSizePathways.R: 92.50% R/add.R: 92.59% R/pathway.R: 94.64% R/nested.R: 97.14% R/remove.R: 97.67% R/completness.R: 100.00% R/duplicated.R: 100.00% R/estimate_sizes.R: 100.00% R/fromSizeGenes.R: 100.00% R/gene.R: 100.00% R/genesPerPathway.R: 100.00% R/modify.R: 100.00% R/n_functions.R: 100.00% R/sizes.R: 100.00% R/utilities.R: 100.00% R/zzz.R: 100.00% ```
covrpage.md output Tests and Coverage ================ 02 September, 2018 08:52:33 This output is created by [covrpage](https://github.com/yonicd/covrpage). ## Coverage Coverage summary is created using the [covr](https://github.com/r-lib/covr) package. | Object | Coverage (%) | | :------------------------------------------------------------------- | :----------: | | GSEAdv | 78.44 | | [R/expand.R](../R/expand.R) | 0.00 | | [R/summary.R](../R/summary.R) | 0.00 | | [R/pathwaysPerGene.R](../R/pathwaysPerGene.R) | 14.29 | | [R/calc-nPathways.R](../R/calc-nPathways.R) | 33.33 | | [R/simulateGSC.R](../R/simulateGSC.R) | 39.26 | | [R/check-GSC.R](../R/check-GSC.R) | 71.43 | | [R/keep.R](../R/keep.R) | 72.00 | | [R/simulate2GSC.R](../R/simulate2GSC.R) | 78.38 | | [R/fromSizeGenes\_sizePathways.R](../R/fromSizeGenes_sizePathways.R) | 78.57 | | [R/as\_GeneSetCollection.R](../R/as_GeneSetCollection.R) | 83.33 | | [R/sizesPer.R](../R/sizesPer.R) | 86.11 | | [R/AllGenerics.R](../R/AllGenerics.R) | 86.67 | | [R/genesPerGene.R](../R/genesPerGene.R) | 90.32 | | [R/cond.R](../R/cond.R) | 90.91 | | [R/fromSizePathways.R](../R/fromSizePathways.R) | 92.50 | | [R/add.R](../R/add.R) | 92.59 | | [R/pathway.R](../R/pathway.R) | 94.64 | | [R/nested.R](../R/nested.R) | 97.14 | | [R/remove.R](../R/remove.R) | 97.67 | | [R/completness.R](../R/completness.R) | 100.00 | | [R/duplicated.R](../R/duplicated.R) | 100.00 | | [R/estimate\_sizes.R](../R/estimate_sizes.R) | 100.00 | | [R/fromSizeGenes.R](../R/fromSizeGenes.R) | 100.00 | | [R/gene.R](../R/gene.R) | 100.00 | | [R/genesPerPathway.R](../R/genesPerPathway.R) | 100.00 | | [R/modify.R](../R/modify.R) | 100.00 | | [R/n\_functions.R](../R/n_functions.R) | 100.00 | | [R/sizes.R](../R/sizes.R) | 100.00 | | [R/utilities.R](../R/utilities.R) | 100.00 | | [R/zzz.R](../R/zzz.R) | 100.00 |
## Unit Tests Unit Test summary is created using the [testthat](https://github.com/r-lib/testthat) package. | | file | n | time | error | failed | skipped | warning | | ----------------------------------- | :-------------------------------------------------------------------------------- | -: | ----: | ----: | -----: | ------: | ------: | | test\_add.R | [test\_add.R](testthat/test_add.R) | 0 | 0.004 | 3 | 0 | 0 | 0 | | test\_asGeneSetCollection.R | [test\_asGeneSetCollection.R](testthat/test_asGeneSetCollection.R) | 8 | 0.036 | 0 | 0 | 0 | 0 | | test\_calcnPathways.R | [test\_calcnPathways.R](testthat/test_calcnPathways.R) | 1 | 0.002 | 0 | 0 | 0 | 0 | | test\_check.R | [test\_check.R](testthat/test_check.R) | 1 | 0.002 | 0 | 0 | 0 | 0 | | test\_completness.R | [test\_completness.R](testthat/test_completness.R) | 1 | 0.002 | 0 | 0 | 0 | 0 | | test\_condPer.R | [test\_condPer.R](testthat/test_condPer.R) | 16 | 0.030 | 0 | 0 | 0 | 0 | | test\_drop.R | [test\_drop.R](testthat/test_drop.R) | 22 | 0.124 | 0 | 0 | 0 | 0 | | test\_duplicates.R | [test\_duplicates.R](testthat/test_duplicates.R) | 6 | 0.009 | 0 | 0 | 0 | 0 | | test\_estimate\_sizes.R | [test\_estimate\_sizes.R](testthat/test_estimate_sizes.R) | 2 | 0.003 | 0 | 0 | 0 | 0 | | test\_fromSizeGenes\_sizePathways.R | [test\_fromSizeGenes\_sizePathways.R](testthat/test_fromSizeGenes_sizePathways.R) | 3 | 0.015 | 0 | 0 | 0 | 0 | | test\_fromSizeGenes.R | [test\_fromSizeGenes.R](testthat/test_fromSizeGenes.R) | 2 | 0.046 | 0 | 0 | 0 | 0 | | test\_fromSizePathways.R | [test\_fromSizePathways.R](testthat/test_fromSizePathways.R) | 2 | 0.032 | 0 | 0 | 0 | 0 | | test\_gene.R | [test\_gene.R](testthat/test_gene.R) | 7 | 0.020 | 0 | 0 | 0 | 0 | | test\_genesPerGene.R | [test\_genesPerGene.R](testthat/test_genesPerGene.R) | 6 | 0.016 | 0 | 0 | 0 | 0 | | test\_genesPerPathway.R | [test\_genesPerPathway.R](testthat/test_genesPerPathway.R) | 5 | 0.010 | 0 | 0 | 0 | 0 | | test\_keep.R | [test\_keep.R](testthat/test_keep.R) | 2 | 0.003 | 0 | 0 | 0 | 0 | | test\_keeps.R | [test\_keeps.R](testthat/test_keeps.R) | 2 | 0.004 | 0 | 1 | 1 | 0 | | test\_modify.R | [test\_modify.R](testthat/test_modify.R) | 12 | 0.066 | 0 | 0 | 0 | 0 | | test\_nested.R | [test\_nested.R](testthat/test_nested.R) | 8 | 0.046 | 0 | 0 | 0 | 0 | | test\_pathway.R | [test\_pathway.R](testthat/test_pathway.R) | 7 | 0.025 | 0 | 0 | 0 | 0 | | test\_simulations.R | [test\_simulations.R](testthat/test_simulations.R) | 13 | 0.302 | 0 | 0 | 0 | 0 | | test\_sizes.R | [test\_sizes.R](testthat/test_sizes.R) | 22 | 0.060 | 0 | 0 | 0 | 0 | | test\_sizesPer.R | [test\_sizesPer.R](testthat/test_sizesPer.R) | 6 | 0.016 | 0 | 0 | 0 | 0 | | test\_summary.R | [test\_summary.R](testthat/test_summary.R) | 1 | 0.005 | 0 | 0 | 0 | 0 | | test\_utilities.R | [test\_utilities.R](testthat/test_utilities.R) | 21 | 0.028 | 0 | 0 | 0 | 0 |
Show Detailed Test Results | file | context | test | status | n | time | | :----------------------------------------------------------------------------------- | :----------------------------------------------- | :--------------------------------- | :------ | -: | ----: | | [test\_add.R](testthat/test_add.R#L4) | Testing adding method | gene as character | ERROR | 0 | 0.001 | | [test\_add.R](testthat/test_add.R#L15) | Testing adding method | pathway as character | ERROR | 0 | 0.001 | | [test\_add.R](testthat/test_add.R#L31_L33) | Testing adding method | pathway as character | ERROR | 0 | 0.002 | | [test\_asGeneSetCollection.R](testthat/test_asGeneSetCollection.R#L5) | Testing list to GeneSetCollection method | info | PASS | 5 | 0.025 | | [test\_asGeneSetCollection.R](testthat/test_asGeneSetCollection.R#L16) | Testing list to GeneSetCollection method | as.GeneSetCollection | PASS | 3 | 0.011 | | [test\_calcnPathways.R](testthat/test_calcnPathways.R#L6) | calc.nPathways | works | PASS | 1 | 0.002 | | [test\_check.R](testthat/test_check.R#L4) | Testing check-GSC methods | isolation | PASS | 1 | 0.002 | | [test\_completness.R](testthat/test_completness.R#L9) | Testing utilities method | completness | PASS | 1 | 0.002 | | [test\_condPer.R](testthat/test_condPer.R#L6) | Testing cond\* method | condPerGenes missing | PASS | 4 | 0.007 | | [test\_condPer.R](testthat/test_condPer.R#L15) | Testing cond\* method | condPerGenes specific | PASS | 4 | 0.008 | | [test\_condPer.R](testthat/test_condPer.R#L24) | Testing cond\* method | condPerPathways missing | PASS | 4 | 0.008 | | [test\_condPer.R](testthat/test_condPer.R#L34) | Testing cond\* method | condPerPathways specific | PASS | 4 | 0.007 | | [test\_drop.R](testthat/test_drop.R#L5) | Testing drop method | gene as character | PASS | 6 | 0.051 | | [test\_drop.R](testthat/test_drop.R#L18) | Testing drop method | drop gene as numeric | PASS | 3 | 0.019 | | [test\_drop.R](testthat/test_drop.R#L30) | Testing drop method | pathway as character | PASS | 2 | 0.010 | | [test\_drop.R](testthat/test_drop.R#L39) | Testing drop method | drop pathway as numeric | PASS | 1 | 0.005 | | [test\_drop.R](testthat/test_drop.R#L45) | Testing drop method | pathway as character | PASS | 3 | 0.014 | | [test\_drop.R](testthat/test_drop.R#L52) | Testing drop method | pathway as character | PASS | 3 | 0.012 | | [test\_drop.R](testthat/test_drop.R#L59) | Testing drop method | dropRel | PASS | 4 | 0.013 | | [test\_duplicates.R](testthat/test_duplicates.R#L11) | Testing duplications | duplicatedPathways | PASS | 3 | 0.004 | | [test\_duplicates.R](testthat/test_duplicates.R#L17) | Testing duplications | duplicatedGenes | PASS | 3 | 0.005 | | [test\_estimate\_sizes.R](testthat/test_estimate_sizes.R#L4) | estimate.n\* | estimate.nPathways works | PASS | 1 | 0.002 | | [test\_estimate\_sizes.R](testthat/test_estimate_sizes.R#L9) | estimate.n\* | estimate.nGenes works | PASS | 1 | 0.001 | | [test\_fromSizeGenes\_sizePathways.R](testthat/test_fromSizeGenes_sizePathways.R#L8) | Testing fromSizeGenes\_sizePathways | fromSizeGenes\_sizePathways | PASS | 3 | 0.015 | | [test\_fromSizeGenes.R](testthat/test_fromSizeGenes.R#L6) | Testing fromSizeGenes method | fromSizeGenes | PASS | 2 | 0.046 | | [test\_fromSizePathways.R](testthat/test_fromSizePathways.R#L6) | Testing fromSizePathways method | fromSizePathways | PASS | 2 | 0.032 | | [test\_gene.R](testthat/test_gene.R#L5) | Testing gene method | gene | PASS | 3 | 0.008 | | [test\_gene.R](testthat/test_gene.R#L12) | Testing gene method | 
pathway | PASS | 4 | 0.012 | | [test\_genesPerGene.R](testthat/test_genesPerGene.R#L6) | Testing genePerGene | gene as character | PASS | 2 | 0.005 | | [test\_genesPerGene.R](testthat/test_genesPerGene.R#L14) | Testing genePerGene | gene as character | PASS | 2 | 0.006 | | [test\_genesPerGene.R](testthat/test_genesPerGene.R#L23) | Testing genePerGene | gene as character | PASS | 2 | 0.005 | | [test\_genesPerPathway.R](testthat/test_genesPerPathway.R#L5_L6) | genesPerPathway | default | PASS | 1 | 0.002 | | [test\_genesPerPathway.R](testthat/test_genesPerPathway.R#L12) | genesPerPathway | some pathway | PASS | 4 | 0.008 | | [test\_keep.R](testthat/test_keep.R#L5) | Estimating numbers | double.factorial works | PASS | 2 | 0.003 | | [test\_keeps.R](testthat/test_keeps.R#L3_L5) | keep.R | keepGPP works | SKIPPED | 1 | 0.001 | | [test\_keeps.R](testthat/test_keeps.R#L9) | keep.R | keepGPP works | FAILED | 1 | 0.003 | | [test\_modify.R](testthat/test_modify.R#L6) | Testing modify method | gene as character | PASS | 6 | 0.025 | | [test\_modify.R](testthat/test_modify.R#L22) | Testing modify method | pathway as character | PASS | 4 | 0.022 | | [test\_modify.R](testthat/test_modify.R#L36_L37) | Testing modify method | pathway as character | PASS | 2 | 0.019 | | [test\_nested.R](testthat/test_nested.R#L6) | Testing nested method | nested | PASS | 2 | 0.003 | | [test\_nested.R](testthat/test_nested.R#L12_L13) | Testing nested method | compare | PASS | 6 | 0.043 | | [test\_pathway.R](testthat/test_pathway.R#L5) | Testing pathway method | pathway | PASS | 3 | 0.010 | | [test\_pathway.R](testthat/test_pathway.R#L11) | Testing pathway method | pathway | PASS | 4 | 0.015 | | [test\_simulations.R](testthat/test_simulations.R#L5) | Testing simulations to create GeneSetCollections | fromGPP\_nGenes | PASS | 4 | 0.026 | | [test\_simulations.R](testthat/test_simulations.R#L16) | Testing simulations to create GeneSetCollections | fromPPG\_nPathways | PASS | 5 | 0.167 | | [test\_simulations.R](testthat/test_simulations.R#L42) | Testing simulations to create GeneSetCollections | fromPPG\_GPP | PASS | 4 | 0.109 | | [test\_sizes.R](testthat/test_sizes.R#L6) | Testing sizePathways and sizeGenes | Genes per Pathway and sizePathways | PASS | 3 | 0.007 | | [test\_sizes.R](testthat/test_sizes.R#L13) | Testing sizePathways and sizeGenes | Pathway per Genes and sizeGenes | PASS | 5 | 0.012 | | [test\_sizes.R](testthat/test_sizes.R#L24) | Testing sizePathways and sizeGenes | Pathway per Genes and sizeGenes | PASS | 3 | 0.009 | | [test\_sizes.R](testthat/test_sizes.R#L31) | Testing sizePathways and sizeGenes | Pathway per Genes and sizeGenes | PASS | 5 | 0.012 | | [test\_sizes.R](testthat/test_sizes.R#L43) | Testing sizePathways and sizeGenes | Same output sizeGenes | PASS | 3 | 0.010 | | [test\_sizes.R](testthat/test_sizes.R#L53) | Testing sizePathways and sizeGenes | Same output sizePathways | PASS | 3 | 0.010 | | [test\_sizesPer.R](testthat/test_sizesPer.R#L3) | | sizePathways | PASS | 2 | 0.005 | | [test\_sizesPer.R](testthat/test_sizesPer.R#L11) | | sizeGenes | PASS | 2 | 0.005 | | [test\_sizesPer.R](testthat/test_sizesPer.R#L18) | | sizePathways equivalent | PASS | 1 | 0.003 | | [test\_sizesPer.R](testthat/test_sizesPer.R#L25) | | sizePathways equivalent | PASS | 1 | 0.003 | | [test\_summary.R](testthat/test_summary.R#L5) | Testing summary method | summary | PASS | 1 | 0.005 | | [test\_utilities.R](testthat/test_utilities.R#L4) | Testing utilities method | collectionType | PASS | 2 | 0.003 | | 
[test\_utilities.R](testthat/test_utilities.R#L9) | Testing utilities method | genesPerPathway | PASS | 3 | 0.004 | | [test\_utilities.R](testthat/test_utilities.R#L19_L20) | Testing utilities method | pathwaysPerGene | PASS | 2 | 0.003 | | [test\_utilities.R](testthat/test_utilities.R#L25) | Testing utilities method | geneIdType | PASS | 2 | 0.002 | | [test\_utilities.R](testthat/test_utilities.R#L30) | Testing utilities method | nGenes | PASS | 1 | 0.001 | | [test\_utilities.R](testthat/test_utilities.R#L34) | Testing utilities method | nPathways | PASS | 1 | 0.002 | | [test\_utilities.R](testthat/test_utilities.R#L39) | Testing utilities method | h\_index | PASS | 4 | 0.005 | | [test\_utilities.R](testthat/test_utilities.R#L76) | Testing utilities method | IC | PASS | 4 | 0.006 | | [test\_utilities.R](testthat/test_utilities.R#L88) | Testing utilities method | inverseList | PASS | 1 | 0.001 | | [test\_utilities.R](testthat/test_utilities.R#L94) | Testing utilities method | names\_vec | PASS | 1 | 0.001 |
Session Info | Field | Value | | :------- | :---------------------------------- | | Version | R version 3.5.1 (2018-07-02) | | Platform | x86\_64-apple-darwin15.6.0 (64-bit) | | Running | macOS High Sierra 10.13.5 | | Language | en\_US | | Timezone | America/New\_York | | Package | Version | | :------- | :------ | | testthat | 2.0.0 | | covr | 3.1.0 | | covrpage | 0.0.52 |
yonicd commented 6 years ago

fork of your repo: https://github.com/yonicd/GSEAdv

llrs commented 6 years ago

Yes, on that commit there aren't any failing tests. The problem arises when doing test-driven development: when I first write a test and the function or method fails it, covrpage doesn't build tests/README.md.

yonicd commented 6 years ago

There are failing tests in that commit

llrs commented 6 years ago

The covrpage report says that there are some errors, but the CI reports that all the tests are passing, and the covr report is generated.
On my local machine all the tests are OK; there's only one skipped test, and it passes R CMD check.

yonicd commented 6 years ago

Not sure I follow.

covrpage (which is local): 3 errors + 1 fail
CI: no errors + no fails
local: no errors + no fails + 1 skip

How are you running the local tests: devtools::test, testthat::test_dir, or testthat::test_check?

This is getting farther from the original issue. Maybe for the sake of length I'll close this one and you can open a fresh one with the updated problem.

llrs commented 6 years ago

I was running the tests with devtools::test using RStudio. I'll open a new issue then.