Closed — flodel closed this issue 5 years ago
The tests all pass as intended; the package would not be on CRAN if the tests on the master branch did not pass. See below.
> test_check("CVXR", filter="^g01")
══ testthat results ═══════════════════════════════════════════════════════════
[ OK: 264 | SKIPPED: 10 | WARNINGS: 0 | FAILED: 0 ]
What is true, however, is that we could clean up the tests (e.g. the skipped ones) and improve them to make them RStudio-friendly and to provide better messages. CVXR is a complex package and we welcome your participation, but please note that most development is happening on the 1.0 branch, which differs significantly from 0.99-x and is under very active development.
I rely on this package enough (thank you so much for your work) that I would like to help. I have forked from master and was planning to work on these two issues (https://github.com/cvxgrp/CVXR/issues/55 and https://github.com/cvxgrp/CVXR/issues/31), with the aim of eventually submitting pull requests.
In the process, I wanted to run the unit tests, both to add my own tests and to make sure that my changes would not break current behavior. Unfortunately, I see that a lot of the unit tests are failing (one example below), and I can't tell whether this is due to my specific environment or whether the tests do indeed need some rework.
Also, I note that there are a few `print()` calls and calls to `solver(..., verbose = TRUE)` within the tests, which make their output particularly difficult to read when run in interactive mode. I would hope that the master branch consistently passes all of its tests with clean output. Thank you if you can address this.
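As a workaround for the noisy output described above, one minimal sketch (assuming only base R, since `capture.output()` is part of utils) is to wrap a test run in a hypothetical helper that diverts printed text away from the console; `run_quietly` and its usage below are illustrations, not part of CVXR or testthat:

```r
# Hypothetical helper: evaluate an expression while diverting anything
# printed to the console into a character vector, returning only the
# expression's value. capture.output() evaluates its arguments in the
# caller's frame, so the assignment to `result` works as expected.
run_quietly <- function(expr) {
  log <- capture.output(result <- expr)
  result
}

# Example: the print() noise is captured instead of reaching the console,
# and only the final value of the block is returned.
x <- run_quietly({
  print("solver chatter")  # would normally clutter the test log
  1 + 1
})
# x is 2; the printed text went into `log`, not the console
```

A filtered test run such as `devtools::test(filter = "g01-atoms")` could be passed through the same wrapper, though testthat's own reporters are the longer-term fix for clean output.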
Example output, after running `devtools::test(filter = "g01-atoms")`:
If it is useful, here is my `sessionInfo()`: