Closed: mpadge closed this issue 7 years ago
Oh, but you don't really need to build the vignette on CRAN: see https://cran.r-project.org/web/packages/httr/vignettes/api-packages.html for Jenny Bryan's strategy, which I use e.g. in this vignette: https://raw.githubusercontent.com/ropensci/opencage/master/vignettes/opencage.Rmd
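The gist of that strategy, roughly (a sketch from memory, so double-check the vignette source for the exact chunk options):

```r
# in a setup chunk at the top of the vignette
NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true")
knitr::opts_chunk$set(
  purl = NOT_CRAN,  # don't extract code on CRAN
  eval = NOT_CRAN   # don't evaluate API-calling chunks on CRAN
)
```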
If we think the timeout is due to a status of the API(s) that is reported somewhere, there must be a way to consult that status, right? Although my googling wasn't too successful.
The problems are with timeouts when getting the status of both `overpass` and `nominatim`, which is why they can't really be avoided. I can deal with the tests easily enough, and I already use an `IS_CRAN` variable, but I don't understand the comment in the `httr` vignette that 'vignettes are built locally': they're built by CRAN from the `.Rmd` files, unless `.html` files are included in `/inst/doc`, which is of course not recommended practice. Vignettes have to be built by CRAN, but are they then rebuilt with each check? (Yes.) Is there some kind of CRAN environment variable that overrides `!NOT_CRAN` when first building a package but not when subsequently checking it? I don't think so, in which case I don't see how that comment in the `httr` vignette can actually apply in general.
I guess what has to be done for vignettes is to code two chunks for `IS_CRAN` and `!IS_CRAN`, with the former just dumping the fake output of a result without actually making the calls, as sketched below.
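Something like this in the `.Rmd`, perhaps (a sketch only: the `getbb()` example and the cached file name are illustrative, and `is_cran` would be set in a setup chunk, e.g. `is_cran <- !identical(tolower(Sys.getenv("NOT_CRAN")), "true")`):

````rmd
```{r real-call, eval = !is_cran}
# off CRAN: make the real nominatim call
bb <- getbb("Salzburg")
bb
```

```{r fake-output, eval = is_cran, echo = FALSE}
# on CRAN: print a result cached from a previous local build; no network call
bb <- readRDS("bb_salzburg.rds")
bb
```
````

On CRAN only the second chunk runs, so readers still see plausible output but no calls are made.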
@mpadge but couldn't the status be reported by something that doesn't depend on the API itself? The OpenAQ example I gave is independent from their other API.
Yes, this API thing is not best practice, but you do include pre-built vignettes, and the chunk option `eval` is set to `FALSE` when `IS_CRAN` is `TRUE`, so you don't get errors when checking on CRAN. Obviously only to be done with good conscience if tests etc. were passing on other systems :grin:
Do you have any contact with people working on `overpass` or `nominatim` that could give more info on the frequency of timeouts and the best way to deal with them?
The only "true" solution will be to implement proper web mocking, but that has to wait until SC finishes `crul`, `webmockr`, or `vcr`. This issue will stay open until that time ...
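Once those are usable, the stubbing might look roughly like this (just a sketch based on the `webmockr` docs; the status URL and canned body are illustrative, and it assumes our HTTP calls go through a client that `webmockr` can intercept):

```r
library(webmockr)
webmockr::enable()

# stub the overpass status endpoint so tests never touch the network
stub <- stub_request("get", "https://overpass-api.de/api/status")
to_return(stub, status = 200,
          body = "Connected as: 1234567890\n3 slots available now.",
          headers = list("Content-Type" = "text/plain"))

# ... run tests that call overpass_status() ...

webmockr::disable()
```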
It seems the current mocking using `stub.R` is also not entirely working as it should, particularly not for `getbb`, which seems the likely cause of the appveyor failures. I've not yet been able to fix it, but will hopefully do so soon ...
All tests are finally successfully stubbed, and the only point where I can see a possible break is the call from `overpass_query` to `overpass_status()`, which is not stubbed. If the status call does not work this may still break, but that's an extreme case that really ought not even need to be addressed, so closing now.
R CMD CHECK successful! :tada: Once you've accepted @Robinlovelace's PR I'll build the website.
See lots of discussion in #31. The problem is particularly pronounced in appveyor builds, but also plagues travis. Both `overpass` and `nominatim` are prone to timing out, which leads to tests failing.

The current approach for `overpass` is in the `overpass_status` function, which tries 10 times with a 100-second timeout, yet still occasionally fails. I've also implemented a URL status check for `nominatim` called `url_available`.

Tests seem generally okay now (and even if they weren't, they could technically be made okay with `if` clauses to perform tests only when the timeout is not reached; a sketch of such a guard is below), yet building vignettes still occasionally fails. The latter cannot be avoided through `if (!timeout)` trickery, and so will always risk failing CRAN checks.

This problem must be solved ...
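For reference, the test-side guard looks roughly like this (a sketch only: `url_available()` is shown here as a thin wrapper around `httr::HEAD()` with a short timeout, which may not match the actual implementation, and the expected class of `getbb()` output is assumed):

```r
# in tests/testthat/test-getbb.R

# availability helper: TRUE if the nominatim server responds within `timeout` seconds
url_available <- function(url = "https://nominatim.openstreetmap.org",
                          timeout = 10) {
    res <- tryCatch(httr::HEAD(url, httr::timeout(timeout)),
                    error = function(e) NULL)
    !is.null(res) && httr::status_code(res) < 400
}

test_that("getbb returns a bounding box", {
    skip_on_cran()
    if (!url_available())
        skip("nominatim timed out")  # avoid spurious CI failures
    bb <- getbb("Salzburg")
    expect_is(bb, "matrix")
})
```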
cc @masalmon