r-quantities / units

Measurement units for R
https://r-quantities.github.io/units

Deploy pkgdown via CI and test on GitHub Actions #235

Closed pat-s closed 4 years ago

pat-s commented 4 years ago

pkgdown preview: https://pat-s.github.io/units/index.html

Steps to set up deployment

After merging this PR:

remotes::install_github("ropensci/tic")
tic::use_ghactions_deploy()

Issues

https://github.com/pat-s/units/actions/runs/72662835

codecov[bot] commented 4 years ago

Codecov Report

Merging #235 into master will not change coverage. The diff coverage is n/a.


@@           Coverage Diff           @@
##           master     #235   +/-   ##
=======================================
  Coverage   94.11%   94.11%           
=======================================
  Files          18       18           
  Lines         900      900           
=======================================
  Hits          847      847           
  Misses         53       53           


Enchufa2 commented 4 years ago

Sorry, I appreciate the effort, but where does this come from? Besides,

  1. I don't understand what the advantage of using tic here is.
  2. It's failing.

I mean, maybe @edzer asked for this and I'm simply not aware, but if not, such a change is a little bit intrusive without any previous discussion.

pat-s commented 4 years ago

Iñaki,

this comes just from my motivation to make FOSS better ;) The pkgdown site of this repo is outdated and has no auto-deploy. This PR helps here - and I try to favor contributions which solve things over just pointing out the problem in an issue and saying "hey, please fix...".

{tic} is a choice and you'll find all of its advantages by reading its docs. Again, no need to use it or merge this PR, this is just an offer. Feel free to close it if it's not what you want. (Edzer knows {tic} and uses it already in some repos, therefore I did not include more explanation.)

In addition, it is not {tic} that is failing but R CMD check of {units}, for the reasons outlined above. This is really on the {units} side, and taking a closer look and distinguishing between the two would be appreciated.

In addition, this PR adds testing on GitHub Actions and with that adds value to the CI suite of this repo in general (previously there was no testing on macOS and the Windows checks might have missed some cases).

I mean, maybe @edzer asked for this and I'm simply not aware, but if not, such a change is a little bit intrusive without any previous discussion.

No he did not ask for it. I am aware that some people may find such an action intrusive, sorry for that. My personal view on this is different and I am always happy if people contribute to a project with PRs which make the project better. But I got your point and won't do so again since you do not seem to like "PRs out of nowhere".

Enchufa2 commented 4 years ago

this comes just from my motivation to make FOSS better ;)

And this is appreciated, as I said, but

The pkgdown site of this repo is outdated and has no auto-deploy.

Which doesn't necessarily mean we want an auto-deploy. Certainly not every day with a cron job.

This PR helps here - and I try to favor contributions which solve things over just pointing out the problem in an issue and saying "hey, please fix...".

I do too, but this PR does not solve any existing problem (it is an enhancement, not a fix) and brings more with it, so more work.

{tic} is a choice and you'll find all of its advantages by reading its docs.

And homework. :)

In addition, it is not {tic} that is failing but R CMD check of {units}, for the reasons outlined above. This is really on the {units} side, and taking a closer look and distinguishing between the two would be appreciated.

{units} installs and checks just fine on Linux, Solaris, Windows and macOS on CRAN, Ubuntu on Travis and Windows on AppVeyor, so I tend to think that this is on the CI configuration side.

I would like to take a closer look to see what the problem is on the failing platforms, as time permits.

In addition, this PR adds testing on GitHub Actions and with that adds value to the CI suite of this repo in general (previously there was no testing on macOS and the Windows checks might have missed some cases).

More CI adds value when it works flawlessly; if not, it adds more work.

No he did not ask for it. I am aware that some people may find such an action intrusive, sorry for that. My personal view on this is different and I am always happy if people contribute to a project with PRs which make the project better. But I got your point and won't do so again since you do not seem to like "PRs out of nowhere".

Note that the out-of-nowhere feeling is accentuated by the fact that there is no introductory paragraph motivating the PR, but it directly starts with "steps to set up deployment"... I hope you can understand that.

I haven't discussed this in particular with @edzer, so I don't know his opinion. Personally, I think small and focused PRs fixing specific issues are more than welcome, even if they come "out of nowhere". But more complicated fixes or enhancements (especially enhancements that add maintenance work, such as more CI) require some preliminary discussion.

In other words, it would have been nice if you had opened an issue first asking "hey, would you be interested in x or y?". It would have given us the chance to ask questions, or to say "x would be very nice, but we think we are fine without y", which in turn would have saved you time too.

I suppose we'll have to do that directly in the PR this time. More questions to come as soon as I find some time for this.

pat-s commented 4 years ago

Which doesn't necessarily mean we want an auto-deploy. Certainly not every day with a cron job.

I see the point in theory but not in practice, since I've never come across anyone who did not see the point in this. But maybe your point is more on the theoretical side here.

{units} installs and checks just fine on Linux, Solaris, Windows and macOS on CRAN, Ubuntu on Travis and Windows on AppVeyor, so I tend to think that this is on the CI configuration side.

Sure, but the CRAN side is somewhat different from local user experiences, especially on macOS, for which the CRAN test suite runs on and focuses on older OS versions (El Capitan for R < 4.0 and High Sierra for R >= 4.0). That's why I am trying to stress the need for maintainers to test on the latest macOS release via CI to find potential issues. And yes, the issue described with linking against udunits might only apply to macOS Catalina.

More CI adds value when it works flawlessly; if not, it adds more work.

I do not like to view CI as "it adds more work". In the end it's a service giving you easy check capabilities that save you time in the long run. If some checks fail, that is surely work at first, but in the end CI is just doing its job: telling you that something is not quite right on some OS. What can be annoying is the setup process and whether it causes trouble (which is not the case here).

Nobody forces anyone to use CI - there are still people out there who prefer to do everything locally/manually and then eventually start complaining that they do not have access to different machines with all kinds of OSes on them.

Note that the out-of-nowhere feeling is accentuated by the fact that there is no introductory paragraph motivating the PR, but it directly starts with "steps to set up deployment"... I hope you can understand that.

I understand that - sorry for the wording. This was not the intention. I'll try to do better on this/the general process.

I did not want to cause such an extensive discussion on this - just to simplify things. Sorry for the wall of text.

I'd suggest to just go ahead from here and ping me if you have questions - or if it's too much work, feel free to close :) I'll open a dedicated issue on the udunits linking.

edzer commented 4 years ago

I maintain one package in particular with highly complex dependencies. Patrick and Kirill have helped here to set up tic, and also with follow-up help and maintenance in cases where it failed or needed updating. Roles may be somewhat confusing in this repo as I am listed as maintainer but Iñaki effectively does most maintenance.

There seems to be a tendency to move away from travis to ghactions; also having OSX as additional testing/CI platform makes sense, as does the auto pkgdown deployment. Let's give this a good look by the time you think it is finished, @pat-s , thanks for the efforts so far.

Enchufa2 commented 4 years ago

That's why I am trying to stress the need for maintainers to test on the latest macOS release via CI to find potential issues. And yes, the issue described with linking against udunits might only apply to macOS Catalina.

Are you a macOS user? From what I've heard Catalina has been a complete disaster in many ways, right?

Note that the out-of-nowhere feeling is accentuated by the fact that there is no introductory paragraph motivating the PR, but it directly starts with "steps to set up deployment"... I hope you can understand that.

I understand that - sorry for the wording. This was not the intention.

Thanks for this.

I'd suggest to just go ahead from here and ping me if you have questions

Will do.

I'll open a dedicated issue on the udunits linking.

That would be great, thanks.

Enchufa2 commented 4 years ago

There seems to be a tendency to move away from travis to ghactions;

I know, in fact I was one of the first people to praise and give a try to Jim Hester's test repo when he was implementing current R actions back in October, and I use them myself e.g. to daily update CRAN builds for Fedora. I particularly like the fact that, for this workflow, GitHub secrets stay on GitHub.

There has been much movement in the last months around this, and for established projects with working CIs, I preferred to wait a little bit until workflows settle down. Maybe that time is now.

also having OSX as additional testing/CI platform makes sense, as does the auto pkgdown deployment. Let's give this a good look by the time you think it is finished, @pat-s , thanks for the efforts so far.

It does. But it makes sense too to bet on a single CI solution where possible, so I see this as an opportunity to drop AppVeyor first (when we figure out what the issue on Windows here is) and eventually Travis too (but for that I need to adapt the check we currently do with {quantities} to ghactions).

Enchufa2 commented 4 years ago

@pat-s Do you know why the "[Stage] Prepare & Install (macOS-devel)" phase here fails but it's marked as successful?

pat-s commented 4 years ago

Are you a macOS user? From what I've heard Catalina has been a complete disaster in many ways, right?

I am. For normal users there are no R-related issues if one just uses binaries. Other issues related to Catalina are also minor in my opinion, and people make more of them than needed (I am talking about actual breakage or things not working).

When it comes to R-devel and source installations in general, things are a bit different. The 10.15 SDK comes with some changes that cause some packages requiring compilation to fail. However, part of this was also made worse by the recent Rcpp v1.0.4 release and the resistance to listening to feedback from macOS users. The whole tidyverse is also removing Rcpp as a dep for this reason (if you wanna know more, read the latest issues there with some popcorn).

It is hard to track down why things stop working across macOS releases, and it's hard to go back or just spin up a VM.

There has been much movement in the last months around this, and for established projects with working CIs, I preferred to wait a little bit until workflows settle down. Maybe that time is now.

Note that {tic} does a lot of things differently from Jim's approach (FLAGS, caching, etc.), just to be aware of that. Which approach you follow is your choice in the end. I am obviously biased since I am one of the authors of {tic}.

@pat-s Do you know why the "[Stage] Prepare & Install (macOS-devel)" phase here fails but it's marked as successful?

Good find. I don't know right now - actually it should fail. We are calling remotes::install_deps() internally, which should fail the "action" if it errors. This might be an upstream issue in {remotes}, but I might be able to work around it in {tic}. However, I know that other installation errors in that stage do fail that stage, so it might be a tricky one.

The important part though is that you see that the linking to udunits fails. While you see the failure for {udunits2} here and not for {units} (because the latter is just installed locally), the same applies in other CI dep installs if {units} is a dep.

Enchufa2 commented 4 years ago

When it comes to R-devel and source installations in general, things are a bit different. The 10.15 SDK comes with some changes that cause some packages requiring compilation to fail. However, part of this was also made worse by the recent Rcpp v1.0.4 release and the resistance to listening to feedback from macOS users. The whole tidyverse is also removing Rcpp as a dep for this reason (if you wanna know more, read the latest issues there with some popcorn).

Funny enough, this whole story started with an out-of-nowhere PR that remodelled exception handling and that was merged too quickly IMHO (don't get me wrong, I'm not comparing this PR to that one, because this is not changing any critical part of the package). If you still have some popcorn left, PR number 1043 is another interesting read. :)

Another angle in this matter is that Dirk announced repeatedly that the current release was available for testing in his drat (I suspect that he suspected that 1043 may cause some trouble, but maybe that's just an a posteriori perception), but nobody tested a thing (myself included, but, in my defense, I'm a Linux user and that's covered by Dirk). Then it got to CRAN and here we are. I believe that his reluctance to roll a patch release comes from there.

For what it's worth, a patch release was actually sent to CRAN 6 days ago. So hopefully all this mess ends in a few hours/days.

It is hard to track down why things stop working across macOS releases, and it's hard to go back or just spin up a VM.

I understand. What's frustrating is that they release a new SDK and that's it: everybody, deal with it.

Note that {tic} does a lot of things differently from Jim's approach (FLAGS, caching, etc.), just to be aware of that. Which approach you follow is your choice in the end. I am obviously biased since I am one of the authors of {tic}.

I need to catch up then. Here's one question: what's the difference between your pat-s/always-upload-cache and actions/cache?

edzer commented 4 years ago

I have popcorn. Where should I start?

pat-s commented 4 years ago

If you still have some popcorn left, PR number 1043 is another interesting read. :)

Good read for the after-lunch-espresso.

Another angle in this matter is that Dirk announced repeatedly that the current release was available for testing in his drat (I suspect that he suspected that 1043 may cause some trouble, but maybe that's just an a posteriori perception), but nobody tested a thing (myself included, but, in my defense, I'm a Linux user and that's covered by Dirk). Then it got to CRAN and here we are. I believe that his reluctance to roll a patch release comes from there.

I think there's a lot in this one and even more below the surface (which many people might not see). But this is the wrong place to discuss it. I would just summarise it as "such things happen if single individuals have too much power in projects that many others rely on".

I understand. What's frustrating is that they release a new SDK and that's it: everybody, deal with it.

Sure, but that's the same for many other projects. It is just not surprising to run into such things if no checks are run against the current release (or even beta versions) but only against three-year-old SDKs which maybe 20% of users are operating on. I mean, this process (Apple's SDK releases) has existed for years and there is no foreseeable change in how R-core/CRAN will handle it in the future, so it might go on forever... :/

I need to catch up then. Here's one question: what's the difference between your pat-s/always-upload-cache and actions/cache?

By default, cache uploads only happen on a successful run. My custom action always uploads the (daily) cache in a cron build overnight. So every build you start manually operates on the package cache of the day, even if the run failed. Otherwise you would have to wait for all the dependency installs on every run again until one succeeds. Also, r-lib/actions currently does no caching via cron builds at all. (This is just one of many small differences between {tic} and r-lib/actions.)
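To make the difference concrete, here is a minimal, hypothetical workflow fragment (not taken from this PR; the action version, cache path, and keys are assumptions for illustration) combining a nightly cron trigger with pat-s/always-upload-cache, which accepts the same inputs as actions/cache:

```yaml
# Hypothetical sketch: refresh the dependency cache nightly, even on failure.
on:
  push:
  schedule:
    - cron: "0 4 * * *"  # nightly cron build; the time is arbitrary

jobs:
  check:
    runs-on: macOS-latest
    steps:
      - uses: actions/checkout@v2
      # Drop-in replacement for actions/cache that also uploads the cache
      # when a later step in the job fails.
      - uses: pat-s/always-upload-cache@v2
        with:
          path: ${{ env.R_LIBS_USER }}  # assumed location of the package library
          key: ${{ runner.os }}-r-${{ hashFiles('DESCRIPTION') }}
          restore-keys: ${{ runner.os }}-r-
```

With actions/cache in the same position, a failed dependency install would discard the cache and the next run would start from scratch again.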

pat-s commented 4 years ago

I have popcorn. Where should I start?

I'd say #1060 is the most entertaining one.

pat-s commented 4 years ago

@Enchufa2 The builds are somewhat working now - the remaining issues in the windows and mac-devel builds seem to be related to the {units} package and are out of my scope.

Just FYI: I've never seen this one before

[warning] A complete check needs the 'checkbashisms' script.

Enchufa2 commented 4 years ago

@Enchufa2 The builds are somewhat working now - the remaining issues in the windows and mac-devel builds seem to be related to the {units} package and are out of my scope.

Thanks! I'll try to take a look this weekend

Just FYI: I've never seen this one before

[warning] A complete check needs the 'checkbashisms' script.

This seems like a new check. The package containing such a script is called devscripts-checkbashisms.
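For reference, the script can typically be installed from distribution repositories; the package names below are the usual ones (Debian/Ubuntu ship the script inside devscripts, Fedora splits it out), though exact names may vary by release:

```
# Debian/Ubuntu: checkbashisms is part of the devscripts package
sudo apt-get install devscripts

# Fedora: the script is packaged separately
sudo dnf install devscripts-checkbashisms
```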

Enchufa2 commented 4 years ago

A couple of questions meanwhile:

  1. Is all the ccache stuff really necessary? It clutters the yaml file and it seems overkill to me for this project, because the compilation takes very few seconds.
  2. The current configuration deploys the site every day. I prefer to keep the pkgdown site in sync with the CRAN version. Is it possible to trigger a deployment just when a version is tagged?

pat-s commented 4 years ago

Is all the ccache stuff really necessary? It clutters the yaml file and it seems overkill to me for this project, because the compilation takes very few seconds.

It's a template and should apply to all repos. You can of course remove it, but installs will take a bit longer then, especially on macOS devel and Ubuntu, where we only install from source. In the end this varies across repos, but since the cache is saved only once a week, this is actually always more efficient than having none. The YAML files are somewhat cluttered because they should cover a great variety of use cases without requiring user modification. All detailed config changes to the builds should go into the CI-agnostic tic.R file, in the philosophy of tic.

The current configuration deploys the site every day. I prefer to keep the pkgdown site in sync with the CRAN version. Is it possible to trigger a deployment just when a version is tagged?

It builds the site on every commit; it only deploys when there are changes. In my view this should happen in every build to make sure everything works (vignettes, etc.). You can however have a devel & release site - see how we do this in mlr3 for example.

If you really only want to build and deploy the pkgdown site once per release, you could condition the line in tic.R like this:

if (ci_on_ghactions() && ci_has_env("BUILD_PKGDOWN") && ci_is_tag()) {
  do_pkgdown()
}

(assuming you only tag CRAN releases)
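For completeness, a similar effect could also be achieved at the workflow level; this is a hypothetical sketch assuming the pkgdown deployment were split into its own workflow file (which is not how this PR is structured):

```yaml
# Hypothetical: run a dedicated deploy workflow only for tagged releases
on:
  push:
    tags:
      - "v*"  # assumes CRAN releases are tagged with a leading "v"
```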

pat-s commented 4 years ago

@Enchufa2 How do you want to proceed here?

Enchufa2 commented 4 years ago

Sorry, I'm overloaded and I didn't find the time yet for this, but it's in the backlog.

Enchufa2 commented 4 years ago

I found the time to give this a shot. Thank you for this PR, it's been of great help. But I've been investigating several setups out there, and I think I'm more comfortable with separate workflows for separate things. So finally I've adopted the setup in #246.

BTW, I managed to solve the Windows issue here.

pat-s commented 4 years ago

Ok, fair enough. Glad it helped somewhat though.