Open soetang opened 9 months ago
Have you tried activating the renv environment before using rsconnect? The challenge is that rsconnect needs to locate DESCRIPTION
files corresponding to the targeted package versions. The out-of-sync error is effectively telling you that rsconnect will not be able to analyze your environment sufficiently.
Using rsconnect::writeManifest() might be an option; the manifest would need to be kept in sync with your renv.lock file, but it means that you can have a much more minimal deployment environment. For example, you could use rsconnect-python to deploy from a pre-existing manifest.
```sh
# Use the rsconnect-python `rsconnect` command to deploy an existing manifest.
rsconnect deploy manifest ...
```
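As a rough sketch of that split, the manifest could be written from R during CI so that the deployment host only needs rsconnect-python. The directory name app is a placeholder, not something from your setup:

```r
# Sketch: write manifest.json next to the app sources during CI.
# "app" is a placeholder for the content directory; writeManifest() inspects
# the installed packages referenced by the project and records them.
rsconnect::writeManifest(appDir = "app")
```

The deployment stage can then run `rsconnect deploy manifest app/manifest.json` with rsconnect-python, without any R installation at all.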
Could you share more information about your deployment script and its environment?
Hey Aron
Thanks for getting back to me. I spent some time with the rsconnect code and can see the problem in terms of reading the DESCRIPTION file. As I see it, rsconnect::writeManifest() has the same problem - unless we force all our users to create the manifest beforehand, an idea that I am not entirely happy about (we have ~60-70 R/Python developers/analysts/data scientists publishing to Posit Connect, with varying skill sets).
Secondly, this would force all our applications to take a dependency on the rsconnect package. We are actually currently advising our developers not to take an rsconnect dependency (unless it is for deployment tooling), as this caused a lot of issues last year. This is also the reason we currently enforce that the environment of our deployment scripts is separate from the environment of the deployed content.
As I see it, all the necessary information for creating the R environment should be in the renv.lock file, and maintaining and continuously synchronizing a manifest.json would just lead to quite a few errors for our developers, prolonging development time. The fewer things for them (and me) to remember, the better.
We generally package all our applications roughly like this example: https://github.com/sol-eng/plumbpkg (without the manifest.json). That is, everything is a package (which is good for testing). Our build/CI pipelines publish to a Posit Package Manager instance.
The deployment pipeline has the responsibility of creating the deployment from the renv.lock and the app files under inst. The code for the deployment scripts lives in a separate repo that we link through a release.yml, so that our users don't all have to learn the intricacies of deployments.
I can try to create some code I can share. Generally, though, I prefer the approach in rsconnect 1.0.0 over the current one. It is fast enough for us with our renv cache.
https://github.com/rstudio/rsconnect/blob/v1.0.0/R/bundlePackageRenv.R:
```r
# Generate a library from the lockfile
lib_dir <- dirCreate(file.path(bundleDir, "renv_library"))
renv::restore(bundleDir, library = lib_dir, prompt = FALSE)
defer(unlink(lib_dir, recursive = TRUE))

deps$description <- lapply(deps$Package, package_record, lib_dir = lib_dir)
```
However, even better would be if we did not have to look at the DESCRIPTION files at all. The renv.lock file should have all the information we need, IMO. I know this is probably a limitation in how Posit Connect currently works; at least I don't seem to be able to hack my way around it.
Adding a lib_dir argument to deployApp could also do it for us; I tested it and could get it to work with our setup. I have no problem running something like renv::restore(bundleDir, library = lib_dir, prompt = FALSE) before running deployApp, and providing a custom lib_dir:
```r
deployApp(
  .....,
  lib_dir = lib_dir
)
```
Hi @aronatkins,
we experience similar issues with rsconnect::writeManifest().
We are deploying content via a custom deployment pipeline to Posit Connect instances hosted within an Azure cluster. This is controlled from a distinct devops server, which itself must be restricted to a minimal R installation, that is, one containing only the R dependencies needed for deployment tasks. Hence, I quote @soetang here:

> This is also the reason we currently enforce that the environment of our deployment scripts is separate from the environment of the deployed content.
All we need is rsconnect::writeManifest(), or a variant of it that circumvents the retrieval of metadata about installed packages and the subsequent comparison with the information stored in the renv.lock file (lines 46 through 58 in rsconnect::parseRenvDependencies()). See also
Mentioned in zendesk issue #103908
> Adding a lib_dir argument to deployApp could also do it for us; I tested it and could get it to work with our setup. I have no problem running something like renv::restore(bundleDir, library = lib_dir, prompt = FALSE) before running deployApp, and providing a custom lib_dir: deployApp(....., lib_dir = lib_dir)
I just tried to demonstrate what I mean with this: https://github.com/soetang/rsconnect/pull/1 - I had it working previously, but I don't know whether it still does with the current version of the rsconnect package.
@soetang - thanks for the code demonstration. It appears that you have a library that is in sync with the renv.lock file. Could you explain the differences between the lib_dir and the renv library that would be populated by renv::restore()?
In an environment where you can construct a fully populated R library, is this a sufficient approach?
```r
# Assuming the .Rprofile has already activated renv
renv::restore()
rsconnect::writeManifest() # or rsconnect::deployApp()
```
Alternatively, if you are constructing the library separately, have you tried configuring .libPaths() ahead of writing the manifest / deploying?
```r
.libPaths(c("/my/r/library", .libPaths()))
rsconnect::writeManifest() # or rsconnect::deployApp()
```
Does this second approach avoid the need to provide a custom lib_dir argument?
Hey Aron
Sorry I have not gotten back to you sooner. I just wanted to re-test your suggestion to make sure that I had not missed anything. I tried what you suggested, but unfortunately it does not work. I tried both your approach and withr::with_libpaths(), with both replace and prefix. However, I continue to get:
```
Failure (test-deployapp.R:22:5): Test deployment of dashbord
Expected `{ ... }` to run without any errors.
i Actually got a <rlang_error> with text:
Library and lockfile are out of sync
i Use renv::restore() or renv::snapshot() to synchronise
i Or ignore the lockfile by adding to your .rscignore
```
Let me try to explain. Basically, we have two repos (a bit simplified):
A devops repo:

```
devops_repo
├── deploy_script.R
└── renv.lock
```
An app repo:

```
app_repo
├── app.R # could also be plumber or markdown
└── renv.lock
```
Deployment happens in a pipeline in Azure DevOps, in which the deployment script takes the app_repo (an R package) and deploys it to Posit Connect (that is, the code is not run by a user locally). Unfortunately, the devops repo's renv.lock is not guaranteed to be compatible with the project's renv.lock. We have had recurring issues with devops dependencies interfering with the app dependencies, and we want to decouple the two. This is what rsconnect 1.0.0 allowed us to do; however, this was changed already in the next release.
So one solution could be to provide a libpath. Precedent is the rcmdcheck function, which also allows this. We use this option for CI, where the R code running the CI pipeline is completely decoupled from the dependencies of the R package.
Another solution could be for deployment to be done without reading the DESCRIPTION files at all, just from the renv.lock; however, I understand that this is not straightforward.
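To make the first option concrete, here is a minimal sketch of the decoupled deployment we are after. Note that lib_dir on deployApp() is the *proposed* argument (it does not exist in current rsconnect), and app_repo is a placeholder path for the checked-out app repository:

```r
# Placeholder: checkout of the app repository in the pipeline workspace.
app_dir <- "app_repo"

# Restore the app's dependencies from its own renv.lock into a throwaway
# library, independent of the devops environment's packages.
lib_dir <- file.path(tempdir(), "renv_library")
dir.create(lib_dir, recursive = TRUE, showWarnings = FALSE)
renv::restore(
  project  = app_dir,
  library  = lib_dir,
  lockfile = file.path(app_dir, "renv.lock"),
  prompt   = FALSE
)

# Proposed (not current API): let rsconnect read package metadata from
# that library instead of the session's own .libPaths().
rsconnect::deployApp(appDir = app_dir, lib_dir = lib_dir)
```

This mirrors what rcmdcheck's libpath argument does for checking: the environment running the tooling stays fully separate from the dependencies of the content being processed.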
We don't want developers to create manifest files themselves, i.e. with rsconnect::writeManifest(), since this would mean they have to synchronize packages in three places (DESCRIPTION, renv.lock and the manifest file). It is easy to enforce that DESCRIPTION and renv.lock stay in sync through rcmdcheck and renv::snapshot(type = 'explicit'). Experience tells us that having them generate manifest files would degrade the developer experience and increase the risk of functionality not working as expected.
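For illustration, a sketch of how that two-way sync can be enforced in CI; the paths and the error_on threshold are our choices, not requirements:

```r
# Snapshot only the packages that DESCRIPTION explicitly declares, so the
# lockfile cannot drift away from the package metadata.
renv::snapshot(project = ".", type = "explicit", prompt = FALSE)

# R CMD check (via rcmdcheck) then validates DESCRIPTION itself; failing on
# warnings catches undeclared or missing dependencies early.
rcmdcheck::rcmdcheck(path = ".", error_on = "warning")
```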
@aronatkins Could you have a look at my suggestion in the linked issue #1093? I don't think restoring the renv project should be necessary if parseRenvDependencies() uses the correct library dir (assuming, of course, that the project has previously been restored). In that case, it should be sufficient to grab the correct library path from renv:
```r
deps$description <- lapply(
  deps$Package,
  package_record,
  lib_dir = renv::paths$library(project = bundleDir)
)
```
Or am I overlooking something?
Hey
With the release of rsconnect 1.0.0, I was really happy to learn that you could use a renv.lock file for specifying deployment dependencies.
This was a big help, since the release script in our Azure pipelines had different dependencies from the actual apps we were deploying, and this helped us sort those problems out. However, with the functionality that enforces that the library and lockfile cannot be out of sync, we again have this problem: the deployment script needs to have the same dependencies as specified in the app.
I understand that the change was made to avoid having different experiences locally and on the Connect server. I would therefore like to suggest adding an argument to deployApp and writeManifest that allows us to bypass the check.
I can also try to create a pull request myself if you are interested.
I am also interested in other ways to approach the issue, if you have any.