elliottsj opened 5 years ago
We are encountering the same issue.
Contrary to Lerna, it seems that Rush was not built around NPM. We rely on it to push applications to production directly: we don't publish each package to NPM, but instead build Docker images that we can use with Kubernetes.
Right now we are encountering the same issues. Static websites (e.g. created using Create React App) compiled with webpack can live nicely in an nginx docker image, and the CI/CD for such a case is quite simple (I'm not listing linting and testing steps):
1. `rush install`, then we cache `common` and each `node_modules` folder
2. `rush rebuild`, again we cache the build folders for the next step
3. A `COPY` command there, and then we build and push our image to our registry

But sometimes apps require a `node_modules` folder to be present, e.g. Next.js with a custom express server, an express API, etc. We usually just use TSC for those use cases (and with Next.js we don't have much choice in terms of build).
In those cases the third step gets more complicated, as we need a fully functional `node_modules` folder in the Docker image too. Thanks to another issue here, we thought we could just:
```shell
mv $APP_FOLDER/node_modules $APP_FOLDER/node_modules_old
rsync $APP_FOLDER/node_modules_old/ $APP_FOLDER/node_modules/ -a --copy-links -q
```
I read this too quickly and thought it was the silver bullet; however, I quickly realised that this command only copies the modules required by our app, but not their own dependencies, which makes sense given PNPM's layout.
As we don't want to copy all the dependencies for all our projects into each Docker image, it seems I'll have to create a shell script to build a working `node_modules` folder, but that's not convenient.
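To make the failure mode concrete, here is a toy reproduction. The paths are made up and only loosely imitate pnpm's store layout, and `cp -RL` stands in for the `rsync --copy-links` call above: the directly-linked package gets copied, but its own dependency lives as a *sibling* entry in the store (which Node finds by walking up `node_modules` directories), so it is left behind.

```shell
set -eu
rm -rf demo

# Fake store: foo's dependency bar sits NEXT TO foo in the store,
# not inside it -- Node resolves it by walking up node_modules dirs.
mkdir -p demo/store/.pnpm/foo@1.0.0/node_modules/foo
mkdir -p demo/store/.pnpm/bar@1.0.0/node_modules/bar
echo 'module.exports = "foo"' > demo/store/.pnpm/foo@1.0.0/node_modules/foo/index.js
echo 'module.exports = "bar"' > demo/store/.pnpm/bar@1.0.0/node_modules/bar/index.js
ln -s ../../bar@1.0.0/node_modules/bar demo/store/.pnpm/foo@1.0.0/node_modules/bar

# The app's node_modules only symlinks its direct dependency foo:
mkdir -p demo/app/node_modules
ln -s ../../store/.pnpm/foo@1.0.0/node_modules/foo demo/app/node_modules/foo

# Dereference the app's symlinks, like the rsync command above:
mv demo/app/node_modules demo/app/node_modules_old
cp -RL demo/app/node_modules_old demo/app/node_modules

ls demo/app/node_modules          # foo was materialized as a real dir...
test -e demo/app/node_modules/bar || echo "...but bar was not copied"
```

The copy succeeds for `foo` itself, yet `require('bar')` from inside the copied `foo` would now fail, which is exactly the "modules but not their own dependencies" behavior described above.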
Any thoughts on this issue?
@elliottsj @RDeluxe Sorry we didn't follow up on this earlier.
I'm not sure whether individual shrinkwrap files are the right solution, since it could be very inefficient. This Docker topic came up again on Gitter in this conversation. We discussed several other approaches to the problem. Let us know if those alternatives would work for your scenario.
I'm experiencing this issue too, since I end up with large folders in git. Right now the whole folder, without `.git` but with dependencies, is over 1 GB, even though some of the buildables are not intended to be that large. I'm struggling with this so much that I'm now writing my own lockfile builder from the global one (and discovered this issue while coding it).
Is 1 GB considered large? The RushStack repo is 1.29 GB, but if you look at a typical CI job:

- `git clone`: 8 seconds
- `rush install`: 1 minute 6 seconds

When I am working with slow internet (e.g. airplane wifi), I've had pretty good luck using verdaccio as a caching proxy.
The thing is not about npm; I'm building runnable docker images with node.js servers, and build & deployment time increases by around 3-5 minutes in total because of that. Also, since these steps are non-cacheable layers, it makes my private docker registry consume space significantly faster.
Also, just a note: I would prefer not to ship babel, typescript, webpack, etc. as part of `node_modules` in the docker image either.
Can this be achieved now using `useWorkspaces: true` for pnpm? Looking at this option: https://pnpm.js.org/en/4.6/workspaces#shared-workspace-lockfile
> But sometimes apps require a `node_modules` folder to be present, e.g. Next.js with a custom express server, an express API, etc. We usually just use TSC for those use cases (and with Next.js we don't have much choice in terms of build).
>
> In those cases the third step gets more complicated, as we need a fully functional `node_modules` folder in the Docker image too. Thanks to another issue here, we thought we could just:
>
> ```shell
> mv $APP_FOLDER/node_modules $APP_FOLDER/node_modules_old
> rsync $APP_FOLDER/node_modules_old/ $APP_FOLDER/node_modules/ -a --copy-links -q
> ```
>
> I read this too quickly and thought it was the silver bullet; however, I quickly realised that this command only copies the modules required by our app, but not their own dependencies, which makes sense given PNPM's layout.
>
> As we don't want to copy all the dependencies for all our projects into each Docker image, it seems I'll have to create a shell script to build a working `node_modules` folder, but that's not convenient.
@RDeluxe your problem should be solved by the new rush deploy feature.
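For anyone evaluating this, `rush deploy` is driven by a config file; a minimal sketch might look like this (the project name is illustrative, and you should check the `deploy.json` template that `rush init-deploy` generates for your Rush version):

```jsonc
// common/config/rush/deploy.json
{
  // The Rush project(s) whose dependency closure should be copied
  // into the deploy output folder:
  "deploymentProjectNames": ["@my-company/my-app"]
}
```

Running `rush deploy` then copies the project and the exact subset of `node_modules` it needs into a deploy folder, which a Dockerfile can pick up with a single `COPY`.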
> Can this be achieved now using `useWorkspaces: true` for pnpm? Looking at this option: https://pnpm.js.org/en/4.6/workspaces#shared-workspace-lockfile
Yes, I believe so! To see it in action:
1. Edit `.npmrc` and set `shared-workspace-lockfile = false`
2. Delete `pnpm-lock.yaml`
3. Run `pnpm install`
It will create a separate lockfile for each project.
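For reference, the setting in question is a single line; the file location shown here is an assumption, since pnpm reads `.npmrc` from the workspace root in a plain pnpm workspace:

```ini
# .npmrc at the workspace root
shared-workspace-lockfile = false
```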
Seems like we could provide at least preliminary support for this by adding a Rush experiment that would append `--shared-workspace-lockfile false` in `pushConfigurationArgs()`, and then simply disable any Rush features that are incompatible with this mode.
If someone wants to prototype this, I'm interested to hear the results.
Before diving in to make this, what rush features would be incompatible with it?
I'm not super familiar with the new `useWorkspaces` mode. (@D4N14L do you know?) I know that Rush reads and modifies the shrinkwrap file and makes copies of it. For example, `rush install` will fail if the shrinkwrap file is out of sync with the `package.json` files, and it performs this check without actually invoking PNPM.
"Incompatible" is probably too strong of a word, though. What I meant was "features that will be broken unless we do some work to adapt them to work with split shrinkwrap files."
When using workspaces, we no longer modify the pnpm-lock file, however we do validate against it that packages are up to date (this is the check that @octogonz is referring to).
Off the top of my head, there are a few things that would need to be tweaked to work as expected:

- `rush update`: the shared lockfile in `common/config/rush` would no longer work, and lockfiles would need to be kept in the root of the individual packages.
- Variants: `.gitignore` the ones in the root and copy per-project shrinkwraps into their target destinations when switching between variants? Unsure.

It's certainly promising, though I likely won't have time to look into implementing this any time soon. But I think if done right, we could make this work with most features of Rush remaining intact.
Also, before we invest too much in supporting this, we should evaluate the actual benefits that `shared-workspace-lockfile = false` would provide. This thread mentioned several distinct scenarios:

- @elliottsj seemed to want to use a private NPM registry, such that if one Rush project depends on another Rush project, it would be installed from the registry rather than via workspace symlinks. ("Cannot verify that applications can correctly install & use libs via the npm registry.") Is this a good idea? What would the workflow be like? What other features would Rush need to implement to make it actually work?
- @RDeluxe instead wanted a way to build Docker containers that install only the minimal subset of `node_modules` dependencies for the deployed projects. ("As we don't want to copy all the dependencies for all our projects in each Docker image it seems I'll have to create a shell script to build a working node_modules folder, but that's not convenient.") This problem is best solved by `rush deploy` in my opinion.
- @Jabher wanted `rush install` to go faster by installing dependencies only for the subset of projects that he's working on. This is the so-called "filtered installs" feature. It is tracked separately by https://github.com/microsoft/rushstack/issues/1669 and probably would not be helped by `shared-workspace-lockfile = false`. But then he later mentioned "I'm building runnable docker images with node.js servers", which sounds like the `rush deploy` feature might be sufficient for that.

So maybe before we get too deep into implementation details, we should clarify what we hope to achieve.
@octogonz for the last point from @Jabher, that actually is already supported using `rush install --to` or `rush install -t` in Rush. Not sure why that work item is still open, as the feature does work as expected.
I agree on the point about using `rush deploy`; this is likely the correct approach (without looking too much further into it), and it shouldn't be influenced by the install.
> that actually is already supported using `rush install --to` or `rush install -t` in Rush. Not sure why that work item is still open, as the feature does work as expected.
@D4N14L Hmmm... I wasn't aware of that. 🤔 Looks like it shipped with 5.28.0, but it doesn't seem to be documented. I can make a PR to update the website.
So we actually have filtered installs? What's the limitation?
Here's a PR that updates the website docs to describe this feature: https://github.com/microsoft/rushjs.io-website/pull/78
Will the filtered install work if you change/add/remove a dependency?
No. As with running a regular `rush install`, if any dependency is added or removed, a full `rush update` will need to be performed, and install would be blocked. Additionally, the `rush update` command does not allow filtering using `-t` or `--to`, as the `pnpm-lock.yaml` file needs to be produced using all included projects.
One note about functionality: unless you enable the frozen lockfile, the filtered install will be performed using pnpm without the context of unfiltered packages, which may inform versioning decisions that pnpm makes. I'd recommend using the filter with the frozen lockfile feature enabled, so as to ensure a consistent install state.
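As a sketch, the recommended combination might look like this in config (the experiment name below is my understanding of Rush's frozen-lockfile experiment; verify it against the `experiments.json` template generated by your Rush version):

```jsonc
// common/config/rush/experiments.json
{
  // Make "rush install" pass --frozen-lockfile to pnpm, so a filtered
  // install cannot silently drift from the committed pnpm-lock.yaml:
  "usePnpmFrozenLockfileForRushInstall": true
}
```

With that enabled, `rush install --to <project>` installs only that project's dependency subset while still honoring the shared lockfile exactly.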
> @elliottsj seemed to want to use a private NPM registry, such that if one Rush project depends on another Rush project, it would be installed from the registry rather than via workspace symlinks. ("Cannot verify that applications can correctly install & use libs via the npm registry.") Is this a good idea? What would the workflow be like? What other features would Rush need to implement to make it actually work?
It's been a while, but if I recall correctly, my original concern was if the Rush repo both (1) publishes an npm library to the registry for use by 3rd parties and (2) consumes that library within the repo as well.
With a typical multi-repo setup, library@1.2.3 would first be published to npm, then the application depending on library@1.2.3 can be tested and built. If it succeeds, then we know library@1.2.3 was published correctly.
With Rush, the application can test and build successfully whether or not library@1.2.3 was published to the registry. So we don't get the same guarantee that we got for "free" with a multi-repo setup.
> @RDeluxe instead wanted a way to build Docker containers that install only the minimal subset of node_modules dependencies for the deployed projects. ("As we don't want to copy all the dependencies for all our projects in each Docker image it seems I'll have to create a shell script to build a working node_modules folder, but that's not convenient.") This problem is best solved by rush deploy in my opinion.
I tried to set up using this approach, but it didn't produce a `pnpm-lock.yaml` anywhere in my deploy dir. Is this expected behavior? I had hoped it might, because then I could use `pnpm fetch` to enable offline cached builds.
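For context, the `pnpm fetch` workflow referred to here needs only the lockfile to warm the package store, along the lines of this hypothetical Dockerfile (the base image, paths, and use of corepack are illustrative assumptions):

```dockerfile
FROM node:18-alpine
WORKDIR /app

# Fetch packages into the store using only the lockfile, so this layer
# stays cached until pnpm-lock.yaml changes:
COPY pnpm-lock.yaml ./
RUN corepack enable && pnpm fetch

# Then install offline from the already-populated store:
COPY . .
RUN pnpm install --offline --frozen-lockfile
```

This is why a `pnpm-lock.yaml` in the deploy folder matters: without it, the dependency-only layer cannot be built before the source code is copied in.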
I would love it if Rush could output a shrinkwrap / lock file per project, next to the project's `package.json` file. This would enable the ability to build an individual project with pinned dependency versions, without requiring Rush or any file outside of the project folder.
I realize this is counter to the philosophy that Rush should be the sole entry point to any dependency installation / management task in the repo, but I'll describe my use case:
Use case
In my rush repo, several of my projects are Docker applications with an associated Dockerfile to build them, e.g.
My goal is to be able to build a Docker image for each of these projects, using Rush's pinned dependencies.
Current strategy
My current strategy to use Rush's pinned versions is to first build a "root" parent container as the base for each of the applications. The Dockerfile looks like this:
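A root image of the kind described might look roughly like the following sketch; the base image, paths, and build commands here are assumptions, not the author's exact file:

```dockerfile
# Root image: bake the entire repo, with Rush's pinned install, into one layer.
FROM node:18-alpine
WORKDIR /repo

COPY . .
RUN npm install -g @microsoft/rush && \
    rush install && \
    rush build
```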
After building & tagging this root image (e.g. `docker build . -t root`), I can build each application; e.g. `login-app/Dockerfile` looks like this:

Drawbacks
If Rush could support a shrinkwrap per project, then these drawbacks could be avoided, and each application's Dockerfile could be simplified, e.g.:
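Assuming a per-project lockfile existed next to each `package.json`, the simplified Dockerfile could plausibly be as small as this sketch (base image, entry point, and use of pnpm are illustrative assumptions):

```dockerfile
FROM node:18-alpine
WORKDIR /app

# Only this project's own files are needed -- no Rush, no repo root:
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --frozen-lockfile

COPY . .
CMD ["node", "lib/server.js"]
```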
If this is a feature that Rush maintainers would be willing to support, I'm open to contributing this as a pull request.