Will do, in the meantime... you can do this if you are using Yarn v1:
FROM node:alpine AS builder
RUN apk update
# Set working directory
WORKDIR /app
RUN yarn global add turbo
COPY . .
RUN turbo prune --scope=web --docker
# Add lockfile and package.json's of isolated subworkspace
FROM node:alpine AS installer
RUN apk update
WORKDIR /app
COPY --from=builder /app/out/json/ .
COPY --from=builder /app/out/yarn.lock ./yarn.lock
RUN yarn install
FROM node:alpine AS sourcer
RUN apk update
WORKDIR /app
COPY --from=installer /app/ .
COPY --from=builder /app/out/full/ .
COPY .gitignore .gitignore
RUN yarn turbo run build test --scope=web --includeDependencies --no-deps
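If it helps, a Dockerfile like this is typically built from the monorepo root so that COPY . . picks up every workspace; the Dockerfile path and image tag here are placeholders:
DOCKER_BUILDKIT=1 docker build -f apps/web/Dockerfile -t web .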
I wrote a one-liner script a couple of weeks ago to copy all package.json files from the monorepo workspaces. I used it with npm and it should also work for yarn > v1 and pnpm. The script filtering could be improved though:
FROM node:17.0.0-alpine AS sourcer
RUN apk update
WORKDIR /app
COPY . .
# Copies the root and all workspace package.json files to /json, preserving their parent directory structure
RUN mkdir /json && find . -type f -name package.json -not -path '*/node_modules/*' | xargs -i cp --parents {} /json
FROM node:17.0.0-alpine AS installer
RUN apk update
WORKDIR /app
COPY --from=sourcer /app/package-lock.json .
COPY --from=sourcer /json .
RUN npm clean-install
FROM node:17.0.0-alpine AS builder
RUN apk update
WORKDIR /app
COPY --from=installer /app .
COPY --from=sourcer /app .
RUN npm run build -- --scope=web --includeDependencies
I also discovered recently in the pnpm docs that you can run pnpm fetch ahead of time, before copying everything to the container, which makes it much simpler. Only pnpm-lock.yaml modifications invalidate the following layers' cache:
FROM node:alpine
RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm
# pnpm fetch does require only lockfile
COPY pnpm-lock.yaml ./
RUN pnpm fetch
ADD . ./
RUN pnpm install --offline
RUN npm run build -- --scope=web --includeDependencies
For now, turbo prune is still your option if you don't want to expose all the monorepo code for one target. The good thing is that we could combine both pnpm fetch and turbo prune once prune is supported across the other package managers.
Hope that helps!
Why would you create one image holding all the apps? The goal should be multiple images, each holding a certain application.
Edit: Just noticed that the example above only uses the whole project to build, rather than adding all of its content to the final image. However, the problem remains when it comes to small image sizes and fast build times.
A good thing would be a base-image used by all applications in order to share the cache.
If I understand the example above correctly, both the builder and the installer stages are required in order to create the actual image.
Therefore, a base image like
FROM node:alpine AS builder
RUN apk update
WORKDIR /app
RUN yarn global add turbo
COPY . .
FROM node:alpine
RUN apk update
WORKDIR /app
COPY .gitignore .gitignore
COPY --from=builder /app/out/json/ .
COPY --from=builder /app/out/yarn.lock ./yarn.lock
RUN yarn install
can be created once for the entire project.
Each app inside the apps/ directory can use the base image from above like:
FROM node:alpine
RUN apk update
WORKDIR /app
COPY --from=<imageLocation> /app/ .
COPY --from=<imageLocation> /app/out/full/ .
RUN yarn turbo run build test --scope=web --includeDependencies --no-deps
An approach like this would reduce the build time. However, it also increases the complexity.
> Will do, in the meantime... you can do this if you are using Yarn v1: […]
I got error:
------
> [builder 6/6] RUN turbo prune --scope=web --docker:
#10 0.684 Generating pruned monorepo for web in /app/out
#10 0.781 ERROR Failed to copy web into /app/out/full/apps/web: open apps/web/node_modules/postcss/node_modules/.bin/nanoid: no such file or directory
#10 0.803 node:child_process:867
#10 0.803 throw err;
#10 0.803 ^
#10 0.803
#10 0.803 Error: Command failed: /usr/local/share/.config/yarn/global/node_modules/turbo-linux-64/bin/turbo prune --scope=web --docker
#10 0.803 at checkExecSyncError (node:child_process:826:11)
#10 0.803 at Object.execFileSync (node:child_process:864:15)
#10 0.803 at Object.<anonymous> (/usr/local/share/.config/yarn/global/node_modules/turbo/bin/turbo:5:26)
#10 0.803 at Module._compile (node:internal/modules/cjs/loader:1101:14)
#10 0.803 at Object.Module._extensions..js (node:internal/modules/cjs/loader:1153:10)
#10 0.803 at Module.load (node:internal/modules/cjs/loader:981:32)
#10 0.803 at Function.Module._load (node:internal/modules/cjs/loader:822:12)
#10 0.803 at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
#10 0.803 at node:internal/main/run_main_module:17:47 {
#10 0.803 status: 1,
#10 0.803 signal: null,
#10 0.803 output: [ null, null, null ],
#10 0.803 pid: 13,
#10 0.803 stdout: null,
#10 0.803 stderr: null
#10 0.803 }
------
> Will do, in the meantime... you can do this if you are using Yarn v1: […]
> I got error: […]
Same issue.
Add a .dockerignore file, e.g.:
node_modules
apps/web/node_modules
apps/docs/node_modules
packages/config/node_modules
You can catch them all like this instead:
**/node_modules
I'm using the Dockerfile above but building with TS, and I get this error: Error: Cannot find module '../lib/tsc.js'. The file does exist though, so I'm thinking it's a symlink issue or similar. yarn tsc -h does work from the app directory, but build does not.
Edit: nvm, I did a new docker build with --no-cache and then it worked; I must have had a corrupt cache.
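For reference, the cache-busting rebuild mentioned above is just the following (the image tag is a placeholder):
docker build --no-cache -t my-app .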
@alexandernanberg I ran into a similar issue, and resolved it by adding a .dockerignore that ignored node_modules. Make sure it's in the same location as where you're calling docker build from.
Just want to leave my full working example here in case it's useful for anyone.
Example File structure
apps/
  api/
    Dockerfile
    package.json
.dockerignore
.dockerignore
**/node_modules
**/build
**/out
Dockerfile
# base node image
FROM node:16.13-alpine AS base
RUN apk update
WORKDIR /app
ENV YARN_CACHE_FOLDER=.yarn-cache
# sourcer
FROM base AS sourcer
RUN yarn global add turbo
COPY . .
RUN turbo prune --scope=api --docker
# deps
FROM base AS deps
COPY --from=sourcer /app/out/json/ .
COPY --from=sourcer /app/out/yarn.lock ./yarn.lock
RUN yarn install
# prod deps
FROM base AS prod-deps
COPY --from=sourcer /app/out/json/ .
COPY --from=sourcer /app/out/yarn.lock ./yarn.lock
COPY --from=deps /app/ .
RUN yarn install --production --ignore-scripts --prefer-offline
RUN yarn cache clean
# builder
FROM base AS builder
COPY --from=deps /app/ .
COPY --from=sourcer /app/out/full/ .
RUN yarn turbo run build --scope=api --include-dependencies --no-deps
# runtime
FROM base
ENV NODE_ENV=production
COPY --from=prod-deps /app/ .
WORKDIR /app/apps/api
COPY --from=builder /app/apps/api/build ./build
CMD ["yarn", "start"]
package.json
{ "name": "api", "scripts": { ... "docker": "cd ../../ && DOCKER_BUILDKIT=1 docker build -f ./apps/api/Dockerfile .", }, ... }
I get the following issue https://github.com/vercel/turborepo/discussions/609
@Myrmod I'm using yarn v1, not berry
Here is my Dockerfile which works perfectly fine:
FROM node:lts-alpine AS base
RUN apk update
WORKDIR /app
ARG SCOPE
ENV SCOPE=${SCOPE}
ENV YARN_CACHE_FOLDER=.yarn-cache
FROM base AS pruner
RUN yarn global add turbo@1.1.2
COPY . .
RUN turbo prune --scope=${SCOPE} --docker
FROM base AS dev-deps
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/yarn.lock ./yarn.lock
RUN yarn install --frozen-lockfile
FROM base AS prod-deps
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/yarn.lock ./yarn.lock
COPY --from=dev-deps /app/${YARN_CACHE_FOLDER} /${YARN_CACHE_FOLDER}
RUN yarn install --frozen-lockfile --production --prefer-offline --ignore-scripts
RUN rm -rf /app/${YARN_CACHE_FOLDER}
FROM base AS builder
COPY --from=dev-deps /app/ .
COPY --from=pruner /app/out/full/ .
RUN yarn turbo run build --scope=${SCOPE} --include-dependencies --no-deps
RUN find . -name node_modules | xargs rm -rf
FROM base AS runner
COPY --from=prod-deps /app/ .
COPY --from=builder /app/ .
CMD yarn workspace ${SCOPE} start
I execute it using an npm command in the respective app's package.json:
{
"name": "@monorepo/awesome-app-api"
"scripts": {
"build:docker": "cd ../../../ && docker build . -f infrastructure/docker/Dockerfile -t my-awesome-app:latest --build-arg SCOPE=@monorepo/awesome-app-api"
}
}
Repo structure:
monorepo/
  package.json
  infrastructure/
    docker/
      Dockerfile
  apps/
    awesome-app/
      api/
        package.json
Has anyone made a good template with yarn v2/3?
I tried. The problem I had was that turbo prune outputs a Yarn v1 lockfile. This would be fine except that my monorepo dependencies are in the form of "@project/lib": "workspace:*", which is a yarn@>=2 syntax. Subsequent yarn installs fail after pruning because of this.
If you want to make this work you'll have to take some further steps to rewrite these dependencies as file dependencies ("@project/lib": "file://../lib") or use something like Lerna to link them correctly. Not sure what would work best.
Ideally turbo would provide for this. Or even more ideally, workspace dependencies would be standardized across the npm ecosystem 😊
I tried @b12k's but I'm still getting a "could not construct graph" error. In the meantime, I'm using the approach of manually copying configs and doing installations based on those first.
=> [internal] load build definition from Dockerfile.development 0.0s
=> => transferring dockerfile: 1.03kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 35B 0.0s
=> [internal] load metadata for docker.io/library/node:lts-alpine 2.6s
=> [auth] library/node:pull token for registry-1.docker.io 0.0s
=> [base 1/3] FROM docker.io/library/node:lts-alpine@sha256:2c6c59cf4d34d4f937ddfcf33bab9d8bbad8658d1b9de7b97622566a52167f2b 0.0s
=> [internal] load build context 2.0s
=> => transferring context: 527.21kB 0.9s
=> CACHED [base 2/3] RUN apk update 0.0s
=> CACHED [base 3/3] WORKDIR /app 0.0s
=> CACHED [pruner 1/3] RUN yarn global add turbo@1.1.2 0.0s
=> [pruner 2/3] COPY . . 7.1s
=> ERROR [pruner 3/3] RUN turbo prune --scope=web --docker 1.1s
------
> [pruner 3/3] RUN turbo prune --scope=web --docker:
#11 1.044 ERROR could not construct graph: error hashing files. make sure that git has been initialized git hash-object exited with status: exec: "git": executable file not found in $PATH
------
executor failed running [/bin/sh -c turbo prune --scope=${SCOPE} --docker]: exit code: 1
My folder structure: frontend-monorepo, backend-workspace (rust), ...
> Here is my Dockerfile which works perfectly fine: […]
One problem I've encountered using Docker within monorepos is the need to load everything into Docker context to perform a build. I do not know how to solve this in a clean way yet, but I figured I would mention this issue here in case someone takes a stab at developing medium-large scale (many apps, libs, etc.) Docker tooling example.
I'm almost certain we need a mechanism to ensure the entire repo does not have to be loaded into the Docker context in order to perform builds inside the Docker execution environment.
I almost get the feeling that Turbo is not really helping in dealing with different docker containers. My approach currently is that I get all the package.json's of a specific app + the global package.json, install the deps and then build the whole thing in its separate application folder.
But I'm still very unsure about this whole approach.
I just came across some Docker issues and was surprised to find out that prune doesn't work when using NPM? This isn't stated anywhere in the docs.
Does anyone have a working Turborepo + NPM setup? Or is everyone using Yarn/PNPM?
> I almost get the feeling that Turbo is not really helping in dealing with different docker containers. […]
Yep, same here. I'm confused by the need to run COPY . . very early in the Dockerfile, as this really goes against Docker best practices: it means that upon any file change, running docker build will reinstall all packages, even if the lock files haven't changed.
I've posted my current Dockerfile below in case it's helpful to someone else. The main advantage is that it solves the above problem and will cache package installations (assuming no package.jsons/lockfiles have changed). Although it'd be great if someone with more Docker experience could come up with a better version with less verbose copying of individual apps/packages.
FROM node:16-alpine as base
RUN apk update
WORKDIR /app
# Install packages
# Only copy package.jsons, so don't have to reinstall deps on file changes
FROM base as installer
COPY package.json yarn.lock ./
COPY apps/api/package.json apps/api/package.json
COPY packages/types/package.json packages/types/package.json
COPY packages/shared/package.json packages/shared/package.json
RUN yarn --frozen-lockfile
FROM base as builder
RUN yarn global add turbo
COPY turbo.json package.json tsconfig.json yarn.lock ./
COPY apps/api apps/api
COPY packages packages
COPY --from=installer /app/node_modules ./node_modules
RUN turbo run build --scope=api --include-dependencies
> Yep, same here. I'm confused by the need to run COPY . . very early in the Dockerfile […]
AFAIK the output of the next line, i.e. RUN turbo prune --scope=web --docker, is "stable" (it only changes when dependencies of the scoped package change) regardless of the COPY of everything. The following stages just copy the output from that, so the cache will be used even though files have changed. I hope that's clear.
So yes, COPY and RUN turbo prune --scope=web --docker won't be cached, but all following stages will use the cached content from the builder stage as long as it hasn't changed.
> So yes, COPY and RUN turbo prune --scope=web --docker won't be cached, but all following stages will use the cached content from the builder stage as long as it hasn't changed.
Ah cool that's interesting, I didn't realise docker's cache invalidation worked things out that way, thanks for the info. In that case using prune makes way more sense after all.
Does this approach of using turbo prune --scope --docker push us into a place where we either replicate base-image instructions across many Dockerfiles (per scope), or have a single monolith Dockerfile? For instance, one thing I ran into was trying to use a "monorepo base" Dockerfile to generate the images of individual workspaces:
# Dockerfile.base
FROM node:14-alpine
# ...
COPY . .
I'd then have to build this as perhaps docker build -t my-org/base -f Dockerfile.base . Then in a subproject of my turborepo:
# Dockerfile.proj1
FROM my-org/base
# ...
RUN turbo prune --scope=proj1 --docker
# ...
The 'problem' is that in this approach, when changes are made to source code, you can't necessarily just run docker build -t my-org/proj1 -f Dockerfile.proj1 ., because the base image is out of date until you run docker build -t my-org/base -f Dockerfile.base . again. Now, I guess you could actually stuff all of this into a turborepo dependency graph and do something like turbo run build:docker. This could be set up to autodetect the need to run docker build -t my-org/base -f Dockerfile.base . without needing to remember the command every time. But that reduces flexibility imo.
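For illustration only, one possible shape of that task-graph wiring (the task name and the per-package scripts are hypothetical, not something turbo provides out of the box) would be a root turbo.json pipeline entry like:
{
  "pipeline": {
    "build:docker": {
      "dependsOn": ["^build:docker"],
      "outputs": []
    }
  }
}
with each workspace defining its own build:docker script (e.g. "build:docker": "docker build -t my-org/proj1 -f Dockerfile.proj1 ."), so that turbo run build:docker rebuilds upstream images before downstream ones.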
And if we don't go this route, the only other approach I can think of is a monolithic Dockerfile that colocates the monorepo base along with every workspace project, but that sounds terrible.
In any case, it would be superb to have some examples.
@b12k , thanks for sharing that dockerfile! It is an exceptionally effective turnkey solution, and I was able to connect it as-is to my own project. In my case, however, it is totally blowing up my image/layer cache; it managed to consume 70 GB of disk space in an evening and I ended up reinstalling docker because it bricked dockerd. Have you faced anything similar?
I am zipping the pruned directory (plus Dockerfile in root) and then pass it as docker context to docker build. Works fine so far.
> I am zipping the pruned directory (plus Dockerfile in root) and then pass it as docker context to docker build. Works fine so far.
@weyert Can you share your script(s) for clarity?
@paynecodes My solution is heavily based on this article: https://dev.to/jonlauridsen/exploring-the-monorepo-5-perfect-docker-52aj I can't really explain it better than it is described in this article.
> @paynecodes My solution is heavily based on this article: https://dev.to/jonlauridsen/exploring-the-monorepo-5-perfect-docker-52aj I can't really explain it better than it is described in this article.
This seems great. Something like this offered by the turborepo maintainers would be great (even if it just proxies pnpm calls).
/cc @jaredpalmer
I'm getting "this command is not yet implemented for nodejs-pnpm" when running turbo prune --scope=backend --docker using turbo v1.2.1. Anyone know why?
> I'm getting "this command is not yet implemented for nodejs-pnpm" when running turbo prune --scope=backend --docker using turbo v1.2.1. Anyone know why?
Last time I checked, turbo prune was only available for Yarn v1: https://github.com/vercel/turborepo/issues/215#issuecomment-991976190
Is there a vision for supporting prune with pnpm?
This is the config that worked for me with the latest version of turbo, which is 1.2.9, taking the code shown by @b12k as inspiration:
FROM node:lts-alpine AS base
RUN apk update && apk add git
WORKDIR /app
ENV SCOPE=node
ENV YARN_CACHE_FOLDER=.yarn-cache
FROM base AS pruner
RUN yarn global add turbo
COPY . .
RUN turbo prune --scope=${SCOPE} --docker
FROM base AS dev-deps
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/yarn.lock ./yarn.lock
RUN yarn install
FROM base AS prod-deps
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/yarn.lock ./yarn.lock
COPY --from=dev-deps /app/${YARN_CACHE_FOLDER} /${YARN_CACHE_FOLDER}
RUN yarn install --production
RUN rm -rf /app/${YARN_CACHE_FOLDER}
FROM base AS builder
COPY --from=dev-deps /app/ .
COPY --from=pruner /app/out/full/ .
RUN yarn turbo run build --filter=${SCOPE}
FROM base AS runner
COPY --from=prod-deps /app/ .
COPY --from=builder /app/ .
CMD yarn ${SCOPE}-start
You have to change the ENV SCOPE to your app; in my case it is called node.
Quick question for anyone who knows more about docker and experimental Nextjs features than I do. Would it be a bad idea to just create a standalone build using something like:
yarn workspace <workspace> build
with:
{
experimental: {
outputStandalone: true,
},
}
and then just copy that into an alpine docker file?
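Not an authoritative answer, but a minimal sketch of that idea, assuming the app lives in apps/web and that outputStandalone mirrors the monorepo layout into .next/standalone:
FROM node:16-alpine
WORKDIR /app
# self-contained server produced by outputStandalone (includes a pruned node_modules)
COPY apps/web/.next/standalone ./
# static assets and public files are not part of the standalone output, so copy them separately
COPY apps/web/.next/static ./apps/web/.next/static
COPY apps/web/public ./apps/web/public
CMD ["node", "apps/web/server.js"]
The paths above are assumptions; in a workspace setup the standalone folder usually reproduces the repo structure, so check where server.js actually lands before relying on this.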
Does anyone have any Dockerfile examples that supports typescript hot reloading within the monorepo?
Maybe anyone can help out with a pnpm example? :) thanks
> Maybe anyone can help out with a pnpm example? :) thanks
Have a look at the article I posted earlier: https://github.com/vercel/turborepo/issues/215#issuecomment-1089348765
> Will do, in the meantime... you can do this if you are using Yarn v1: […]
I slightly rewrote this and it's working fine for me on railway.app. This is my API dockerfile. My web file is similar but the SCOPE and some of the environment variables are different. I'm an absolute noob at docker, this is actually my first attempt at it, so if any of you see things I can improve on lmk 👍 Otherwise this may help out those of you on Railway.
# configure a base stage and scope the build
FROM node:16-alpine AS base
RUN apk update
ENV SCOPE=api
WORKDIR /app
# generate a sparse/partial monorepo with a pruned lockfile for a target package
FROM base AS pruner
RUN yarn global add turbo
COPY . .
RUN turbo prune --scope=${SCOPE} --docker
# install npm dependencies
FROM base AS installer
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/yarn.lock ./yarn.lock
RUN yarn install --frozen-lockfile
# build and expose the application
FROM base AS builder
ARG \
PORT \
DATABASE_URL \
BROWSERLESS_API_KEY \
WEB_URI \
TURBO_TEAM \
TURBO_TOKEN
ENV \
NODE_ENV=production \
PORT=$PORT \
DATABASE_URL=$DATABASE_URL \
BROWSERLESS_API_KEY=$BROWSERLESS_API_KEY \
WEB_URI=$WEB_URI \
TURBO_TEAM=$TURBO_TEAM \
TURBO_TOKEN=$TURBO_TOKEN
COPY --from=installer /app/ .
COPY --from=pruner /app/out/full/ .
RUN yarn turbo run build --filter=${SCOPE} && yarn prisma migrate deploy
EXPOSE $PORT
CMD yarn workspace ${SCOPE} start
If anyone has a full-fledged example utilizing pnpm and turborepo, dockerized for the server, and would love to share, that would be greatly appreciated. I'm new to all three technologies and I'm trying to learn.
Hi,
I tried your Dockerfile.
But when it gets to RUN yarn turbo run build --scope=${SCOPE} --include-dependencies --no-deps I get this error:
tsup src/index.tsx --format esm,cjs --dts --external react
#21 1.429 @mcp/constants:build: node:internal/modules/cjs/loader:936
#21 1.429 @mcp/constants:build: throw err;
#21 1.430 @mcp/constants:build: ^
#21 1.431 @mcp/constants:build:
#21 1.431 @mcp/constants:build: Error: Cannot find module './chunk-RYC2SNLG.js'
Do you know why?
> Here is my Dockerfile which works perfectly fine: […]
Really struggling with a few things here:
Will update if I hammer out a decent implementation
Hi guys,
I used the above-mentioned solution but I get this error:
[builder 3/4] RUN yarn turbo run build --filter=categorisation-app --include-dependencies --no-deps:
20 0.777 yarn run v1.22.19
20 0.893 $ /app/node_modules/.bin/turbo run build --filter=categorisation-app --include-dependencies --no-deps
20 1.248 WARNING cannot find a .git folder. Falling back to manual file hashing (which may be slower). If you are running this build in a pruned directory, you can ignore this message. Otherwise, please initialize a git repository in the root of your monorepo
20 1.500 • Packages in scope: categorisation-app
20 1.500 • Running build in 1 packages
20 1.501 categorisation-app:build: cache miss, executing daa02884e5ac7eee
20 1.846 categorisation-app:build: $ nuxt build
20 2.064 categorisation-app:build: node:internal/modules/cjs/loader:936
20 2.064 categorisation-app:build: throw err;
20 2.064 categorisation-app:build: ^
20 2.064 categorisation-app:build:
20 2.064 categorisation-app:build: Error: Cannot find module '../package.json'
20 2.064 categorisation-app:build: Require stack:
20 2.064 categorisation-app:build: - /app/apps/categorisation-app/node_modules/.bin/nuxt
20 2.064 categorisation-app:build: at Function.Module._resolveFilename (node:internal/modules/cjs/loader:933:15)
20 2.065 categorisation-app:build: at Function.Module._load (node:internal/modules/cjs/loader:778:27)
20 2.065 categorisation-app:build: at Module.require (node:internal/modules/cjs/loader:1005:19)
20 2.065 categorisation-app:build: at require (node:internal/modules/cjs/helpers:102:18)
20 2.065 categorisation-app:build: at Object.<anonymous> (/app/apps/categorisation-app/node_modules/.bin/nuxt:5:16)
20 2.065 categorisation-app:build: at Module._compile (node:internal/modules/cjs/loader:1105:14)
20 2.065 categorisation-app:build: at Object.Module._extensions..js (node:internal/modules/cjs/loader:1159:10)
20 2.065 categorisation-app:build: at Module.load (node:internal/modules/cjs/loader:981:32)
20 2.065 categorisation-app:build: at Function.Module._load (node:internal/modules/cjs/loader:822:12)
20 2.065 categorisation-app:build: at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:77:12) {
20 2.065 categorisation-app:build: code: 'MODULE_NOT_FOUND',
20 2.065 categorisation-app:build: requireStack: [ '/app/apps/categorisation-app/node_modules/.bin/nuxt' ]
20 2.065 categorisation-app:build: }
20 2.090 categorisation-app:build: error Command failed with exit code 1.
20 2.091 categorisation-app:build: info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
20 2.106 categorisation-app:build: ERROR: command finished with error: command (apps/categorisation-app) yarn run build exited (1)
20 2.106 command (apps/categorisation-app) yarn run build exited (1)
20 2.106
20 2.106 Tasks: 0 successful, 1 total
20 2.106 Cached: 0 cached, 1 total
20 2.106 Time: 1.068s
20 2.106
20 2.144 error Command failed with exit code 1.
20 2.145 info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Does anyone have the same issue? And how did you fix it?
Thanks in advance
We just merged a new example demonstrating turbo + docker here, please try it out! https://github.com/vercel/turborepo/tree/main/examples/with-docker
I'm going to close this issue since the example has been addressed, but if anyone still has specific deployment questions, I would recommend hopping over to turborepo/discussions!
This is good, but it does not cache node_modules.
Running into the ERROR could not construct graph: We did not detect an in-use package manager for your project. Please set the "packageManager" property in your root package.json
when trying to deploy to Railway. Works fine locally.
Railway logs:
=========================
Using Detected Dockerfile
=========================
...
#9 DONE 6.2s
#10 [sourcer 3/3] RUN turbo prune --scope=bot --docker
#10 sha256:fb5f95d74484ffaad1aa78dd4cb8b55476f4b838edd3cdaa458ccfe7ba8e1113
#10 3.011 ERROR could not construct graph: We did not detect an in-use package manager for your project. Please set the "packageManager" property in your root package.json (https://nodejs.org/api/packages.html#packagemanager) or run `npx @turbo/codemod add-package-manager` in the root of your monorepo.
Package.json (Root)
...
"packageManager": "yarn@1.22.19",
...
Dockerfile
# base node image
FROM node:16.13-alpine AS base
RUN apk update
WORKDIR /app
ENV YARN_CACHE_FOLDER=.yarn-cache
# sourcer
FROM base AS sourcer
RUN yarn global add turbo
COPY . .
RUN turbo prune --scope=bot --docker
# deps
FROM base AS deps
COPY --from=sourcer /app/out/json/ .
COPY --from=sourcer /app/out/yarn.lock ./yarn.lock
RUN yarn install --frozen-lockfile
# prod deps
FROM base AS prod-deps
ARG \
DISCORD_TOKEN \
API_URL \
OWNERS \
COMMAND_GUILD_IDS \
CLIENT_PRESENCE_NAME \
CLIENT_PRESENCE_TYPE \
WEBHOOK_ERROR_ENABLED
ENV \
NODE_ENV=production \
DISCORD_TOKEN=$DISCORD_TOKEN \
API_URL=$API_URL \
OWNERS=$OWNERS \
COMMAND_GUILD_IDS=$COMMAND_GUILD_IDS \
CLIENT_PRESENCE_NAME=$CLIENT_PRESENCE_NAME \
CLIENT_PRESENCE_TYPE=$CLIENT_PRESENCE_TYPE \
WEBHOOK_ERROR_ENABLED=$WEBHOOK_ERROR_ENABLED
COPY --from=sourcer /app/out/json/ .
COPY --from=sourcer /app/out/yarn.lock ./yarn.lock
COPY --from=deps /app/ .
RUN yarn install --production --ignore-scripts --prefer-offline
RUN yarn cache clean
# builder
FROM base AS builder
COPY --from=deps /app/ .
COPY --from=sourcer /app/out/full/ .
RUN yarn turbo run build --scope=bot --include-dependencies --no-deps
# runtime
FROM base
ENV NODE_ENV=production
COPY --from=prod-deps /app/ .
WORKDIR /app/apps/bot
COPY --from=builder /app/apps/bot/src/.env src/.env
COPY --from=builder /app/apps/bot/dist ./dist
CMD ["yarn", "start"]
@multiplehats just ran into this myself. You'll need to build the image at the root of the monorepo to retain the proper context. You can then specify which Dockerfile you want to build, e.g. docker build . -f apps/api/Dockerfile
Alternatively, you can use docker-compose as in the provided example.
> @multiplehats just ran into this myself. You'll need to build the image at the root of the monorepo to retain the proper context. You can then specify which Dockerfile you want to build, e.g. docker build . -f apps/api/Dockerfile
> Alternatively, you can use docker-compose as in the provided example.
Yeah found that out the hard way. Spent a good amount of time on that until I figured it out, totally forgot to post my solution.
Thanks for the follow up!
I think you could also do the pruning outside the Dockerfile and then pass the generated output as the build context?
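A hedged sketch of that idea, assuming your Dockerfile is written to run against an already-pruned workspace (the scope, tag, and paths are placeholders):
# prune on the host, outside of Docker
turbo prune --scope=web --docker
# ship only the pruned output (plus the Dockerfile) as the build context
cp Dockerfile out/
docker build -t my-org/web out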
I've been catching this error when using the with-docker example.
=> CACHED [installer 9/11] COPY --from=builder /app/out/full/ . 0.0s
=> ERROR [installer 10/11] COPY turbo.json turbo.json 0.0s
------
> [installer 10/11] COPY turbo.json turbo.json:
------
failed to compute cache key: "/turbo.json" not found: not found
@volnei is turbo.json specified in your .dockerignore?
Describe the feature you'd like to request
Have Dockerfile examples.
Describe the solution you'd like
It would be a good idea for the kitchen sink starter or others to add Dockerfile examples showing how to do this cleanly, e.g. building only what's needed.
Describe alternatives you've considered
There is no alternative?