Closed: jvzaniolo closed this issue 10 months ago.
Here's my working version with `pnpm`. I just copied the `yarn` example and replaced it with `pnpm`, but it doesn't use `pnpm fetch`:
FROM node:18-alpine AS base
FROM base AS builder
RUN apk add --no-cache libc6-compat
RUN apk update
# Set working directory
WORKDIR /app
RUN npm install -g turbo
COPY . .
RUN turbo prune --scope=docs --docker
# Add lockfile and package.json's of isolated subworkspace
FROM base AS installer
RUN apk add --no-cache libc6-compat
RUN apk update
WORKDIR /app
# First install the dependencies (as they change less often)
COPY --from=builder /app/out/json/ .
COPY --from=builder /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
RUN corepack enable
RUN pnpm install --frozen-lockfile
# Build the project
COPY --from=builder /app/out/full/ .
RUN pnpm dlx turbo run build --filter=docs
FROM base AS runner
WORKDIR /app
# Don't run production as root
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
USER nextjs
COPY --from=installer /app/apps/docs/next.config.js .
COPY --from=installer /app/apps/docs/package.json .
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=installer --chown=nextjs:nodejs /app/apps/docs/.next/standalone ./
COPY --from=installer --chown=nextjs:nodejs /app/apps/docs/.next/static ./apps/docs/.next/static
COPY --from=installer --chown=nextjs:nodejs /app/apps/docs/public ./apps/docs/public
CMD node apps/docs/server.js
Hi, here is my version. Please let me know if there is any way to improve it.
FROM node:18.16.0-alpine3.17 AS base
ARG PNPM_VERSION=8.6.2
ENV HUSKY=0
ENV CI=true
ENV PNPM_HOME=/usr/local/bin
RUN corepack enable && corepack prepare pnpm@${PNPM_VERSION} --activate
WORKDIR /app
FROM base AS setup
RUN pnpm add -g turbo
COPY . .
RUN turbo prune --scope=@acme/docs --docker
FROM base AS builder
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --update --no-cache libc6-compat && rm -rf /var/cache/apk/*
COPY .gitignore .gitignore
COPY --from=setup /app/out/pnpm-workspace.yaml ./pnpm-workspace.yaml
COPY --from=setup /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
COPY --from=setup /app/out/full/patches ./patches
RUN --mount=type=cache,id=pnpm,target=/root/.local/share/pnpm/store pnpm fetch
# ↑ By caching the content-addressable store we stop downloading the same packages again and again
# First install dependencies (as they change less often)
COPY --from=setup /app/out/json/ ./
RUN --mount=type=cache,id=pnpm-store,target=/root/.pnpm-store \
pnpm install --filter=@acme/docs... -r --workspace-root --frozen-lockfile \
--unsafe-perm \
# ↑ Docker runs pnpm as root and then pnpm won't run package scripts unless we pass this arg
| grep -v "cross-device link not permitted\|Falling back to copying packages from store"
# ↑ Unfortunately using Docker's 'cache' mount type causes Docker to place the pnpm content-addressable store
# on a different virtual drive, which prohibits pnpm from symlinking its content to its virtual store
# (in node_modules/.pnpm), and that causes pnpm to fall back on copying the files.
# And that's fine!, except pnpm emits many warnings of this, so here we filter those out.
# Build the project and its dependencies
COPY --from=setup /app/out/full/ ./
COPY turbo.json turbo.json
RUN pnpm run build --filter=@acme/docs...
FROM base AS dev
COPY --from=builder /app/ ./
WORKDIR /app/apps/docs
FROM builder AS pruned
RUN --mount=type=cache,id=pnpm,target=/root/.local/share/pnpm/store \
pnpm --filter=@acme/docs --prod deploy pruned --config.ignore-scripts=true
FROM node:18.16.0-alpine3.17 AS runner
WORKDIR /app
COPY --from=pruned /app/pruned/ ./
# Don't run production as root
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nodejs
USER nodejs
EXPOSE 8080
CMD ["node", "./dist/server.js"]
I think this issue should be elevated, since the only reason I've chosen `pnpm` is that the official docs recommend using it. So it's strange there aren't any examples.
One issue I'm having is this:
Using config file: /Users/nicu/.config/dive/dive.config.yaml
Building image...
[+] Building 3.8s (16/16) FINISHED
=> [internal] load .dockerignore 0.0s
=> => transferring context: 66B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.22kB 0.0s
=> [internal] load metadata for docker.io/library/node:20 0.8s
=> [base 1/2] FROM docker.io/library/node:20@sha256:b3ca7d32f0c12291df6e45a914d4ee60011a3fce4a978df5e609e356a4a2cb88 0.0s
=> [internal] load build context 0.6s
=> => transferring context: 1.98MB 0.5s
=> CACHED [base 2/2] RUN apt-get update && apt-get install -y build-essential libcairo2-dev libpango1.0-dev libjpeg-dev libgif-dev librsvg2-dev 0.0s
=> CACHED [installer 1/7] WORKDIR /app 0.0s
=> CACHED [builder 2/4] RUN npm install -g turbo 0.0s
=> [builder 3/4] COPY . . 2.0s
=> [builder 4/4] RUN turbo prune --scope=@robo/ai_server --docker 0.4s
=> CACHED [installer 2/7] COPY .gitignore .gitignore 0.0s
=> CACHED [installer 3/7] COPY --from=builder /app/out/json/ . 0.0s
=> CACHED [installer 4/7] COPY --from=builder /app/out/pnpm-lock.yaml ./pnpm-lock.yaml 0.0s
=> CACHED [installer 5/7] RUN npm install -g pnpm 0.0s
=> CACHED [installer 6/7] RUN pnpm install --frozen-lockfile 0.0s
=> ERROR [installer 7/7] COPY --from=builder /app/out/full/ . 0.0s
------
> [installer 7/7] COPY --from=builder /app/out/full/ .:
------
Dockerfile:34
--------------------
32 |
33 | # # Build the project and its dependencies
34 | >>> COPY --from=builder /app/out/full/ .
35 | # COPY turbo.json turbo.json
36 |
--------------------
ERROR: failed to solve: cannot copy to non-directory: /var/lib/docker/overlay2/qc5w7vfvoom7fgbpnz4wboevb/merged/app/apps/ai_server/node_modules/eslint-config-custom-server
cannot build image
exit status 1
FROM node:20 as base
# Update apt-get and install the necessary libraries
# This is mainly so that the `canvas` package can be installed
RUN apt-get update && \
apt-get install -y build-essential libcairo2-dev libpango1.0-dev libjpeg-dev libgif-dev librsvg2-dev
# RUN corepack enable && corepack prepare pnpm@8.6.5 --activate
FROM base AS builder
WORKDIR /app
RUN npm install -g turbo
COPY . .
RUN turbo prune --scope=@robo/ai_server --docker
FROM base as installer
WORKDIR /app
# First install dependencies (as they change less often)
COPY .gitignore .gitignore
COPY --from=builder /app/out/json/ .
COPY --from=builder /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
RUN npm install -g pnpm
RUN pnpm install --frozen-lockfile
# # Build the project and its dependencies
COPY --from=builder /app/out/full/ .
# COPY turbo.json turbo.json
# # Uncomment and use build args to enable remote caching
# # ARG TURBO_TEAM
# # ENV TURBO_TEAM=$TURBO_TEAM
# # ARG TURBO_TOKEN
# # ENV TURBO_TOKEN=$TURBO_TOKEN
# RUN pnpm dlx turbo build --filter=@robo/ai_server...
I assume the problem is the way internal packages are symlinked, but this is just a guess.
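For illustration, here's roughly what that symlinking looks like in a pnpm workspace (a sketch; the exact target path varies, and the package name is taken from the error output above):

```text
apps/ai_server/node_modules/
└── eslint-config-custom-server -> ../../../packages/eslint-config-custom-server
```

When a later `COPY` layer tries to write a real directory onto a path that already exists as a symlink like this, Docker refuses with "cannot copy to non-directory", which is consistent with the error above.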
@nicu-chiciuc:

> I think this issue should be elevated, since the only reason I've chosen `pnpm` is that the official docs recommend using it.

That's exactly my point here. The Docker examples were made with `yarn`, but the rest of the docs and other examples use and recommend `pnpm`.
For posterity's sake, here's how I made it work for me.
The most important part was this comment: https://github.com/vercel/turbo/issues/1997#issuecomment-1273565773
Basically I had to also add `**/node_modules` to my `.dockerignore` file (I already had `node_modules`).
Before adding `**/node_modules` I had an issue on the `COPY --from=builder /app/out/full/ .` step.
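For reference, a `.dockerignore` along those lines (only the `node_modules` entries come from this thread; the `.git` line is a common addition I'd consider, not required):

```text
node_modules
**/node_modules
.git
```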
And here's my current `Dockerfile`:
FROM node:20 as base
# Update apt-get and install the necessary libraries
# This is mainly so that the `canvas` package can be installed
RUN apt-get update && \
apt-get install -y build-essential libcairo2-dev libpango1.0-dev libjpeg-dev libgif-dev librsvg2-dev
FROM base AS builder
WORKDIR /app
ENV APP_NAME=my_secret_project
# This might be necessary when switching to alpine
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
# RUN apk add --no-cache libc6-compat
RUN npm install -g turbo
COPY . .
RUN turbo prune --scope=@my_secret_org/${APP_NAME} --docker
FROM base as installer
WORKDIR /app
ENV APP_NAME=my_secret_project
RUN npm install -g pnpm
RUN npm install -g turbo
# First install dependencies (as they change less often)
COPY .gitignore .gitignore
COPY --from=builder /app/out/json/ .
COPY --from=builder /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
RUN pnpm install
# Build the project and its dependencies
COPY --from=builder /app/out/full/ .
COPY turbo.json turbo.json
# Uncomment and use build args to enable remote caching
# ARG TURBO_TEAM
# ENV TURBO_TEAM=$TURBO_TEAM
# ARG TURBO_TOKEN
# ENV TURBO_TOKEN=$TURBO_TOKEN
RUN turbo run build --filter=@my_secret_org/${APP_NAME}...
FROM base AS runner
WORKDIR /app
ENV APP_NAME=my_secret_project
RUN npm install -g pnpm
# Don't run production as root
# RUN addgroup --system --gid 1001 expressjs
# RUN adduser --system --uid 1001 expressjs
# USER expressjs
COPY --from=installer /app .
# TODO: Maybe use the npm script?
CMD pnpm --filter "@my_secret_org/${APP_NAME}" run start
I know that this Dockerfile is much less optimized than the versions in other comments.
I've used `node:20` instead of an alpine version since my plan is to (probably) make it work in a dev container, and I assume I'll need a full-fledged image.
I also wanted to keep it simple and clear enough that I can change it if necessary.
I created https://github.com/vercel/turbo/pull/5536, I think it should solve the issue.
I came across this issue today looking for some insights around a missing lockfile in the `out/json` directory where `pnpm-workspace.yaml` is copied.
➜ tree out -L 3
out
├── full
│ ├── apps
│ │ └── frontend
│ ├── package.json
│ ├── packages
│ │ ├── constants
│ │ ├── discord
│ │ ├── features
│ │ ├── github
│ │ ├── prisma-client
│ │ ├── scripts
│ │ ├── support
│ │ └── type-helpers
│ ├── pnpm-workspace.yaml
│ └── turbo.json
├── json
│ ├── apps
│ │ └── frontend
│ ├── package.json
│ ├── packages
│ │ ├── constants
│ │ ├── discord
│ │ ├── features
│ │ ├── github
│ │ ├── prisma-client
│ │ ├── scripts
│ │ ├── support
│ │ └── type-helpers
│ └── pnpm-workspace.yaml
├── pnpm-lock.yaml
└── pnpm-workspace.yaml
I also noticed a few comments here show `patches` getting captured as well. Are y'all adding that directory as a workspace?
I've been tinkering with a pnpm+turbo+docker setup for the past few days, and here's what I've noticed:

- `pnpm prune` doesn't ignore lifecycle scripts, so be careful how you structure your npm scripts for when you're pruning the virtual store for the final build image.
- `pnpm fetch`: I originally did not think it supported `--ignore-scripts`, but apparently it does! I'd recommend adding that. If any of the installed dependencies have a `postinstall` step that calls a transitive dependency's bin, it will fail without `--ignore-scripts`.
- `pnpm dlx` doesn't seem to cache well (10s+ on each build). With `dlx`, pnpm will unfortunately not load turbo from the virtual store but rather from the registry, which is likely the cause of the delay on build.

In my experience pnpm and turbo seem to have conflicting approaches to building a Docker image:

- `pnpm fetch` is exclusive to pnpm and fetches all dependencies in the project, loading them into the virtual store. This caches well and has a negligible impact on build time.
- `turbo prune ... --docker` will generate a pruned lockfile, which we can then use `pnpm fetch` on (but we need `turbo` installed first).

What I've landed on is to use `turbo` for building only, with a mix of `pnpm deploy` (on 8.6.6):
#syntax=docker/dockerfile:1.4
ARG NODE_VERSION="18.15.0"
ARG ALPINE_VERSION="3.17"
FROM --platform=linux/amd64 node:${NODE_VERSION}-alpine${ALPINE_VERSION} as base
# for turbo - https://turbo.build/repo/docs/handbook/deploying-with-docker#example
RUN apk add --no-cache libc6-compat
RUN apk update
WORKDIR /workspace
# enable corepack for pnpm
RUN corepack enable
FROM base as fetcher
# pnpm fetch only requires lockfile, but we'll need to build workspaces
COPY pnpm*.yaml ./
COPY patches ./patches
# mount pnpm store as cache & fetch dependencies
RUN --mount=type=cache,id=pnpm-store,target=/root/.local/share/pnpm-store \
pnpm fetch --ignore-scripts
FROM fetcher as builder
# specify the app in apps/ we want to build
ARG APP_NAME="frontend"
ENV APP_NAME=${APP_NAME}
WORKDIR /workspace
COPY . .
RUN --mount=type=secret,id=env,required=true,target=/workspace/.env \
pnpm install --frozen-lockfile --offline --silent
# build app
RUN --mount=type=secret,id=env,required=true,target=/workspace/.env \
--mount=type=cache,target=/workspace/node_modules/.cache \
pnpm turbo run build --filter="${APP_NAME}"
FROM builder as deployer
WORKDIR /workspace
# deploy app
RUN pnpm --filter ${APP_NAME} deploy --prod --ignore-scripts ./out
FROM base as runner
WORKDIR /workspace
# Don't run production as root
RUN addgroup --system --gid 1001 mygroup
RUN adduser --system --uid 1001 myuser
USER myuser
# copy files needed to run the app
COPY --chown=myuser:mygroup --from=deployer /workspace/out/package.json .
COPY --chown=myuser:mygroup --from=deployer /workspace/out/node_modules/ ./node_modules
COPY --chown=myuser:mygroup --from=deployer /workspace/out/build/ ./build
# start the app
CMD pnpm run start
This results in a ~650MB build image in a relatively small monorepo and ~300s builds with no cache. Half of that docker build time is actually from `turbo build`. Have y'all figured out how to use the `.turbo` cache in the build effectively (with buildkit cache or otherwise; afaict I'm not ignoring `.turbo` dirs)?
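One sketch I've considered (untested, and the cache location is an assumption — turbo has used both `.turbo` and `node_modules/.cache/turbo` depending on version) is giving turbo's cache its own BuildKit mount and pinning the location with `--cache-dir`:

```dockerfile
# Persist turbo's task cache across docker builds.
# The mount id, target path, and use of --cache-dir are assumptions, not from the thread.
RUN --mount=type=cache,id=turbo-cache,target=/workspace/.turbo-cache \
    pnpm turbo run build --filter="${APP_NAME}" --cache-dir=/workspace/.turbo-cache
```

The idea is the same as the pnpm store mount above: keep the cache on a BuildKit volume so a code change doesn't throw away previous task outputs.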
`pnpm deploy` is particularly helpful in a monorepo where you have small, private packages, as pnpm will load them into the virtual store and you only need to copy the `node_modules` dir from the deploy output rather than also copying the symlinked `packages/**`.
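For reference, my mental model of the deploy output consumed by the runner stage above (a sketch matching the three `COPY` lines, not verified byte-for-byte):

```text
/workspace/out/          # created by: pnpm --filter frontend deploy --prod --ignore-scripts ./out
├── package.json
├── node_modules/        # real files (registry deps + private workspace packages), no workspace symlinks
└── build/               # the app's build output
```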
Had the same problem on the `COPY --from=builder /app/out/full/ .` line as everyone else using `pnpm`.
Added `**/node_modules` to the root `.dockerignore` to resolve it.
Then I ran into issues with the last 3 COPY lines. I needed to also copy the `next.config.js` from the with-docker example. tl;dr: I was missing `output: standalone`.
Ended up with this. I like it because pnpm and turbo are pulled into "base", allowing me to just use bare alpine in the end result. Copied from the with-docker example and updated to use pnpm.
# src Dockerfile: https://github.com/vercel/turbo/blob/main/examples/with-docker/apps/web/Dockerfile
FROM node:18-alpine AS alpine
# setup pnpm on the alpine base
FROM alpine as base
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
RUN pnpm install turbo --global
FROM base AS builder
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
RUN apk update
# Set working directory
WORKDIR /app
COPY . .
RUN turbo prune --scope=web --docker
# Add lockfile and package.json's of isolated subworkspace
FROM base AS installer
RUN apk add --no-cache libc6-compat
RUN apk update
WORKDIR /app
# First install the dependencies (as they change less often)
COPY .gitignore .gitignore
COPY --from=builder /app/out/json/ .
COPY --from=builder /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
COPY --from=builder /app/out/pnpm-workspace.yaml ./pnpm-workspace.yaml
RUN pnpm install
# Build the project
COPY --from=builder /app/out/full/ .
COPY turbo.json turbo.json
# Uncomment and use build args to enable remote caching
# ARG TURBO_TEAM
# ENV TURBO_TEAM=$TURBO_TEAM
# ARG TURBO_TOKEN
# ENV TURBO_TOKEN=$TURBO_TOKEN
RUN turbo run build --filter=web
# use alpine as the thinnest image
FROM alpine AS runner
WORKDIR /app
# Don't run production as root
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
USER nextjs
COPY --from=installer /app/apps/web/next.config.js .
COPY --from=installer /app/apps/web/package.json .
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=installer --chown=nextjs:nodejs /app/apps/web/.next/standalone ./
COPY --from=installer --chown=nextjs:nodejs /app/apps/web/.next/static ./apps/web/.next/static
COPY --from=installer --chown=nextjs:nodejs /app/apps/web/public ./apps/web/public
CMD node apps/web/server.js
Would any of you be interested in making a PR to update the `with-docker` example? We're ok with switching away from `yarn`.
Hi, there is a closed PR here: https://github.com/vercel/turbo/pull/5536
https://github.com/vercel/turbo/pull/5536 got closed but not merged. So was the PR not correct, and/or what else needs to happen to get the `with-docker` example updated? Thanks.
Hi all, I've figured out a version for pnpm + turborepo + Next.js (standalone output), please take a look:
FROM node:18-alpine AS base
RUN apk add --no-cache libc6-compat
RUN npm install -g pnpm
RUN npm install -g turbo
FROM base AS pruner
WORKDIR /app
COPY . .
RUN turbo prune {your_app_name} --docker
FROM base as builder
WORKDIR /app
COPY .gitignore .gitignore
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
RUN pnpm install --frozen-lockfile
COPY --from=pruner /app/out/full/ .
COPY turbo.json turbo.json
RUN pnpm --filter {your_app_name} --prod deploy full
WORKDIR /app/full
RUN pnpm run build
RUN rm -rf ./.next/cache
FROM base AS runner
WORKDIR /app
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
USER nextjs
COPY --from=builder /app/full/next.config.js .
COPY --from=builder /app/full/package.json .
COPY --from=builder --chown=nextjs:nodejs /app/full/.next .
COPY --from=builder --chown=nextjs:nodejs /app/full/.next/static ./standalone/.next/static
COPY --from=builder --chown=nextjs:nodejs /app/full/public ./standalone/public
CMD node ./standalone/server.js
This results in a ~300MB build image.
Basically I just cd into the `full` folder made by `turbo prune` and build the app there after executing `pnpm deploy` (I think it is used to copy the real dependency files from the pnpm store), and then copy the production code to the root of the workdir in my runner stage.
Hi! Great job! But for some reason my static files didn't load, so I corrected the copying a bit:
FROM node:18-alpine AS base
RUN apk add --no-cache libc6-compat
RUN npm install -g pnpm
RUN npm install -g turbo
FROM base AS pruner
WORKDIR /app
COPY . .
RUN turbo prune {your_app_name} --docker
FROM base as builder
WORKDIR /app
COPY .gitignore .gitignore
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
RUN pnpm install --frozen-lockfile
COPY --from=pruner /app/out/full/ .
COPY .env.production ./apps/main/.env.production
COPY turbo.json turbo.json
RUN pnpm --filter {your_app_name} deploy full
WORKDIR /app/full
RUN pnpm run build
RUN rm -rf ./.next/cache
FROM base AS runner
WORKDIR /app
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
USER nextjs
COPY --from=builder /app/full/next.config.js .
COPY --from=builder /app/full/package.json .
COPY --from=builder --chown=nextjs:nodejs /app/full/.next .
COPY --from=builder --chown=nextjs:nodejs /app/full/.next/static ./standalone/full/.next/static
COPY --from=builder --chown=nextjs:nodejs /app/full/public ./standalone/full/public
CMD node ./standalone/full/server.js
We've recently updated our `with-docker` example. Unfortunately, we don't have the bandwidth to create examples for every permutation of package managers (each package manager, each deployment platform, each packaging mechanism, etc.).
Seeing a lot of great chatter here, though, so kudos to the group! Will close as we have examples with both pnpm and Docker, separately, and won't make one with the cross-product.
Nobody is asking for all that, though; just the pnpm package manager. Echoing @jvzaniolo: pnpm is recommended and used in the other examples.
For anybody looking for a solution - I was able to make it work after 2 days of grind (ridiculous, I know). I hope it'll help you out.
I'm using Turborepo and pnpm, and the difficulty is how pnpm and symlinks work. Basically, pnpm stores `node_modules` at the root level instead of the package level and makes references in your specific app to the packages at the root level. If you want to understand this, look at the pnpm docs and check out `prune`, `--filter`, and `--scope`.
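To illustrate (package name and version are hypothetical):

```text
node_modules/
├── .pnpm/                                                  # the store: real package contents live here
│   └── express@4.18.2/node_modules/express/
└── express -> .pnpm/express@4.18.2/node_modules/express    # top level is just symlinks
```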
I wasn't able to run the Dockerfile from the app level (i.e. `/apps/api/Dockerfile`), because my Dockerfile needs to copy files such as `turbo.json` or `pnpm-lock.yaml` that are only available at the root level. So for simplicity I named my file `Dockerfile.api`. I don't even know how people are able to run a `Dockerfile` from nested apps, given that it always needs a lock file (whether it's pnpm or yarn) and `turbo.json`.
Then came the Dockerfile. Basically the difficulty is handling the way pnpm downloads and references packages so that my specific app, the `api`, has them available under `node_modules`.
The end goal is to have a simple folder `api/**` where `**` is all the necessary `node_modules` flattened out (i.e. no symlinks, just the pure code) and the JS compiled, ready to execute. Sounds simple, but with a monorepo it can be tough...
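In other words, the target layout is something like (paths hypothetical):

```text
/app/apps/api/
├── dist/           # compiled JS, runnable with plain `node`
└── node_modules/   # real files only, no symlinks back into the workspace
```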
So here's my code below, with an explanation at each step.
# Base image with Node.js
ARG NODE_VERSION=18.18.0
# Use a specific version of the Node.js Alpine image as the base. Alpine images are minimal and lightweight.
FROM node:${NODE_VERSION}-alpine AS base
# Update the package list and install libc6-compat. This package is often required for binary Node.js modules.
RUN apk update && apk add --no-cache libc6-compat
# Setup pnpm and turbo
# Start a new stage based on the base image for setting up pnpm (a package manager) and turbo (for monorepo management).
FROM base as setup
# Install pnpm and turbo globally using npm.
RUN npm install -g pnpm turbo
# Configure pnpm to use a specific directory for storing its package cache.
RUN pnpm config set store-dir ~/.pnpm-store
# Build argument for specifying the project
# Introduce a build argument 'PROJECT' to specify which project in the monorepo to build.
ARG PROJECT=api
# Install all dependencies in the monorepo
# Start a new stage for handling dependencies. This stage uses the previously setup image with pnpm and turbo installed.
FROM setup AS dependencies
WORKDIR /app
# Copy the essential configuration files and the specific project's files into the Docker image.
COPY packages/ ./packages/
COPY turbo.json ./
COPY package.json turbo.json packages ./
COPY apps/${PROJECT} ./apps/${PROJECT}
COPY pnpm-lock.yaml pnpm-workspace.yaml ./
# Install dependencies as per the lockfile to ensure consistent dependency resolution.
RUN pnpm install --frozen-lockfile
# Prune projects to focus on the specified project scope
# Start a new stage to prune the monorepo, focusing only on the necessary parts for the specified project.
FROM dependencies AS pruner
RUN turbo prune --scope=${PROJECT} --docker
# Remove all empty node_modules folders. This is a cleanup step to remove unnecessary directories and reduce image size.
RUN rm -rf /app/out/full/*/*/node_modules
# Build the project using turbo
# Start a new stage for building the project. This stage will compile and prepare the project for production.
FROM pruner AS builder
WORKDIR /app
# Copy pruned lockfile and package.json files
# This ensures that the builder stage has the exact dependencies needed for the project.
COPY --from=pruner /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
COPY --from=pruner /app/out/pnpm-workspace.yaml ./pnpm-workspace.yaml
COPY --from=pruner /app/out/json/ .
# Install dependencies for the pruned project
# Utilize BuildKit's cache to speed up the dependency installation process.
RUN --mount=type=cache,id=pnpm,target=~/.pnpm-store pnpm install --frozen-lockfile
# Copy pruned source code
# Bring in the necessary source code to the builder stage for compilation.
COPY --from=pruner /app/out/full/ .
# Build with turbo and prune dev dependencies
# Use turbo to build the project, followed by pruning development dependencies to minimize the final image size.
RUN turbo build --filter=${PROJECT}...
RUN --mount=type=cache,id=pnpm,target=~/.pnpm-store pnpm prune --prod --no-optional
# Remove source files to further reduce the image size, keeping only the compiled output and necessary runtime files.
RUN rm -rf ./**/*/src
# Final production image
# Start the final stage for the production-ready image.
FROM base AS runner
# Create a non-root user and group for better security.
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nodejs
# Switch to the non-root user.
USER nodejs
WORKDIR /app
# Copy the entire app directory, including node_modules and built code. This includes all necessary runtime files.
COPY --from=builder --chown=nodejs:nodejs /app .
WORKDIR /app/apps/${PROJECT}
# Specify the command to run the application. Adjust the path as needed for your project's start script.
CMD ["npm", "run", "start"]
I don't know your specific use case, but know this: in the above file, I have manually written the `COPY` commands for my specific use case (as well as specifying the `PROJECT` env).
These are
COPY packages/ ./packages/
COPY turbo.json ./
COPY package.json turbo.json packages ./
COPY apps/${PROJECT} ./apps/${PROJECT}
COPY pnpm-lock.yaml pnpm-workspace.yaml ./
In my monorepo specifically, the `api` app only needs some `packages`, nothing else. If you named your shared package `shared`, change `COPY packages/ ./packages/` to `COPY shared/ ./shared/`, if that makes sense.
The `RUN rm -rf /app/out/full/*/*/node_modules` part is necessary as I'm running under macOS (see this issue: https://github.com/vercel/turbo/issues/1997).
@nicu-chiciuc The pnpm example is working perfectly. Can someone guide me on how to pass env variables to the Docker image at run time? I tried `docker run -p 8080:8080 -e DATABASE_URL="<url-here>" web`, but it doesn't seem to work, even though the env variable is passed to the container; I can see it when I run `process.env` in the Node runtime inside the container.
This is my `turbo.json`:
{
"$schema": "https://turbo.build/schema.json",
"pipeline": {
"codegen": { "outputs": [] },
"build": {
"outputs": ["dist/**", ".next/**"],
"outputMode": "new-only"
},
"lint": { "outputs": [], "outputMode": "errors-only" },
"lint:fix": { "outputs": [], "outputMode": "errors-only" },
"prettier": { "outputs": [], "outputMode": "errors-only" },
"prettier:fix": { "outputs": [], "outputMode": "errors-only" },
"typecheck": { "outputs": [], "outputMode": "errors-only" },
"dev": { "cache": false, "persistent": true },
"start": {
"dependsOn": ["^build"],
"cache": false,
"persistent": true
},
"test:dev": { "cache": false }
}
}
@ezhil56x It's not clear what specifically doesn't work. You mention:

> I can see them when I run `process.env` in the node runtime

Is that not the goal of passing the envs?
For my project we use docker-compose with `env_file` and `environment` to set up the necessary env variables, so I'm not sure what happens in other scenarios.
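A minimal sketch of that setup (the service name, port, and file names are placeholders, not from this thread):

```yaml
# docker-compose.yml (hypothetical)
services:
  web:
    build: .
    ports:
      - "8080:8080"
    env_file:
      - .env                  # variables read from a file at `docker compose up`
    environment:
      NODE_ENV: production    # inline values override matching env_file entries
```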
I wrote an article a few months ago. It works perfectly for us. You can check it here:
https://fintlabs.medium.com/optimized-multi-stage-docker-builds-with-turborepo-and-pnpm-for-nodejs-microservices-in-a-monorepo-c686fdcf051f
Hope it will help someone 🙂
Have you tried running this Dockerfile on GitHub Actions?
Nope. I tried it with GitLab CI and locally.
Your article helped a ton. Thank you for sharing!
The image from that article is a bit crap for me; I went from a 300MB image to a 1.3GB image.
For me it is 107 MB for a Node.js microservice built using NestJS.
I'm not sure where my issue is. Most of the Dockerfiles posted here give me 1GB images. The only one that gave me a small image was this one:
> Ended up with this. I like it because pnpm and turbo are pulled to "base", allowing me to just use bare alpine in the end result.
I think one of the main differences is that it doesn't use the pruning and caching stuff:
RUN --mount=type=cache,id=pnpm,target=~/.pnpm-store pnpm prune --prod --no-optional
I have added --frozen-lockfile to the Dockerfile above and the size is still small.
I just recently started using pnpm
Does anyone know why this could be happening?
Are you building a NextJS project or NodeJS? For NextJS I haven't tried yet, but I'm planning to create a Dockerfile and write an article.
I'm building for NextJS. That would be great.
I guess that's the problem: I was just copying the whole build instead of just the NextJS build output.
I'll try and adapt it to NextJS and post my results here
Okay, so I've edited this Dockerfile to work well with NextJS.
I wrote an article a few months ago. It works perfectly for us. You can check it here: https://fintlabs.medium.com/optimized-multi-stage-docker-builds-with-turborepo-and-pnpm-for-nodejs-microservices-in-a-monorepo-c686fdcf051f Hope it helps someone 🙂
ARG NODE_VERSION=20
# Alpine image
FROM node:${NODE_VERSION}-alpine AS alpine
RUN apk update
RUN apk add --no-cache libc6-compat
# Setup pnpm and turbo on the alpine base
FROM alpine AS base
RUN npm install pnpm turbo --global
RUN pnpm config set store-dir ~/.pnpm-store
# Prune projects
FROM base AS pruner
ARG PROJECT
WORKDIR /app
COPY . .
RUN turbo prune --scope=${PROJECT} --docker
# Build the project
FROM base AS builder
ARG PROJECT
WORKDIR /app
# Copy lockfile and package.json's of isolated subworkspace
COPY --from=pruner /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
COPY --from=pruner /app/out/pnpm-workspace.yaml ./pnpm-workspace.yaml
COPY --from=pruner /app/out/json/ .
# First install the dependencies (as they change less often)
RUN --mount=type=cache,id=pnpm,target=~/.pnpm-store pnpm install --frozen-lockfile
# Copy source code of isolated subworkspace
COPY --from=pruner /app/out/full/ .
RUN turbo build --filter=${PROJECT}
RUN --mount=type=cache,id=pnpm,target=~/.pnpm-store pnpm prune --prod --no-optional
RUN rm -rf ./**/*/src
# Final image
FROM alpine AS runner
ARG PROJECT
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nodejs
USER nodejs
WORKDIR /app
COPY --from=builder --chown=nodejs:nodejs /app/apps/${PROJECT}/next.config.js .
COPY --from=builder --chown=nodejs:nodejs /app/apps/${PROJECT}/package.json .
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nodejs:nodejs /app/apps/${PROJECT}/.next/standalone ./
COPY --from=builder --chown=nodejs:nodejs /app/apps/${PROJECT}/.next/static ./apps/${PROJECT}/.next/static
COPY --from=builder --chown=nodejs:nodejs /app/apps/${PROJECT}/public ./apps/${PROJECT}/public
WORKDIR /app/apps/${PROJECT}
ARG PORT=3000
ENV PORT=${PORT}
ENV NODE_ENV=production
EXPOSE ${PORT}
CMD ["node", "server.js"]
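Since the Dockerfile above is parameterized with ARGs, the build invocation would look something like this (the image tag and project name here are placeholders, not from the article):

```shell
# Build one app of the monorepo from the repo root.
# PROJECT selects which workspace package turbo prunes and builds;
# PORT is baked into the runner stage (both are ARGs in the Dockerfile).
docker build \
  --build-arg PROJECT=docs \
  --build-arg PORT=3000 \
  -t my-org/docs:latest \
  .
```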
Also, avoid setting ENV NODE_ENV=production in your Dockerfile, as this also makes pnpm install behave like pnpm install --prod, which may not be what you want.
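One way to follow this advice, sketched as a minimal fragment (stage names and commands are illustrative, not a complete file): leave NODE_ENV unset while installing and building, and set it only in the final stage.

```dockerfile
# installer/build stage: NODE_ENV is NOT set here, so pnpm install
# still brings in the devDependencies the build needs
FROM base AS installer
WORKDIR /app
RUN pnpm install --frozen-lockfile
RUN pnpm run build

# runner stage: only here is NODE_ENV set, for the app at runtime
FROM base AS runner
ENV NODE_ENV=production
CMD ["node", "server.js"]
```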
Had the same problem on the COPY --from=builder /app/out/full/ . line as everyone else using pnpm. Added **/node_modules to the root .dockerignore to resolve it. Then ran into issues with the last 3 COPY lines. Needed to also copy the next.config.js from the with-docker example. tl;dr: I was missing output: standalone.
Ended up with this. I like it because pnpm and turbo are pulled into "base", allowing me to just use bare alpine in the end result. Copied from the with-docker example and updated to use pnpm.
# src Dockerfile: https://github.com/vercel/turbo/blob/main/examples/with-docker/apps/web/Dockerfile
FROM node:18-alpine AS alpine

# setup pnpm on the alpine base
FROM alpine AS base
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
RUN pnpm install turbo --global

FROM base AS builder
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
RUN apk update
# Set working directory
WORKDIR /app
COPY . .
RUN turbo prune --scope=web --docker

# Add lockfile and package.json's of isolated subworkspace
FROM base AS installer
RUN apk add --no-cache libc6-compat
RUN apk update
WORKDIR /app
# First install the dependencies (as they change less often)
COPY .gitignore .gitignore
COPY --from=builder /app/out/json/ .
COPY --from=builder /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
COPY --from=builder /app/out/pnpm-workspace.yaml ./pnpm-workspace.yaml
RUN pnpm install
# Build the project
COPY --from=builder /app/out/full/ .
COPY turbo.json turbo.json
# Uncomment and use build args to enable remote caching
# ARG TURBO_TEAM
# ENV TURBO_TEAM=$TURBO_TEAM
# ARG TURBO_TOKEN
# ENV TURBO_TOKEN=$TURBO_TOKEN
RUN turbo run build --filter=web

# use alpine as the thinnest image
FROM alpine AS runner
WORKDIR /app
# Don't run production as root
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
USER nextjs
COPY --from=installer /app/apps/web/next.config.js .
COPY --from=installer /app/apps/web/package.json .
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=installer --chown=nextjs:nodejs /app/apps/web/.next/standalone ./
COPY --from=installer --chown=nextjs:nodejs /app/apps/web/.next/static ./apps/web/.next/static
COPY --from=installer --chown=nextjs:nodejs /app/apps/web/public ./apps/web/public
CMD node apps/web/server.js
This setup was much better than what I had before, thanks a lot! I went from 1.7GB to 185MB with this Dockerfile after making some small adjustments to make it fit my project! 🥳
output: "standalone" in next.config.js
**/node_modules in .dockerignore
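For reference, the standalone adjustment looks roughly like this (a minimal next.config.js, assuming no other options are needed). It makes Next.js emit the self-contained server.js that the runner stage copies:

```javascript
// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  // Emit .next/standalone with a self-contained server.js
  output: 'standalone',
};

module.exports = nextConfig;
```

The **/node_modules entry goes in the root .dockerignore so host node_modules never enter the build context.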
My Dockerfile skills aren't great so this helped a lot! Thank you!
I've been trying to get it to work for the last few days with pnpm and standalone output in next.js.
The file-output with dive looks like this:
This is my Dockerfile:
# We run this file from the root directory (see docker:build:next command in package.json)
ARG APP_DIRNAME=next
ARG PROJECT=@foundation/next
ARG NODE_VERSION=20.11
# 1. Alpine image
FROM node:${NODE_VERSION}-alpine AS alpine
RUN apk update
RUN apk add --no-cache libc6-compat
# Setup pnpm and turbo on the alpine base
FROM alpine AS base
RUN corepack enable
RUN npm install turbo --global
RUN pnpm config set store-dir ~/.pnpm-store
# 2. Prune projects
FROM base AS pruner
ARG PROJECT
WORKDIR /app
COPY . .
RUN turbo prune --scope=${PROJECT} --docker
# 3. Build the project
FROM base AS builder
ARG PROJECT
WORKDIR /app
# Copy lockfile and package.json's of isolated subworkspace
COPY --from=pruner /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
COPY --from=pruner /app/out/pnpm-workspace.yaml ./pnpm-workspace.yaml
COPY --from=pruner /app/out/json/ .
# First install the dependencies (as they change less often)
RUN --mount=type=cache,id=pnpm,target=~/.pnpm-store pnpm install --frozen-lockfile
# Copy source code of isolated subworkspace
COPY --from=pruner /app/out/full/ .
RUN turbo build --filter=${PROJECT}
RUN --mount=type=cache,id=pnpm,target=~/.pnpm-store pnpm prune --prod --no-optional
RUN rm -rf ./**/*/src
# 4. Final image - runner stage to run the application
FROM alpine AS runner
ARG APP_DIRNAME
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 next
USER next
WORKDIR /app
COPY --from=builder --chown=next:nodejs /app/apps/${APP_DIRNAME}/next.config.mjs .
COPY --from=builder --chown=next:nodejs /app/apps/${APP_DIRNAME}/package.json .
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=next:nodejs /app/apps/${APP_DIRNAME}/.next/standalone ./
COPY --from=builder --chown=next:nodejs /app/apps/${APP_DIRNAME}/.next/static ./.next/static
COPY --from=builder --chown=next:nodejs /app/apps/${APP_DIRNAME}/public ./public
CMD ["node", "server.js"]
When trying to run the container i'm getting:
2024-05-27 15:37:27 node:internal/modules/cjs/loader:1145
2024-05-27 15:37:27 const err = new Error(message);
2024-05-27 15:37:27 ^
2024-05-27 15:37:27
2024-05-27 15:37:27 Error: Cannot find module 'next'
2024-05-27 15:37:27 Require stack:
2024-05-27 15:37:27 - /app/server.js
2024-05-27 15:37:27 at Module._resolveFilename (node:internal/modules/cjs/loader:1145:15)
2024-05-27 15:37:27 at Module._load (node:internal/modules/cjs/loader:986:27)
2024-05-27 15:37:27 at Module.require (node:internal/modules/cjs/loader:1233:19)
2024-05-27 15:37:27 at require (node:internal/modules/helpers:179:18)
2024-05-27 15:37:27 at file:///app/server.js:22:1
2024-05-27 15:37:27 at ModuleJob.run (node:internal/modules/esm/module_job:222:25)
2024-05-27 15:37:27 at async ModuleLoader.import (node:internal/modules/esm/loader:316:24)
2024-05-27 15:37:27 at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:123:5) {
2024-05-27 15:37:27 code: 'MODULE_NOT_FOUND',
2024-05-27 15:37:27 requireStack: [ '/app/server.js' ]
2024-05-27 15:37:27 }
2024-05-27 15:37:27
2024-05-27 15:37:27 Node.js v20.13.1
I run the build with turbo from root like so: (I have a remix application in this monorepo that has a very similar structure and runs perfectly like this from root)
// from root package.json
"docker:build:next": "turbo docker:build --filter=@foundation/next"
// in next-app package.json
"docker:build": "cd ../.. && docker build -t foundation/next -f apps/next/Dockerfile ."
If anyone has any idea to share how to solve this it's greatly appreciated! <3
@joakim-roos it looks like you are missing some dependencies and, therefore, I guess you would have some issues when running pnpm install.
The way I debug this is to add --no-cache and --progress=plain to your docker build command. This will log every output of your multi-stage build. From there you should see whether pnpm install runs as expected.
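Concretely, the debug invocation being described would look something like this (image tag is a placeholder):

```shell
# --no-cache forces every stage to re-run instead of reusing cached layers;
# --progress=plain prints the full log of each RUN step rather than the
# collapsed BuildKit progress output
docker build --no-cache --progress=plain -t my-org/next:debug .
```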
Thanks @edouardr!
I did find the issue. The problem was that I totally forgot to copy over the node_modules dependencies, so I added an extra stage which installs these and then copies them over to the final image. Here's a working version:
# We run this file from the root directory (see docker:build:next command in package.json)
ARG APP_DIRNAME=next
ARG PROJECT=@foundation/next
ARG NODE_VERSION=20.11
# 1. Alpine image
FROM node:${NODE_VERSION}-alpine AS alpine
RUN apk update
RUN apk add --no-cache libc6-compat
# Setup pnpm and turbo on the alpine base
FROM alpine AS base
RUN corepack enable
RUN npm install turbo --global
RUN pnpm config set store-dir ~/.pnpm-store
# 2. Prune projects
FROM base AS pruner
ARG PROJECT
WORKDIR /app
COPY . .
RUN turbo prune --scope=${PROJECT} --docker
# 3. Build the project
FROM base AS builder
ARG PROJECT
WORKDIR /app
# Copy lockfile and package.json's of isolated subworkspace
COPY --from=pruner /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
COPY --from=pruner /app/out/pnpm-workspace.yaml ./pnpm-workspace.yaml
COPY --from=pruner /app/out/json/ .
# First install the dependencies (as they change less often)
RUN --mount=type=cache,id=pnpm,target=~/.pnpm-store pnpm install --frozen-lockfile
# Copy source code of isolated subworkspace
COPY --from=pruner /app/out/full/ .
RUN turbo build --filter=${PROJECT}
RUN --mount=type=cache,id=pnpm,target=~/.pnpm-store pnpm prune --prod --no-optional
RUN rm -rf ./**/*/src
# 4. Production dependencies
FROM builder AS dependencies
WORKDIR /app
RUN pnpm --filter=$PROJECT deploy --prod --ignore-scripts --no-optional /dependencies
# 5. Final image - runner stage to run the application
FROM alpine AS runner
ARG APP_DIRNAME
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 next
USER next
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder --chown=next:nodejs /app/apps/${APP_DIRNAME}/next.config.mjs .
COPY --from=builder --chown=next:nodejs /app/apps/${APP_DIRNAME}/package.json .
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=next:nodejs /app/apps/${APP_DIRNAME}/.next/standalone ./
COPY --from=builder --chown=next:nodejs /app/apps/${APP_DIRNAME}/.next/static ./.next/static
COPY --from=dependencies --chown=next:nodejs /dependencies/node_modules ./node_modules
COPY --from=builder --chown=next:nodejs /app/apps/${APP_DIRNAME}/public ./public
CMD ["node", "/app/server.js"]
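The key addition is the dependencies stage: pnpm deploy materializes one workspace package plus only its production node_modules into a standalone directory, with no workspace symlinks. Run outside Docker it looks like this (the target directory is a placeholder):

```shell
# From the workspace root: copy @foundation/next and its production
# dependencies into ./deploy-out, skipping lifecycle scripts
pnpm --filter=@foundation/next deploy --prod --ignore-scripts ./deploy-out
```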
Hello, could anyone help me? I am getting this problem when running RUN turbo run build --filter=web
my dockerfile:
# We run this file from the root directory (see docker:build:next command in package.json)
ARG APP_DIRNAME=web
ARG PROJECT=web
ARG NODE_VERSION=20.11
# 1. Alpine image
FROM node:${NODE_VERSION}-alpine AS alpine
RUN apk update
RUN apk add --no-cache libc6-compat
# Setup pnpm and turbo on the alpine base
FROM alpine AS base
RUN corepack enable
RUN npm install turbo --global
RUN pnpm config set store-dir ~/.pnpm-store
# 2. Prune projects
FROM base AS pruner
ARG PROJECT
WORKDIR /app
COPY . .
RUN turbo prune --scope=${PROJECT} --docker
# 3. Build the project
FROM base AS builder
ARG PROJECT
WORKDIR /app
# Copy lockfile and package.json's of isolated subworkspace
COPY --from=pruner /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
COPY --from=pruner /app/out/pnpm-workspace.yaml ./pnpm-workspace.yaml
COPY --from=pruner /app/out/json/ .
# First install the dependencies (as they change less often)
RUN --mount=type=cache,id=pnpm,target=~/.pnpm-store pnpm install
# Copy source code of isolated subworkspace
COPY --from=pruner /app/out/full/ .
RUN turbo build --filter=${PROJECT}
RUN --mount=type=cache,id=pnpm,target=~/.pnpm-store pnpm prune --prod --no-optional
RUN rm -rf ./**/*/src
# 4. Production dependencies
FROM builder AS dependencies
WORKDIR /app
RUN pnpm --filter=$PROJECT deploy --prod --ignore-scripts --no-optional /dependencies
# 5. Final image - runner stage to run the application
FROM alpine AS runner
ARG APP_DIRNAME
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 next
USER next
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder --chown=next:nodejs /app/apps/${APP_DIRNAME}/next.config.mjs .
COPY --from=builder --chown=next:nodejs /app/apps/${APP_DIRNAME}/package.json .
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=next:nodejs /app/apps/${APP_DIRNAME}/.next/standalone ./
COPY --from=builder --chown=next:nodejs /app/apps/${APP_DIRNAME}/.next/static ./.next/static
COPY --from=dependencies --chown=next:nodejs /dependencies/node_modules ./node_modules
COPY --from=builder --chown=next:nodejs /app/apps/${APP_DIRNAME}/public ./public
CMD ["node", "/app/server.js"]
Which project is this feature idea for?
Turborepo
Describe the feature you'd like to request
pnpm is the package manager of the other examples, so it's weird that the with-docker example uses yarn. It would be nice to have a with-docker-pnpm example too.
Describe the solution you'd like
The solution should use pnpm fetch, which is the recommended way to use pnpm with Docker.
Describe alternatives you've considered
I was able to create a version with pnpm, but it doesn't use pnpm fetch, so I don't think it's the best way to use it.
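For completeness, a minimal sketch of what a pnpm fetch based install stage could look like (stage names are illustrative, not from an official example). pnpm fetch populates the store from the lockfile alone, so the download layer stays cached until pnpm-lock.yaml itself changes:

```dockerfile
FROM node:18-alpine AS installer
RUN corepack enable
WORKDIR /app
# Only the lockfile is copied here, so this layer's cache survives
# any source change that doesn't touch pnpm-lock.yaml
COPY pnpm-lock.yaml ./
RUN pnpm fetch
# Now bring in the manifests and sources, and link packages
# from the already-populated store without hitting the network
COPY . .
RUN pnpm install --frozen-lockfile --offline
```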