microsoft / vscode-dev-containers

NOTE: Most of the contents of this repository have been migrated to the new devcontainers GitHub org (https://github.com/devcontainers). See https://github.com/devcontainers/template-starter and https://github.com/devcontainers/feature-starter for information on creating your own!
https://aka.ms/vscode-remote
MIT License

Document best practice for multi-stage dockerfiles for dev, build, and prod #304

Open Chuxel opened 4 years ago

Chuxel commented 4 years ago

The containers in this repository do not take a strong stance on how dev containers should be used for container-based applications because we want to:

However, we can document how a dev container can be used in a multi-stage Dockerfile (via the target property in devcontainer.json) as a "builder" stage for producing the production container image as well.

For example:

devcontainer.json snippet:

{
    "build": {
        "dockerfile": "Dockerfile",
        "target": "development"
    }
}

Dockerfile:

FROM mcr.microsoft.com/vscode/devcontainers/typescript-node:12 AS development

# Build steps go here
FROM development AS builder
WORKDIR /app
COPY src/ *.json ./
RUN yarn install \
    && yarn compile \
    #  Just install prod dependencies
    && yarn install --prod

# Actual production environment setup goes here
FROM node:12-slim AS production
WORKDIR /app
COPY --from=builder /app/out/ ./out/
COPY --from=builder /app/node_modules/ ./node_modules/
COPY --from=builder /app/package.json .
EXPOSE 3000
ENTRYPOINT [ "/bin/bash", "-c" ]
CMD [ "npm start" ]

This allows the dev container image to be used both for development inside the container and as the "builder" stage that compiles the application for production, while a "slim" image is used for production with its contents copied out of the preceding "builder" stage. This produces the smallest possible production image.

Furthermore, once this smaller image is deployed, the "attach" workflow can be used to debug anything that only reproduces under the production configuration.
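To make the split concrete, the same Dockerfile serves both audiences: Remote - Containers builds the development stage (per the target property above), while CI or a local build targets only the slim production stage. A minimal sketch (the myapp image name is illustrative):

```shell
# VS Code / Remote - Containers builds the "development" stage via
# devcontainer.json's "target" property. For deployment, build only
# the "production" stage from the same Dockerfile:
docker build --target production -t myapp:prod .

# Run the slim production image locally to verify it:
docker run --rm -p 3000:3000 myapp:prod
```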

PavelSosin-320 commented 4 years ago

@Chuxel How-to: correctly deploy a dev container to a managed Swarm cluster (*)

  1. Enable Swarm mode in Docker Desktop and init Swarm: docker swarm init
  2. Deploy the registry:2 image from Docker Hub as a service at localhost:5000 (local images are not supported)
  3. Clone the VS Code dev container base image to the local registry
  4. Deploy the cloned image to the swarm as a service:
    docker service create --detach=false --name=devcontainerVol --network=host --tty --no-healthcheck \
    --mount source=git,target=/var/git localhost:5000/devcontainerVol
  5. Find the task running locally: docker service ps devcontainerVol (memorize the task id === container id)
  6. Test git clone: docker exec 9621f482ce96 git clone https://github.com/microsoft/vscode-dev-containers.git
    Expected: Cloning into 'vscode-dev-containers'...

(*) A WSL 2.0 terminal is used.
Chuxel commented 4 years ago

@PavelSosin-320 This is off topic for this issue. While Docker Compose is supported, there is no specific support for Docker Swarm today, so please raise your own issue at https://github.com/microsoft/vscode-remote-release or upvote https://github.com/microsoft/vscode-remote-release/issues/148

galah92 commented 2 years ago

I'd highly appreciate some official documentation on this topic, or even references to other places to read through. My use case is having a single Dockerfile both for development using "Remote - Containers" and for deployment via GCP Cloud Run.
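One possible flow for that use case, following the multi-stage pattern above, is to let Remote - Containers target the development stage while the deployment pipeline builds and ships only the production stage. A hypothetical sketch (the project, image, service, and region names are placeholders, not anything from this thread):

```shell
# Build only the slim "production" stage of the multi-stage Dockerfile
docker build --target production -t gcr.io/my-project/my-app .

# Push it to the container registry Cloud Run pulls from
docker push gcr.io/my-project/my-app

# Deploy the pushed image as a Cloud Run service
gcloud run deploy my-app --image gcr.io/my-project/my-app --region us-central1
```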

arash-bizcover commented 1 year ago

An example from a library (dev setup) point of view in Python:

# Stage 1: Build
FROM python:3.10 AS build

# Install sudo
RUN apt-get update && \
    apt-get install -y sudo

# Add non-root user
ARG USERNAME=nonroot
RUN groupadd --gid 1000 $USERNAME && \
    useradd --uid 1000 --gid 1000 -m $USERNAME
## Make sure to reflect new user in PATH
ENV PATH="/home/${USERNAME}/.local/bin:${PATH}"
USER $USERNAME

## Pip dependencies
# Upgrade pip
RUN pip install --upgrade pip
# Install production dependencies
COPY --chown=nonroot:1000 requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt && \
    rm /tmp/requirements.txt

# Stage 2: Development
FROM build AS development
# Install development dependencies
COPY --chown=nonroot:1000 requirements-dev.txt /tmp/requirements-dev.txt
RUN pip install -r /tmp/requirements-dev.txt && \
    rm /tmp/requirements-dev.txt

# Stage 3: Production
FROM build AS production
# No additional steps are needed, as the production dependencies are already installed

docker build --target development builds an image with both production and development dependencies, while docker build --target production builds an image with only the production dependencies.
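Concretely, the two builds above might look like this (the image tags are illustrative, not part of the example):

```shell
# Dev image: "build" stage plus requirements-dev.txt on top
docker build --target development -t mylib:dev .

# Production image: stops at the "build" stage, prod dependencies only
docker build --target production -t mylib:prod .
```

Because the development stage is built FROM the build stage, the production layers are shared between the two images; only the dev-dependency layers differ.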
