Chuxel opened this issue 4 years ago
@Chuxel How-to: correctly deploy a dev container to a managed Swarm cluster(*)

```shell
docker service create --detach=false --name=devcontainerVol --network=host --tty --no-healthcheck \
  --mount source=git,target=/var/git localhost:5000/devcontainerVol
docker service ps devcontainerVol
# memorize: task id === container id
```

@PavelSosin-320 This is off topic for this issue. While Docker Compose is supported, there is no specific support for Docker Swarm today, so please raise your own issue at https://github.com/microsoft/vscode-remote-release or upvote https://github.com/microsoft/vscode-remote-release/issues/148
I'd highly appreciate some official documentation on this topic, or even references to other places to read through.
My use case is having a single Dockerfile both for development using "Remote Containers" and for deployment via GCP Cloud Run.
An example from a library (dev setup) point of view in Python:

```dockerfile
# Stage 1: Build
FROM python:3.10 AS build

# Install sudo
RUN apt-get update && \
    apt-get install -y sudo

# Add non-root user
ARG USERNAME=nonroot
RUN groupadd --gid 1000 $USERNAME && \
    useradd --uid 1000 --gid 1000 -m $USERNAME

## Make sure to reflect the new user in PATH
ENV PATH="/home/${USERNAME}/.local/bin:${PATH}"
USER $USERNAME

## Pip dependencies
# Upgrade pip
RUN pip install --upgrade pip

# Install production dependencies
COPY --chown=nonroot:1000 requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt && \
    rm /tmp/requirements.txt

# Stage 2: Development
FROM build AS development

# Install development dependencies
COPY --chown=nonroot:1000 requirements-dev.txt /tmp/requirements-dev.txt
RUN pip install -r /tmp/requirements-dev.txt && \
    rm /tmp/requirements-dev.txt

# Stage 3: Production
FROM build AS production
# No additional steps are needed, as the production dependencies are already installed
```

`docker build --target development` builds an image with both production and development dependencies, while `docker build --target production` builds an image with only the production dependencies.
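To use the `development` stage as the dev container, a `devcontainer.json` can select it via the `build.target` property. A minimal sketch, assuming the Dockerfile sits one directory above `.devcontainer/` (the paths and name are illustrative):

```json
{
  "name": "python-dev",
  "build": {
    "dockerfile": "../Dockerfile",
    "context": "..",
    "target": "development"
  },
  "remoteUser": "nonroot"
}
```

With this in place, VS Code builds only up to the `development` stage for local work, while CI or Cloud Run deployment can build the same Dockerfile with `--target production`.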
The containers in this repository do not take a strong stance on how dev containers should be used for container-based applications because we want to:
However, we can document how dev containers can be used in a multi-stage Dockerfile (with the `target` property in `devcontainer.json`) as a "builder" for creating the production container image as well. For example:
devcontainer.json snippet:
Dockerfile:
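The snippets referenced above appear to have been lost in formatting. A minimal sketch of what such a setup could look like (the stage names, paths, and entrypoint are illustrative, not from the original comment):

```json
{
  "build": {
    "dockerfile": "Dockerfile",
    "target": "builder"
  }
}
```

```dockerfile
# Stage used both as the dev container and as the build environment
FROM python:3.10 AS builder
WORKDIR /app
COPY requirements.txt .
# Install dependencies into a standalone prefix so they can be copied out
RUN pip install --prefix=/install -r requirements.txt
COPY . .

# Slim production stage: only the installed dependencies and app code are copied over
FROM python:3.10-slim AS production
COPY --from=builder /install /usr/local
COPY --from=builder /app /app
WORKDIR /app
CMD ["python", "main.py"]
```

Here `devcontainer.json` points at the `builder` stage, so the full toolchain is available during development, while a plain `docker build` (which defaults to the last stage, `production`) produces the slim deployable image.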
This allows the dev container image to be used both for development inside the container and for building the application for production, while a "slim" image is used in production, with its contents copied out of the previous "builder" stage. This yields the smallest possible production image.
Furthermore, once this smaller image is deployed, the "attach" workflow can be used if there is something that only appears with this configuration.