Docker Snap

This repository contains the source for the docker snap package. The package provides a distribution of Docker Engine along with the Nvidia toolkit for Ubuntu Core and other snap-compatible systems. The Docker Engine is built from an upstream release tag with some patches to fit the snap format and is available on armhf, arm64, amd64, i386, and ppc64el architectures. The rest of this page describes installation, usage, and development.

[!NOTE] Docker's official documentation does not discuss the docker snap package.

Installation

To install the latest stable release of Docker CE using snap:

sudo snap install docker

This snap is confined, which means that it can access a limited set of resources on the system. Additional access is granted via snap interfaces.

Upon installation using the above command, the snap connects automatically to a number of system interface slots. You can inspect the current connections as shown below.
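To see which interfaces are connected, use the standard snap connections command (the exact output depends on your snapd version and system):

snap connections docker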

If you are using Ubuntu Core 16, connect the docker:home plug as it's not auto-connected by default:

sudo snap connect docker:home

Running Docker as a normal user

By default, Docker is only accessible with root privileges (sudo). If you want to use docker as a regular user, you need to add your user to the docker group. This isn't possible on Ubuntu Core because it disallows the addition of users to system groups [1, 2].

[!WARNING] If you add your user to the docker group, it will have similar power as the root user. For details on how this impacts security in your system, see Docker daemon attack surface.

If you would like to run docker as a normal user:

sudo addgroup --system docker
sudo adduser $USER docker
newgrp docker
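# restart the snap so that dockerd picks up the new group membership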
sudo snap disable docker
sudo snap enable docker
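After the snap has been re-enabled, you should be able to run docker as your own user. A quick check (hello-world is just a convenient public test image):

docker run --rm hello-world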

Usage

Docker should function normally, with a few caveats that are specific to the snap packaging.

Examples
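As a generic illustration of day-to-day use (nothing here is specific to the snap; nginx is just a convenient public image, and port 8080 is an arbitrary choice):

# start a test web server and map it to a host port
sudo docker run -d --name web -p 8080:80 nginx
# check that it responds, then clean up
curl -s http://localhost:8080 | head -n 4
sudo docker rm -f web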

NVIDIA support

If the system is found to have an nvidia graphics card available, and the host has the required nvidia libraries installed, the nvidia container toolkit will be set up and configured to enable use of the local GPU from docker. This can be used to enable use of CUDA from a docker container, for instance.

To enable proper use of the GPU within docker, the nvidia runtime must be used. By default, the nvidia runtime is configured to use CDI mode, and the appropriate nvidia CDI config will be automatically created for the system. You just need to specify the nvidia runtime when running a container.

Ubuntu Core 22

The required nvidia libraries are available in the nvidia-core22 snap.

This requires connection of the graphics-core22 content interface provided by the nvidia-core22 snap, which should be automatically connected once installed.
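You can verify the connection with snap connections; if it was not made automatically, it can be connected manually. The plug and slot names below are assumptions based on the interface name (check snap connections for the exact names on your system):

# look for the graphics-core22 content interface
snap connections docker | grep graphics-core22
# connect it manually if needed (assumes plug and slot are both named graphics-core22)
sudo snap connect docker:graphics-core22 nvidia-core22:graphics-core22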

Ubuntu Server / Desktop

The required nvidia libraries are available in the NVIDIA Container Toolkit packages.

Instructions on how to install them can be found in NVIDIA's Container Toolkit installation guide.
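As a rough sketch, assuming NVIDIA's apt repository has already been configured as described in that guide, the installation typically comes down to:

# install the toolkit packages from NVIDIA's repository
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit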

Custom NVIDIA runtime config

If you want to make some adjustments to the automatically generated runtime config, you can use the nvidia-support.runtime.config-override snap config to completely replace it.

snap set docker nvidia-support.runtime.config-override="$(cat custom-nvidia-config.toml)"
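To see which override (if any) is currently set, the standard snap get command can be used; if the option is unset, the automatically generated config is in use (an assumption about the snap's behaviour, so verify on your system):

snap get docker nvidia-support.runtime.config-override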

CDI device naming strategy

By default, the device-name-strategy for the CDI config will use index. Optionally, you can specify one of the other currently supported strategies, for example:

snap set docker nvidia-support.cdi.device-name-strategy=uuid

Disable NVIDIA support

Setting up nvidia support should be automatic if the hardware is present, but you may wish to explicitly disable it so that setup is not even attempted. You can do so via the following snap config:

snap set docker nvidia-support.disabled=true

Nvidia usage examples

Generic example usage would look something like:

docker run --rm --runtime nvidia --gpus all {cuda-container-image-name}

or

docker run --rm --runtime nvidia --env NVIDIA_VISIBLE_DEVICES=all {cuda-container-image-name}

If your container image already has the appropriate environment variables set, you may be able to just specify the nvidia runtime with no additional arguments.

Please refer to this guide for more detail regarding the environment variables that can be used.

NOTE: library paths and discovery are handled automatically, but binary paths are not, so if you wish to test using something like the nvidia-smi binary passed into the container from the host, you could either specify the full path or set the PATH environment variable.

e.g.

docker run --rm --runtime=nvidia --gpus all --env PATH="${PATH}:/var/lib/snapd/hostfs/usr/bin" ubuntu nvidia-smi

Development

Development of the docker snap package is typically done on a "classic" Ubuntu distribution (Ubuntu Server / Desktop).

Install the snap tooling:

sudo snap install snapcraft --classic

Check out and enter this repository:

git clone https://github.com/canonical/docker-snap
cd docker-snap

Build the snap:

snapcraft -v

Install the newly-created snap package:

sudo snap install --dangerous ./docker_[VER]_[ARCH].snap

Manually connect the relevant plugs and slots which are not auto-connected:

sudo snap connect docker:privileged :docker-support
sudo snap connect docker:support :docker-support
sudo snap connect docker:firewall-control :firewall-control
sudo snap connect docker:network-control :network-control
sudo snap connect docker:docker-cli docker:docker-daemon

sudo snap disable docker
sudo snap enable docker
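Once the plugs are connected and the snap has been restarted, a quick smoke test confirms the locally built snap works (hello-world is just a convenient public test image):

sudo docker version
sudo docker run --rm hello-world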

Testing

The snap has various tests in place: