slimtoolkit / slim

Slim(toolkit): Don't change anything in your container image and minify it by up to 30x (and for compiled languages even more) making it secure too! (free and open source)
Apache License 2.0

Optimize Your Experience with Containers. Make Your Containers Better, Smaller, More Secure and Do Less to Get There (free and open source!)

Note that DockerSlim is now just Slim (SlimToolkit is the full name, which makes it easier to find online), reflecting its growing support for additional container tools and runtimes in the cloud native ecosystem.

Slim is now a CNCF Sandbox project. It was created by Kyle Quest and it's been improved by many contributors. The project is supported by Slim.AI.

Overview

Slim allows developers to inspect, optimize and debug their containers using its xray, lint, build, debug, run, images, merge, registry, vulnerability (and other) commands. It simplifies and improves your developer experience building, customizing and using containers. It makes your containers better, smaller and more secure while providing advanced visibility and improved usability working with the original and minified containers.

Don't change anything in your container image and minify it by up to 30x making it secure too! Optimizing images isn't the only thing it can do though. It can also help you understand and author better container images.

Keep doing what you are doing. No need to change anything. Use the base image you want. Use the package manager you want. Don't worry about hand optimizing your Dockerfile. You shouldn't have to throw away your tools and your workflow to have small container images.

Don't worry about manually creating Seccomp and AppArmor security profiles. You shouldn't have to become an expert in Linux syscalls, Seccomp and AppArmor to have secure containers. Even if you do know enough about them, reverse engineering your application's behavior is time-consuming.

Slim will optimize and secure your containers by understanding your application and what it needs using various analysis techniques. It will throw away what you don't need, reducing the attack surface of your container. What if you need some of those extra things to debug your container? You can use dedicated debugging side-car containers for that (more details below).

Understand your container image before and after you optimize it using the xray command in the slim app or the Slim.AI SaaS where you can get even more powerful insights including how your container image changed.

Slim has been used with Node.js, Python, Ruby, Java, Go, Rust, Elixir and PHP (some app types) running on Ubuntu, Debian, CentOS, Alpine and even Distroless.

Note that some application stacks do require advanced container probing to make sure that all dynamically loaded components are detected. See the --http-probe* flags to learn how to define custom probe commands. In some cases you might also need to use the --include-path flag to make sure everything your application needs is included (e.g., the ubuntu.com python SPA app container image example, where the client-side template files are explicitly included).

It's also a good idea to run your app/environment tests when you run the Slim app. See the --continue-after flag for details about integrating your tests with the temporary container Slim creates during its dynamic analysis. Running tests in the target container is also an option, but it requires you to specify a custom ENTRYPOINT/CMD with a custom wrapper to start your app and execute your tests.

How Slim Works

Interactive CLI prompt screencast:

asciicast

Watch this screencast to see how an application image is minified by more than 30x.

asciicast

When you run the build or profile commands in Slim it gives you an opportunity to interact with the temporary container it creates. By default, it will pause and wait for your input before it continues its execution. You can change this behavior using the --continue-after flag.

If your application exposes any web interfaces (e.g., when you have a web server or an HTTP API), you'll see the port numbers on the host machine you will need to use to interact with your application (look for the port.list and target.port.info messages on the screen). For example, in the screencast above you'll see that the internal application port 8000 is mapped to port 32911 on your host.

Note that Slim will interact with your application for you when HTTP probing is enabled (enabled by default; see the --http-probe* flag docs for more details). Some web applications built with scripting languages like Python or Ruby require service interactions to load everything in the application. Enable HTTP probing unless it gets in your way.

You can also interact with the temporary container via a shell script or snippet using --exec-file or --exec. For example, you can create a container which is only capable of using curl.

>> docker pull archlinux:latest
...

>> slim build --target archlinux:latest --tag archlinux:curl --http-probe=false --exec "curl checkip.amazonaws.com"
...

>> docker run archlinux:curl curl checkip.amazonaws.com
...

>> docker images
archlinux                 curl                ...        ...         17.4MB
archlinux                 latest              ...        ...         467MB
...

Community

Feel free to join any of these channels or open a new GitHub issue if you want to chat or if you need help.

Slim on the Internet

Books:

Minification Examples

You can find the examples in a separate repository: https://github.com/slimtoolkit/examples

Node.js application images:

Python application images:

Ruby application images:

Go application images:

Rust application images:

Java application images:

PHP application images:

Haskell application images:

Elixir application images:


RECENT UPDATES

Latest version: 1.40.11 (2/2/2024)

The 1.40.11 version adds support for the latest Docker Engine version, improves xray reports and adds new build command flags (--include-dir-bins and --include-ssh-client).

For more info about the latest release see the CHANGELOG.

INSTALLATION

If you already have Slim installed use the update command to get the latest version:

slim update

Downloads

  1. Download the zip package for your platform.

    • Latest Mac binaries (curl -L -o ds.zip https://github.com/slimtoolkit/slim/releases/download/1.40.11/dist_mac.zip)

    • Latest Mac M1 binaries (curl -L -o ds.zip https://github.com/slimtoolkit/slim/releases/download/1.40.11/dist_mac_m1.zip)

    • Latest Linux binaries (curl -L -o ds.tar.gz https://github.com/slimtoolkit/slim/releases/download/1.40.11/dist_linux.tar.gz)

    • Latest Linux ARM binaries (curl -L -o ds.tar.gz https://github.com/slimtoolkit/slim/releases/download/1.40.11/dist_linux_arm.tar.gz)

    • Latest Linux ARM64 binaries (curl -L -o ds.tar.gz https://github.com/slimtoolkit/slim/releases/download/1.40.11/dist_linux_arm64.tar.gz)

  2. Unzip the package and optionally move it to your bin directory.

Linux (for non-Intel platforms, replace dist_linux with the platform-specific extracted path):

tar -xvf ds.tar.gz
mv  dist_linux/slim /usr/local/bin/
mv  dist_linux/slim-sensor /usr/local/bin/

Mac:

unzip ds.zip
mv  dist_mac/slim /usr/local/bin/
mv  dist_mac/slim-sensor /usr/local/bin/

  3. Add the location where you unzipped the package to your PATH environment variable (optional).

If the directory where you extracted the binaries is not in your PATH then you'll need to run your Slim app binary from that directory.

Scripted Install

You can also use this script to install the current release of Slim on Linux (x86 and ARM) and macOS (x86 and Apple Silicon).

curl -sL https://raw.githubusercontent.com/slimtoolkit/slim/master/scripts/install-slim.sh | sudo -E bash -

Homebrew

brew install docker-slim

The Homebrew installer: https://formulae.brew.sh/formula/docker-slim

Docker

docker pull dslim/slim

See the RUNNING CONTAINERIZED section for more usage info.

SaaS

The Slim.AI SaaS is powered by Slim. It will help you understand and troubleshoot your application containers and a lot more. If you use the xray command you'll want to try the SaaS. Understanding image changes is easy with its container diff capabilities. Connect your own registry and you can do the same with your own containers. Try it here without installing anything locally.

BASIC USAGE INFO

slim [global flags] [xray|build|profile|run|debug|lint|merge|images|registry|vulnerability|update|version|appbom|help] [command-specific flags] <IMAGE_ID_OR_NAME>

If you don't specify any command slim will start in the interactive prompt mode.

COMMANDS

Example: slim build my/sample-app

See the USAGE DETAILS section for more details. Run slim help to get a high level overview of the available commands. Run slim COMMAND_NAME without any parameters and you'll get more information about that command (e.g., slim build).

If you run slim without any parameters you'll get an interactive prompt that will provide suggestions about the available commands and flags. Tabs are used to show the available options, to autocomplete the parameters and to navigate the option menu (which you can also do with Up and Down arrows). Spaces are used to move to the next parameter and Enter is used to run the command. For more info about the interactive prompt see go-prompt.

USAGE DETAILS

slim [global options] command [command options] <target image ID or name>

Commands:

Global options:

To disable the version checks set the global --check-version flag to false (e.g., --check-version=false) or you can use the DSLIM_CHECK_VERSION environment variable.

LINT COMMAND OPTIONS

XRAY COMMAND OPTIONS

Change Types:

In the interactive CLI prompt mode you must specify the target image using the --target flag while in the traditional CLI mode you can use the --target flag or you can specify the target image as the last value in the command.

BUILD COMMAND OPTIONS

In the interactive CLI prompt mode you must specify the target image using the --target flag while in the traditional CLI mode you can use the --target flag or you can specify the target image as the last value in the command.

The --include-path option is useful if you want to customize your minified image adding extra files and directories. The --include-path-file option allows you to load multiple includes from a newline delimited file. Use this option if you have a lot of includes. The includes from --include-path and --include-path-file are combined together. You can also use the --exclude-pattern flag to control what shouldn't be included.
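For illustration, an includes file for --include-path-file is just a newline-delimited list of paths (the paths below are made up for the example):

```shell
# Illustrative includes file for --include-path-file: one path per line.
cat > includes.txt <<'EOF'
/app/templates
/etc/ssl/certs
EOF
```

You would then run something like slim build --include-path-file includes.txt my/sample-app (image name illustrative), optionally combined with --exclude-pattern.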

The --continue-after option is useful if you need to script the Slim app. If you pick the probe option then Slim will continue executing the build command after the HTTP probe is done executing. If you pick the exec option then Slim will continue after the container exec shell commands (specified using the --exec-file or --exec flags) are done executing. If you pick the timeout option Slim will allow the target container to run for 60 seconds before it attempts to collect the artifacts. You can specify a custom timeout by passing the number of seconds you need instead of the timeout string. If you pick the signal option you'll need to send a USR1 signal to the Slim app process. The signal option is useful when you want to run your own tests against the temporary container Slim creates. Your test automation / CI/CD pipeline can notify the Slim app that it's done running its tests by sending it the USR1 signal.
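The signal flow can be sketched as follows; a minimal stand-in background job replaces the actual slim build --continue-after=signal invocation so the snippet is runnable without slim or a Docker daemon:

```shell
# Stand-in for 'slim build --continue-after=signal <image>': a background job
# that waits for USR1 and then exits, mirroring how slim continues on signal.
sh -c 'trap "exit 0" USR1; while :; do sleep 1; done' &
SLIM_PID=$!

# ... run your test suite against the temporary container here ...

sleep 1                   # give the stand-in time to install its trap
kill -USR1 "$SLIM_PID"    # notify the process that testing is done
wait "$SLIM_PID"          # returns once the process continues and exits
STATUS=$?
```

In a real pipeline, SLIM_PID would be the PID of the slim process and the placeholder comment would be your tests run against the temporary container's published ports.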

You can also combine multiple continue-after modes. For now only combining probe and exec is supported (using either probe&exec or exec&probe as the --continue-after flag value; quote the value in a shell, since & is a special character). Other combinations may work too. Combining probe and signal is not supported.

The --include-shell option provides a simple way to keep a basic shell in the minified container. Not all shell commands are included. To get additional shell commands or other command line utilities use the --include-exe and/or --include-bin options. Note that the extra apps and binaries might be missing some of their non-binary dependencies (which don't get picked up during static analysis). For those additional dependencies use the --include-path and --include-path-file options.

The --dockerfile option makes it possible to build a new minified image directly from a source Dockerfile. Pass the Dockerfile name as the value for this flag and pass the build context directory or URL instead of the Docker image name as the last parameter for the build command: slim build --dockerfile Dockerfile --tag my/custom_minified_image_name . If you want to see the console output from the build stages (when the fat and slim images are built) add the --show-blogs build flag. Note that the build console output is not interactive and is printed only after the corresponding build step is done.

The fat image created during the build process has the .fat suffix in its name. If you specify a custom image tag (with the --tag flag) the .fat suffix is added to the name part of the tag. If you don't provide a custom tag the generated fat image name will have the following format: slim-tmp-fat-image.<pid_of_slim>.<current_timestamp>. The minified image name will have the .slim suffix added to that auto-generated name (slim-tmp-fat-image.<pid_of_slim>.<current_timestamp>.slim). Take a look at the Python examples to see how the --dockerfile flag is used.

The --use-local-mounts option is used to choose how the Slim sensor is added to the target container and how the sensor artifacts are delivered back to the master. If you enable this option you'll get the original Slim app behavior where it uses local file system volume mounts to add the sensor executable and to extract the artifacts from the target container. This option doesn't always work as expected in the dockerized environment where Slim itself is running in a Docker container. When this option is disabled (default behavior) then a separate Docker volume is used to mount the sensor and the sensor artifacts are explicitly copied from the target container.

DEBUG COMMAND OPTIONS

See the "Debugging Using the debug Command" section for more information about this command.

RUN COMMAND OPTIONS

Run one or more containers

USAGE: slim [GLOBAL FLAGS] run [FLAGS] [IMAGE]

Flags:

MERGE COMMAND OPTIONS

Merge two container images. Optimized to merge minified images.

Flags:

REGISTRY COMMAND OPTIONS

For the operations that require authentication you can reuse the registry credentials from Docker (do docker login first and then use the --use-docker-credentials flag with the registry command) or you can specify the auth info using the --account and --secret flags.

Current sub-commands: pull, push, image-index-create, server.

There's also a placeholder for copy, but it doesn't do anything yet. Great opportunity to contribute ;-)

Shared Command Level Flags:

PULL SUBCOMMAND OPTIONS

USAGE: slim [GLOBAL FLAGS] registry [SHARED FLAGS] pull [FLAGS] [IMAGE]

Flags:

PUSH SUBCOMMAND OPTIONS

USAGE: slim [GLOBAL FLAGS] registry [SHARED FLAGS] push [FLAGS] [IMAGE]

Flags:

Note that slim registry push LOCAL_DOCKER_IMAGE_NAME is a shortcut for slim registry push --docker LOCAL_DOCKER_IMAGE_NAME.

Normally you have to explicitly tag the target image with a name that's appropriate for the destination registry. The --as flag is a convenient way to tag the image while you are pushing it. Here's an example pushing a local Docker nginx image to a local registry: slim registry push --docker nginx --as localhost:5000/nginx

You can create a local registry using the server subcommand. See the server sub-command section below for more details.

COPY SUBCOMMAND OPTIONS

USAGE: slim registry copy [SRC_IMAGE] [DST_IMAGE]

NOTE: Just a placeholder for now (TBD)

IMAGE-INDEX-CREATE SUBCOMMAND OPTIONS

USAGE: slim registry image-index-create --image-index-name [MULTI-ARCH_IMAGE_TAG] --image-name [IMAGE_ONE] --image-name [IMAGE_TWO]

Flags:

SERVER SUBCOMMAND OPTIONS

Starts a server which implements the OCI API spec on port 5000 by default.

USAGE: slim [GLOBAL FLAGS] registry server [FLAGS]

Flags:

VULNERABILITY COMMAND OPTIONS

USAGE: slim [GLOBAL FLAGS] vulnerability [SHARED FLAGS] [SUBCOMMAND] [FLAGS]

Current sub-commands:

Shared Command Level Flags:

EPSS SUBCOMMAND OPTIONS

USAGE: slim [GLOBAL FLAGS] vulnerability [SHARED FLAGS] epss [FLAGS]

Flags:

Examples:

RUNNING CONTAINERIZED

The current version of Slim is able to run in containers. It will try to detect if it's running in a containerized environment, but you can also tell Slim explicitly using the --in-container global flag.

You can run Slim in your container directly or you can use the Slim container image in your containerized environment. If you are using the Slim container image make sure you run it configured with the Docker IPC information, so it can communicate with the Docker daemon. The most common way to do that is by mounting the Docker Unix socket into the Slim app container.

Some containerized environments (like GitLab and its dind service) might not expose the Docker Unix socket to you, so you'll need to make sure the environment variables used to communicate with Docker (e.g., DOCKER_HOST) are passed to the Slim app container. Note that if those environment variables reference any kind of local host names, those names need to be replaced or you need to tell the Slim app about them using the --etc-hosts-map flag. If those environment variables reference local files (e.g., files for TLS cert validation), those files will need to be copied to a temporary container, so that it can be used as a data container to make the files accessible to the Slim app container.

When Slim app runs in a container it will attempt to save its execution state in a separate Docker volume. If the volume doesn't exist it will try to create it (slim-state, by default). You can pick a different state volume or disable this behavior completely by using the global --archive-state flag. If you do want to persist the Slim app execution state (which includes the seccomp and AppArmor profiles) without using the state archiving feature you can mount your own volume that maps to the /bin/.slim-state directory in the Slim app container.

By default, the Slim app will try to create a Docker volume for its sensor unless one already exists. If this behavior is not supported by your containerized environment you can create a volume separately and pass its name to the Slim app using the --use-sensor-volume flag.

Here's a basic example of how to use the containerized version of the Slim app: docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock dslim/slim build your-docker-image-name

Here's a GitLab example for their dind .gitlab-ci.yml config file: docker run -e DOCKER_HOST=tcp://$(grep docker /etc/hosts | cut -f1):2375 dslim/slim build your-docker-image-name

Here's a CircleCI example for their remote docker .circleci/config.yml config file (used after the setup_remote_docker step):

docker create -v /dcert_path --name dcert alpine:latest /bin/true
docker cp $DOCKER_CERT_PATH/. dcert:/dcert_path
docker run --volumes-from dcert -e DOCKER_HOST=$DOCKER_HOST -e DOCKER_TLS_VERIFY=$DOCKER_TLS_VERIFY -e DOCKER_CERT_PATH=/dcert_path dslim/slim build your-docker-image-name

Different CI/CD services have different containerized environment designs that impose various restrictions that may impact the ability of the main app to communicate with the sensor app embedded in the temporary container Slim creates. Try adjusting the values for the --sensor-ipc-mode and --sensor-ipc-endpoint flags. This Google Cloud Build blog post by Márton Kodok is a good reference for both of those flags.

Using *-file Flags

CI/CD INTEGRATIONS

Integrating Slim in Jenkins

Prerequisites:

Integrating Slim in Github Actions

Docker-Slim Github Action

Integrating Slim into your GitHub Actions CI/CD workflow involves using the Docker-Slim GitHub Action. The Action (snippet below) minifies a target Docker image (IMAGE_NAME:latest) in your workflow, making it smaller and tagging the new slimmed image as IMAGE_NAME:slim.

# Slim it!
- uses: kitabisa/docker-slim-action@v1
  env:
    DSLIM_HTTP_PROBE: false
  with:
    target: IMAGE_NAME:latest
    tag: "slim"

Github Actions Slim Workflow

You can integrate the Docker-Slim GitHub Action into your workflow by inserting it after a Docker build/push action and before the Docker login action and the docker tag/push commands; a customized example workflow is shown below. Note that ${{ github.run_number }} in the workflow is a unique, incrementing number that GitHub Actions assigns to each workflow run.

# Build the Docker image first
- uses: docker/build-push-action@v4
  with:
    push: false
    tags: IMAGE_NAME:${{ github.run_number }}

# Slim the image
- uses: kitabisa/docker-slim-action@v1
  env:
    DSLIM_HTTP_PROBE: false
  with:
    target: IMAGE_NAME:${{ github.run_number }}
    tag: "slim-${{ github.run_number }}"

# Docker Hub login
- uses: docker/login-action@v2
  with:
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}

# Push to the registry
- run: |
    docker tag IMAGE_NAME:slim-${{ github.run_number }} ${{ secrets.DOCKERHUB_USERNAME }}/IMAGE_NAME:slim-${{ github.run_number }}
    docker push ${{ secrets.DOCKERHUB_USERNAME }}/IMAGE_NAME:slim-${{ github.run_number }}

The workflow above consists of four steps: building the Docker image, minifying it with the Docker-Slim Action, logging in to Docker Hub, and tagging and pushing the slim image to the registry.

DOCKER CONNECT OPTIONS

If you don't specify any Docker connect options the Slim app expects to find the Docker Unix socket (/var/run/docker.sock) or the following environment variables: DOCKER_HOST, DOCKER_TLS_VERIFY (optional), DOCKER_CERT_PATH (required if DOCKER_TLS_VERIFY is set to "1"). Note that the DOCKER_HOST environment variable can be used to point to a Unix socket address (in case the default Unix socket isn't there). This is useful when you use Docker Desktop and you haven't configured Docker Desktop to create the default Unix socket.

If the Docker environment variables are configured to use TLS and to verify the Docker cert (default behavior), but you want to disable TLS verification, you can override this behavior by setting the --tls-verify flag to false:

slim --tls-verify=false build my/sample-node-app-multi

You can override all Docker connection options using these flags: --host, --tls, --tls-verify, --tls-cert-path. These flags correspond to the standard Docker options (and the environment variables). Note that you can also use the --host flag (similar to DOCKER_HOST) to point to a Unix socket (e.g., --host=unix:///var/run/docker.sock).

If you want to use TLS with verification:

slim --host=tcp://192.168.99.100:2376 --tls-cert-path=/Users/youruser/.docker/machine/machines/default --tls=true --tls-verify=true build my/sample-node-app-multi

If you want to use TLS without verification:

slim --host=tcp://192.168.99.100:2376 --tls-cert-path=/Users/youruser/.docker/machine/machines/default --tls=true --tls-verify=false build my/sample-node-app-multi

If the Docker environment variables are not set and if you don't specify any Docker connect options Slim will try to use the default unix socket.

DOCKER DESKTOP

You may not have the default Unix socket (/var/run/docker.sock) configured if you use Docker Desktop. By default, Docker Desktop uses ~/.docker/run/docker.sock as the Unix socket.

You can either use --host or DOCKER_HOST to point to the Docker Desktop's Unix socket or you can configure Docker Desktop to create the default/traditional Unix socket (creating the /var/run/docker.sock symlink manually is an option too).

To configure Docker Desktop to create the default Unix socket open its UI and go to Settings -> Advanced where you need to check the Enable default Docker socket (Requires password) option.
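For example, here's a minimal sketch of pointing the client at Docker Desktop's per-user socket via DOCKER_HOST (the path shown is the default mentioned above and may vary by version; the slim invocations are commented out because they need a running daemon):

```shell
# Point the client at Docker Desktop's per-user Unix socket.
export DOCKER_HOST="unix://$HOME/.docker/run/docker.sock"
# slim build my/sample-app
# ...or pass the socket per-invocation instead:
# slim --host="unix://$HOME/.docker/run/docker.sock" build my/sample-app
```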

HTTP PROBE COMMANDS

If the HTTP probe is enabled (note: it is enabled by default) it will default to running GET / with HTTP and then HTTPS on every exposed port. You can add additional commands using the --http-probe-cmd and --http-probe-cmd-file options.

If you want to disable HTTP probing set the --http-probe flag to false (e.g., --http-probe=false). You can also use the --http-probe-off flag to do the same (simply use the flag without any parameters).

The --http-probe-cmd option is good when you want to specify a small number of simple commands where you select some or all of these HTTP command options: crawling (defaults to false), protocol, method (defaults to GET), resource (path and query string).

If you only want to use custom HTTP probe commands and you don't want the default GET / command added to the list you explicitly provided, set --http-probe to false when you specify your custom commands. Note that this inconsistency will be addressed in future releases to make it less confusing.

Possible field combinations:

Here are a couple of examples:

Adds two extra probe commands: GET /api/info and POST /submit (tries http first, then tries https): slim build --show-clogs --http-probe-cmd /api/info --http-probe-cmd POST:/submit my/sample-node-app-multi

Adds one extra probe command: POST /submit (using only http): slim build --show-clogs --http-probe-cmd http:POST:/submit my/sample-node-app-multi

The --http-probe-cmd-file option is good when you have a lot of commands and/or you want to select additional HTTP command options.

Available HTTP command options:

Here's a probe command file example:

slim build --show-clogs --http-probe-cmd-file probeCmds.json my/sample-node-app-multi

Commands in probeCmds.json:

{
  "commands":
  [
   {
     "resource": "/api/info"
   },
   {
     "method": "POST",
     "resource": "/submit"
   },
   {
     "protocol": "http",
     "resource": "/api/call?arg=one"
   },
   {
     "protocol": "http",
     "method": "POST",
     "resource": "/submit2",
     "body": "key=value"
   },
   {
     "protocol": "http",
     "method": "POST",
     "resource": "/submit3",
     "body_file": "mydata.json",
     "headers": ["Content-Type: application/json"]
   }
  ]
}

The HTTP probe command file path can be a relative path (relative to the current working directory) or it can be an absolute path.

For each HTTP probe call Slim will print the call status. Example: info=http.probe.call status=200 method=GET target=http://127.0.0.1:32899/ attempt=1 error=none.

You can execute your own external HTTP requests using the target.port.list field in the container info message Slim prints when it starts its test container: slim[build]: info=container name=<your_container_name> id=<your_container_id> target.port.list=[<comma_separated_list_of_port_numbers_to_use>] target.port.info=[<comma_separated_list_of_port_mapping_records>]. Example: slim[build]: info=container name=slimk_42861_20190203084955 id=aa44c43bcf4dd0dae78e2a8b3ac011e7beb6f098a65b09c8bce4a91dc2ff8427 target.port.list=[32899] target.port.info=[9000/tcp => 0.0.0.0:32899]. With this information you can run curl or other HTTP request generating tools: curl http://localhost:32899.

The current version also includes an experimental crawling capability. To enable it for the default HTTP probe use the --http-probe-crawl flag. You can also enable it for the HTTP probe commands in your command file using the crawl boolean field.

When crawling is enabled the HTTP probe will act like a web crawler following the links it finds in the target endpoint.
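As a sketch, a probe command file entry with crawling enabled uses the crawl boolean field (the resource value is illustrative):

```json
{
  "commands": [
    {"resource": "/", "crawl": true}
  ]
}
```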

Probing based on the Swagger/OpenAPI spec is another experimental capability. This feature introduces two new flags:

You can use the --http-probe-exec and --http-probe-exec-file options to run the user provided commands when the http probes are executed. This example shows how you can run curl against the temporary container created by Slim when the http probes are executed.

slim build --http-probe-exec 'curl http://localhost:YOUR_CONTAINER_PORT_NUM/some/path' --publish-port YOUR_CONTAINER_PORT_NUM your-container-image-name

DEBUGGING MINIFIED CONTAINERS

Debugging Using the debug Command

The current version of the debug command is pretty basic and it lacks a number of useful capabilities. It will help you debug containers running in Docker or Kubernetes (use the --runtime flag and set it to k8s if you need to debug a container in Kubernetes).

By default the debug command will provide you with an interactive terminal when it attaches the debugger side-car image to the debugged target container. Future versions will allow you to have different interaction modes with the target.

The Debug Images

You can use any container image as a debug image, but there's a list of pre-selected debug images you can choose.

You can list all pre-selected debug images with the --list-debug-images flag, and if you are using the interactive prompt mode there'll be an auto-complete dropdown menu for the --debug-image flag.

Here's the current list of debug images:

Steps to Debug Your Container (Kubernetes Runtime)

  1. Make sure the target environment you want to debug is up (the example k8s manifest creates a pod with the minimal nginx image from Chainguard and it has no shell):

kubectl apply -f examples/k8s_nginx_cgr/manifest.yaml

2. Run the debug command:

```bash
>> slim debug --runtime=k8s --pod=example-pod example-container
```

or

```bash
>> slim debug --runtime=k8s --pod=example-pod --target=example-container
```
Now you should have an interactive shell into the debug container started by slim and you can type your regular shell commands.

By default the debug command connects the interactive terminal to the debugged container and runs a shell as if it's running in the target container environment, so you will see the file system of the target container as if you were directly connected (you won't have to go through the proc file system). You can change this behavior with the --run-as-target-shell flag (which is true by default). For example, this call will connect you to the debug container in a more traditional way: slim debug --runtime=k8s --run-as-target-shell=false example-container

Also note that if you use the interactive prompt mode (when you run slim with no command line parameters) you will get auto-complete behavior for a number of flags: --target, --namespace, --pod, --session.

Each time you debug a target, slim creates a session that represents it. You can reconnect to existing active debug sessions and get logs from all available sessions.

Steps to Debug Your Container (Docker Runtime)

  1. Start the target container you want to debug:

docker run -it --rm -p 80:80 --name mycontainer nginx

2. Run the debug command:

```bash
>> slim debug mycontainer
```

or

```bash
>> slim debug --target=mycontainer
```

Now you should have an interactive shell into the debug container started by slim and you can type your regular shell commands.

By default the debug command connects the interactive terminal to the debugged container and runs a shell as if it's running in the target container environment, so you will see the file system of the target container as if you were directly connected (you won't have to go through the proc file system). You can change this behavior with the --run-as-target-shell flag (which is true by default). For example, this call will connect you to the debug container in a more traditional way: slim debug --run-as-target-shell=false mycontainer

Also note that if you use the interactive prompt mode (when you run slim with no command line parameters) you will get auto-complete behavior for a number of flags: --target, --session.

Each time you debug an image, slim creates a session that represents it. You can reconnect to existing active debug sessions and get logs from all available sessions.

Debugging the "Hard Way" (Docker Runtime)

You can create dedicated debugging side-car container images loaded with the tools you need for debugging target containers. This allows you to keep your production container images small. The debugging side-car containers attach to the running target containers.

Assuming you have a running container named node_app_alpine you can attach your debugging side-car with a command like this: docker run --rm -it --pid=container:node_app_alpine --net=container:node_app_alpine --cap-add sys_admin alpine sh. In this example, the debugging side-car is a regular alpine image. This is exactly what happens with the node_alpine app sample (located in the node_alpine directory of the examples repo) and the run_debug_sidecar.command helper script.

If you run the ps command in the side-car you'll see the application from the target container:

# ps
PID   USER     TIME   COMMAND
    1 root       0:00 node /opt/my/service/server.js
   13 root       0:00 sh
   38 root       0:00 ps

You can access the target container file system through /proc/<TARGET_PID>/root:

# ls -lh /proc/1/root/opt/my/service
total 8
drwxr-xr-x    3 root     root        4.0K Sep  2 15:51 node_modules
-rwxr-xr-x    1 root     root         415 Sep  8 00:52 server.js

Some of the useful debugging commands include cat /proc/<TARGET_PID>/cmdline, ls -l /proc/<TARGET_PID>/cwd, cat /proc/1/environ, cat /proc/<TARGET_PID>/limits, cat /proc/<TARGET_PID>/status and ls -l /proc/<TARGET_PID>/fd.
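You can try the same /proc inspection commands anywhere before using them in a side-car; a quick sketch pointed at the current shell's own PID (in the side-car, substitute the target PID you saw in ps):

```shell
# Sketch: the /proc inspection commands above, run against our own shell (PID $$)
# so they work on any Linux box; replace TARGET_PID with the real target PID.
TARGET_PID=$$
tr '\0' ' ' < /proc/$TARGET_PID/cmdline; echo   # command line (NUL-separated in /proc)
ls -l /proc/$TARGET_PID/cwd                     # current working directory (a symlink)
head -n 3 /proc/$TARGET_PID/status              # process name, state, thread group id
ls /proc/$TARGET_PID/fd                         # open file descriptors
```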

MINIFYING COMMAND LINE TOOLS

Unless the default CMD instruction in your Dockerfile is sufficient you'll have to specify command line parameters when you execute the build command in Slim. This can be done with the --cmd option.

Other useful command line parameters include --entrypoint and --mount.

Note that the --entrypoint and --cmd options don't override the ENTRYPOINT and CMD instructions in the final minified image.

Here's a sample build command:

slim build --show-clogs=true --cmd docker-compose.yml --mount $(pwd)/data/:/data/ dslim/container-transform

It's used to minify the container-transform tool. You can get the minified image from Docker Hub.

QUICK SECCOMP EXAMPLE

If you want to auto-generate a Seccomp profile AND minify your image use the build command. If you only want to auto-generate a Seccomp profile (along with other interesting image metadata) use the profile command.

Step one: run Slim

slim build your-name/your-app

Step two: use the generated Seccomp profile

docker run --security-opt seccomp:<slim directory>/.images/<YOUR_APP_IMAGE_ID>/artifacts/your-name-your-app-seccomp.json <your other run params> your-name/your-app

Feel free to copy the generated profile :-)

You can use the generated Seccomp profile with your original image or with the minified image.
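For reference, a Seccomp profile is just a JSON document. The sketch below shows only the general shape of such a profile; it is NOT slim's actual output (the generated profiles allow the specific syscalls observed for your app, and the syscall list here is a made-up example):

```shell
# Write a minimal illustrative Seccomp profile and sanity-check that it's valid JSON.
cat > /tmp/sample-seccomp.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    { "names": ["read", "write", "exit", "exit_group"], "action": "SCMP_ACT_ALLOW" }
  ]
}
EOF
python3 -m json.tool /tmp/sample-seccomp.json > /dev/null && echo "valid JSON"
```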

USING AUTO-GENERATED SECCOMP PROFILES

You can use the generated profile with your original image or with the minified image Slim created:

docker run -it --rm --security-opt seccomp:path_to/my-sample-node-app-seccomp.json -p 8000:8000 my/sample-node-app.slim

ORIGINAL DEMO VIDEO


Demo video on YouTube

DEMO STEPS

The demo runs on Mac OS X, but you can build a Linux version. Note that these steps are different from the steps in the demo video.

  1. Get the Slim app binaries:

Unzip them and optionally add their directory to your PATH environment variable if you want to use the app from other locations.

The extracted directory contains two binaries (and now it also contains a symlink for the old name):

  2. Clone the examples repo to use the sample apps (note: the examples have been moved to a separate repo). You can skip this step if you have your own app.

git clone https://github.com/slimtoolkit/examples.git

  3. Create a Docker image for the sample node.js app in examples/node_ubuntu. You can skip this step if you have your own app.

cd examples/node_ubuntu

docker build -t my/sample-node-app .

  4. Run the Slim app:

./slim build my/sample-node-app (run it from the location where you extracted the Slim app binaries, or update your PATH env var to include their directory)

Slim creates a special container based on the target image you provided. It also creates a resource directory where it stores the information it discovers about your image: <slim directory>/.images/<TARGET_IMAGE_ID>.

By default, the Slim app will run its http probe against the temporary container. If you are minifying a command line tool that doesn't expose any web service interface you'll need to explicitly disable http probing (by setting --http-probe=false).

  5. Use curl (or other tools) to call the sample app (optional)

curl http://<YOUR_DOCKER_HOST_IP>:<PORT>

This is an optional step to make sure the target app container is doing something. For some applications it's required because they load new application resources dynamically based on the requests they process (e.g., Ruby or Python).

You'll see the mapped ports printed to the console when the Slim app starts the target container. You can also get the port number from the docker ps or docker port <CONTAINER_ID> commands. The current version of Slim doesn't let you map exposed network ports (it works like docker run … -P).

  6. Wait until the Slim app says it's done (press the enter key first if you picked the enter continue-after option)

By default, or when http probing is enabled explicitly, the Slim app continues once the http probe is done running. If you picked a different continue-after option, follow its expected steps; for example, with the enter continue-after option you must press the enter key on your keyboard.

If http probing is enabled (when http-probe is set), continue-after is set to enter, and you press the enter key before the built-in HTTP probe is done, the probe might produce an EOF error because the Slim app shuts down the target container before all probe commands finish executing. It's ok to ignore it unless you really need the probe to finish.

  7. Once Slim is done check that the new minified image is there

docker images

You should see my/sample-node-app.slim in the list of images. Right now all generated images have .slim at the end of their names.

  8. Use the minified image

docker run -it --rm --name="slim_node_app" -p 8000:8000 my/sample-node-app.slim

FAQ

Is it safe for production use?

Yes! Either way, you should test your Docker images.

How can I contribute if I don't know Go?

You don't need to read the language spec and lots of books :-) Go through the Tour of Go and optionally read 50 Shades of Go and you'll be ready to contribute!

What's the best application for Slim?

Slim will work for any containerized application; however, Slim automates app interactions for applications with an HTTP API. You can use Slim even if your app doesn't have an HTTP API. You'll need to interact with your application manually to make sure Slim can observe your application behavior.

Can I use Slim with dockerized command line tools?

Yes. The --cmd, --entrypoint, and --mount options will help you minify your image. The container-transform tool is a good example.

Notes:

You can explore the artifacts Slim generates when it's creating a slim image. You'll find those in <slim directory>/.images/<TARGET_IMAGE_ID>/artifacts. One of the artifacts is a "reverse engineered" Dockerfile for the original image. It'll be called Dockerfile.reversed.

If you don't want to create a minified image and only want to "reverse engineer" the Dockerfile you can use the info command.

What if my Docker image uses the USER command?

The current version of Slim does include support for non-default users (take a look at the non-default user examples in the examples repo, including the ElasticSearch example located in the 3rdparty directory). Please open tickets if something doesn't work for you.

Everything should work as-is, but for the special cases where the current behavior doesn't work as expected you can adjust what Slim does using various build command parameters: --run-target-as-user, --keep-perms, --path-perms, --path-perms-file (along with the --include-* parameters).

The --run-target-as-user parameter is enabled by default and it controls if the application in the temporary container is started using the identity from the USER instruction in the container's Dockerfile.

The --keep-perms parameter is also enabled by default. It tells Slim to retain the permissions and the ownership information for the files and directories copied to the optimized container image.

The --path-perms and --path-perms-file parameters are similar to the --include-path and --include-path-file parameters. They are used to overwrite the permission and the user/group information for the target files and directories. Note that the target files/directories are expected to be in the optimized container image. If you don't know if the target files/directories will be in the optimized container you'll need to use one of the --include-* parameters (e.g., --include-path-file) to explicitly require those artifacts to be included. You can specify the permissions and the ownership information in the --include-* parameters too (so you don't need to have the --path-* parameters just to set the permissions).

The --path-* and --include-* params use the same format to communicate the permission/ownership info: TARGET_PATH_OR_NAME:PERMS_IN_OCTAL_FORMAT#USER_ID#GROUP_ID.

You don't have to specify the user and group IDs if you don't want to change them.

Here's an example using these parameters to minify the standard nginx image adding extra artifacts and changing their permissions: slim build --include-path='/opt:770#104#107' --include-path='/bin/uname:710' --path-perms='/tmp:700' nginx.

This is what you'll see in the optimized container image:

drwx------  0 0      0           0 Feb 28 22:15 tmp/
-rwx--x---  0 0      0       31240 Mar 14  2015 bin/uname
drwxrwx---  0 104    107         0 Feb 28 22:13 opt/

The uname binary isn't used by nginx, so the --include-path parameter is used to keep it in the optimized image changing its permissions to 710.

The /tmp directory will be included in the optimized image on its own, so the --path-perms parameter is used to change its permissions to 700.

When you set permissions/user/group on a directory, the settings are applied only to that directory and not to the artifacts inside it. Future versions will allow you to apply the same settings to everything inside the target directory too.

Also note that for now you have to use numeric user and group IDs. Future versions will allow user and group names too.
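The format is easy to compose programmatically too. A tiny sketch that rebuilds the /opt value from the nginx example above (the path, permissions, and IDs come from that example):

```shell
# Compose a --include-path value in the TARGET:PERMS#UID#GID format described above.
target="/opt"; perms="770"; uid="104"; gid="107"
value="${target}:${perms}#${uid}#${gid}"
echo "--include-path=${value}"   # -> --include-path=/opt:770#104#107
```

Omit the `#UID#GID` suffix when you only want to change the permissions.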

Nginx fails in my minified image

If you see nginx: [emerg] mkdir() "/var/lib/nginx/body" failed, your nginx setup uses a non-standard temporary directory. Nginx fails if the base directory for its temporary folders doesn't exist (it won't create the missing intermediate directories). Normally it's /var/lib/nginx, but if you have a custom config that points elsewhere you'll need to add an --include-path flag when you run the Slim app.

Slim fails with a 'no permission to read from' error

This problem shouldn't happen anymore because the exported artifacts are saved in a tar file and the master app doesn't need to access the files directly anymore.

If you run older versions of Slim you can get around this problem by running Slim from a root shell. That way it will have access to all exported files.

Slim copies the relevant image artifacts trying to preserve their permissions. If the permissions are too restrictive, the master app might not have sufficient privilege to access these files when it's building the new minified image.

BUILD PROCESS

Build Options

Pick one of the build options that works best for you.

Containerized

Run make build_in_docker on Linux or make build_m1_in_docker on Macs (or ./scripts/docker-builder.run.sh, or click on ./scripts/mac/docker-builder.run.command on Macs) from the project directory. This builds Slim in a Docker container; great if you don't want to install Go on your local machine and you already have Docker.

Native

Run make build on Linux or make build_m1 on Macs (or ./scripts/src.build.sh, or click on ./scripts/mac/src.build.command on Macs) to build Slim natively (requires Go installed locally).

Note:

Try using the latest version of Go when building the Slim app. The current version of Go used to build it is 1.21.

Gitpod

If you have a web browser, you can get a fully pre-configured development environment in one click:

Open in Gitpod

Additional Tools

You can install these tools using the tools.get.sh shell script in the scripts directory.

Notes:

CONTRIBUTING

If the project sounds interesting or if you found a bug see CONTRIBUTING.md and submit a PR or open an issue! Non-code contributions including docs are highly appreciated! Open an issue even if you have a question or something is not clear.

CORE CONCEPTS

  1. Inspect container metadata (static analysis)
  2. Inspect container data (static analysis)
  3. Inspect running application (dynamic analysis)
  4. Build an application artifact graph
  5. Use the collected application data to build small images
  6. Use the collected application data to auto-generate various security framework configurations.

DYNAMIC ANALYSIS OPTIONS

  1. Instrument the container image (and replace the entrypoint/cmd) to collect application activity data
  2. Use kernel-level tools that provide visibility into running containers (without instrumenting the containers)
  3. Disable relevant namespaces in the target container to gain container visibility (can be done with runC)

SECURITY

The goal is to auto-generate Seccomp, AppArmor, (and potentially SELinux) profiles based on the collected information.

CHALLENGES

Some of the advanced analysis options require a number of Linux kernel features that are not always included. The kernel you get with Docker Machine / Boot2docker is a great example of that.

ORIGINS

DockerSlim was a Docker Global Hack Day #dockerhackday project. It barely worked at the time, but it got a win in Seattle and took second place in the Plumbing category overall :-)

DHD3

Since then it's been improved and it works pretty well for its core use cases. It can be better though, and that's why the project needs your help! You don't need to know much about container internals or container runtimes, and you don't need to know anything about Go. You can contribute in many different ways. For example, use Slim on your images and open GitHub issues documenting your experience, even if everything worked just fine :-)

LICENSE

Apache License v2, see LICENSE for details.

CODE OF CONDUCT

The project follows the CNCF Code of Conduct.


We are a Cloud Native Computing Foundation sandbox project.


Go Report Card