Closed: RothAndrew closed this issue 11 months ago
+1 for Wolfi
$ grype cgr.dev/chainguard/wolfi-base
✔ Vulnerability DB [no update available]
New version of grype is available: 0.62.3 (currently running: 0.61.1)
✔ Loaded image
✔ Parsed image
✔ Cataloged packages [27 packages]
✔ Scanning image... [0 vulnerabilities]
├── 0 critical, 0 high, 0 medium, 0 low, 0 negligible
└── 0 fixed
No vulnerabilities found
@RothAndrew, going to take a stab at this issue.
Created branch 16-base-image-update. Unable to test locally on my Mac as it's an M1.
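(If testing from the M1 becomes necessary, an emulated amd64 build might be an option; a rough sketch, assuming Docker Desktop's buildx and QEMU emulation are available:)
$ docker buildx build --platform linux/amd64 -t test-build-harness --load .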
$ docker build -t test-build-harness .
[+] Building 0.5s (6/12)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 3.04kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for cgr.dev/chainguard/wolfi-base:latest 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 416B 0.0s
=> [1/8] FROM cgr.dev/chainguard/wolfi-base:latest 0.0s
=> ERROR [2/8] RUN ARCH_STRING=$(uname -m) && if [ "$ARCH_STRING" = "x86_64" ]; then SSM_PLUGIN_URL="https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/ 0.4s
------
> [2/8] RUN ARCH_STRING=$(uname -m) && if [ "$ARCH_STRING" = "x86_64" ]; then SSM_PLUGIN_URL="https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm"; elif [ "$ARCH_STRING" = "aarch64" ]; then SSM_PLUGIN_URL="https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_arm64/session-manager-plugin.rpm"; fi && dnf install -y --refresh bind-utils bzip2 bzip2-devel findutils gcc gcc-c++ gettext git iptables-nft jq libffi-devel libxslt-devel make nc ncurses-devel openldap-clients openssl-devel perl-Digest-SHA procps-ng python3-pip readline-devel sqlite-devel sshpass unzip wget which xz "${SSM_PLUGIN_URL}" && dnf clean all && rm -rf /var/cache/yum/:
#5 0.307 runc run failed: unable to start container process: exec: "/bin/bash": stat /bin/bash: no such file or directory
------
executor failed running [/bin/bash -euxo pipefail -c ARCH_STRING=$(uname -m) && if [ "$ARCH_STRING" = "x86_64" ]; then SSM_PLUGIN_URL="https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm"; elif [ "$ARCH_STRING" = "aarch64" ]; then SSM_PLUGIN_URL="https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_arm64/session-manager-plugin.rpm"; fi && dnf install -y --refresh bind-utils bzip2 bzip2-devel findutils gcc gcc-c++ gettext git iptables-nft jq libffi-devel libxslt-devel make nc ncurses-devel openldap-clients openssl-devel perl-Digest-SHA procps-ng python3-pip readline-devel sqlite-devel sshpass unzip wget which xz "${SSM_PLUGIN_URL}" && dnf clean all && rm -rf /var/cache/yum/]: exit code: 1
Will attempt to build on a Linux machine at home before creating a PR.
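For reference, the stat /bin/bash failure above is consistent with the Dockerfile setting SHELL to bash while wolfi-base ships only busybox /bin/sh (and apk rather than dnf). A minimal sketch of one possible ordering, assuming we keep a bash-based SHELL directive:
FROM cgr.dev/chainguard/wolfi-base:latest
# install bash with the default /bin/sh before any bash-based SHELL directive takes effect
RUN apk add --no-cache bash
SHELL ["/bin/bash", "-euxo", "pipefail", "-c"]
# later RUN steps can then use bash; packages would come from apk instead of dnf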
@RothAndrew, a quirk I found is that Wolfi only supports latest as a public image tag. I'd be fine being wrong about this, but this and this seem to confirm it. We'll have to think about how often to refresh the build-harness image. I don't think Renovate will be able to help us here.
This site lays it out pretty clearly; access to other tags is a paid-for item:
Images Catalogs and Tags
The Public Chainguard Images Catalog is available at no cost to users, and does not require authentication. It gives access to the latest and latest-dev tags of our public images. Other versions and tags are available through subscription to our paid catalogs, featuring enterprise-grade patching SLAs and customer support.
To learn more about our image catalogs and the difference between tiers, check our Images FAQ page about catalog tiers.
Given that this image is intended to be used in build/test only (not prod) and it pulls everything fresh without caching whenever it builds, I'm okay with using the latest tag here. Because of the frequent updates we make to everything else, the base image will get updated frequently even if Renovate isn't doing it explicitly.
@blancharda what are your thoughts?
I agree with @RothAndrew... since Renovate would have been updating it nightly anyway, I don't think this is a huge issue. The only potential problem I see is an inability to roll back to a previous versioned tag if we run into an issue.
Hypothetically, we could mirror it somewhere and keep our own tags... but I don't think we have anything set up to do that, and it feels like overkill for the moment.
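If we ever did want that, the mechanics would be simple; a rough sketch (the ghcr.io destination path and date tag below are hypothetical):
$ docker pull cgr.dev/chainguard/wolfi-base:latest
$ docker tag cgr.dev/chainguard/wolfi-base:latest ghcr.io/defenseunicorns/build-harness/wolfi-base:2023-07-05
$ docker push ghcr.io/defenseunicorns/build-harness/wolfi-base:2023-07-05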
Users of Build Harness will always be able to roll back to a previous version of Build Harness. I think if we do this we are trusting that the latest tag of Wolfi is stable enough that there won't be issues going from one version to the next. If that is not the case, we should likely not do it.
This is the same concept as many of our dnf packages that don't specify a version. We trust that the dnf package maintainers keep their packages in a stable enough state that we can just always take the latest version.
:face_palm: That's a very fair point lol
I have no issues with this.
@TheFutonEng are you ready to put in the PR for this?
@RothAndrew, not yet. Hoping to spend some cycles early tomorrow morning working through the build.
It looks like asdf is expecting bash, which is not present in wolfi-base. Will dig further.
$ docker run -it --rm cgr.dev/chainguard/wolfi-base /bin/sh -l
Unable to find image 'cgr.dev/chainguard/wolfi-base:latest' locally
latest: Pulling from chainguard/wolfi-base
7d3b21ecf4b6: Pull complete
Digest: sha256:5c15a6e5c0bf02e6c0eaa939cb543c41d7725453064c920b9a4faeea7c357506
Status: Downloaded newer image for cgr.dev/chainguard/wolfi-base:latest
18a40766a36d:/# apk add git
fetch https://packages.wolfi.dev/os/x86_64/APKINDEX.tar.gz
(1/7) Installing libbrotlicommon1 (1.0.9-r3)
(2/7) Installing libbrotlidec1 (1.0.9-r3)
(3/7) Installing libnghttp2-14 (1.54.0-r0)
(4/7) Installing libcurl-openssl4 (8.1.2-r0)
(5/7) Installing expat (2.5.0-r3)
(6/7) Installing libpcre2-8-0 (10.42-r2)
(7/7) Installing git (2.41.0-r0)
OK: 29 MiB in 20 packages
18a40766a36d:/# apk add curl
(1/1) Installing curl (8.1.2-r0)
OK: 29 MiB in 21 packages
18a40766a36d:/# git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.12.0
Cloning into '/root/.asdf'...
remote: Enumerating objects: 8446, done.
remote: Counting objects: 100% (359/359), done.
remote: Compressing objects: 100% (244/244), done.
remote: Total 8446 (delta 129), reused 283 (delta 109), pack-reused 8087
Receiving objects: 100% (8446/8446), 2.80 MiB | 8.56 MiB/s, done.
Resolving deltas: 100% (4971/4971), done.
Note: switching to '816195d615427b033a7426a4fb4d7fac4cf2d791'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
Turn off this advice by setting config variable advice.detachedHead to false
18a40766a36d:/# chmod +x /root/.asdf/asdf.sh
18a40766a36d:/# export ASDF_DIR=/root/.asdf/
18a40766a36d:/# . "$HOME/.asdf/asdf.sh"
18a40766a36d:/# . "$HOME/.asdf/completions/asdf.bash"
/bin/sh: /root/.asdf/completions/asdf.bash: line 24: syntax error: unexpected "(" (expecting "}")
18a40766a36d:/# echo $PATH
/root/.asdf/shims:/root/.asdf//bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
18a40766a36d:/# PATH=/root/.asdf/shims:/root/.asdf/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
18a40766a36d:/# asdf version
env: can't execute 'bash': No such file or directory
18a40766a36d:/#
Installing bash via apk seems to be an option to complete this build, but that feels like it negates the point of using Wolfi. Will build an image and see how many vulnerabilities it has compared to the previous image.
FYI @blancharda, @RothAndrew
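A quick check along these lines should confirm asdf works once bash is present (a sketch, assuming bash from apk and the same asdf v0.12.0 clone as in the session above):
apk add bash
bash -c '. "$HOME/.asdf/asdf.sh" && asdf version'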
Docker build was successful using the Dockerfile in the 16-base-image-update branch.
$ docker build -t test-build-harness .
[+] Building 128.6s (13/13) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 3.14kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for cgr.dev/chainguard/wolfi-base:latest 0.0s
=> CACHED [1/8] FROM cgr.dev/chainguard/wolfi-base:latest 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 36B 0.0s
=> [2/8] RUN apk add bash git curl py3-pip bzip2 findutils gcc 15.1s
=> [3/8] RUN git clone https://github.com/asdf-vm/asdf.git --branch v0.12.0 --depth 1 "${HOME}/. 1.2s
=> [4/8] COPY .tool-versions /root/.tool-versions 0.0s
=> [5/8] RUN asdf plugin add zarf https://github.com/defenseunicorns/asdf-zarf.git 1.0s
=> [6/8] RUN cat /root/.tool-versions | cut -d' ' -f1 | grep "^[^\#]" | grep -v "zarf" | xargs 11.8s
=> [7/8] RUN asdf install 69.3s
=> [8/8] RUN pip install --force-reinstall -v "sshuttle==1.1.1" 1.6s
=> exporting to image 28.5s
=> => exporting layers 28.5s
=> => writing image sha256:6e9da4bfafcdd6204f47549a7e5fa19c266360b363bc297947234ca65516765e 0.0s
=> => naming to docker.io/library/test-build-harness
Was not able to get the Session Manager plugin installed properly. But as is, the image has 91 vulnerabilities.
$ grype test-build-harness
✔ Vulnerability DB [updated]
✔ Loaded image
✔ Parsed image
✔ Cataloged packages [2421 packages]
✔ Scanning image... [91 vulnerabilities]
├── 1 critical, 54 high, 36 medium, 0 low, 0 negligible
└── 42 fixed
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
docker 6.1.3 python CVE-2018-10892 Medium
docker 6.1.3 python CVE-2019-13139 High
docker 6.1.3 python CVE-2019-13509 High
docker 6.1.3 python CVE-2019-16884 High
docker 6.1.3 python CVE-2019-5736 High
docker 6.1.3 python CVE-2020-27534 Medium
docker 6.1.3 python CVE-2021-21284 Medium
docker 6.1.3 python CVE-2021-21285 Medium
github.com/cloudflare/circl v1.1.0 go-module CVE-2023-1732 High
github.com/cloudflare/circl v1.1.0 1.3.3 go-module GHSA-2q89-485c-9j2x Medium
github.com/cloudflare/circl v1.3.2 go-module CVE-2023-1732 High
github.com/cloudflare/circl v1.3.2 1.3.3 go-module GHSA-2q89-485c-9j2x Medium
github.com/docker/distribution v2.8.1+incompatible 2.8.2-beta.1 go-module GHSA-hqxw-f8mx-cpmw High
github.com/docker/docker v20.10.20+incompatible 20.10.24 go-module GHSA-232p-vwff-86mp High
github.com/docker/docker v20.10.20+incompatible 20.10.24 go-module GHSA-33pg-m6jh-5237 Medium
github.com/docker/docker v20.10.20+incompatible 20.10.24 go-module GHSA-6wrf-mxfj-pf5p Medium
github.com/hashicorp/go-getter v1.6.2 go-module CVE-2023-0475 Medium
github.com/hashicorp/go-getter v1.6.2 1.7.0 go-module GHSA-jpxj-2jvg-6jv9 Medium
github.com/hashicorp/terraform v1.5.1 go-module CVE-2018-9057 Critical
github.com/hashicorp/terraform v1.5.1 go-module CVE-2021-36230 High
github.com/sigstore/rekor v0.12.1-0.20220915152154-4bb6f441c1b2 1.1.1 go-module GHSA-2h5h-59f5-c5x9 High
github.com/sigstore/rekor v0.12.1-0.20220915152154-4bb6f441c1b2 1.2.0 go-module GHSA-frqx-jfcm-6jjr Medium
go 1.20.5 binary CVE-2020-29509 Medium
go 1.20.5 binary CVE-2020-29511 Medium
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 0.0.0-20201216223049-8b5274cf687f go-module GHSA-3vm4-22fp-5rfm High
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 0.0.0-20211202192323-5770296d904e go-module GHSA-gwc9-m7rh-j2ww High
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 0.0.0-20220314234659-1baeb1ce4c0b go-module GHSA-8c26-wmh5-6g9v High
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4 0.0.0-20210428140749-89ef3d95e781 go-module GHSA-h86h-8ppg-mxmh Medium
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4 0.0.0-20210520170846-37e1c6afe023 go-module GHSA-83g2-8m93-v3w7 High
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4 0.0.0-20220906165146-f3363e06e74c go-module GHSA-69cg-p879-7622 High
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4 0.7.0 go-module GHSA-vvpx-j8f3-3w6h High
golang.org/x/net v0.0.0-20220420153159-1850ba15e1be 0.0.0-20220906165146-f3363e06e74c go-module GHSA-69cg-p879-7622 High
golang.org/x/net v0.0.0-20220420153159-1850ba15e1be 0.7.0 go-module GHSA-vvpx-j8f3-3w6h High
golang.org/x/net v0.0.0-20220909164309-bea034e7d591 0.1.1-0.20221104162952-702349b0e862 go-module GHSA-fxg5-wq6x-vr4w High
golang.org/x/net v0.0.0-20220909164309-bea034e7d591 0.7.0 go-module GHSA-vvpx-j8f3-3w6h High
golang.org/x/sys v0.0.0-20210510120138-977fb7262007 0.0.0-20220412211240-33da011f77ad go-module GHSA-p782-xgp4-8hr8 Medium
golang.org/x/text v0.3.5 0.3.7 go-module GHSA-ppp9-7jff-5vj2 High
golang.org/x/text v0.3.5 0.3.8 go-module GHSA-69ch-w2m2-3vjp High
golang.org/x/text v0.3.7 0.3.8 go-module GHSA-69ch-w2m2-3vjp High
pip 23.1.2 python CVE-2018-20225 High
Just for grins, I saved a tar of the image locally and reran grype against it to make sure the data was the same; it is.
$ docker save -o test-build-harness.tar test-build-harness
$ grype docker-archive:test-build-harness.tar
✔ Vulnerability DB [no update available]
✔ Parsed image
✔ Cataloged packages [2421 packages]
✔ Scanning image... [91 vulnerabilities]
├── 1 critical, 54 high, 36 medium, 0 low, 0 negligible
└── 42 fixed
<<OMITTED>>
The above is better in terms of the raw number of vulnerabilities compared to the current build-harness image:
[rmengert@oakridge:~/projects/build-harness]
$ grype ghcr.io/defenseunicorns/build-harness/build-harness:1.4.1
✔ Vulnerability DB [no update available]
✔ Pulled image
✔ Loaded image
✔ Parsed image
✔ Cataloged packages [2597 packages]
✔ Scanning image... [470 vulnerabilities]
├── 1 critical, 70 high, 244 medium, 151 low, 0 negligible (4 unknown)
└── 45 fixed
@RothAndrew, I'll spend a little more time trying to get Session Manager integrated and finding alternatives for the packages that are commented out in the Dockerfile before submitting a PR.
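One possible (untested) route for the Session Manager plugin on an apk-based image is to pull AWS's Ubuntu .deb and extract the binary rather than using the rpm, roughly like this (assumes binutils and tar are installed; the install path inside the package is my assumption from AWS's manual-install docs):
$ curl -LO https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb
$ ar x session-manager-plugin.deb && tar -xf data.tar.*   # data archive may be .gz or .xz
$ install -m 0755 usr/local/sessionmanagerplugin/bin/session-manager-plugin /usr/local/bin/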
Prefacing by saying I'm all for this change if we can get the same level of functionality and have fewer CVEs, but:
Is the juice worth the squeeze? In my experience only criticals and highs have been of enough concern to warrant addressing immediately, so we aren't really talking about going from 470 vulns to 91 vulns; we are talking about going from:
1 critical, 70 high
to
1 critical, 54 high
and that's before we have parity in the packages that are installed (the Wolfi branch currently has 19 of the packages commented out).
Given that this image is used in an ephemeral manner, and is not intended to ever be long-lived, I wonder whether it is worth the effort to make this change. We should perhaps first focus on ways to reduce the number of CVEs in the existing Rocky-based image.
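(Tangent: for comparisons like this it can help to have grype report or gate on severity directly; a small sketch, assuming a reasonably recent grype and jq:)
$ grype test-build-harness --fail-on high        # non-zero exit if anything at or above high severity is found
$ grype test-build-harness -o json | jq '[.matches[] | select(.vulnerability.severity == "Critical" or .vulnerability.severity == "High")] | length'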
@RothAndrew, for the packages that are commented out, I found analogous packages in the Wolfi APK repo for all of them except procps-ng. The updated image has two more mediums:
docker build
$ docker build -t test-build-harness:v0.1.0 .
[+] Building 146.5s (13/13) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 4.05kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for cgr.dev/chainguard/wolfi-base:latest 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 36B 0.0s
=> CACHED [1/8] FROM cgr.dev/chainguard/wolfi-base:latest 0.0s
=> [2/8] RUN apk add bash git curl py3-pip bind-tools bzip2 bzip2-dev findutils gcc libstdc++ gettext iptables jq libffi-dev libxslt-dev make 26.1s
=> [3/8] RUN git clone https://github.com/asdf-vm/asdf.git --branch v0.12.0 --depth 1 "${HOME}/.asdf" && echo -e '\nsource $HOME/.asdf/asdf.sh' >> "${HOME}/.bashrc" && echo -e '\nsource $HOME/.asdf/asdf. 1.1s
=> [4/8] COPY .tool-versions /root/.tool-versions 0.0s
=> [5/8] RUN asdf plugin add zarf https://github.com/defenseunicorns/asdf-zarf.git 1.1s
=> [6/8] RUN cat /root/.tool-versions | cut -d' ' -f1 | grep "^[^\#]" | grep -v "zarf" | xargs -i asdf plugin add {} 10.8s
=> [7/8] RUN asdf install 76.4s
=> [8/8] RUN pip install --force-reinstall -v "sshuttle==1.1.1" 1.6s
=> exporting to image 29.3s
=> => exporting layers 29.3s
=> => writing image sha256:b6966c9fbe2ed2696fcf5c02a72eb7673e3fb83b0ab577edbdc09ad648ba7884 0.0s
=> => naming to docker.io/library/test-build-harness:v0.1.0
docker save and grype
$ docker save -o test-build-harness-v0.1.0.tar test-build-harness:v0.1.0
$ grype docker-archive:test-build-harness-v0.1.0.tar
✔ Vulnerability DB [updated]
New version of grype is available: 0.63.1 (currently running: 0.63.0)
✔ Parsed image
✔ Cataloged packages [2487 packages]
✔ Scanning image... [93 vulnerabilities]
├── 1 critical, 54 high, 38 medium, 0 low, 0 negligible
└── 42 fixed
<<OMITTED>>
Given the number of criticals and highs, I agree that perhaps we should focus on reducing vulnerabilities in the Rocky image rather than pivoting to Wolfi.
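If we do pursue hardening the Rocky image, one low-effort first step might be upgrading packaged dependencies at build time; a sketch, assuming the existing dnf-based Dockerfile:
RUN dnf upgrade -y --refresh && dnf clean all && rm -rf /var/cache/yum/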
@RothAndrew, I'm going to close this issue and the draft PR since Wolfi didn't significantly reduce the number of vulnerabilities.
As a user of Build Harness who works in a secure environment, I want BH to have fewer vulnerabilities than it has now, so that my environment may be more secure.
As a user of Build Harness who works in a regulated environment, I want vulnerability scanners like Grype to report that Build Harness has fewer CVEs than it does now, so that I can use it without having to justify so many vulnerabilities in my compliance paperwork.
Notes: