
CVE-2023-5363 and CVE-2023-5528 in 1.22.28 #620

Closed brecht82 closed 6 months ago

brecht82 commented 10 months ago

Our Sysdig Secure scanner complains about vulnerabilities in the newest k8s-dns-node-cache image: CVE-2023-5363 and CVE-2023-5528. Can someone bump these components' versions?


danielhartnell commented 8 months ago

We are also finding further vulnerabilities in our security scans. If they could be patched as well, that would be appreciated.

golibali commented 7 months ago

CVE-2023-5363 also appears in the 1.23.0 release. As far as I can see, a fix has already been merged; what is the proposed date for the next release? Thank you!

ricbartm commented 6 months ago

Reusing this issue because, even though this report is for version 1.23.0, it's the same issue.

    Name: libc6, Version: 2.36-9+deb12u3
        CVE-2023-6246, Severity: HIGH, Source: https://security-tracker.debian.org/tracker/CVE-2023-6246
            CVSS score: 7.8, CVSS exploitability score: 1.8
            🩹 Fixed version: 2.36-9+deb12u4
            💥 Has public exploit
        CVE-2023-6779, Severity: HIGH, Source: https://security-tracker.debian.org/tracker/CVE-2023-6779
            CVSS score: 7.5, CVSS exploitability score: 3.9
            🩹 Fixed version: 2.36-9+deb12u4

    Name: libssl3, Version: 3.0.11-1~deb12u1
        CVE-2023-5363, Severity: HIGH, Source: https://security-tracker.debian.org/tracker/CVE-2023-5363
            CVSS score: 7.5, CVSS exploitability score: 3.9
            🩹 Fixed version: 3.0.11-1~deb12u2

When a new version is released, we are neither using the latest base image nor running apt-get dist-upgrade. I assume that because the image is distroless, apt-get/dpkg are not available inside it, so we need to update the base image SHA whenever new base images are released. That way, whenever this repository publishes a new release, the container image is built on top of the latest distroless base image, which presumably carries the latest security patches. As of today we are building on top of a base image released on Oct 31st, 2023.
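
For illustration only (this is a sketch, not the repository's actual Makefile, and the digests are made-up placeholders), a digest pin in a Makefile looks something like this:

    # Sketch of digest-pinned base images in a Makefile. Immutable and
    # reproducible, but they must be bumped by hand whenever a rebuilt
    # base image with security patches is published.
    BASEIMAGE ?= gcr.io/distroless/base-debian11@sha256:<placeholder-digest>
    IPTIMAGE ?= registry.k8s.io/build-image/distroless-iptables@sha256:<placeholder-digest>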

@DamianSawicki @irLinja Tagging you here because I saw your last commit upgrading dnsmasq to version 2.90 and also bumping the Debian BASEIMAGE, even though it looks like the node-local-cache image uses the IPTIMAGE instead.

How can we make sure every new release uses the latest version of gcr.io/distroless/base-debian11 and registry.k8s.io/build-image/distroless-iptables respectively, without the engineer in charge of cutting a release having to remember to do it manually?

I see we use a Dependabot config here, but because the container image SHA is in the Makefile rather than in the Dockerfile, Dependabot is not catching it and therefore not updating it.
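
For context (this is my understanding of Dependabot, not something I've verified against this repo), the docker package ecosystem only parses FROM lines in Dockerfiles, so a config like the sketch below would never see a digest kept in a Makefile variable:

    # Illustrative .github/dependabot.yml. The "docker" ecosystem scans
    # Dockerfiles for FROM lines; it does not parse Makefiles, so a digest
    # stored there is invisible to it.
    version: 2
    updates:
      - package-ecosystem: "docker"
        directory: "/"
        schedule:
          interval: "weekly"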

I'm willing to spend some time on this, but I'd appreciate some feedback from the upstream maintainers before getting hands-on.

aojea commented 6 months ago

> How can we make sure every new release uses the latest version of gcr.io/distroless/base-debian11 and registry.k8s.io/build-image/distroless-iptables respectively, without the engineer in charge of cutting a release having to remember to do it manually?
>
> I see we use a Dependabot config here, but because the container image SHA is in the Makefile rather than in the Dockerfile, Dependabot is not catching it and therefore not updating it.

@saschagrunert @cpanato @BenTheElder do we have some kind of automation for this?

irLinja commented 6 months ago

Using Dependabot makes sense. We can manually update the image hash, or remove it and use the latest tag (I know it's not the best solution), and then find a proper solution to automate this. I'm willing to work on the automation as well.

BenTheElder commented 6 months ago

> How can we make sure every new release uses the latest version of gcr.io/distroless/base-debian11 and registry.k8s.io/build-image/distroless-iptables respectively, without the engineer in charge of cutting a release having to remember to do it manually?

This shouldn't wait for releases; the project maintainers should be updating these regularly.

I don't know if Dependabot can support our images.

> We can manually update the image hash, or remove it and use the latest tag (I know it's not the best solution), and then find a proper solution to automate this.

There is no :latest tag for registry.k8s.io images; tags are immutable. If there is a :latest tag, it is an old mistake and is not accurate. If you mean "latest" as in "most recent", then yes, using the most recent tag is reasonable, but you should really prefer to pin the digest for security reasons (this protects against the registry being compromised).
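
Concretely, resolving and pinning a digest looks something like this (the tag is a placeholder; substitute a real one):

    # Resolve the immutable digest behind a tag once...
    $ docker pull registry.k8s.io/build-image/distroless-iptables:<tag>
    $ docker inspect --format '{{index .RepoDigests 0}}' \
        registry.k8s.io/build-image/distroless-iptables:<tag>
    registry.k8s.io/build-image/distroless-iptables@sha256:...
    # ...then reference the image by that digest in the build, so the
    # reference cannot be silently repointed even if the registry is compromised.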

DamianSawicki commented 6 months ago

There was this suggestion https://github.com/kubernetes/dns/pull/610 from @jingyuanliang to use latest for BASEIMAGE (gcr.io), but it hasn't been merged so far. Of course, this solution comes with the limitations and issues mentioned by @BenTheElder, so we should consider the pros and cons and decide.

> How can we make sure every new release uses the latest version of gcr.io/distroless/base-debian11 and registry.k8s.io/build-image/distroless-iptables respectively, without the engineer in charge of cutting a release having to remember to do it manually?

Not a perfect solution, but for a start, we can add a point about bumping base images to the release process instructions in the README.

sakshisharma84 commented 6 months ago

Guys, thanks for taking up this discussion! As mentioned in this comment, our scanners are now reporting the critical vulnerability CVE-2024-2961 for the package libc6 2.36-9+deb12u3 in both node-cache versions 1.22.28 and 1.23.0.

We would really appreciate it if the IPTIMAGE could be bumped to the latest version. Are there any upcoming plans for this?

irLinja commented 6 months ago

Is there a reason to keep the base image on Debian 11? Debian 12 is the stable version, released almost a year ago.

aojea commented 6 months ago

These are community images; everybody is welcome to send a PR.

irLinja commented 6 months ago

I created #629 to bump the images to the latest available as a temporary measure. I looked into Dependabot configurations, and unlike Renovate, Dependabot is incapable of generic package upgrades.
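
For the record, Renovate can be taught to track a digest inside a Makefile via a custom regex manager. An untested sketch (written as renovate.json5 so comments are allowed; the pattern would need adapting to the actual Makefile syntax):

    // Illustrative renovate.json5 fragment: a regex manager that treats a
    // Makefile digest as a docker dependency. The capture groups and pattern
    // are a sketch, not tested against this repository.
    {
      customManagers: [
        {
          customType: "regex",
          fileMatch: ["^Makefile$"],
          matchStrings: [
            "BASEIMAGE \\?= (?<depName>[^@\\s]+)@(?<currentDigest>sha256:[a-f0-9]+)",
          ],
          currentValueTemplate: "latest",
          datasourceTemplate: "docker",
        },
      ],
    }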

sakshisharma84 commented 6 months ago

Thanks for PR #629. When can we expect a new release with these changes?

irLinja commented 6 months ago

> There was this suggestion #610 from @jingyuanliang to use latest for BASEIMAGE (gcr.io), but it hasn't been merged so far.

@BenTheElder any objections or considerations on using debian:latest instead of manual bumping? Looking at #610, it was ready to merge with no objections.

BenTheElder commented 6 months ago

> @BenTheElder any objections or considerations on using debian:latest instead of manual bumping?

Normally our base images are hosted at registry.k8s.io, which doesn't have this, and we encourage image pinning (digests) for security and reproducibility reasons. In core Kubernetes, for example, we avoid having releases depend directly on third-party hosts.

irLinja commented 6 months ago

This issue can be closed then. Vulnerability fixes will be shipped with the next release.

aojea commented 6 months ago

> This issue can be closed then. Vulnerability fixes will be shipped with the next release.

/close

Thanks

k8s-ci-robot commented 6 months ago

@aojea: Closing this issue.

In response to [this](https://github.com/kubernetes/dns/issues/620#issuecomment-2106376543):

> This issue can be closed then. Vulnerability fixes will be shipped with the next release.
>
> /close
>
> Thanks

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.
golibali commented 6 months ago

@irLinja Thanks for the fix. Do we know when the next release/image containing this fix will be cut?

DamianSawicki commented 5 months ago

> @irLinja Thanks for the fix. Do we know when the next release/image containing this fix will be cut?

@golibali I've just cut a new release: https://github.com/kubernetes/dns/releases/tag/1.23.1.

golibali commented 5 months ago

@DamianSawicki Awesome, thanks! One more thing: what is the process for creating a Docker image from this release?

    $> docker pull registry.k8s.io/dns/k8s-dns-node-cache:1.23.1
    Error response from daemon: manifest for registry.k8s.io/dns/k8s-dns-node-cache:1.23.1 not found: manifest unknown: Failed to fetch "1.23.1"

DamianSawicki commented 5 months ago

> @DamianSawicki Awesome, thanks! One more thing: what is the process for creating a Docker image from this release?
>
>     $> docker pull registry.k8s.io/dns/k8s-dns-node-cache:1.23.1
>     Error response from daemon: manifest for registry.k8s.io/dns/k8s-dns-node-cache:1.23.1 not found: manifest unknown: Failed to fetch "1.23.1"

@golibali Thanks for flagging this! Currently, the images are in gcr.io/k8s-staging-dns, and they will be promoted to registry.k8s.io/dns after https://github.com/kubernetes/k8s.io/pull/6861 is merged.
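
In the meantime, pulling directly from staging should presumably work (staging images are not intended for production use):

    $> docker pull gcr.io/k8s-staging-dns/k8s-dns-node-cache:1.23.1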

golibali commented 5 months ago

> > @DamianSawicki Awesome, thanks! One more thing: what is the process for creating a Docker image from this release?
> >
> >     $> docker pull registry.k8s.io/dns/k8s-dns-node-cache:1.23.1
> >     Error response from daemon: manifest for registry.k8s.io/dns/k8s-dns-node-cache:1.23.1 not found: manifest unknown: Failed to fetch "1.23.1"
>
> @golibali Thanks for flagging this! Currently, the images are in gcr.io/k8s-staging-dns, and they will be promoted to registry.k8s.io/dns after kubernetes/k8s.io#6861 is merged.

Thanks for the info!

DamianSawicki commented 5 months ago

@golibali The images are already in registry.k8s.io (in the end, https://github.com/kubernetes/k8s.io/pull/6861 was closed as a duplicate of an identical but earlier PR).

golibali commented 5 months ago

> @golibali The images are already in registry.k8s.io (in the end, kubernetes/k8s.io#6861 was closed as a duplicate of an identical but earlier PR).

Yes, thanks @DamianSawicki for the work!