carlossg / docker-maven

Official Docker image with Maven
Apache License 2.0

Authenticate binaries before installation: #304

Closed: jerome-benoit closed 1 year ago

jerome-benoit commented 1 year ago

All images should authenticate the Maven binaries.

And:

Signed-off-by: Jérôme Benoit jerome.benoit@sap.com

carlossg commented 1 year ago

before you spend time adding more changes, what's the goal?

set -e is not needed

Downloading the gpg keys at build time is less secure than the checksum: if apache.org gets compromised, the attacker can also change the KEYS file

Adding gpg to the image and removing it in separate layers increases the size of the image

jerome-benoit commented 1 year ago

before you spend time adding more changes, what's the goal?

Authenticate the Maven binary with asymmetric crypto. It's a security requirement for most (serious) companies when it comes to build systems. An integrity check is not enough; it only guarantees non-corruption. It's the same reason that made all distro package managers use asymmetric crypto: authenticating the source of the binaries.

set -e is not needed

It's needed for smoke testing, and explicitly setting the desired shell behaviour makes the code more readable and understandable, and avoids surprises with the default shell settings of a given distro.
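For illustration only (not the exact code from this PR), this is the kind of explicit shell setup meant here, including the mvn --version smoke test:

```dockerfile
# Illustration only: make the shell behaviour explicit instead of relying on
# the distro's /bin/sh defaults.
#   -e  abort the RUN as soon as any command fails
#   -u  treat unset variables as errors
#   -x  echo each command into the build log
RUN set -eux; \
    mvn --version
```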

Downloading the gpg keys at the same time when doing the build is less secure than the checksum as if apache.org gets compromised they can also change the KEYS file

You're right. Relying on a list of key IDs and fetching them from well-known key servers is the secure way to do it.
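As a sketch of that approach (the key ID below is a placeholder, not the real Apache Maven release key, the URLs are simplified, and it assumes curl and gnupg are available in the build environment):

```dockerfile
# Sketch only: MAVEN_RELEASE_KEY is a placeholder key ID.
ARG MAVEN_VERSION=3.9.6
ARG MAVEN_RELEASE_KEY=0123456789ABCDEF0123456789ABCDEF01234567
RUN set -eux; \
    curl -fsSL -o /tmp/maven.tar.gz \
      "https://downloads.apache.org/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz"; \
    curl -fsSL -o /tmp/maven.tar.gz.asc \
      "https://downloads.apache.org/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz.asc"; \
    export GNUPGHOME="$(mktemp -d)"; \
    # fetch only the pinned release key instead of the whole KEYS file
    gpg --batch --keyserver hkps://keyserver.ubuntu.com --recv-keys "${MAVEN_RELEASE_KEY}"; \
    gpg --batch --verify /tmp/maven.tar.gz.asc /tmp/maven.tar.gz
```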

Adding gpg to the image and removing it in separate layers increases the size of the image

I do not think so. I've carefully used package manager options to ensure only the needed packages and their runtime dependencies are installed, and then purged them entirely, runtime dependencies included (binaries, configuration, etc.). The issue I see is with the purging: it was not done before in all Dockerfiles, and gpg may already have been installed by default (except for the slim variants) and required, but is now removed.

carlossg commented 1 year ago

It's needed for smoke testing

Why? If mvn --version fails, the build fails even without set -e.

There are already tests under the tests directory.

I've carefully used package manager options to ensure only the needed packages and their runtime dependencies are installed, and then purged them entirely, runtime dependencies included

When you remove the packages in a separate RUN it doesn't reduce the image size, as the removal happens in a new image layer and the previous one is not modified.
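For example, in a Debian-based variant the whole install / verify / purge cycle has to live in a single RUN for the purge to actually shrink anything (a sketch, with the verification steps elided):

```dockerfile
# gnupg is installed, used and purged inside the same layer, so its files
# never get committed to the image.
RUN set -eux; \
    apt-get update; \
    apt-get install -y --no-install-recommends gnupg dirmngr; \
    # ... download the Maven tarball and verify its signature here ...
    apt-get purge -y --auto-remove gnupg dirmngr; \
    rm -rf /var/lib/apt/lists/*
```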

Changing the ordering to reuse the env vars and the download URL looks good to me.

carlossg commented 1 year ago

The gpg validation could be done with a multi-stage build so that gpg doesn't end up in the resulting image.

the checksum could be something to keep anyway to ensure the package hasn't changed
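Roughly this shape (base images, stage name and build args are placeholders; the published SHA-512 would still be pinned, here passed as a build arg):

```dockerfile
# Verification stage: curl and gnupg live only here.
FROM debian:bookworm-slim AS maven-fetch
ARG MAVEN_VERSION=3.9.6
# The published checksum is passed at build time, e.g.
#   --build-arg MAVEN_SHA512=<sha512 published on apache.org>
ARG MAVEN_SHA512
RUN set -eux; \
    apt-get update; \
    apt-get install -y --no-install-recommends ca-certificates curl gnupg; \
    curl -fsSL -o /tmp/maven.tar.gz \
      "https://downloads.apache.org/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz"; \
    echo "${MAVEN_SHA512}  /tmp/maven.tar.gz" | sha512sum -c -; \
    # ... gpg --verify against the pinned release key, as in the sketch above ...
    mkdir -p /opt/maven; \
    tar -xzf /tmp/maven.tar.gz -C /opt/maven --strip-components=1

# Final stage: only the verified, extracted distribution is copied in.
FROM eclipse-temurin:17-jdk
ENV MAVEN_HOME=/usr/share/maven
COPY --from=maven-fetch /opt/maven ${MAVEN_HOME}
RUN ln -s "${MAVEN_HOME}/bin/mvn" /usr/bin/mvn
```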

jerome-benoit commented 1 year ago

It's needed for smoke testing

Why? If mvn --version fails, the build fails even without set -e.

Only if the distro's shell settings default to the -e behaviour.

There are already tests under the tests directory.

I will remove -e. Is it OK to keep -x to get debug logs at build time when using RUN?

I've carefully used package manager options to ensure only the needed packages and their runtime dependencies are installed, and then purged them entirely, runtime dependencies included

When you remove the packages in a separate RUN it doesn't reduce the image size, as the removal happens in a new image layer and the previous one is not modified.

I see. I wanted to keep the code doing the Maven installation the same everywhere to ease maintenance (there are no distro dependencies there), and to keep the package handling, which is distro dependent, separate.

Do you have an idea how the Maven install + package work could be done without having to maintain a specific RUN in each Dockerfile?

jerome-benoit commented 1 year ago

the checksum could be something to keep anyway to ensure the package hasn't changed

gpg also does the integrity check, on top of authenticating the source.

carlossg commented 1 year ago

Yes, but then you are just checking that the package is valid, not that it hasn't changed since the images were switched to that Maven version. When a new version is available I update the version and the checksum, so if the package changes the image build would fail, even if it has a valid gpg signature. Doing gpg doesn't add a lot of value on top of that.

jerome-benoit commented 1 year ago

Yes, but then you are just checking that the package is valid, not that it hasn't changed since the images were switched to that Maven version. When a new version is available I update the version and the checksum, so if the package changes the image build would fail, even if it has a valid gpg signature. Doing gpg doesn't add a lot of value on top of that.

I fail to understand the point here. If an attacker replaces the tarball for a specific version, even by smart means that include replacing the PGP signature files, gpg will also fail. It could only succeed if the attacker has access to the project's original private keys. But then the project has a much deeper issue and the keys would have to be revoked on the key servers.

But I can re-add the checksum check.

jerome-benoit commented 1 year ago

@carlossg: The PR is finished and waiting for review; could you please allow the CI workflow to run on it?

carlossg commented 1 year ago

I wouldn't merge it as is; all the downloads and checks should be done in a multi-stage build, so the tools don't end up in the image.

jerome-benoit commented 1 year ago

I wouldn't merge it as is; all the downloads and checks should be done in a multi-stage build, so the tools don't end up in the image.

Multi-stage installation and secure images are two different beasts. The goal is first to close a security hole in the binaries installation process. The size of the resulting images is exactly the same as before; everything is thoroughly cleaned up at the end, before the layer is generated.

Then, in another PR, I can add a cleaner way of doing the download, the integrity check and the signature check in a separate stage. And I really would like to find a way to factor that code out across all the Dockerfiles ...

carlossg commented 1 year ago

The goal is first to close a security hole in the binaries installation process

There is no hole right now, as the SHA is the one published by the Maven project. The only issue would be if apache.org had been compromised when Maven was released and the file had been changed, but that also applies to the gpg key ID that you added.

jerome-benoit commented 1 year ago

The goal is first to close a security hole in the binaries installation process

There is no hole right now, as the SHA is the one published by the Maven project. The only issue would be if apache.org had been compromised when Maven was released and the file had been changed, but that also applies to the gpg key ID that you added.

As the source of the SHA is not authenticated, there's a security issue. A secure installation process must include a signature check of every downloaded file by a well-trusted asymmetric crypto system. A secure installation does not rely on a string put manually into a GitHub repo to do integrity checks.

carlossg commented 1 year ago

The source of the SHA is apache.org, and I pull it on each Maven upgrade, so it holds as long as you trust me. Checking that releases are signed by a specific GPG key in the Dockerfile is trusted only as long as you trust whoever put the key ID in there.

jerome-benoit commented 1 year ago

The source of the SHA is apache.org, and I pull it on each Maven upgrade, so it holds as long as you trust me. Checking that releases are signed by a specific GPG key in the Dockerfile is trusted only as long as you trust whoever put the key ID in there.

Do you really want to compare, in this PR's comments, the level of trust and the security features a crypto system such as OpenPGP offers with those given by a simple integrity check on the binaries? ;-)

In the meantime, I've made sure in the latest PR commit that nothing is left over in the image.

jerome-benoit commented 1 year ago

I think I've addressed all the comments raised on this PR. The Windows images are still missing, but I have no knowledge of or experience with PowerShell.

carlossg commented 1 year ago

Sorry for the delay. Some images are good, but some are removing packages. I'm adding some tests for package backwards compatibility in #308

jerome-benoit commented 1 year ago

Sorry for the delay. Some images are good, but some are removing packages.

Have I missed some images? Tell me which ones and I will fix the matching Dockerfiles.

I'm adding some tests for package backwards compatibility in #308

Will have a look at it.

Thanks.

jerome-benoit commented 1 year ago

The test issue with one image looks like a race condition between gpg socket usage and rm (though I do not understand why rm errors out even with the -fr options ...). I can try to explicitly kill gpg beforehand.

carlossg commented 1 year ago

gpgconf: invalid option "--kill"

jerome-benoit commented 1 year ago

gpgconf: invalid option "--kill"

I've reverted the change on the affected images. But now there's a bunch of conflicts to resolve ...

carlossg commented 1 year ago

maybe do a || true in the meantime?
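That would look something like this (a sketch, not the exact code in the PR; gpgconf --kill only exists in GnuPG >= 2.1, hence the fallback):

```dockerfile
# Shut down any gpg-agent started during verification before removing its
# home directory; on gpg builds without `gpgconf --kill`, ignore the error.
RUN set -eux; \
    export GNUPGHOME="$(mktemp -d)"; \
    # ... key import and gpg --verify would happen here ...
    gpgconf --kill all || true; \
    rm -rf "${GNUPGHOME}"
```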

You need contributor rights for the actions to run, but maybe they would run just the same in your fork.

jerome-benoit commented 1 year ago

maybe do a || true in the meantime?

Looks ok for now. Will do it ASAP.

carlossg commented 1 year ago

finally, thanks!