appropriate / docker-jetty

Formerly the location of the Docker official image for Jetty
https://registry.hub.docker.com/_/jetty/

Issue 113 keys multi build and 103 slim JDKs #121

Closed: gregw closed this 4 years ago

gregw commented 4 years ago

Fix #113 by creating multistage builds that first download keys. Also took the opportunity to reorder the Dockerfiles to reduce complexity and size.
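Roughly, the shape is as below (a minimal sketch only; the stage name "keys" and the placeholder key IDs are illustrative, not the literal Dockerfile contents):

# stage 0: fetch the gpg keys once, in their own stage
FROM openjdk:8-jre as keys
ENV JETTY_GPG_KEYS="<key-id-1> <key-id-2>"
RUN mkdir /jetty-keys \
    && GNUPGHOME=/jetty-keys gpg --batch \
        --keyserver ha.pool.sks-keyservers.net \
        --recv-keys $JETTY_GPG_KEYS

FROM openjdk:8-jre
# copy the pre-fetched keys in; the download/verify step then
# needs no keyserver access of its own
COPY --from=keys /jetty-keys /jetty-keys
# ... download jetty.tar.gz and verify it with GNUPGHOME=/jetty-keys ...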

gregw commented 4 years ago

@md5 this works OK... although it still fetches the keys once per image for a build with a clean local repository. But subsequent builds are good.

gregw commented 4 years ago

I have also extended/validated the approach by using another multistage intermediary to download and validate jetty. This allows for slim images that do not have gpg available, so this fixes #103 as well.

The Dockerfiles for 9.4 now have a lot in common, so it may be best to generate them rather than copy/paste them.
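A sketch of that common shape (image tags and paths here are illustrative; the real Dockerfiles differ in detail):

# stage 0: a full JDK image that has gpg and curl available
FROM openjdk:13-jdk as fetch
# ... recv-keys, curl $JETTY_TGZ_URL and its .asc,
#     gpg --verify, unpack into /usr/local/jetty ...

# final stage: may be a slim image with neither gpg nor curl,
# since it only copies the already-verified files
FROM openjdk:13-jdk-slim
ENV JETTY_HOME /usr/local/jetty
COPY --from=fetch $JETTY_HOME $JETTY_HOME
# ... remaining runtime setup, EXPOSE 8080, CMD ...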

gregw commented 4 years ago

@tianon @md5 with this PR the 9.4 images (jre8, jre11, jre11-slim, jdk13, jdk13-slim) now vary by only a single line in the Dockerfile and all share the same first two stages of the multistage build. Surely it would be better to use a Makefile dependency to create a common/local 9.4 image and then generate each Java variant from that? Unfortunately I don't have the Makefile-fu to do this.
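For local builds, the rough idea would be something along these lines (hypothetical commands; the stage name "jetty-base" and the directory layout are assumed rather than taken from this PR):

# build the shared stages once and tag them, then build each variant;
# subsequent variant builds reuse the cached shared stages
docker build --target jetty-base -t jetty-9.4-base 9.4-jdk13
for variant in jre8 jre11 jre11-slim jdk13 jdk13-slim; do
    docker build -t "jetty:9.4-$variant" "9.4-$variant"
done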

gregw commented 4 years ago

Hmm, the update script was a) out of date and b) had not been run for 9.2 and 9.3 (oops, I'd been doing it manually). So this PR is now getting rather large. I have a review to commit, but would like another opinion from @md5 and/or @tianon.

yosifkit commented 4 years ago

The new multi-stage build doesn't fit either of the guidelines in the Official Images FAQ: https://github.com/docker-library/faq/#multi-stage-builds.

Also note that the problem this is trying to solve is non-existent on the Official Image build servers: https://github.com/docker-library/faq/#openpgp--gnupg-keys-and-verification. See https://github.com/docker-library/php/pull/666 for how to add the same resiliency to builds on TravisCI, and see hack-my-builds.sh for the details of how that is accomplished.

gregw commented 4 years ago

@yosifkit I do not see the pgp-happy-eyeballs approach as viable. Firstly, it depends on a script loaded from a private repository, which is worse than our initially proposed solution of depending on an image built by the Jetty project that contained all the required pgp signatures. Secondly, it only fixes Travis and will not help local builds.

gregw commented 4 years ago

I have updated this PR to use only a single multistage build, to be more in line with the guidelines. I still do not like the fact that we essentially build the Jetty94 base image multiple times, even though it is identical in all the variants.

Ideally we should just be able to build the jdk13 image and then have all the other variants import from that.
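i.e. something like the following (a sketch only; whether the official image tooling supports building FROM a sibling tag is exactly the open question):

# hypothetical variant Dockerfile: import the verified jetty tree
# from the already-built jdk13 image instead of re-fetching it
FROM jetty:9.4-jdk13 as jetty
FROM openjdk:11-jre-slim
ENV JETTY_HOME /usr/local/jetty
COPY --from=jetty $JETTY_HOME $JETTY_HOME
# ... remaining setup identical to the other variants ...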

gregw commented 4 years ago

See #122 for an alternative approach where the other 9.4 variants are based on the 9.4-jdk13 official image. I have a slight concern that this might not work with the official image library, as it appears to want to build each image from a tag... perhaps it will be OK if we order the generated stackbrew file appropriately?
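The ordering concern is about the generated library entries, which look roughly like this (values illustrative); the 9.4-jdk13 entry would need to come first and be built before the variants that are FROM it:

Tags: 9.4-jdk13, 9.4, latest
GitRepo: https://github.com/appropriate/docker-jetty.git
GitCommit: <commit>
Directory: 9.4-jdk13

Tags: 9.4-jre11-slim
GitRepo: https://github.com/appropriate/docker-jetty.git
GitCommit: <commit>
Directory: 9.4-jre11-slim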

gregw commented 4 years ago

@yosifkit @vutny can you have a look at #122 as an alternative?

vutny commented 4 years ago

I have added my comment there.

gregw commented 4 years ago

As the #122 approach does not play nicely with Travis, I intend to merge this PR unless there are objections accompanied by a concrete proposal for something better: @md5, @vutny, @yosifkit?

vutny commented 4 years ago

There is too much here to grasp at once, but since it works, that's fine, I guess.

yosifkit commented 4 years ago

Let's summarize what I see in the current changes:

  1. 9.2-jre8 and 9.3-jre8 have been split into a 2-stage build: stage 0 downloads a list of gpg keys; stage 1 copies in those keys and verifies/installs jetty.
  2. 9.4-* images have also been split into a 2-stage build: stage 0 is always FROM openjdk:13-jdk and downloads the gpg keys and verifies/installs jetty (identical across the 9.4 Dockerfiles); stage 1 copies jetty from the previous stage.

Number one still won't fit in the Official Images (https://github.com/docker-library/faq/#multi-stage-builds), and if replicating the setup in hack-my-builds is not viable, then I recommend a keyserver retry loop like the one Node uses. So something like this:

RUN set -xe; \
    mkdir /jetty-keys; \
    export GNUPGHOME=/jetty-keys; \
    for key in $JETTY_GPG_KEYS; do \
        for server in \
            ha.pool.sks-keyservers.net \
            p80.pool.sks-keyservers.net:80 \
            ipv4.pool.sks-keyservers.net \
            pgp.mit.edu \
        ; do \
            if gpg --batch --keyserver "$server" --recv-keys "$key"; then \
                break; \
            fi; \
        done; \
    done; \
    # count that we have all the keys?
    # or just let the verify fail so that we see a key fully failed on all servers
    mkdir -p "$JETTY_HOME"; \
    cd "$JETTY_HOME"; \
    curl -fL "$JETTY_TGZ_URL" -o jetty.tar.gz; \
    curl -fL "$JETTY_TGZ_URL.asc" -o jetty.tar.gz.asc; \
    gpg --batch --verify jetty.tar.gz.asc jetty.tar.gz \
    ...

Number two is a clever solution for docker cache sharing in local builds but won't affect TravisCI (since it builds them in parallel). On the official images build infrastructure, "we don't have a clean way to preserve the cache for the intermediate stages", so the layers will get deleted when we clean up images that are "ripe". The practical implication of this is that since the build cache for these untagged stages could be deleted at any time, we will end up spending extra time rebuilding them and users will pull "new" images that are unchanged.

See https://github.com/docker-library/faq/#multi-stage-builds and https://github.com/docker-library/official-images/pull/5929#discussion_r285316415.

gregw commented 4 years ago

@yosifkit I like that loop you suggested and have changed this PR to use that approach, even within the staged Dockerfile. I've also modified 9.2 and 9.3 to work the same way as 9.4.

I have, however, left this as a multistage Dockerfile because the slim images have neither gpg nor curl, so I need the full image to do the fetch and verify anyway.

So I'm happy with this approach except with regard to what you say about untagged stages being deleted at any time. I'm trying to get my head around the implications. I understand that these builds do create untagged images... but we then copy out of them all the files we need, and they do not form a layer that the final image needs. So I don't see why a rebuild would be necessary if those untagged images are deleted? I am able to delete all my untagged images and still run the images locally.
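For the local case, that reasoning can be checked directly (assuming a variant directory and tag such as 9.4-jre11, which are illustrative):

docker build -t jetty:9.4-jre11 9.4-jre11
docker image prune -f    # removes the dangling intermediate-stage images
docker run --rm jetty:9.4-jre11 java -version    # the final image still runs
# ...but the next build has lost the cache for the deleted fetch stage
# and must re-run it, which is the build-server concern raised above
docker build -t jetty:9.4-jre11 9.4-jre11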

If this is not acceptable, then I'm stumped as to how to make this work.

Perhaps we could break this into two different projects: 1) build the main 9.2, 9.3, and 9.4 images; 2) build the jdk and slim variants based off the main images?

gregw commented 4 years ago

OK - I'm going to merge this in a few hours unless there are objections.

gregw commented 4 years ago

Dang! This merge has broken the architectures for the official images... more work to do!

gregw commented 4 years ago

https://github.com/docker-library/official-images/pull/7134