containers / skopeo

Work with remote image registries - retrieving information, images, signing content
Apache License 2.0

make build-container fails to build registry-v2 binary #1271

Closed: lsm5 closed this 3 years ago

lsm5 commented 3 years ago
$ make build-container
/usr/bin/podman build  -t "skopeo-dev:master" .
STEP 1: FROM fedora
STEP 2: RUN dnf -y update && dnf install -y make git golang golang-github-cpuguy83-md2man   btrfs-progs-devel   device-mapper-devel     libassuan-devel gpgme-devel     gnupg   httpd-tools     which tar wget hostname util-linux bsdtar socat ethtool device-mapper iptables tree findutils nmap-ncat e2fsprogs xfsprogs lsof docker iproute         bats jq podman runc  golint  openssl     && dnf clean all
--> Using cache 64595ec19473e7f5d656181f94ad71761acb87cfdeccc3cb4fb9ada7d083dfdf
--> 64595ec1947
STEP 3: RUN set -x  && REGISTRY_COMMIT_SCHEMA1=ec87e9b6971d831f0eff752ddb54fb64693e51cd     && REGISTRY_COMMIT=47a064d4195a9b56133891bbb13620c3ac83a827     && export GOPATH="$(mktemp -d)"     && git clone https://github.com/docker/distribution.git "$GOPATH/src/github.com/docker/distribution"    && (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT")  && GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"        go build -o /usr/local/bin/registry-v2 github.com/docker/distribution/cmd/registry  && (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT_SCHEMA1")  && GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"        go build -o /usr/local/bin/registry-v2-schema1 github.com/docker/distribution/cmd/registry  && rm -rf "$GOPATH"
+ REGISTRY_COMMIT_SCHEMA1=ec87e9b6971d831f0eff752ddb54fb64693e51cd
+ REGISTRY_COMMIT=47a064d4195a9b56133891bbb13620c3ac83a827
++ mktemp -d
+ export GOPATH=/tmp/tmp.CY3C2JPFdT
+ GOPATH=/tmp/tmp.CY3C2JPFdT
+ git clone https://github.com/docker/distribution.git /tmp/tmp.CY3C2JPFdT/src/github.com/docker/distribution
Cloning into '/tmp/tmp.CY3C2JPFdT/src/github.com/docker/distribution'...
+ cd /tmp/tmp.CY3C2JPFdT/src/github.com/docker/distribution
+ git checkout -q 47a064d4195a9b56133891bbb13620c3ac83a827
+ GOPATH=/tmp/tmp.CY3C2JPFdT/src/github.com/docker/distribution/Godeps/_workspace:/tmp/tmp.CY3C2JPFdT
+ go build -o /usr/local/bin/registry-v2 github.com/docker/distribution/cmd/registry
no required module provides package github.com/docker/distribution/cmd/registry: go.mod file not found in current directory or any parent directory; see 'go help modules'
Error: error building at STEP "RUN set -x   && REGISTRY_COMMIT_SCHEMA1=ec87e9b6971d831f0eff752ddb54fb64693e51cd     && REGISTRY_COMMIT=47a064d4195a9b56133891bbb13620c3ac83a827     && export GOPATH="$(mktemp -d)"     && git clone https://github.com/docker/distribution.git "$GOPATH/src/github.com/docker/distribution"    && (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT")  && GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"        go build -o /usr/local/bin/registry-v2 github.com/docker/distribution/cmd/registry  && (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT_SCHEMA1")  && GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"        go build -o /usr/local/bin/registry-v2-schema1 github.com/docker/distribution/cmd/registry  && rm -rf "$GOPATH"": error while running runtime: exit status 1
make: *** [Makefile:140: build-container] Error 125
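
The "go.mod file not found" error is what Go 1.16 and newer print when go build is given an import path outside of any module: module-aware mode is now the default, with no automatic fallback to GOPATH mode. Assuming the fedora base image ships such a Go, this can be confirmed inside the container with:

$ go version
$ go env GO111MODULE    # empty or "on" means module-aware mode is the default; "off" would restore GOPATH-style builds
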
lsm5 commented 3 years ago

@vrothberg is this a bug or am I doing it wrong here?

vrothberg commented 3 years ago

Could be a bug. If so, we're not executing it in CI which would bring up the question: do we still need/want it?

mtrmac commented 3 years ago

AFAICS the tests do include several instances of setupRegistryV2At which need that binary.

We can probably upgrade this build to a later (latest?) docker/distribution easily enough; the later build of REGISTRY_COMMIT_SCHEMA1 is worse: it’s only useful if we don’t upgrade much. So exploring how to turn off (enough of) the module support in Go to build the old versions seems preferable.
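
One minimal way to do that, assuming the only problem is the new module-mode default, is to force GOPATH mode for just the two legacy builds by prefixing them with GO111MODULE=off, e.g. (an untested sketch of one of the two go build invocations from the failing RUN step):

$ GO111MODULE=off GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
      go build -o /usr/local/bin/registry-v2 github.com/docker/distribution/cmd/registry

The registry-v2-schema1 build would need the same prefix.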

cevich commented 3 years ago

Could be a bug. If so, we're not executing it in CI which would bring up the question: do we still need/want it?

In CI we use the pre-built container image quay.io/skopeo/ci:${DEST_BRANCH} (so "master" in this case). That should be building the same container image, so I suspect there should be build failures showing in quay as well...

...indeed they've been failing for the last 13 days. Notifications in quay are really bad, and are turned off by default, which explains why nobody noticed :disappointed_relieved:

In any case, it all pokes at the issue/recommendation I made a while ago: Why are we forcing testing to run in a --privileged container at all?

So my preference would be to completely kill the requirement for this container at all levels.

rant It's the job of documentation and the user (or automation) to ensure a compatible build/test environment. Forcing it on users with make + podman is ripe for causing all kinds of problems (clearly including maintenance headaches). /rant

cevich commented 3 years ago

Okay, I've had the requisite "calming deep-breaths" now. Idea regarding the build problem: Is there some reason we can't simply use quay.io/libpod/registry:2? We use that image all over the place in containers CI, it's very stable/static. Nobody dares to touch it :grin:
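
For illustration, running that image as a throwaway registry is a one-liner (the container name is arbitrary; registry:2 listens on 5000 by default):

$ podman run -d --rm --name ci-registry -p 5000:5000 quay.io/libpod/registry:2

and the tests could then point at localhost:5000 with TLS verification disabled, instead of at a registry binary built inside the dev container.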

mtrmac commented 3 years ago

If nothing major changed in the meantime (I’m not up-to-date), we create a single container with all the servers (the v2 registry, the v1-only registry, the awfully-old OpenShift that can still actually run basically completely inside a single container), plus the Skopeo binary to test and the test code. We only rely on networking inside a container, and the test Go code creates config files for the registry servers and the like.

Hence also the tension between needing a fairly fresh base image (for a fresh Go version to be relevant to test the codebase) and a fairly conservative base image (to keep the old servers running) — we can’t just freeze the infrastructure on a 5-year-old container with all the servers because the test subject wouldn’t build/run in a relevant environment.


So a separate container to run a registry is not immediately beneficial — we would have to move a lot of the test setup code into a multi-container-creation step run… in yet another container? That would definitely have some benefits (we could just never rebuild the old OpenShift again), OTOH it’s also a non-trivial amount of work and more importantly places much larger demands on the test environment; it would not be just a make check on any developer’s workstation building/running a single container in Podman. Is this practical to do/fully automate using Podman for individual laptops? I’d rather not make access to a K8s cluster a prerequisite for working on Skopeo CI, for example.


Or do you mean we should extract the server binary from quay.io/libpod/registry:2 (assuming it is statically linked) and run it inside the current test container? That could work…
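
If it is, pulling the binary out would look roughly like this (a sketch; /bin/registry is an assumption about where that image keeps the binary):

$ ctr=$(podman create quay.io/libpod/registry:2)
$ podman cp "$ctr":/bin/registry /usr/local/bin/registry-v2   # in-image path is an assumption
$ podman rm "$ctr"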

… or maybe we should “just” be using multistage builds, building all the servers as static binaries in old environments, and importing them into the test container. Is that practical to do on individual laptops and the CI?
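
A rough sketch of that variant, reusing the commit pinned in the failing RUN step; the golang:1.10 builder tag is only an example of an old-enough environment:

FROM docker.io/library/golang:1.10 AS registrybuild
RUN git clone https://github.com/docker/distribution.git /go/src/github.com/docker/distribution \
 && cd /go/src/github.com/docker/distribution \
 && git checkout -q 47a064d4195a9b56133891bbb13620c3ac83a827 \
 && GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:/go" CGO_ENABLED=0 \
    go build -o /usr/local/bin/registry-v2 github.com/docker/distribution/cmd/registry

FROM fedora
COPY --from=registrybuild /usr/local/bin/registry-v2 /usr/local/bin/registry-v2

The same pattern could cover registry-v2-schema1 and the old OpenShift with additional build stages.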

cevich commented 3 years ago

Yes, @mtrmac you clearly know/understand the low-level nuts and bolts best, thanks for replying. The container image I was primarily ranting about is the make build-container one. The other ones used for the system tests seem to be working okay for now(?).

we can’t just freeze the infrastructure on a 5-year-old container with all the servers because the test subject wouldn’t build/run in a relevant environment.

Yep, I understand the tension there. What I was thinking is more along the lines of: could we separate the concerns? Run skopeo (and the tests) in a modern environment, but use the old registry container as a stable resource, i.e. decouple the client and server environments.

Is that practical to do on individual laptops and the CI?

Fortunately I have lots of experience (good and bad) in this area. I'm generally not in favor of building containers for laptops or CI "on demand". It's better to offload that work to the registry. However (clearly) the quay auto-builds are also no longer ideal, and are causing surprises from several perspectives. I've seen this before too.

What seems to work well in a multi-developer + CI environment is for completely separate automation (separate from developing and testing) to handle building and pushing images. If you also use tags (instead of "latest"), this provides predictable updates in the repository for developers and CI, i.e. bumping a statically defined tag reference in a PR, like how we do for the VM images.
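
In shell terms that separate automation boils down to something like the following, with the dated tag being purely hypothetical:

$ podman build -t quay.io/skopeo/ci:c20210501t000000 .
$ podman push quay.io/skopeo/ci:c20210501t000000

and a PR against this repo then only has to bump the pinned tag string that CI and developers pull.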

There are lots of ways to automate this. GitHub Actions is an option, but I really dislike working with it (it's poorly designed IMHO). Cirrus-CI has support for running cron-like jobs, so all we need is a build script/Makefile/command. There's also the containers/automation_images repo, where builds are simply done by opening a PR. I'm happy to help however I can, with any of these or other options not mentioned.

mtrmac commented 3 years ago

Getting rid of the 20 minutes to build the OpenShift+registries on many CI runs, by rebuilding the servers only if necessary, would certainly be great.

The laptop concern was more of “do we require a specific tool set / networking setup for tests to work?” / “is there a risk that whatever the CI does with networking could break unrelated software on the laptop?”. I suppose one answer would be to just automate this in GitHub and tell everyone to do WIP commits to invoke the tests, and never run them locally; OTOH it is really nice to be able to break into a failing test with a debugger or a shell subprocess from time to time.
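
For that interactive case, one low-tech option (a sketch, reusing the image tag from the failing build above) would be to keep a plain shell entry point into the dev container:

$ podman run -it --privileged skopeo-dev:master bash

and run or re-run individual tests from inside that shell.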

cevich commented 3 years ago

Naw, I'm going to disagree. My experience has been that developers prefer it both ways, local and CI. Certainly pulling a pre-built container image is faster and more reliable than building locally.

As for the environment setup needed to work and test locally (networking included), I think this can be solved by documentation and by deliberately trying to make it as simple as possible.

github-actions[bot] commented 3 years ago

A friendly reminder that this issue had no activity for 30 days.

lsm5 commented 3 years ago

Closing as we don't have the build-container anymore.