kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Multiarch support for registry addon #10780

Open medyagh opened 3 years ago

medyagh commented 3 years ago

If yes, add an integration test for multi-arch.

ilya-zuyev commented 3 years ago

/assign

afbjorklund commented 3 years ago

Hi @ilya-zuyev

You will find that the registry image itself is multi-arch (well, amd64/arm/arm64), but that the registry-proxy needs updating... It probably needed that anyway, and I don't think it will be a major problem, since nginx is multi-arch (being Debian-based).

https://hub.docker.com/_/registry?tab=tags https://hub.docker.com/_/nginx?tab=tags

It hasn't seen any updates since it was abandoned (in 2017)

https://github.com/kubernetes/kubernetes/commit/6f48d86f0fde19ac71c234fcbb4917b6a1318014 https://github.com/kubernetes/kubernetes/commit/d6918bbbc0402fc81a53479f4b61b836d7c33a29

FROM nginx:1.11

RUN apt-get update \
    && apt-get install -y \
        curl \
        --no-install-recommends \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /usr/share/man /usr/share/doc

COPY rootfs /

CMD ["/bin/boot"]
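Since the nginx base image is already multi-arch, rebuilding the proxy for several platforms could be a single buildx invocation; a sketch only (the image name below is a placeholder, not the real registry location, and it assumes a buildx builder with QEMU binfmt support):

```shell
# build and push the registry-proxy image for three architectures at once
docker buildx build \
    --platform linux/amd64,linux/arm64,linux/arm/v7 \
    -t example.com/kube-registry-proxy:multiarch \
    --push .
```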

The registry version is the latest available (from 2019)

https://github.com/docker/distribution

I think it is on the same kind of "life support" as machine?

https://www.docker.com/blog/donating-docker-distribution-to-the-cncf/


It would be great if we could have a proper registry deployment one day, with storage and with certificates. The current hack with the localhost:5000 proxy to get around the "insecure" daemon settings isn't great...
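For reference, a proper deployment would mostly be a matter of registry configuration; a sketch (not the addon's actual manifest — the paths and mount points here are hypothetical) of what a config with persistent storage and TLS might look like:

```yaml
# config.yml for the registry (docker/distribution) — illustrative only
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry   # back with a PVC instead of emptyDir
http:
  addr: :5000
  tls:
    certificate: /certs/tls.crt   # e.g. mounted from a Kubernetes TLS secret
    key: /certs/tls.key
```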

See the old README

https://docs.docker.com/registry/deploying/

But for now, we will continue to promote just using the container runtime on the control plane directly. This is similar to using hostpath as the default PV storage, it is simpler for a single-node deployment...

ilya-zuyev commented 3 years ago

Hi @afbjorklund! Thanks for the info. In this issue we also want to test how our registry addon handles multi-arch images, including whether it's possible to use it with docker buildx --push ... and docker manifest push.

medyagh commented 3 years ago

@ilya-zuyev let's update the issue with the findings, logs, and current blockers

ilya-zuyev commented 3 years ago

It looks like we have work to do here:

Tested on Ubuntu 20.10 x86_64:

ilyaz@skeletron --- g/minikube ‹master› » m version                                                                                                                                                                 130 ↵
minikube version: v1.18.1
commit: a05f887651bd65102c6559f3c30439af3e792427
ilyaz@skeletron --- g/minikube ‹master› » 
ilyaz@skeletron --- g/minikube ‹master› » docker version
Client: Docker Engine - Community
 Version:           20.10.5
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        55c4c88
 Built:             Tue Mar  2 20:17:52 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.5
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       363e9a8
  Built:            Tue Mar  2 20:15:47 2021
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          1.4.4
  GitCommit:        05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc:
  Version:          1.0.0-rc93
  GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
ilyaz@skeletron --- g/minikube ‹master› » m start --driver=docker --addons=registry
* minikube v1.18.1 on Ubuntu 20.10
* Using the docker driver based on user configuration
* Starting control plane node minikube in cluster minikube
* Downloading Kubernetes v1.20.2 preload ...
    > preloaded-images-k8s-v9-v1....: 491.22 MiB / 491.22 MiB  100.00% 6.39 MiB
* Creating docker container (CPUs=2, Memory=8000MB) ...

X Docker is nearly out of disk space, which may cause deployments to fail! (88% of capacity)
* Suggestion: 

    Try one or more of the following to free up space on the device:

    1. Run "docker system prune" to remove unused Docker data (optionally with "-a")
    2. Increase the storage allocated to Docker for Desktop by clicking on:
    Docker icon > Preferences > Resources > Disk Image Size
    3. Run "minikube ssh -- docker system prune" if using the Docker container runtime
* Related issue: https://github.com/kubernetes/minikube/issues/9024

* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Verifying Kubernetes components...
  - Using image registry:2.7.1
  - Using image gcr.io/google_containers/kube-registry-proxy:0.4
  - Using image gcr.io/k8s-minikube/storage-provisioner:v4
* Verifying registry addon...
* Enabled addons: storage-provisioner, default-storageclass, registry

! /home/ilyaz/google-cloud-sdk/bin/kubectl is version 1.17.17-dispatcher, which may have incompatibilites with Kubernetes 1.20.2.
  - Want kubectl v1.20.2? Try 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

ilyaz@skeletron --- g/minikube ‹master› » m addons list
|-----------------------------|----------|--------------|
|         ADDON NAME          | PROFILE  |    STATUS    |
|-----------------------------|----------|--------------|
| ambassador                  | minikube | disabled     |
| auto-pause                  | minikube | disabled     |
| csi-hostpath-driver         | minikube | disabled     |
| dashboard                   | minikube | disabled     |
| default-storageclass        | minikube | enabled ✅   |
| efk                         | minikube | disabled     |
| freshpod                    | minikube | disabled     |
| gcp-auth                    | minikube | disabled     |
| gvisor                      | minikube | disabled     |
| helm-tiller                 | minikube | disabled     |
| ingress                     | minikube | disabled     |
| ingress-dns                 | minikube | disabled     |
| istio                       | minikube | disabled     |
| istio-provisioner           | minikube | disabled     |
| kubevirt                    | minikube | disabled     |
| logviewer                   | minikube | disabled     |
| metallb                     | minikube | disabled     |
| metrics-server              | minikube | disabled     |
| nvidia-driver-installer     | minikube | disabled     |
| nvidia-gpu-device-plugin    | minikube | disabled     |
| olm                         | minikube | disabled     |
| pod-security-policy         | minikube | disabled     |
| registry                    | minikube | enabled ✅   |
| registry-aliases            | minikube | disabled     |
| registry-creds              | minikube | disabled     |
| storage-provisioner         | minikube | enabled ✅   |
| storage-provisioner-gluster | minikube | disabled     |
| volumesnapshots             | minikube | disabled     |
|-----------------------------|----------|--------------|

ilyaz@skeletron --- g/minikube ‹master› » kck port-forward svc/registry 5000:80                                                                                                                                                
Forwarding from 127.0.0.1:5000 -> 5000
Forwarding from [::1]:5000 -> 5000

Then:

ilyaz@skeletron --- tmp/img » curl -Li localhost:5000/v2/_catalog
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Docker-Distribution-Api-Version: registry/2.0
X-Content-Type-Options: nosniff
Date: Tue, 23 Mar 2021 21:52:07 GMT
Content-Length: 20

{"repositories":[]}

OK, the registry is started and serves its API on local port 5000.
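The same check is easy to script; a minimal plain-shell sketch (no jq assumed) that pulls the repository list out of a /v2/_catalog response body:

```shell
# print the "repositories" JSON array from a /v2/_catalog response body
catalog_repos() {
    echo "$1" | sed -n 's/.*"repositories":\(\[[^]]*\]\).*/\1/p'
}

catalog_repos '{"repositories":[]}'        # prints []
catalog_repos '{"repositories":["foo"]}'   # prints ["foo"]
```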

Let's build some images:

ilyaz@skeletron --- tmp/img » cat Dockerfile 
FROM alpine

CMD "echo boom"

ilyaz@skeletron --- tmp/img » docker build -t localhost:5000/foo:bar .                                                                      
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM alpine
 ---> a24bb4013296
Step 2/2 : CMD "echo boom"
 ---> Using cache
 ---> 007fcd1efad4
Successfully built 007fcd1efad4
Successfully tagged localhost:5000/foo:bar
ilyaz@skeletron --- tmp/img » docker -D push localhost:5000/foo:bar                                                                         
The push refers to repository [localhost:5000/foo]
50644c29ef5a: Pushed 
bar: digest: sha256:0f6e5d9bac509123c0d6e6179ca068747dfd5d6f324c2bb3b2276efda8a0abe9 size: 528
ilyaz@skeletron --- tmp/img » curl -Li localhost:5000/v2/_catalog  
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Docker-Distribution-Api-Version: registry/2.0
X-Content-Type-Options: nosniff
Date: Tue, 23 Mar 2021 21:52:50 GMT
Content-Length: 25

{"repositories":["foo"]}

Single arch works. But:

ilyaz@skeletron --- tmp/img » docker -D  manifest create localhost:5000/march-foo localhost:5000/foo:bar   
DEBU[0000] endpoints for localhost:5000/foo:bar: [{false https://localhost:5000 v2 false false true 0xc000502d80} {false http://localhost:5000 v2 false false true 0xc000502d80}] 
DEBU[0000] skipping non-tls registry endpoint: http://localhost:5000 
DEBU[0000] skipping non-tls registry endpoint: http://localhost:5000 
no such manifest: localhost:5000/foo:bar

docker manifest create doesn't work :(
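Worth noting: docker manifest has an --insecure flag for plain-HTTP registries, which may get past the "skipping non-tls registry endpoint" behavior above. Not verified here, but roughly:

```shell
# --insecure lets the manifest subcommands talk to an HTTP-only registry
docker manifest create --insecure localhost:5000/march-foo localhost:5000/foo:bar
docker manifest push --insecure localhost:5000/march-foo
```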

Let's try buildx:

ilyaz@skeletron --- tmp/img » docker buildx create --name zbuilder --use
zbuilder

ilyaz@skeletron --- tmp/img » docker -D buildx build --push --builder zbuilder --platform linux/amd64,linux/arm64 -t localhost:5000/foo-m .                                                                                    

DEBU[0000] using default config store "/home/ilyaz/.docker/buildx" 
DEBU[0000] serving grpc connection                      
[+] Building 0.0s (0/1)                                                                                                                                                                                                              
[+] Building 0.1s (4/4) FINISHED                                                                                                                                                                                                     
 => [internal] load build definition from Dockerfile                                                                                                                                                                            0.0s
 => => transferring dockerfile: 31B                                                                                                                                                                                             0.0s
 => [internal] load .dockerignore                                                                                                                                                                                               0.0s
 => => transferring context: 2B                                                                                                                                                                                                 0.0s

error: failed to solve: rpc error: code = Unknown desc = failed to do request: Head http://localhost:5000/v2/foo-m/blobs/sha256:069a56d6d07f6b186fbb82e4486616b9be9a37ce32a63013af6cddcb65898182: dial tcp 127.0.0.1:5000: connect: connection refused
1 v0.8.2 buildkitd
github.com/containerd/containerd/remotes/docker.(*request).do
        /src/vendor/github.com/containerd/containerd/remotes/docker/resolver.go:544
github.com/containerd/containerd/remotes/docker.(*request).doWithRetries
        /src/vendor/github.com/containerd/containerd/remotes/docker/resolver.go:551
github.com/containerd/containerd/remotes/docker.dockerPusher.Push
        /src/vendor/github.com/containerd/containerd/remotes/docker/pusher.go:88
github.com/containerd/containerd/remotes.push
        /src/vendor/github.com/containerd/containerd/remotes/handlers.go:154
github.com/containerd/containerd/remotes.PushHandler.func1
        /src/vendor/github.com/containerd/containerd/remotes/handlers.go:146
github.com/moby/buildkit/util/resolver/retryhandler.New.func1
        /src/util/resolver/retryhandler/retry.go:20
github.com/moby/buildkit/util/push.updateDistributionSourceHandler.func1
        /src/util/push/push.go:266
github.com/moby/buildkit/util/push.dedupeHandler.func1.1
        /src/util/push/push.go:295
github.com/moby/buildkit/util/flightcontrol.(*call).run
        /src/util/flightcontrol/flightcontrol.go:121
sync.(*Once).doSlow
        /usr/local/go/src/sync/once.go:66
sync.(*Once).Do
        /usr/local/go/src/sync/once.go:57
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357

116816 v0.5.1-docker /usr/libexec/docker/cli-plugins/docker-buildx -D buildx build --push --builder zbuilder --platform linux/amd64,linux/arm64 -t localhost:5000/foo-m .
github.com/docker/buildx/vendor/google.golang.org/grpc.(*ClientConn).Invoke
        /go/src/github.com/docker/buildx/vendor/google.golang.org/grpc/call.go:35
github.com/docker/buildx/vendor/github.com/moby/buildkit/api/services/control.(*controlClient).Solve
        /go/src/github.com/docker/buildx/vendor/github.com/moby/buildkit/api/services/control/control.pb.go:1321
github.com/docker/buildx/vendor/github.com/moby/buildkit/client.(*Client).solve.func2
        /go/src/github.com/docker/buildx/vendor/github.com/moby/buildkit/client/solve.go:201
github.com/docker/buildx/vendor/golang.org/x/sync/errgroup.(*Group).Go.func1
        /go/src/github.com/docker/buildx/vendor/golang.org/x/sync/errgroup/errgroup.go:57
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357

116816 v0.5.1-docker /usr/libexec/docker/cli-plugins/docker-buildx -D buildx build --push --builder zbuilder --platform linux/amd64,linux/arm64 -t localhost:5000/foo-m .
github.com/docker/buildx/vendor/github.com/moby/buildkit/client.(*Client).solve.func2
        /go/src/github.com/docker/buildx/vendor/github.com/moby/buildkit/client/solve.go:214
github.com/docker/buildx/vendor/golang.org/x/sync/errgroup.(*Group).Go.func1
        /go/src/github.com/docker/buildx/vendor/golang.org/x/sync/errgroup/errgroup.go:57
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357

although:

ilyaz@skeletron --- tmp/img » curl --head  -Li  http://localhost:5000/v2/foo-m/blobs/sha256:ba3557a56b150f9b813f9d02274d62914fd8fce120dd374d9ee17b87cf1d277d                                                                     1 ↵
HTTP/1.1 404 Not Found
Content-Type: application/json; charset=utf-8
Docker-Distribution-Api-Version: registry/2.0
X-Content-Type-Options: nosniff
Date: Tue, 23 Mar 2021 21:56:12 GMT
Content-Length: 157
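A likely contributor to the "connection refused" above: with the docker-container buildx driver, buildkitd runs inside its own container, where localhost:5000 does not resolve to the host's kubectl port-forward. A builder created on the host network may be able to reach it (a sketch, not verified here; the builder name is arbitrary):

```shell
# put the buildkit container on the host network so localhost:5000
# resolves to the port-forwarded registry service
docker buildx create --name zbuilder-host --driver-opt network=host --use
```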

ilya-zuyev commented 3 years ago

Probably we need to serve an HTTPS registry endpoint to make buildx happy. Currently, the addon supports only HTTP.
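Alternatively, buildx's builder can be told to treat the registry as plain HTTP via a buildkitd configuration file, which may avoid needing TLS at all (file name and registry address below are illustrative):

```toml
# buildkitd.toml
[registry."localhost:5000"]
  http = true
  insecure = true
```

The file would be passed when creating the builder, e.g. docker buildx create --name zbuilder --config buildkitd.toml --use.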

medyagh commented 3 years ago

This issue is available for anyone interested to pick up. I would accept a PR.

medyagh commented 3 years ago

this issue is available to pick up

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale


pilhuhn commented 2 years ago

Does the 1.26 milestone assignment mean this may be fixed in 1.26?

spowelljr commented 2 years ago

That would be correct; it was something we planned on doing for this milestone, but other things took priority. I've removed the milestone from this issue.

zjx20 commented 9 months ago

> Probably, we need to serve HTTPS registry endpoint to make buildx happy. Currently, addon supports only HTTP

There seems to be another problem. I've tried adding an HTTPS reverse proxy in front of the registry addon using stunnel, but buildx still reports errors (plain docker push works fine through the same proxy).

#19 ERROR: failed to push 192.168.44.28:5001/testimage:v0.0.1: failed to do request: Head "https://192.168.44.28:5001/v2/open-local/blobs/sha256:c3c0e0e9df293d62b09b768b9179a4d876c39faabd6cdd40c0a4d26cb6881742": dial tcp 192.168.44.28:5001: i/o timeout

The stunnel command is:

docker run --network=host -itd --name minikube-registry-proxy \
    -e STUNNEL_SERVICE=registry \
    -e STUNNEL_ACCEPT=5001 \
    -e STUNNEL_CONNECT=$(minikube ip):5000 \
    -p 5001:5001 \
    dweomer/stunnel