kubernetes-sigs / kind

Kubernetes IN Docker - local clusters for testing Kubernetes
https://kind.sigs.k8s.io/
Apache License 2.0
13.3k stars · 1.54k forks

Move dockerhub kindest/node to a non rate limited registry #1895

Closed howardjohn closed 1 year ago

howardjohn commented 3 years ago

What would you like to be added: A mirror of kindest/node to GCR or another registry

Why is this needed: On Nov 1, dockerhub will introduce rate limiting on pulls. I am fairly sure this will break our CI, since we pull a lot of kind images. This is not at all a blocker for us, as we can mirror them into our own registries without issue, but may be useful for the broader community.

BenTheElder commented 3 years ago

I think we may even want to primarily move off of it going forward, TBD.

We've been discussing dockerhub mitigations at the SIG Testing level but haven't managed to move on much yet.

This is a higher priority for me though*; other subprojects we don't own can move away from dockerhub themselves, and for any projects still using it, Kubernetes doesn't officially support that usage and they would have received notice from dockerhub directly.

* As SIG Testing we also need to mitigate dockerhub usage in e2e.test in Kubernetes itself, and will still try to provide mitigations and guidance for CI / subprojects ...

BenTheElder commented 3 years ago

Kubernetes provides k8s.gcr.io to subprojects but it's a bit problematic for us:

The latter may be a good plan anyhow, but the former is ... problematic. The promotion system also makes image pushes a bit onerous: even with automated pushes, you must manually craft a YAML PR to request that an image be promoted from the staging registry to production.

https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io
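For context on that promotion flow, a PR against the promoter manifests adds an entry roughly like the sketch below (the file path, staging project name, and digest here are illustrative placeholders, not real values):

```yaml
# images/k8s-staging-kind/images.yaml — illustrative promoter entry only;
# the digest below is a placeholder, not a real image digest
- name: node
  dmap:
    "sha256:0000000000000000000000000000000000000000000000000000000000000000":
      - "v1.26.1"
```

Each digest-to-tags mapping must be hand-written per image push, which is what makes the process onerous even when the pushes themselves are automated.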

quay.io and the github package registry are obvious alternatives to look into.

BenTheElder commented 3 years ago

https://github.com/kubernetes/test-infra/issues/19477#issuecomment-720782177 this may be the simple route πŸ™ƒ

BenTheElder commented 3 years ago

Mirroring and/or switching registries going forward is still on the table. We can't mirror existing images to k8s.gcr.io due to the promotion process and the one-tag-one-image policy, though.

I've applied to the docker OSS program. EDIT: first form response 2:40AM pacific dec 2nd, we're under review.

BenTheElder commented 3 years ago

I totally misread this and then went OOO. I'm not clear we should move forward with this approach, particularly given these constraints:

While the Publisher retains the Open Source project status, the Publisher agrees to -

Become a Docker public reference for press releases, blogs, webinars, etc
Create joint blogs, webinars and other marketing content 
Create explicit links to their Docker Hub repos, with no β€˜wrapping’ or hiding sources of their images
Include information about Docker on the website and in documentation  

Given that kind does not currently even control a blog or press releases (versus the Kubernetes project at large), the first two points are difficult for us; the last two we should already be more or less compliant with.

On the plus side: I've not heard any additional user concern about this yet, which might be because a typical workflow does not involve pulling node / kind images often outside of CI, and CI workflows are impacted by the rate limits generally, not just via our images.

BenTheElder commented 3 years ago

cc @tao12345666333 I'm a bit concerned about breaking our Chinese user base while we're at it (I think we have a number of users that depend on dockerhub being available), though perhaps the new github package registry could work 🤔

tao12345666333 commented 3 years ago

@BenTheElder The new github package registry does not work perfectly and is not available in some regions.

If needed, I can provide a container registry mirror (in China). We would only need to mention it in the documentation.

BenTheElder commented 3 years ago

I'm not sure how we could provide one, exactly.

BenTheElder commented 3 years ago

I forgot to mention that I was scheduled to meet with Docker this week to discuss KIND and their program. Just had that meeting, very pleasant chat. For now the plan is to move forward with their OSS partner program, which I'll be following up on later today.

If that doesn't work out, I will reach out about mirroring images from k8s.gcr.io before we move there with the rest of the subprojects @tao12345666333 πŸ™

dims commented 3 years ago

@BenTheElder are there terms and conditions that we as a project need to adhere to?

tao12345666333 commented 3 years ago

> I forgot to mention that I was scheduled to meet with Docker this week to discuss KIND and their program. Just had that meeting, very pleasant chat. For now the plan is to move forward with their OSS partner program, which I'll be following up on later today.
>
> If that doesn't work out, I will reach out about mirroring images from k8s.gcr.io before we move there with the rest of the subprojects @tao12345666333 🙏

ok, good news!

BenTheElder commented 3 years ago

> @BenTheElder are there terms and conditions that we as a project need to adhere to?

So far upon revisiting I'm not concerned by anything yet, and we were quite clear that this is only the kind project, not Kubernetes, not SIG testing, not any of our employers, etc. But we shall see as I continue along with the process πŸ˜…

dims commented 3 years ago

@BenTheElder Ack. if any paperwork shows up, please holler.

boldandbusted commented 2 years ago

Just a lowly KinD desktop user here. I seem to be getting rate-limited when downloading images. Got plenty of bandwidth, but since I'm doing a lot of work with K8s Operators and Helm that download a lot of images, KinD is adding to Docker, Inc.'s quota decrement for my network's NAT'd IP. Cheers, and happy new year! :)

BenTheElder commented 1 year ago

Well, this may be more urgent https://github.com/kubernetes-sigs/kind/issues/3124

> Just a lowly KinD desktop user here. I seem to be getting rate-limited when downloading images. Got plenty of bandwidth, but since I'm doing a lot of work with K8s Operators and Helm that download a lot of images, KinD is adding to Docker, Inc.'s quota decrement for my network's NAT'd IP. Cheers, and happy new year! :)

kind itself should only pull each node image once, and generally we expect that to be relatively rare: once you've pulled a node image version, you should not need to re-pull it unless you delete it. A typical kind cluster involves a single image that we host; all other images needed are either packed into it or come from your workloads.

If you mean images pulled to run within the nodes, you can use kind load ... instead to avoid re-pulling them once per cluster; that's not related to this issue, though.

In the meantime hosting your own mirrors is also an option, or persisting a copy of the images as a tarball and loading.

BenTheElder commented 1 year ago

Currently thinking between:

registry.k8s.io/kind/v0.17.0/node:v1.26.1

registry.k8s.io/kind/node:v0.17.0-v1.26.1

I think the former is more obvious, but it will make it harder to list images available with skewed kind versions, which mostly works OK today but isn't guaranteed.

The second is more complex to parse: e.g. with pre-release versions like registry.k8s.io/kind/node:v0.18.0-alpha-69-gb2784ba3-v1.26.2, correct parsing requires splitting at the second v (and relying on versions not containing a v themselves).
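To make the parsing concern concrete, here is a minimal sketch (Python, purely illustrative, not kind code) that splits a form-2 tag at its last "-v" boundary, assuming the Kubernetes version is always the final "-v..." component and neither version otherwise contains "-v":

```python
def split_node_tag(tag: str) -> tuple[str, str]:
    """Split a combined tag like 'v0.17.0-v1.26.1' into
    (kind_version, kubernetes_version).

    Assumption (not guaranteed by anything): the Kubernetes version is
    the trailing '-v...' component and '-v' never appears elsewhere.
    """
    kind_version, sep, k8s_rest = tag.rpartition("-v")
    if not sep:
        raise ValueError(f"not a combined kind/kubernetes tag: {tag!r}")
    return kind_version, "v" + k8s_rest


# split_node_tag("v0.17.0-v1.26.1")
#   -> ("v0.17.0", "v1.26.1")
# split_node_tag("v0.18.0-alpha-69-gb2784ba3-v1.26.2")
#   -> ("v0.18.0-alpha-69-gb2784ba3", "v1.26.2")
```

It works even for the pre-release example above, but only because of the stated assumption; that fragility is exactly the objection to form 2.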

See https://github.com/kubernetes-sigs/kind/issues/3124#issue-1624286198

The latter would also make it easier for kind to do clever client-side tag listing and comparison with constraints on valid images, I think, but the former is super simple for everyone to use: crane ls registry.k8s.io/kind/$(kind version)/node.

Note that there's a portable API to list tags (after the :) but NOT to list image names (before the :) in registries.
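A sketch of that portable call (Python, illustrative only): the OCI distribution spec's content-discovery endpoint lists tags within one repository, and there is no analogous spec'd endpoint for enumerating repository names.

```python
import json


def tags_list_url(registry: str, repository: str) -> str:
    # OCI distribution spec content discovery: GET /v2/<name>/tags/list
    # lists tags (the part after ':'). No comparable portable endpoint
    # exists to list repository names (the part before ':').
    return f"https://{registry}/v2/{repository}/tags/list"


def parse_tags_response(body: bytes) -> list[str]:
    # Response shape per the spec: {"name": "<repository>", "tags": [...]}
    return json.loads(body)["tags"]


# tags_list_url("registry.k8s.io", "kind/node")
#   -> "https://registry.k8s.io/v2/kind/node/tags/list"
```

This is why form 1 forces a tag-list call per nested repository, while forms 2-4 return everything (for better or worse) in one list.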

tao12345666333 commented 1 year ago

Although this may come as a surprise, I am able to offer a mirror for the China region if necessary. (Maybe this will make our migration easier. We can focus on providing general solutions.)

tao12345666333 commented 1 year ago

I prefer the second form, registry.k8s.io/kind/node:v0.17.0-v1.26.1

BenTheElder commented 1 year ago

Thanks @tao12345666333. If the default is no longer on dockerhub, users will have to override the --image flag or nodes.image field to point at the mirror explicitly, instead of relying on dockerd config; that is my main regression concern for mirrors in China.

I suppose if this proves problematic we can introduce an env var to tell kind to basically rewrite registry.k8s.io/kind => mirror, independent of config / flags.
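Such a rewrite could be as simple as the following sketch (Python, illustrative only; the KIND_EXPERIMENTAL_REGISTRY_MIRROR name is hypothetical and nothing like it exists in kind today):

```python
import os

DEFAULT_PREFIX = "registry.k8s.io/kind/"


def resolve_image(image: str, env=os.environ) -> str:
    """Rewrite the default registry prefix to a user-provided mirror.

    Hypothetical sketch of the 'rewrite independent of config / flags'
    idea from the discussion; the env var name is made up.
    """
    mirror = env.get("KIND_EXPERIMENTAL_REGISTRY_MIRROR")
    if mirror and image.startswith(DEFAULT_PREFIX):
        return mirror.rstrip("/") + "/" + image[len(DEFAULT_PREFIX):]
    return image
```

The appeal of this shape is that users in affected regions set one env var and every default image reference follows, without touching --image or cluster configs.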

> In the second form, we can think that one image contains multiple tags, right?

In both forms the tentative plan is to include both the kind version and the kubernetes version somehow, because we have to stop pushing mutable tags to use registry.k8s.io anyhow, so image name+tag must be fully unique. It may also help clarify version support.

I think the second form might be a bit more confusing and complex to parse. The point about other registries and nesting is interesting, other than dockerhub do you have any examples? Ostensibly in the OCI spec multi-level nesting is valid but I know dockerhub doesn't seem to support it.

BenTheElder commented 1 year ago

The original ask was mirroring, but I think given the circumstances the plan is to ensure the primary registry is without issue.

tao12345666333 commented 1 year ago

> The point about other registries and nesting is interesting, other than dockerhub do you have any examples? Ostensibly in the OCI spec multi-level nesting is valid but I know dockerhub doesn't seem to support it.

I checked the top two public cloud vendors in China; neither of their image registry services supports nesting.

I have the impression that some other cloud vendors have similar limitations, or gate multi-level nesting as an advanced premium feature.

BenTheElder commented 1 year ago

Thanks.

There's also the possibility of a variant of the first form like: registry.k8s.io/kind/node-v0.17.0:v1.26.1

I think it reads slightly less clearly than the first form and is more similar to the second, but it otherwise has the same effects as the first form without the additional level of nesting.

It has the same trade-off: somewhat clearer / easier for users to parse if they wish, but tag-list calls become per-kind-version instead of being able to filter more flexibly from a single listing of node images.

OTOH: after we've accumulated many, many tags, it may be faster not to have to list every tag ever pushed. The tag-list call has no sophisticated filtering parameters: https://github.com/opencontainers/distribution-spec/blob/main/spec.md#content-discovery

tao12345666333 commented 1 year ago

Yes.

registry.k8s.io/kind/node-v0.17.0:v1.26.1

If this form is used, I also agree; it dispels my concerns about multi-level nesting.

BenTheElder commented 1 year ago

I think we have one shot to introduce a better scheme here when we migrate, and we had better make things clearer and support things like #3053; otherwise I wouldn't bikeshed this particular detail so much.

  1. registry.k8s.io/kind/v0.17.0/node:v1.26.1 – Tentatively no, because of nesting and mirroring. I do like how this reads, though; it seems more obvious that it's part of kind v0.17.0 and that v1.26.1 is some other version (contextually Kubernetes).
  2. registry.k8s.io/kind/node:v0.17.0-v1.26.1 – A little confusing / hard to parse? All images bundled in one list of tags ...
  3. registry.k8s.io/kind/node-v0.17.0:v1.26.1 – Very similar to 2) but technically distinct image names for each release (and therefore per-release tag discovery if we make use of that, with the pros and cons).
  4. registry.k8s.io/kind/v0.17.0:v1.26.1 – 3) but terser 🤔

I wonder if we have some prior art in something like kOps or cluster-API for conventions naming / tagging dual $subproject_version, $kubernetes_version images.

Edit: Very scientifically ℒ️ asking Twitter for input πŸ€·β€β™‚οΈ https://twitter.com/BenTheElder/status/1635877721644630021

Vad1mo commented 1 year ago

We operate a CNCF Harbor-based container registry as a service, which has many benefits over most of the other registries out there.

There are also features around container image distribution that might be valuable as well.

Data ownership is something very valuable that should not be underestimated. Imagine how many books and tutorials would become outdated because they refer to images that no longer exist.

liangyuanpeng commented 1 year ago

I maintain a proxy of registry.k8s.io (k8s.gcr.io); it works for users in China and has been running for 3 years. I'd love to see it added to the official kind docs.

Note: it only works for users in China; that is the mission of registry.lank8s.cn.

tao12345666333 commented 1 year ago

Docker has modified the policy. If the Docker-Sponsored Open Source program does not go well, we can also migrate to a personal account without changing the name.

https://www.docker.com/blog/we-apologize-we-did-a-terrible-job-announcing-the-end-of-docker-free-teams/

BenTheElder commented 1 year ago

We're in the OSS program now https://github.com/kubernetes-sigs/kind/issues/3124#issuecomment-1474007648

One of the benefits is lifted rate limit. That's good for one year and then needs renewal.

We will still be considering next steps for long term improving images but first up is making some fixes to the dockerhub listings related to the OSS program.

BenTheElder commented 1 year ago

I'm also in contact with my fellow steering members and SIG K8s Infra about this (and this time I'm heavily involved in both ...). The new OSS program has very light requirements and there have been no objections, unlike the initial program, which Kubernetes (not my call) rejected, leaving us feeling like we couldn't clearly agree to it either.

Despite all the complaints I'm seeing elsewhere, I would actually say this updated program seems very good.

I am however wary of future changes and already working on multi-vendor image hosting for the Kubernetes project so I think we still need to strongly consider our long term options.

BenTheElder commented 1 year ago

The original issue (rate limiting) should be resolved now by way of participation in the revamped Docker Open Source program.

We'll still look at if we should migrate to registry.k8s.io or elsewhere in a follow-up, but I think we can close this one for now.