ko-build / ko

Build and deploy Go applications
https://ko.build
Apache License 2.0

Add `pkg/v1/kind` for writing images to KinD #149

Closed · mattmoor closed this issue 4 years ago

mattmoor commented 4 years ago

I think we want something like: https://github.com/google/go-containerregistry/tree/master/pkg/v1/daemon

... but to write it directly to KinD.

cc @BenTheElder for pointers on the best API.

I'd love to get some mechanism like this so that we can support something like: KO_DOCKER_REPO=kind.local
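
A minimal sketch, purely hypothetical, of the shape such a package could take, modeled on `daemon.Write`; neither the `kind` package nor this `Write` function exists in go-containerregistry:

```go
// Hypothetical sketch only: a pkg/v1/kind analogue of pkg/v1/daemon.
// None of these identifiers exist in go-containerregistry today.
package kind

import (
	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
)

// Write would load img into every node of the named kind cluster under tag:
// serialize the image with pkg/v1/tarball, stream it into containerd on each
// node the way `kind load` does, and return the loaded reference.
func Write(clusterName string, tag name.Tag, img v1.Image) (string, error) {
	return tag.String(), nil // stub: the real work would happen here
}
```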

mattmoor commented 4 years ago

cc @jonjohnsonjr @ImJasonH too

imjasonh commented 4 years ago

Noob question: does KinD not expose its daemon or a local registry in some way? It'd be nice to just use v1/daemon and/or v1/remote.

mattmoor commented 4 years ago

Doesn't sound like it.

BenTheElder commented 4 years ago

you can run a local registry, though we don't have one OOTB yet: https://kind.sigs.k8s.io/docs/user/local-registry/

still figuring out the right approach for that (would love suggestions!); registries perform much better...

WRT "exposing the daemon": the nodes currently run containerd as the runtime, so we support `kind load ...` commands to pipe into ctr for import on the nodes. https://kind.sigs.k8s.io/docs/user/quick-start/#loading-an-image-into-your-cluster

load is less performant but kinda neat because you can load images with any registry etc...

The other reason for having tooling beyond something like minikube docker-env or whatever is that we support multi-node and multi-cluster, and you can't just build straight to the daemon when there are N of them, even if the nodes ran dockerd :upside_down_face:

mattmoor commented 4 years ago

If we wanted to write to ctr on the nodes the same way kind load does from the library, how would we do it?

BenTheElder commented 4 years ago

> If we wanted to write to ctr on the nodes the same way kind load does from the library, how would we do it?

https://github.com/kubernetes-sigs/kind/blob/92cca75ba6efdb60c0a2ea8ffb3691b2e7bbbe37/pkg/cluster/nodeutils/util.go#L102

node.Command roughly wraps `docker exec <node-name> command args...` (also works with podman instead of docker, experimentally...)
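
As a rough illustration (not kind's actual API), the per-node import boils down to streaming an image tarball into `ctr` via `docker exec`. A minimal Go sketch, assuming a node named `kind-control-plane` and the `k8s.io` containerd namespace used by the linked kind code:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os/exec"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/random"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

// loadIntoNode pipes a docker-save-style tarball into containerd on one node,
// roughly what kind's node.Command wrapper around `docker exec` ends up doing.
func loadIntoNode(node string, tar io.Reader) error {
	// containerd keeps Kubernetes-visible images under the k8s.io namespace.
	cmd := exec.Command("docker", "exec", "-i", node,
		"ctr", "--namespace=k8s.io", "images", "import", "-")
	cmd.Stdin = tar
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("ctr import failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// A throwaway image just to exercise the pipeline.
	img, err := random.Image(1024, 1)
	if err != nil {
		log.Fatal(err)
	}
	tag, err := name.NewTag("kind.local/example:dev")
	if err != nil {
		log.Fatal(err)
	}
	// Stream the tarball straight into the node without touching disk.
	pr, pw := io.Pipe()
	go func() { pw.CloseWithError(tarball.Write(tag, img, pw)) }()
	if err := loadIntoNode("kind-control-plane", pr); err != nil {
		log.Fatal(err)
	}
}
```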

imjasonh commented 4 years ago

It seems reasonable to me to have KinD folks (and any interested third parties) hack on this code in a separate repo, possibly in KinD's repo, using v1.Image etc. interfaces, then contribute whatever they settle on back to this repo when/if it has settled down.

We don't have much bandwidth to review changes and support code related to new technologies we're not familiar with, and there's no real reason KinD support has to live in this repo anyway AFAIK.

If you find there are changes you need to make to internal packages to support KinD in another repo, we can talk those through, no problem.

afbjorklund commented 4 years ago

For minikube we added a "cruntime" facade, to hide this big old hole in the CRI specification. https://github.com/kubernetes/minikube/blob/master/pkg/minikube/cruntime/cruntime.go#L85

Note that you might need an image namespace. (ctr -n=k8s.io)
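
For reference, the facade idea looks roughly like the sketch below; the names are illustrative, not minikube's actual API. The point is that CRI has no "load image from archive" call, so each runtime needs its own command:

```go
// Illustrative sketch of a cruntime-style facade (names are hypothetical).
package cruntime

// Manager hides the per-runtime "load an image" command behind one interface.
type Manager interface {
	// LoadImage imports a local image tarball into the runtime's store.
	LoadImage(tarballPath string) error
}

// Containerd loads via ctr; note the k8s.io namespace so kubelet can see it.
type Containerd struct{ Run func(cmd string, args ...string) error }

func (c Containerd) LoadImage(path string) error {
	return c.Run("ctr", "-n=k8s.io", "images", "import", path)
}

// Docker loads via the docker CLI.
type Docker struct{ Run func(cmd string, args ...string) error }

func (d Docker) LoadImage(path string) error {
	return d.Run("docker", "load", "-i", path)
}
```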

BenTheElder commented 4 years ago

that's more or less what happens in the code linked above 🙃


afbjorklund commented 4 years ago

Biggest problem now seems to be the missing digests (#702)

afbjorklund commented 4 years ago

Another approach (instead of a registry) is to nuke the entire container storage from orbit...

That is, to replace the container runtime internals with archived unpacked content.

Pros: performance

Cons: unportable

I'm still not sure if it is a feature or a liability, but it does boot faster (than load).

https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4

afbjorklund commented 4 years ago

> The other reason for having tooling beyond something like minikube docker-env or whatever is that we support multi-node and multi-cluster, and you can't just build straight to the daemon when there are N of them, even if the nodes ran dockerd

The awesome workaround for multi-node minikube involved the same hack and running a for loop. I suppose the sane way of handling it would be deploying a registry, more similar to a real k8s cluster. Unfortunately we only have an internal registry as an addon; external is left as an exercise.

https://minikube.sigs.k8s.io/docs/tasks/registry/

afbjorklund commented 4 years ago

Did I mention that localhost:5000 is horrible? Tunneling it with a DaemonSet doubly so. :nauseated_face:

BenTheElder commented 4 years ago

> The awesome workaround for multi-node minikube involved the same hack and running a for loop. I suppose the sane way of handling it would be deploying a registry, more similar to a real k8s cluster. Unfortunately we only have an internal registry as an addon; external is left as an exercise.

I meant that this doesn't really help with something like docker build straight to the target node, which is certainly cheaper in a single-node, single-cluster world. Since you're going to need to do a copy to each of the nodes, it is not going to be better than for ... nodes { load(image) } using save / import etc. (actually worse, as we can do the save once).

kind load ... does the loop of course (see above).
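
A sketch of that loop, reusing the hypothetical `loadIntoNode` helper from the earlier sketch (kind's real implementation lives in its nodeutils package, and the node list would come from something like `kind get nodes`):

```go
// Add "os" and v1 "github.com/google/go-containerregistry/pkg/v1" to the
// earlier sketch's imports for this to compile alongside it.
func loadIntoCluster(nodes []string, ref name.Reference, img v1.Image) error {
	// Serialize the image exactly once...
	tmp, err := os.CreateTemp("", "image-*.tar")
	if err != nil {
		return err
	}
	tmp.Close()
	defer os.Remove(tmp.Name())
	if err := tarball.WriteToFile(tmp.Name(), ref, img); err != nil {
		return err
	}
	// ...then import that single archive into each node in turn.
	for _, node := range nodes {
		f, err := os.Open(tmp.Name())
		if err != nil {
			return err
		}
		err = loadIntoNode(node, f)
		f.Close()
		if err != nil {
			return err
		}
	}
	return nil
}
```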

Registry performs better and is more similar to a real cluster. Not sure why localhost:5000 is "horrible" ...

I wouldn't replace the CRI storage at runtime (that's asking for problems), but we do load things into it before runtime and that works very well :-)

afbjorklund commented 4 years ago

> Registry performs better and is more similar to a real cluster. Not sure why localhost:5000 is "horrible" ...

The proper solution is to deploy a registry that is not "insecure", to avoid this implicitly trusted location.

It's fine to use port 5000 for local testing, similar to 2375 for Docker (rather than using port 2376). But as a way to access your cluster, to tunnel 5000 from the laptop to the master node? Eww.

Anyway, minikube made a conscious decision to provide direct docker (and podman) access (env).

tliron commented 4 years ago

As long as we are considering workarounds and hacks, perhaps you might enjoy my Kubernetes Registry Spooler. Internal registries are hard.

afbjorklund commented 4 years ago

> As long as we are considering workarounds and hacks

Unfortunately, these are the actual "solutions" provided...

imjasonh commented 4 years ago

I feel like we may have gotten off-topic here. Is there a specific request for this repository? If not, I'd like to close this issue.

If someone wants to prototype kind support for v1.Images in their own repo, I'd be interested to see it.

mattmoor commented 4 years ago

I'm going to transfer the issue to ko instead because that is the context in which this is really interesting.

BenTheElder commented 4 years ago

> It's fine to use port 5000 for local testing, similar to 2375 for Docker (rather than using port 2376). But as a way to access your cluster, to tunnel 5000 from the laptop to the master node? Eww.

The expectation with kind is that these clusters are local. There is better tooling for remote clusters than kind etc...

For local usage, loopback w/o a cert is de facto for most applications and seems like a non-issue.

Docker usage without auth is more powerful anyhow. If you do that in a dind-type cluster you've given root. foo docker-env with a "VM-less" cluster is just unauthed root...

Back on topic though, @mattmoor please let me know if you have further questions, and feel free to file issues w/ kind, email me etc.

I don't think we should implement v1.Image in the kind repo because go-containerregistry depends on k/k, and that's a total no-go dependency for us.

mattmoor commented 4 years ago

> because go-containerregistry depends on k/k

Not anymore; we forked out the part we needed, and it should only have been pkg/authn/k8schain at that; anything else would prune the dep.

Honestly, it may be worth having a convo about how we put this together. ko is where this originally came up, but kaniko and buildpacks both use this library (among others), so I could see them wanting to take advantage of the same library to interface with kind for local dev.

In the meantime we should probably just document that, when working with KinD, folks should use a registry like DockerHub. 😬

tstromberg commented 4 years ago

A tangentially related KEP to be aware of: https://github.com/kubernetes/enhancements/pull/1757

markusthoemmes commented 4 years ago

Will this be solved via #180?