mattmoor closed this issue 4 years ago
cc @jonjohnsonjr @ImJasonH too
Noob question: does KinD not expose its daemon or a local registry in some way? It'd be nice to just use v1/daemon and/or v1/remote.
Doesn't sound like it.
you can run a local registry, though we don't have one OOTB yet https://kind.sigs.k8s.io/docs/user/local-registry/
still figuring out the right approach for that (would love suggestions!), registries perform much better...
WRT "exposing the daemon": the nodes currently run containerd for the runtime, so we support `kind load ...` commands that pipe into `ctr` for import on the nodes. https://kind.sigs.k8s.io/docs/user/quick-start/#loading-an-image-into-your-cluster
load is less performant but kinda neat because you can load images with any registry etc...
The other reason for having tooling beyond something like `minikube docker-env` or whatever is because we support multi-node and multi-cluster, and you can't just build straight to the daemon when there are N of them, even if the nodes ran dockerd :upside_down_face:
If we wanted to write to `ctr` on the nodes the same way `kind load` does, from the library, how would we do it?
> If we wanted to write to `ctr` on the nodes the same way `kind load` does, from the library, how would we do it?
`node.Command` roughly wraps `docker exec <node-name> <command> <args>...` (also podman instead of docker, experimentally...)
It seems reasonable to me to have KinD folks (and any interested third parties) hack on this code in a separate repo, possibly in KinD's repo, using `v1.Image` etc. interfaces, then contribute anything they settle on back to this repo when/if it's settled down.
We don't have much bandwidth to review changes and support code related to new technologies we're not familiar with, and there's no real reason KinD support has to live in this repo anyway AFAIK.
If you find there are changes you need to make to internal packages to support KinD in another repo, we can talk those through, no problem.
For minikube we added a "cruntime" facade, to hide this big old hole in the CRI specification. https://github.com/kubernetes/minikube/blob/master/pkg/minikube/cruntime/cruntime.go#L85
Note that you might need an image namespace (`ctr -n=k8s.io`).
that's more or less what happens in the code linked above 🙃
Biggest problem now seems to be the missing digests (#702)
Another approach (to a registry) is to nuke the entire container storage from orbit...
That is, to replace the container runtime internals with archived unpacked content.
Pros: performance
Cons: unportable
I'm still not sure if it is a feature or a liability, but it does boot faster (than load).
> The other reason for having tooling beyond something like `minikube docker-env` or whatever is because we support multi-node and multi-cluster, and you can't just build straight to the daemon when there are N of them even if the nodes ran dockerd
The awesome workaround for multi-node minikube involved the same hack and running a for loop... I suppose the sane way of handling it would be deploying a registry, more similar to a real k8s cluster. Unfortunately we only have an internal registry as an addon; external is left as an exercise.
Did I mention that `localhost:5000` is horrible? Tunneling it with a DaemonSet doubly so. :nauseated_face:
> The awesome workaround for multi-node minikube involved the same hack and running a for loop.. I suppose the sane way of handling it would be deploying a registry, more similar to a real k8s cluster. Unfortunately we only have an internal registry as an addon, external is left as an exercise
I meant that this doesn't really help with something like `docker build` straight to the target node, which is certainly cheaper in a single-node, single-cluster world. Since you're going to need to do a copy to each of the nodes, it is not going to be better than `for ... nodes { load(image) }` using save/import etc. (actually worse, as we can do the save once).
`kind load ...` does the loop of course (see above).
Registry performs better and is more similar to a real cluster. Not sure why `localhost:5000` is "horrible"...
I wouldn't replace the CRI storage at runtime, that's asking for problems, but we do load things into it before runtime and that works very well :-)
> Registry performs better and is more similar to a real cluster. Not sure why `localhost:5000` is "horrible" ...
The proper solution is to deploy a registry that is not "insecure", to avoid this implicitly trusted location.
It's fine to use port 5000 for local testing, similar to 2375 for Docker (rather than using port 2376). But as a way to access your cluster, tunneling 5000 from the laptop to the master node? Eww.
Anyway, minikube made a conscious decision to provide direct docker (and podman) access (`env`).
As long as we are considering workarounds and hacks, perhaps you might enjoy my Kubernetes Registry Spooler. Internal registries are hard.
> As long as we are considering workarounds and hacks
Unfortunately, these are the actual "solutions" provided...
I feel like we may have gotten off-topic here. Is there a specific request for this repository? If not, I'd like to close this issue.
If someone wants to prototype kind support for `v1.Image`s in their own repo, I'd be interested to see it.
I'm going to transfer the issue to ko instead because that is the context in which this is really interesting.
> It's fine to use port 5000 for local testing, similar to 2375 for Docker (rather than using port 2376) But as a way to access your cluster, to tunnel 5000 from the laptop to the master node ? Eww.
The expectation with kind is that these clusters are local. There is better tooling for remote clusters than kind etc...
For local usage, loopback w/o a cert is de facto for most applications and seems like a non-issue.
Docker usage without auth is more powerful anyhow. If you do that in a dind-type cluster, you've given root. `foo docker-env` with a "VM-less" cluster is just unauthed root...
Back on topic though, @mattmoor please let me know if you have further questions, and feel free to file issues w/ kind, email me etc.
I don't think we should implement `v1.Image` in the kind repo, because GoogleContainerTools depends on k/k; that's a total no-go dependency for us.
> because googlecontainer tools depends on k/k

Not anymore; we forked out the part we needed, and it should only have been `pkg/authn/k8schain` at that. Anything else would prune the dep.
Honestly, it may be worth having a convo about how we put this together. `ko` is where this originally came up, but `kaniko` and `buildpacks` both use this library (among others), so I could see them wanting to take advantage of the same library to interface with kind for local dev.
In the meantime we should probably just document that when working with KinD folks should use a registry like DockerHub. 😬
A tangentially related KEP to be aware of: https://github.com/kubernetes/enhancements/pull/1757
Will this be solved via #180 ?
I think we want something like https://github.com/google/go-containerregistry/tree/master/pkg/v1/daemon ... but to write it directly to KinD.
cc @BenTheElder for pointers on the best API.
I'd love to get some mechanism like this so that we can support something like:
KO_DOCKER_REPO=kind.local