tilt-dev / tilt

Define your dev environment as code. For microservice apps on Kubernetes.
https://tilt.dev/
Apache License 2.0
7.69k stars 303 forks

Support k3s in --docker mode #3654

Open leoluk opened 4 years ago

leoluk commented 4 years ago

k3s can use a local Docker daemon in --docker mode.

This is similar to Docker for Desktop mode, where built images are immediately available in the cluster.

Currently, this isn't properly detected.

nicks commented 4 years ago

Hmmm....can you link to documentation on this?

This is the first I've heard of this, and we do talk to the K3d team about this sort of interop from time to time.

nicks commented 4 years ago

(In general, what we would do in this case is ask the K3s team to provide some sort of protocol for determining when K3s is in this mode, like a ConfigMap created by the cluster, and then Tilt would read this ConfigMap...so I would need to read more about how they're advertising/documenting this)

leoluk commented 4 years ago

Here's the documentation: https://rancher.com/docs/k3s/latest/en/advanced/#using-docker-as-the-container-runtime

By the way, I tried to pretend that it's a docker-desktop cluster by renaming the context, and it works flawlessly!

leoluk commented 4 years ago

Reproduction instructions:

curl -sfL https://get.k3s.io | sh -s - --docker
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
sed -i 's/  name: default/  name: docker-desktop/g' $KUBECONFIG
sed -i 's/current-context: default/current-context: docker-desktop/g' $KUBECONFIG
sed -i 's/cluster: default/cluster: docker-desktop/g' $KUBECONFIG

tilt up

nicks commented 4 years ago

Hmmm....those instructions are about setting up K3s on a Node.

I'm not sure this is a safe or recommended way to run K3s for local development, but let me point some people in the K3s project at this issue and see what they say.

We usually see people using K3d to run K3s for local development, see our instructions here: https://github.com/tilt-dev/k3d-local-registry

leoluk commented 4 years ago

Perhaps the real feature request here is being able to disable image pushing?

Skaffold has a local override: https://skaffold.dev/docs/environment/local-cluster/#manual-override
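For reference, that Skaffold override lives in skaffold.yaml; a minimal sketch (the apiVersion varies by Skaffold release):

```yaml
# skaffold.yaml sketch: force local builds and skip the push,
# regardless of which kube-context is active.
apiVersion: skaffold/v2beta26
kind: Config
build:
  local:
    push: false
```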

nicks commented 4 years ago

Ya, here's an overview of some of the problems with this approach: https://minikube.sigs.k8s.io/docs/drivers/none/

You can disable image pushing today in Tilt with custom_build(disable_push): https://docs.tilt.dev/api.html#api.custom_build
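For anyone landing here, a minimal Tiltfile sketch of that workaround (the image name and build command are placeholders; this assumes the cluster shares the local Docker daemon, as with k3s --docker):

```python
# Tiltfile sketch: build via the local Docker daemon and skip the push.
custom_build(
    'example-image',                      # hypothetical image ref
    'docker build -t $EXPECTED_REF .',    # Tilt supplies $EXPECTED_REF to the command
    deps=['.'],
    disable_push=True,
)
```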

I don't think we want to support a disable-push option as easy as Skaffold's. I worry that adding options for fundamentally unsafe modes leads to checkboxes that kill (see: https://limi.net/checkboxes), i.e., situations where it's way too easy to enable a bunch of modes that interact in weird and broken ways (and in fact we've had complaints about this already w/r/t custom_build's disable_push).

leoluk commented 4 years ago

Ya, here's an overview of some of the problems with this approach: https://minikube.sigs.k8s.io/docs/drivers/none/

It's not ideal for local development because it pollutes the local Docker daemon, but there are valid use cases for it (like a CI environment, which is what we use it for). It has the big advantage of being very fast since builds are local, and it shaves precious seconds off the build time that would be spent pushing to and pulling from a registry.

You can disable image pushing today with Tilt with custom_build(disable_push) https://docs.tilt.dev/api.html#api.custom_build

We're building an open source project that comes with a Tiltfile. We don't want any environment-specific config in it, so that it works out of the box with whatever local cluster other contributors are using. Falling back to custom_build would also mean losing the conveniences of docker_build (dependency resolution, etc).

(The same goes for allow_k8s_contexts: where would config specific to my local environment go?)

I don't think we want to support a disable-push option as easy as Skaffold's. I worry that adding options for fundamentally unsafe modes leads to checkboxes that kill (see: https://limi.net/checkboxes), i.e., situations where it's way too easy to enable a bunch of modes that interact in weird and broken ways (and in fact we've had complaints about this already w/r/t custom_build's disable_push).

In terms of security, it makes no difference - you can easily break out of kind or k3d, they're not security boundaries.

Agreed about the dangers of excess configurability, but doesn't this particular checkbox - certain clusters not requiring pushes - already exist? Why only support it for minikube and docker-desktop? What about custom minikube deployments?

nicks commented 4 years ago

hmmm...the minikube doc above says "Most users of this driver should consider the newer Docker driver, as it is significantly easier to configure and does not require root access. The ‘none’ driver is recommended for advanced users only.", and lists issues of "Decreased security", "Decreased reliability", and "Data loss".

But you say: "In terms of security, it makes no difference" - I'm having trouble reconciling this with the minikube documentation. Is there a document you're basing that on, or is this your own independent security analysis?

But if you understand the security risk, I think there are two paths forward:

  • in the short term, you can rename the kubectl context to docker-for-desktop, and trick tilt, right?

  • in the longer term, I could imagine Tilt supporting a config map like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tilt-cluster-config
  namespace: kube-public
data:
  useLocalDockerDaemon: "true"

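One wrinkle with that scheme: ConfigMap data values are always strings, so a client would have to parse the flag. A minimal sketch of that interpretation (the ConfigMap name and key come from the proposal above; the parsing rules are my assumption):

```python
def use_local_docker_daemon(configmap_data: dict) -> bool:
    """Interpret the hypothetical tilt-cluster-config ConfigMap.

    ConfigMap data values arrive as strings ("true"/"false"),
    so the flag must be parsed, not read as a boolean.
    """
    return configmap_data.get("useLocalDockerDaemon", "false").lower() == "true"

print(use_local_docker_daemon({"useLocalDockerDaemon": "true"}))  # True
print(use_local_docker_daemon({}))                                # False
```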
leoluk commented 4 years ago

hmmm...the minikube doc above says "Most users of this driver should consider the newer Docker driver, as it is significantly easier to configure and does not require root access. The ‘none’ driver is recommended for advanced users only.", and lists issues of "Decreased security", "Decreased reliability", and "Data loss".

minikube in --driver=none mode essentially installs a single-node k8s cluster on your host using kubeadm in a very non-hermetic fashion. When I tried it in a clean VM, I could get it to work with tilt using the same docker-desktop trick, but it overwrote my local .kube/config and left behind dozens of files after running minikube delete, including a defunct kubelet.service in a place where it clearly doesn't belong (/lib).

Can definitely see why they declare it a dangerous feature :-)

k3s, on the other hand, is explicitly designed to run directly on a host and it takes great care to namespace all of its components. Without --docker, it even brings its own containerd that peacefully co-exists with whatever else is running there, and it leaves no traces after running k3s-uninstall.sh. With --docker, it's less hermetic and won't clean up old containers/images but still doesn't break anything or overwrite existing configs.

In a different project with Bazel, we just use plain k3s for developing in a VM without any Docker daemon running. Images are loaded straight into containerd and the deployments are updated with the new hashes:

https://github.com/leoluk/NetMeta/blob/6fe1e53651ed32d3582eca8ce80ffd4c22e6a40a/scripts/build_containers.sh#L14

But you say: "In terms of security, it makes no difference" - I'm having trouble reconciling this with the minikube documentation. Is there a document you're basing that on, or is this your own independent security analysis?

Last time I checked, "Kubernetes in Docker" tools like k3d, kind, and minikube in Docker mode have to run the kubelet in a privileged container with root privileges. Access to the k8s API means root privileges on the host.

There's some work going on to get k8s running in user namespaces but it's still a work in progress: https://github.com/rootless-containers/usernetes/issues/42

The only "safe" runtime is normal minikube with VMs.

But if you understand the security risk, I think there are two paths forward:

  • in the short term, you can rename the kubectl context to docker-for-desktop, and trick tilt, right?

Yes, that works as expected.

Sounds perfect! (and thanks for responding so quickly!)