kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Use lima as a driver #12508

Closed Patrick0308 closed 2 years ago

Patrick0308 commented 3 years ago

Steps to reproduce the issue: I want to use lima instead of Docker on macOS. Can I use lima as a minikube driver?


afbjorklund commented 3 years ago

There are some third parties already doing this: https://github.com/abiosoft/colima <-- now using k3s

Ultimately we might want to have a QEMU driver for Mac, or reuse the current libvirt driver called "kvm2"

The main problem is with allocating the network; currently lima is doing the ssh hacks that we rejected earlier.


Note that the default container engine in lima is containerd/buildkitd, which is not yet supported by minikube (or kind)

So you will need to change that to either docker or podman in order to run minikube. There are example yaml files for both...
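For instance, with the lima example templates (a hedged sketch: docker.yaml is the template used later in this thread, and a corresponding podman.yaml is assumed to exist in the same examples directory):

```bash
# Start a lima instance from a template that ships docker (or podman)
# instead of the default containerd/buildkitd engine.
limactl start ./examples/docker.yaml    # rootless docker inside the VM
limactl start ./examples/podman.yaml    # podman variant (assumed to exist)
```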

afbjorklund commented 3 years ago

I think Docker's HyperKit is already deprecated, being replaced with Apple's Virtualization.framework

Unfortunately the needed QEMU patches for arm64 (M1) are not merged yet, but it works on amd64...

There is an -accel hvf flag when starting it, that works the same way as -accel kvm does on Linux.
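For reference, a minimal sketch of the two acceleration flags (the machine type, memory size, and disk image here are placeholders, not taken from lima's actual invocation):

```bash
# macOS host: hardware acceleration via Hypervisor.framework
qemu-system-aarch64 -machine virt -cpu host -accel hvf -m 2048 \
  -drive file=disk.img,if=virtio

# Linux host: the equivalent flag uses KVM
qemu-system-x86_64 -machine q35 -cpu host -accel kvm -m 2048 \
  -drive file=disk.img,if=virtio
```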

See upstream lima for how to build a patched qemu version; there is a third-party brew repository for it.

Patrick0308 commented 3 years ago

@afbjorklund Ok, I get it.

Unfortunately the needed QEMU patches for arm64 (M1) are not merged yet

Can you link the issue or pr here? I am using the M1 mac now.

afbjorklund commented 3 years ago

They are not available in the QEMU releases yet, so one needs a special qemu branch:

https://github.com/lima-vm/lima#installation

Manual installation steps (required for ARM Mac)

brew install simnalamburt/x/qemu-hvf

afbjorklund commented 3 years ago

This app is also looking promising, as a GUI instead of VirtualBox: https://mac.getutm.app/

There is also a cross-platform version called AQEMU: https://github.com/tobimensch/aqemu


Note that with these solutions, you end up with a virtual machine (VM) running on your Mac.

So you still need to configure kubectl on the Mac to talk to the minikube cluster running inside the VM...
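One way to do that is to copy the kubeconfig out of the guest and point it at the VM's address (a sketch; the IP address, SSH user, and file names are placeholders, not something specified above):

```bash
# Copy the kubeconfig generated inside the VM to the Mac
scp user@192.168.64.5:.kube/config ~/.kube/minikube-vm.yaml

# Rewrite the API server address to one reachable from the host, then use it
kubectl --kubeconfig ~/.kube/minikube-vm.yaml config set-cluster minikube \
  --server=https://192.168.64.5:8443
kubectl --kubeconfig ~/.kube/minikube-vm.yaml get nodes
```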

afbjorklund commented 3 years ago

Using lima is similar to using WSL2 on Windows, it will handle the VM and network and host mounts for you.

Lima can be considered some sort of unofficial "macOS subsystem for Linux", or "containerd for Mac".

Even though it is intended for macOS, you can also run lima on Linux. This is good for developers, such as myself.

The mounted filesystems have the same performance problems as the other solutions, and the networking is limited.

afbjorklund commented 3 years ago

If you want to run multi-node or complex networking, or want full control over storage, I would recommend using a VM driver.

But for casual users, these drivers are handy. I call it the "Wizard of Oz" mode: pay no attention to the man behind the curtain!

medyagh commented 2 years ago

I am not familiar with lima, but if it is just like docker or podman, and can use standard OCI images, I would accept a contribution that adds it as another minikube driver.

afbjorklund commented 2 years ago

I am not familiar with lima, but if it is just like docker or podman

It is similar to WSL. It uses containerd/buildkitd by default, though.

See https://github.com/kubernetes-sigs/kind/issues/2317 for a discussion about nerdctl (for a "containerd-in-containerd" driver, similar to DinD and CinP)

But Lima can use any distribution with any container runtime...

afbjorklund commented 2 years ago

The default docker installation in lima is rootless:

brew install lima

$ limactl start ./docker.yaml
$ export DOCKER_HOST=unix://$HOME/docker.sock

brew install minikube

$ minikube start

Exiting due to MK_USAGE: Container runtime must be set to "containerd" for rootless

$ minikube start --driver docker --container-runtime=containerd

Done! kubectl is now configured to use "docker" cluster and "default" namespace by default


So minikube works in lima when using the "docker" (or "podman") driver.

It is also possible to run minikube with the "none" driver (see #12926).

But currently there is no need to support lima as a minikube driver?

Instead it is used to provide a VM, similar to the VM of Docker Desktop.
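A quick way to confirm that the cluster inside lima is reachable from the host (a sketch; it assumes DOCKER_HOST still points at the forwarded socket and that `minikube start` completed as above):

```bash
# kubectl was already configured by `minikube start`
kubectl get nodes
kubectl get pods -A
minikube status
```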

niklassemmler commented 2 years ago

I've tried to follow the instructions above, but in my case it fails to create the control plane:

❯ limactl start https://raw.githubusercontent.com/lima-vm/lima/master/examples/docker.yaml

? Creating an instance "docker" Proceed with the default configuration
INFO[0000] Attempting to download the image from "https://cloud-images.ubuntu.com/impish/current/impish-server-cloudimg-arm64.img"  digest=
INFO[0001] Using cache "/Users/theuser/Library/Caches/lima/download/by-url-sha256/a9f81252e41821dac2357ea4c9b5a5a1c71526b41bc4473d6365fa3594b86dd9/data"
INFO[0001] [hostagent] Starting QEMU (hint: to watch the boot progress, see "/Users/theuser/.lima/docker/serial.log")
INFO[0001] SSH Local Port: 49783
INFO[0001] [hostagent] Waiting for the essential requirement 1 of 5: "ssh"
INFO[0058] [hostagent] The essential requirement 1 of 5 is satisfied
INFO[0058] [hostagent] Waiting for the essential requirement 2 of 5: "user session is ready for ssh"
INFO[0059] [hostagent] The essential requirement 2 of 5 is satisfied
INFO[0059] [hostagent] Waiting for the essential requirement 3 of 5: "sshfs binary to be installed"
INFO[0059] [hostagent] The essential requirement 3 of 5 is satisfied
INFO[0059] [hostagent] Waiting for the essential requirement 4 of 5: "/etc/fuse.conf to contain \"user_allow_other\""
INFO[0059] [hostagent] The essential requirement 4 of 5 is satisfied
INFO[0059] [hostagent] Waiting for the essential requirement 5 of 5: "the guest agent to be running"
INFO[0059] [hostagent] The essential requirement 5 of 5 is satisfied
INFO[0059] [hostagent] Mounting "/Users/theuser"
INFO[0059] [hostagent] Mounting "/tmp/lima"
INFO[0059] [hostagent] Waiting for the optional requirement 1 of 1: "user probe 1/1"
INFO[0059] [hostagent] Forwarding "/run/user/501/docker.sock" (guest) to "/Users/theuser/.lima/docker/sock/docker.sock" (host)
INFO[0059] [hostagent] Forwarding "/run/lima-guestagent.sock" (guest) to "/Users/theuser/.lima/docker/ga.sock" (host)
INFO[0059] [hostagent] Not forwarding TCP 127.0.0.53:53
INFO[0059] [hostagent] Not forwarding TCP 0.0.0.0:22
INFO[0059] [hostagent] Not forwarding TCP [::]:22
INFO[0071] [hostagent] The optional requirement 1 of 1 is satisfied
INFO[0071] [hostagent] Waiting for the final requirement 1 of 1: "boot scripts must have finished"
INFO[0075] [hostagent] The final requirement 1 of 1 is satisfied
INFO[0075] READY. Run `limactl shell docker` to open the shell.
INFO[0075] To run `docker` on the host (assumes docker-cli is installed):
INFO[0075] $ export DOCKER_HOST=unix:///Users/theuser/.lima/docker/sock/docker.sock
INFO[0075] $ docker ...
❯ export DOCKER_HOST=unix:///Users/theuser/.lima/docker/sock/docker.sock
❯ minikube start --driver docker --container-runtime=containerd
😄  minikube v1.25.0 on Darwin 12.1 (arm64)
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.23.1 preload ...
    > preloaded-images-k8s-v16-v1...: 486.31 MiB / 486.31 MiB  100.00% 9.91 MiB
    > gcr.io/k8s-minikube/kicbase: 343.02 MiB / 343.02 MiB  100.00% 5.46 MiB p/
🔥  Creating docker container (CPUs=2, Memory=3861MB) ...
📦  Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
    ▪ kubelet.housekeeping-interval=5m
    ▪ kubelet.cni-conf-dir=/etc/cni/net.mk
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.13.0-27-generic
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: missing
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
        - 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'

stderr:
    [WARNING SystemVerification]: missing optional cgroups: hugetlb
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-27-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...

💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.13.0-27-generic
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: missing
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
        - 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'

stderr:
    [WARNING SystemVerification]: missing optional cgroups: hugetlb
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-27-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

❌  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.13.0-27-generic
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: missing
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
        - 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'

stderr:
    [WARNING SystemVerification]: missing optional cgroups: hugetlb
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-27-generic\n", err: exit status 1
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
🍿  Related issue: https://github.com/kubernetes/minikube/issues/4172
afbjorklund commented 2 years ago

Maybe it needs some special configuration to run with rootless docker? I haven't tried that myself.

There is some talk about cgroup delegation: https://rootlesscontaine.rs/getting-started/common/cgroup2/

https://minikube.sigs.k8s.io/docs/drivers/docker/#rootless-docker

The alternative would be to use the regular /var/run/docker.sock


sudo usermod -aG docker $USER

And modify these parts of the lima yaml file:

    # NOTE: you may remove the lines below, if you prefer to use rootful docker, not rootless
    systemctl disable --now docker
portForwards:
- guestSocket: "/run/user/{{.UID}}/docker.sock"
  hostSocket: "{{.Dir}}/sock/docker.sock"
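The host side then stays the same as before (a hedged sketch; the portForwards change to the system socket is assumed, and the forwarded host path is the one limactl prints when the instance starts):

```bash
# With the rootless provisioning removed and portForwards pointing at
# guestSocket "/var/run/docker.sock", the host still talks to the same
# forwarded socket under the instance directory:
export DOCKER_HOST=unix://$HOME/.lima/docker/sock/docker.sock
docker ps

# The containerd requirement only applied to rootless docker, so:
minikube start --driver docker
```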
eminaktas commented 2 years ago

This issue happens with the latest minikube, but with 1.24.0, it works fine. However, I tried @afbjorklund's suggestion with the newest minikube and it didn't work; I might be missing something.

The Docker socket is not usable from the host machine, but inside the VM it is usable for both the lima user and root.

error during connect: Get "http://%2FUsers%2FMACPC%2F.lima%2Fdocker%2Fsock%2Fdocker.sock/v1.24/containers/json": EOF

minikube v1.24.0 installation for macOS:

$ curl -LO https://storage.googleapis.com/minikube/releases/v1.24.0/minikube-darwin-amd64
$ sudo install minikube-darwin-amd64 /usr/local/bin/minikube
afbjorklund commented 2 years ago

I think you want to open an issue on https://github.com/lima-vm/lima for that; the ssh tunneling of the unix socket "should" work.

afbjorklund commented 2 years ago

Theoretically, one could write a howto on how to deploy rootful docker on Lima and then use that to run kubernetes-in-docker, i.e. use lima instead of docker-machine, and then use the kind/kic install on top of / inside that VM, similar to Docker Desktop.

But for the casual user, I think it would be much more straightforward to just start a virtual machine with Kubernetes on it? Currently minikube has issues with this (both the "none" and "ssh" drivers are broken), so I opted for using kubeadm directly:

https://github.com/lima-vm/lima/blob/master/examples/k8s.yaml

limactl start https://raw.githubusercontent.com/lima-vm/lima/master/examples/k8s.yaml

It is basically an executable version of the upstream documentation:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

Just that it chooses a distribution (ubuntu) and a runtime (containerd) and a network (flannel) for you automatically. Since Kubernetes is a toolbox (puzzle), you would otherwise have to choose an OS and a CRI and a CNI yourself...
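To verify that the kubeadm-based instance came up (a sketch; the instance name "k8s" follows lima's convention of naming the instance after the template file, and the kubeconfig path is the standard kubeadm location seen in the logs above):

```bash
# Run kubectl inside the guest against the admin kubeconfig written by kubeadm
limactl shell k8s sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
```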

Of course, you could also run minikube start and it would do the same kind of automatic component selection. You would get a different distribution and runtime and network, but you would still be able to run kubectl in the end.

eminaktas commented 2 years ago

Giving access to all users with sudo chmod 666 /var/run/docker.sock fixes the problem, but I think this is not the expected behavior. Giving the lima VM user permission on docker.sock should be enough. But somehow, sudo usermod -aG docker $USER doesn't take effect. It is like you have to start a new session like su - myUser.

root-docker.yaml:

```yaml
# Example to use Docker instead of containerd & nerdctl
# $ limactl start ./root-docker.yaml
# $ limactl shell docker docker run -it -v $HOME:$HOME --rm alpine

# To run `docker` on the host (assumes docker-cli is installed):
# $ export DOCKER_HOST=$(limactl list docker --format 'unix://{{.Dir}}/sock/docker.sock')
# $ docker ...

# This example requires Lima v0.8.0 or later
images:
# Hint: run `limactl prune` to invalidate the "current" cache
- location: "https://cloud-images.ubuntu.com/impish/current/impish-server-cloudimg-amd64.img"
  arch: "x86_64"
- location: "https://cloud-images.ubuntu.com/impish/current/impish-server-cloudimg-arm64.img"
  arch: "aarch64"
mounts:
- location: "~"
- location: "/tmp/lima"
  writable: true
# containerd is managed by Docker, not by Lima, so the values are set to false here.
containerd:
  system: false
  user: false
provision:
- mode: system
  script: |
    #!/bin/sh
    sed -i 's/host.lima.internal.*/host.lima.internal host.docker.internal/' /etc/hosts
- mode: system
  script: |
    #!/bin/bash
    set -eux -o pipefail
    command -v docker >/dev/null 2>&1 && exit 0
    export DEBIAN_FRONTEND=noninteractive
    curl -fsSL https://get.docker.com | sh
    # You can active it here but doesn't change the behavior.
    # usermod -aG docker lima # lima is default user for my system.
- mode: user
  script: |
    #!/bin/bash
    set -eux -o pipefail
    sudo usermod -aG docker $USER
portForwards:
- guestSocket: "/var/run/docker.sock"
  hostSocket: "{{.Dir}}/sock/docker.sock"
message: |
  To run `docker` on the host (assumes docker-cli is installed):
  $ export DOCKER_HOST=unix://{{.Dir}}/sock/docker.sock
  $ docker ...
```

If you try the YAML example above, execute limactl shell root-docker. If you check the current session's groups with the groups command, you won't see docker. But if you run sudo su - lima first and then run groups, you'll see the docker group.

afbjorklund commented 2 years ago

It is like you have to start a new session like su - myUser

You need a new login session, or to use newgrp docker (which starts a new shell). Otherwise it keeps the old groups.

"Rebooting", as in stopping and starting also works (as usual).

Eventually it should be possible to run the KIC installation also in rootless docker, but it might require some tweaks.

niklassemmler commented 2 years ago

This issue happens with the latest minikube, but with 1.24.0, it works fine.

Went back to minikube 1.24 and indeed the issue disappeared. Many thanks @eminaktas.

Is mounting from lima supported at the moment? I can mount my home directory into the lima VM (limactl shell docker ls ~ shows the content from the host). However, when I SSH into minikube the directory is not mounted, and when I run minikube mount I receive an error.

❯ minikube mount "/Users/theuser/Documents/code/minikube:/data" &
Mounting host path /Users/theuser/Documents/code/minikube into VM as /data ...
    ▪ Mount type:
    ▪ User ID:      docker
    ▪ Group ID:     docker
    ▪ Version:      9p2000.L
    ▪ Message Size: 262144
    ▪ Permissions:  755 (-rwxr-xr-x)
    ▪ Options:      map[]
    ▪ Bind Address: 127.0.0.1:60236
🚀  Userspace file server: ufs starting

❌  Exiting due to GUEST_MOUNT: mount with cmd /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=60236,trans=tcp,version=9p2000.L fe80::1 /data" : /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=60236,trans=tcp,version=9p2000.L fe80::1 /data": Process exited with status 32
stdout:

stderr:
mount: /data: permission denied.

╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                        │
│    😿  If the above advice does not help, please let us know:                                                          │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                                                        │
│                                                                                                                        │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
│    Please also attach the following file to the GitHub issue:                                                          │
│    - /var/folders/m2/_1_4fj9d5554nslmpq2n_bq00000gn/T/minikube_mount_d0fae3a7cd3e55c27614b8893dcb4c2a9f36e8d6_0.log    │
│                                                                                                                        │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

The path /Users/theuser/Documents/code/minikube exists on the host and in the lima VM. (Logs are here just in case: https://paste.ee/p/nsL7y)

afbjorklund commented 2 years ago

You can add volume mounts from the docker host (the VM) to the minikube node (the container).

I forget the syntax right now, but it should be on the docker driver page (or under volumes).

EDIT: minikube start --mount

The volumes go in --mount-string

niklassemmler commented 2 years ago

If it works with minikube start --mount --mount-string=..., shouldn't it also work with minikube mount? I had some problems with the former and docker.

EDIT: Just learned that the former runs via the driver and the latter via 9P.

niklassemmler commented 2 years ago

Just tried it

❯ minikube start --mount-string="/Users/theuser/Documents/code/minikube:/data" --mount --driver=docker --container-runtime=containerd
😄  minikube v1.24.0 on Darwin 12.1 (arm64)
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=6, Memory=5500MB) ...
📦  Preparing Kubernetes v1.22.3 on containerd 1.4.9 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

The folder is created, but empty:

❯ minikube ssh
Last login: Sun Jan 23 20:55:45 2022 from 192.168.49.1
docker@minikube:~$ ls /data

I have plenty of files under this folder on my host, and the path is the same in the lima VM.

afbjorklund commented 2 years ago

Note that /data is pre-defined by minikube, so it might be over-mounted from the minikube volume.

https://minikube.sigs.k8s.io/docs/handbook/persistent_volumes/

https://github.com/kubernetes/minikube/blob/master/deploy/kicbase/automount/minikube-automount

Check the location with findmnt /data.
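Concretely, something like this (a sketch; /mydata is just an example target that minikube does not pre-mount):

```bash
# See what is actually mounted at /data inside the node
minikube ssh -- findmnt /data

# Retry the host mount with a target outside the pre-defined paths
minikube start --mount \
  --mount-string="/Users/theuser/Documents/code/minikube:/mydata" \
  --driver=docker --container-runtime=containerd
```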

niklassemmler commented 2 years ago

Awesome! Changing the dir to /mydata works. Everything works (minikube 1.24). 👍

Also, for anyone else using an ARM architecture, keep in mind that the default docker configuration doesn't support cross-architecture (amd64) images out of the box, so you may not get amd64 images to work. I finally found a hint on how to deal with this here: https://github.com/lima-vm/lima/issues/42#issuecomment-916561621
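For reference, one common way to get foreign-architecture images running under docker on arm64 (a hedged sketch; I have not verified that this is exactly what the linked comment describes):

```bash
# Register QEMU user-mode emulators via binfmt_misc inside the docker VM
docker run --privileged --rm tonistiigi/binfmt --install amd64

# amd64 images should then run (slowly, under emulation)
docker run --rm --platform linux/amd64 alpine uname -m
```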

schaeferto commented 2 years ago

Even if this is closed: I had to downgrade minikube from 1.25.x to 1.24 to get this working. I am using the example/docker.yaml for starting the lima VM. Do you know the reason, or better, a solution to use lima-vm as the driver for the latest minikube version?

The issue I encountered with the latest minikube version is exactly the same as @metaswirl explained in https://github.com/kubernetes/minikube/issues/12508#issuecomment-1017723433.

Or would you recommend using colima with k8s, since it integrates k8s as a maintained feature?

afbjorklund commented 2 years ago

Do you know the reason, or better, a solution to use lima-vm as the driver for the latest minikube version?

The long-term (well, current) plan is to provide a similar driver for minikube (to lima's), and then run the minikube OS with it.

Sadly minikube has some issues running under lima (#12926), so it is recommended to run kubeadm (k8s.yaml) instead.

Or would you recommend using colima with k8s, since it integrates k8s as a maintained feature?

I'm not sure why you would use colima; it seems like a limited version of Rancher Desktop?

But it is theoretically possible to install docker using lima, and then use minikube's docker driver. Or podman, the same way. It is just not as fully working or supported as with Docker Desktop...

I think the first step would be to have some better documentation in lima on how these work.

schaeferto commented 2 years ago

Thanks for the quick reply.

Maybe some background ;-) ...

My use case is just to have some kind of k8s playground on my local machine. So what I am looking for is an easy way to run kubectl on my machine.

So after migrating from Docker Desktop to lima (with the ./docker.yaml config), I am looking for an easy way to install a k8s cluster. That's why I tried out minikube and failed at first.

What I understood from your answer and https://github.com/kubernetes/minikube/issues/12926 is that minikube deploys the k8s cluster in a docker container, which creates multiple additional docker containers (so Docker in Docker). And the lima-vm config ./k8s.yaml will solve the DinD problem by installing the k8s cluster directly inside the lima-vm, correct?

So maybe I'll give this a shot - thanks a lot.

And one last thing: minikube > v1.24.0 has problems using the redirected docker.sock file? Sorry for my limited understanding of the whole topic :D ... just new to all this DevOps stuff :-)

afbjorklund commented 2 years ago

And the lima-vm config ./k8s.yaml will solve the DinD problem by installing the k8s cluster directly inside the lima-vm correct?

Correct.

It is more similar to the "ssh" (generic) driver in minikube, with the VM already provided by lima.

And one last thing: minikube > v1.24.0 has problems using the redirected docker.sock file?

minikube is a little surprised by it. In the old days, you either had a local unix socket or a remote tcp socket.

This remote unix socket confuses some old assumptions, so it needs some workarounds to get the IP...