docker / for-win

Bug reports for Docker Desktop for Windows
https://www.docker.com/products/docker#/windows

Cannot enable Kubernetes; it says "Kubernetes failed to start" with a red symbol in the UI #13814

Open muhano opened 12 months ago

muhano commented 12 months ago

Description

Kubernetes fails to start after being enabled in Settings.

These are the workaround steps I have already tried (sketched in PowerShell below):

  1. make sure 127.0.0.1 kubernetes.docker.internal is already in the etc/hosts file
  2. delete the .kube folder in the user's home folder, then restart Kubernetes
  3. delete the pki folder in AppData\Local\Docker and restart Docker

Even after these steps, Kubernetes still fails to start.
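
A minimal PowerShell sketch of those three steps, assuming a default Docker Desktop install (quit Docker Desktop before deleting the folders; paths may differ on your machine):

# 1. Check that the hosts entry is present (edit the file from an elevated prompt if not)
Select-String -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Pattern "kubernetes.docker.internal"

# 2. Remove the kubeconfig folder; Docker Desktop regenerates it on the next start
Remove-Item -Recurse -Force "$env:USERPROFILE\.kube"

# 3. Remove the cached Kubernetes certificates
Remove-Item -Recurse -Force "$env:LOCALAPPDATA\Docker\pki"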

Reproduce

  1. start Docker Desktop
  2. open Settings and enable Kubernetes from the Kubernetes tab
  3. wait for Kubernetes to start (the Kubernetes symbol is yellow)
  4. the Kubernetes symbol turns red and shows "Kubernetes failed to start"

Expected behavior

Kubernetes should start normally after being enabled.

docker version

Client:
 Cloud integration: v1.0.35+desktop.5
 Version:           24.0.6
 API version:       1.43
 Go version:        go1.20.7
 Git commit:        ed223bc
 Built:             Mon Sep  4 12:32:48 2023
 OS/Arch:           windows/amd64
 Context:           default

Server: Docker Desktop 4.25.2 (129061)
 Engine:
  Version:          24.0.6
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.7
  Git commit:       1a79695
  Built:            Mon Sep  4 12:32:16 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.22
  GitCommit:        8165feabfdfe38c65b599c4993d227328c231fca
 runc:
  Version:          1.1.8
  GitCommit:        v1.1.8-0-g82f18fe
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

docker info

Client:
 Version:    24.0.6
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.11.2-desktop.5
    Path:     C:\Program Files\Docker\cli-plugins\docker-buildx.exe
  compose: Docker Compose (Docker Inc.)
    Version:  v2.23.0-desktop.1
    Path:     C:\Program Files\Docker\cli-plugins\docker-compose.exe
  dev: Docker Dev Environments (Docker Inc.)
    Version:  v0.1.0
    Path:     C:\Program Files\Docker\cli-plugins\docker-dev.exe
  extension: Manages Docker extensions (Docker Inc.)
    Version:  v0.2.20
    Path:     C:\Program Files\Docker\cli-plugins\docker-extension.exe
  init: Creates Docker-related starter files for your project (Docker Inc.)
    Version:  v0.1.0-beta.9
    Path:     C:\Program Files\Docker\cli-plugins\docker-init.exe
  sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.)
    Version:  0.6.0
    Path:     C:\Program Files\Docker\cli-plugins\docker-sbom.exe
  scan: Docker Scan (Docker Inc.)
    Version:  v0.26.0
    Path:     C:\Program Files\Docker\cli-plugins\docker-scan.exe
  scout: Docker Scout (Docker Inc.)
    Version:  v1.0.9
    Path:     C:\Program Files\Docker\cli-plugins\docker-scout.exe

Server:
 Containers: 28
  Running: 0
  Paused: 0
  Stopped: 28
 Images: 28
 Server Version: 24.0.6
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 8165feabfdfe38c65b599c4993d227328c231fca
 runc version: v1.1.8-0-g82f18fe
 init version: de40ad0
 Security Options:
  seccomp
   Profile: unconfined
 Kernel Version: 5.15.133.1-microsoft-standard-WSL2
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 12
 Total Memory: 7.632GiB
 Name: D0025-169
 ID: a94d6977-4569-4527-873c-f8663b5ebd7d
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 No Proxy: hubproxy.docker.internal
 Experimental: false
 Insecure Registries:
  hubproxy.docker.internal:5555
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
WARNING: daemon is not using the default seccomp profile

Diagnostics ID

2BF7B3DA-D02A-4320-880F-5832AA381A9E/20231129074838

Additional Info

Running kubectl get nodes gives: couldn't get current server API group list: Get "https://kubernetes.docker.internal:6443/api?timeout=32s": EOF

Running kubectl config get-contexts gives a normal result.
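
The EOF (rather than "connection refused") hints that the TCP connection opens but the API server dies or resets during the exchange. Not from the thread, but a quick PowerShell probe to tell those cases apart:

# succeeds at the TCP level even when the API server is crash-looping;
# TcpTestSucceeded = False means nothing is listening on 6443 at all
Test-NetConnection kubernetes.docker.internal -Port 6443

If the TCP test succeeds but kubectl still gets EOF, the pki folder and the kubelet log are the next places to look.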

yoboygo commented 12 months ago

I have the same problem. The etcd service does not start.

Soromeister commented 12 months ago

Please paste the contents of the file located at %HOMEPATH%\.wslconfig. This file is inside your Windows filesystem, not on WSL.
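
For readers following along: .wslconfig is an optional INI-style file that controls the WSL2 VM. A typical one looks roughly like this; the keys are standard WSL2 settings, but the values here are examples only:

[wsl2]
memory=8GB
processors=4
swap=2GB

Limits set here (especially memory) can starve the Docker Desktop VM and keep Kubernetes from starting, which is why the file's contents matter.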

muhano commented 12 months ago

> Please paste the contents of the file located at %HOMEPATH%\.wslconfig. This file is inside your Windows filesystem, not on WSL.

Hello, this is the list of my WSL distributions (screenshot omitted):

  docker-desktop-data (Default)
  docker-desktop

Soromeister commented 12 months ago

Hi @muhano , there is an actual file located at C:\Users\YourUserName\.wslconfig. I was referring to it, not the list of WSL distributions, but since you don't have any WSL distribution installed, I would say try the steps below and see how that works:

  • Install a WSL distro, like Ubuntu from the Microsoft Store;
  • Start the WSL distro and set a username and password;
  • Reboot the PC;
  • Purge Docker data;
  • Reset Docker to factory defaults;
  • Restart Docker itself (you might need to end the Docker tasks via Task Manager), then go to Settings > Resources > WSL integration and check your WSL distribution in the list;
  • Click the Apply & restart button;
  • Close all WSL windows, open PowerShell, and type wsl --shutdown;
  • Docker will throw an error message. Ignore it for now;
  • Start your WSL distro and, while the terminal prompt is open, restart Docker one more time;
  • Wait for the Docker engine to start, then go to Kubernetes and check "Enable Kubernetes";
  • Click the Apply & restart button;
  • Wait for K8s to be installed (you might get a prompt, just click yes).

This worked for me and hoping it'll work for you too.
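
A quick PowerShell companion to the wsl --shutdown step above (standard WSL CLI commands):

# confirm your distro plus the docker-desktop entries exist, and note their state
wsl --list --verbose

# stop every WSL VM so Docker Desktop gets a clean restart
wsl --shutdown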

muhano commented 11 months ago

Thanks for the detailed steps. Is there an alternative to purging Docker data and resetting Docker to factory defaults? There are lots of containers that I use.

muhano commented 11 months ago

Hello, for now I have managed to run Kubernetes using minikube in a Hyper-V VM. I don't want to risk losing my container data by resetting Docker for Windows. Thanks.

tan264 commented 8 months ago

> Hi @muhano , there is an actual file located at C:\Users\YourUserName\.wslconfig. I was referring to it, not the list of WSL distributions, but since you don't have any WSL distribution installed, I would say try the steps below and see how that works:
>
>   • Install a WSL distro, like Ubuntu from the Microsoft Store;
>   • Start the WSL distro and set a username and password;
>   • Reboot the PC;
>   • Purge Docker data;
>   • Reset Docker to factory defaults;
>   • Restart Docker itself (you might need to end the Docker tasks via Task Manager), then go to Settings > Resources > WSL integration and check your WSL distribution in the list;
>   • Click the Apply & restart button;
>   • Close all WSL windows, open PowerShell, and type wsl --shutdown;
>   • Docker will throw an error message. Ignore it for now;
>   • Start your WSL distro and, while the terminal prompt is open, restart Docker one more time;
>   • Wait for the Docker engine to start, then go to Kubernetes and check "Enable Kubernetes";
>   • Click the Apply & restart button;
>   • Wait for K8s to be installed (you might get a prompt, just click yes).
>
> This worked for me and hoping it'll work for you too.

This worked for me, thanks.

f2calv commented 6 months ago

> Hi @muhano , there is an actual file located at C:\Users\YourUserName\.wslconfig. I was referring to it, not the list of WSL distributions, but since you don't have any WSL distribution installed, I would say try the steps below and see how that works:
>
>   • Install a WSL distro, like Ubuntu from the Microsoft Store;
>   • Start the WSL distro and set a username and password;
>   • Reboot the PC;
>   • Purge Docker data;
>   • Reset Docker to factory defaults;
>   • Restart Docker itself (you might need to end the Docker tasks via Task Manager), then go to Settings > Resources > WSL integration and check your WSL distribution in the list;
>   • Click the Apply & restart button;
>   • Close all WSL windows, open PowerShell, and type wsl --shutdown;
>   • Docker will throw an error message. Ignore it for now;
>   • Start your WSL distro and, while the terminal prompt is open, restart Docker one more time;
>   • Wait for the Docker engine to start, then go to Kubernetes and check "Enable Kubernetes";
>   • Click the Apply & restart button;
>   • Wait for K8s to be installed (you might get a prompt, just click yes).
>
> This worked for me and hoping it'll work for you too.

Tried all these steps; it still fails. After the penultimate step, "Click the Apply & restart button", I see the notification "pulling images", then "preparing configuration", and then the k8s icon goes red and shows "Kubernetes failed to start".

>wsl -l
Windows Subsystem for Linux Distributions:
Ubuntu-20.04 (Default)
docker-desktop
docker-desktop-data
>docker version
Client:
 Cloud integration: v1.0.35+desktop.13
 Version:           26.1.1
 API version:       1.45
 Go version:        go1.21.9
 Git commit:        4cf5afa
 Built:             Tue Apr 30 11:48:43 2024
 OS/Arch:           windows/amd64
 Context:           default

Server: Docker Desktop 4.30.0 (149282)
 Engine:
  Version:          26.1.1
  API version:      1.45 (minimum version 1.24)
  Go version:       go1.21.9
  Git commit:       ac2de55
  Built:            Tue Apr 30 11:48:28 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.31
  GitCommit:        e377cd56a71523140ca6ae87e30244719194a521
 runc:
  Version:          1.1.12
  GitCommit:        v1.1.12-0-g51d5e94
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
>docker images
REPOSITORY                                TAG                                                                           IMAGE ID       CREATED         SIZE
docker/desktop-kubernetes                 kubernetes-v1.29.2-cni-v1.4.0-critools-v1.29.0-cri-dockerd-v0.3.11-1-debian   15340d8e9882   2 months ago    439MB
registry.k8s.io/kube-apiserver            v1.29.2                                                                       8a9000f98a52   3 months ago    127MB
registry.k8s.io/kube-scheduler            v1.29.2                                                                       6fc5e6b7218c   3 months ago    59.5MB
registry.k8s.io/kube-controller-manager   v1.29.2                                                                       138fb5a3a2e3   3 months ago    122MB
registry.k8s.io/kube-proxy                v1.29.2                                                                       9344fce2372f   3 months ago    82.3MB
registry.k8s.io/etcd                      3.5.10-0                                                                      a0eed15eed44   6 months ago    148MB
registry.k8s.io/coredns/coredns           v1.11.1                                                                       cbb01a7bd410   9 months ago    59.8MB
registry.k8s.io/pause                     3.9                                                                           e6f181688397   19 months ago   744kB

Any idea which of the (many) log files in %LocalAppData%\Docker\log\ I should review to get a further idea of this failure?

Note: Kubernetes with Docker Desktop worked in a previous version, sometime last year... so something has changed.
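
Not an answer from the thread, but a pointer: the Kubernetes components log under %LocalAppData%\Docker\log, and the vm\kubelet.log mentioned at the end of this thread is usually the most telling. A PowerShell sketch for finding and tailing the relevant files:

# list the ten most recently written Docker Desktop logs
Get-ChildItem "$env:LOCALAPPDATA\Docker\log" -Recurse -Filter "*.log" |
    Sort-Object LastWriteTime -Descending |
    Select-Object FullName, LastWriteTime -First 10

# tail the kubelet log while toggling "Enable Kubernetes"
Get-Content "$env:LOCALAPPDATA\Docker\log\vm\kubelet.log" -Tail 50 -Wait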

zhichengzhou98 commented 3 months ago

I also encountered this problem when using Docker Desktop. I solved it like this:

  1. When I open Docker Desktop and start k8s directly, it fails. The log error is as follows:

E0804 13:38:34.181479 439 run.go:74] "command failed" err="failed to run Kubelet: invalid configuration: cgroup [\"kubepods\"] has some missing paths: /sys/fs/cgroup/cpu/kubepods"

  2. In Docker Desktop Settings -> Docker Engine, add the following configuration (a full JSON sketch follows this list): "exec-opts": ["native.cgroupdriver=systemd"]

  3. Restart Docker Desktop. At this point, starting k8s still fails. The log errors are as follows (one entry per line):

E0804 13:46:49.720393 444 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://kubernetes.docker.internal:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/docker-desktop?timeout=30s\": dial tcp 192.168.65.3:6443: connect: connection refused" interval="7s"
I0804 13:46:49.841938 444 kubelet_node_status.go:73] "Attempting to register node" node="docker-desktop"
E0804 13:46:49.843938 444 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://kubernetes.docker.internal:6443/api/v1/nodes\": dial tcp 192.168.65.3:6443: connect: connection refused" node="docker-desktop"
E0804 13:46:50.099101 444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-controller-manager-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\""
E0804 13:46:50.099115 444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-apiserver-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\""
E0804 13:46:50.099138 444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-apiserver-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"" pod="kube-system/kube-apiserver-docker-desktop"
E0804 13:46:50.099138 444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-controller-manager-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"" pod="kube-system/kube-controller-manager-docker-desktop"
E0804 13:46:50.099149 444 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-apiserver-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"" pod="kube-system/kube-apiserver-docker-desktop"
E0804 13:46:50.099149 444 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-controller-manager-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"" pod="kube-system/kube-controller-manager-docker-desktop"
E0804 13:46:50.099188 444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-docker-desktop_kube-system(91838c84176e55a239acd0e97bb0c8cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-docker-desktop_kube-system(91838c84176e55a239acd0e97bb0c8cf)\\\": rpc error: code = Unknown desc = failed to create a sandbox for pod \\\"kube-apiserver-docker-desktop\\\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \\\"xxx.slice\\\"\"" pod="kube-system/kube-apiserver-docker-desktop" podUID="91838c84176e55a239acd0e97bb0c8cf"
E0804 13:46:50.099206 444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-docker-desktop_kube-system(815abf9efdec70808b2f2e38e47476ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-docker-desktop_kube-system(815abf9efdec70808b2f2e38e47476ca)\\\": rpc error: code = Unknown desc = failed to create a sandbox for pod \\\"kube-controller-manager-docker-desktop\\\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \\\"xxx.slice\\\"\"" pod="kube-system/kube-controller-manager-docker-desktop" podUID="815abf9efdec70808b2f2e38e47476ca"
E0804 13:46:51.099488 444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"etcd-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\""
E0804 13:46:51.099519 444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"etcd-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"" pod="kube-system/etcd-docker-desktop"
E0804 13:46:51.099530 444 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"etcd-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"" pod="kube-system/etcd-docker-desktop"
E0804 13:46:51.099572 444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-docker-desktop_kube-system(a7259c8a6f480a66649ce97631b20e6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-docker-desktop_kube-system(a7259c8a6f480a66649ce97631b20e6f)\\\": rpc error: code = Unknown desc = failed to create a sandbox for pod \\\"etcd-docker-desktop\\\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \\\"xxx.slice\\\"\"" pod="kube-system/etcd-docker-desktop" podUID="a7259c8a6f480a66649ce97631b20e6f"
E0804 13:46:51.099600 444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-scheduler-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\""
E0804 13:46:51.099628 444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-scheduler-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"" pod="kube-system/kube-scheduler-docker-desktop"
E0804 13:46:51.099651 444 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-scheduler-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"" pod="kube-system/kube-scheduler-docker-desktop"
E0804 13:46:51.099783 444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-docker-desktop_kube-system(a2aef464e32c9d92c9c87ecd4c049741)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-docker-desktop_kube-system(a2aef464e32c9d92c9c87ecd4c049741)\\\": rpc error: code = Unknown desc = failed to create a sandbox for pod \\\"kube-scheduler-docker-desktop\\\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \\\"xxx.slice\\\"\"" pod="kube-system/kube-scheduler-docker-desktop" podUID="a2aef464e32c9d92c9c87ecd4c049741"
E0804 13:46:52.442423 444 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://kubernetes.docker.internal:6443/api/v1/namespaces/default/events\": dial tcp 192.168.65.3:6443: connect: connection refused" event="&Event{ObjectMeta:{docker-desktop.17e88a838d292b98 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:docker-desktop,UID:docker-desktop,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:docker-desktop,},FirstTimestamp:2024-08-04 13:45:55.082849176 +0000 UTC m=+0.063534433,LastTimestamp:2024-08-04 13:45:55.082849176 +0000 UTC m=+0.063534433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:docker-desktop,}"
E0804 13:46:55.108714 444 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"docker-desktop\" not found"

  4. Delete the configuration added above and restart Docker Desktop. This time, k8s starts successfully.
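
For clarity on step 2: the Settings -> Docker Engine pane holds the daemon's JSON configuration. Merged into an otherwise default Docker Desktop config, the added fragment would look roughly like this (a sketch; keep any other keys already present in your pane):

{
  "builder": {
    "gc": {
      "defaultKeepStorage": "20GB",
      "enabled": true
    }
  },
  "experimental": false,
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Note that the fix ends by removing this line again in step 4, so toggling the cgroup driver appears to reset some cgroup state rather than being a setting to keep.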

Log path: %USERPROFILE%\AppData\Local\Docker\log\vm\kubelet.log