muhano opened 12 months ago
I have the same problem. The etcd service does not start.
Please paste the contents of the file located at %HOMEPATH%\.wslconfig. This file is inside your Windows filesystem, not on WSL.
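For anyone who doesn't have that file: it is optional and only exists if you created it yourself. A minimal example of what it typically contains (the values below are illustrative, not recommendations):

```ini
; %HOMEPATH%\.wslconfig - global WSL 2 settings
[wsl2]
memory=8GB              ; cap RAM available to the WSL 2 VM
processors=4            ; cap CPU cores
swap=2GB                ; swap file size
localhostForwarding=true
```

If the file does not exist at all, WSL simply uses its defaults, and "no such file" is also a valid answer here.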
Hello, this is the list of my WSL distros; it contains: docker-desktop-data (Default), docker-desktop
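Since the advice that follows hinges on whether a regular distro is installed alongside Docker Desktop's two internal ones, here is a small sketch for checking that from a shell. It assumes `wsl.exe -l -q` output (one distro name per line, possibly with CR line endings); the helper name is hypothetical:

```shell
# Sketch: succeed if the distro list contains anything besides
# Docker Desktop's internal docker-desktop / docker-desktop-data.
has_user_distro() {
  # $1: output of `wsl.exe -l -q`
  printf '%s\n' "$1" | tr -d '\r' \
    | grep -v -E '^(docker-desktop|docker-desktop-data)$' \
    | grep -q .
}

# Example with the list from this comment (only the internal distros):
if has_user_distro "docker-desktop
docker-desktop-data"; then
  echo "user distro present"
else
  echo "only Docker Desktop distros"   # this branch runs for the list above
fi
```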
Hi @muhano , there is an actual file located at C:\Users\YourUserName\.wslconfig. I was referring to it, not the list of WSL distributions, but since you don't have any WSL distribution installed, I would say try the below and see how that works:
- Install a WSL distro, like Ubuntu from the Microsoft Store;
- Start the WSL distro and set a username and password;
- Reboot the PC;
- Purge Docker data;
- Reset Docker to factory defaults;
- Restart Docker itself (you might need to end the Docker tasks via Task Manager), then go to Settings > Resources > WSL integration and check your WSL distribution in the list;
- Click the "Apply & restart" button;
- Close all WSL windows, open PowerShell, and type wsl --shutdown;
- Docker will throw an error message. Ignore it for now;
- Start your WSL distro and, while the terminal prompt is open, restart Docker one more time;
- Wait for the Docker engine to start, then go to Kubernetes and check "Enable Kubernetes";
- Click the "Apply & restart" button;
- Wait for K8s to be installed (you might get a prompt; just click Yes).
This worked for me and I hope it'll work for you too.
Thanks for the detailed steps. Is there an alternative to purging Docker data and resetting Docker to factory defaults? There are lots of containers that I use.
Hello, for now I have managed to run Kubernetes using minikube in a Hyper-V VM. I don't want to risk losing my container data by resetting Docker on Windows. Thanks.
This worked for me, thanks.
Tried all these steps, still fails.
After the penultimate step, "Click the Apply & restart button", I see the notification "pulling images", then "preparing configuration", then the k8s icon goes red and shows "Kubernetes failed to start".
>wsl -l
Windows Subsystem for Linux Distributions:
Ubuntu-20.04 (Default)
docker-desktop
docker-desktop-data
>docker version
Client:
Cloud integration: v1.0.35+desktop.13
Version: 26.1.1
API version: 1.45
Go version: go1.21.9
Git commit: 4cf5afa
Built: Tue Apr 30 11:48:43 2024
OS/Arch: windows/amd64
Context: default
Server: Docker Desktop 4.30.0 (149282)
Engine:
Version: 26.1.1
API version: 1.45 (minimum version 1.24)
Go version: go1.21.9
Git commit: ac2de55
Built: Tue Apr 30 11:48:28 2024
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.31
GitCommit: e377cd56a71523140ca6ae87e30244719194a521
runc:
Version: 1.1.12
GitCommit: v1.1.12-0-g51d5e94
docker-init:
Version: 0.19.0
GitCommit: de40ad0
>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker/desktop-kubernetes kubernetes-v1.29.2-cni-v1.4.0-critools-v1.29.0-cri-dockerd-v0.3.11-1-debian 15340d8e9882 2 months ago 439MB
registry.k8s.io/kube-apiserver v1.29.2 8a9000f98a52 3 months ago 127MB
registry.k8s.io/kube-scheduler v1.29.2 6fc5e6b7218c 3 months ago 59.5MB
registry.k8s.io/kube-controller-manager v1.29.2 138fb5a3a2e3 3 months ago 122MB
registry.k8s.io/kube-proxy v1.29.2 9344fce2372f 3 months ago 82.3MB
registry.k8s.io/etcd 3.5.10-0 a0eed15eed44 6 months ago 148MB
registry.k8s.io/coredns/coredns v1.11.1 cbb01a7bd410 9 months ago 59.8MB
registry.k8s.io/pause 3.9 e6f181688397 19 months ago 744kB
Any idea which of the (many) log files in %LocalAppData%\Docker\log\ I should review to get further insight into this failure?
Note: Kubernetes with Docker Desktop worked in a previous version, sometime last year... so something has changed.
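As a general triage step: klog-formatted component logs (kubelet, apiserver, etc.) prefix error records with `E`, so a log file can be filtered down to just errors before reading it in full. A minimal sketch, with the log path left as an assumption:

```shell
# Sketch: extract error-level (E...) records from a klog-style log file.
# klog headers look like: E0804 13:46:50.099101 444 file.go:123] "message"
klog_errors() {
  grep -E '^E[0-9]{4} ' "$1"
}

# Usage (path is an assumption; adjust to your Docker Desktop install):
# klog_errors "$LOCALAPPDATA/Docker/log/vm/kubelet.log"
```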
I also encountered this problem when using Docker Desktop. I solved it like this:
E0804 13:38:34.181479 439 run.go:74] "command failed" err="failed to run Kubelet: invalid configuration: cgroup [\"kubepods\"] has some missing paths: /sys/fs/cgroup/cpu/kubepods"
E0804 13:46:49.720393 444 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://kubernetes.docker.internal:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/docker-desktop?timeout=30s\": dial tcp 192.168.65.3:6443: connect: connection refused" interval="7s"
I0804 13:46:49.841938 444 kubelet_node_status.go:73] "Attempting to register node" node="docker-desktop"
E0804 13:46:49.843938 444 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://kubernetes.docker.internal:6443/api/v1/nodes\": dial tcp 192.168.65.3:6443: connect: connection refused" node="docker-desktop"
E0804 13:46:50.099101 444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-controller-manager-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\""
E0804 13:46:50.099115 444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-apiserver-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\""
E0804 13:46:50.099138 444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-apiserver-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"" pod="kube-system/kube-apiserver-docker-desktop"
E0804 13:46:50.099138 444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-controller-manager-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"" pod="kube-system/kube-controller-manager-docker-desktop"
E0804 13:46:50.099149 444 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-apiserver-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"" pod="kube-system/kube-apiserver-docker-desktop"
E0804 13:46:50.099149 444 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-controller-manager-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"" pod="kube-system/kube-controller-manager-docker-desktop"
E0804 13:46:50.099188 444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-docker-desktop_kube-system(91838c84176e55a239acd0e97bb0c8cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-docker-desktop_kube-system(91838c84176e55a239acd0e97bb0c8cf)\\\": rpc error: code = Unknown desc = failed to create a sandbox for pod \\\"kube-apiserver-docker-desktop\\\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \\\"xxx.slice\\\"\"" pod="kube-system/kube-apiserver-docker-desktop" podUID="91838c84176e55a239acd0e97bb0c8cf"
E0804 13:46:50.099206 444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-docker-desktop_kube-system(815abf9efdec70808b2f2e38e47476ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-docker-desktop_kube-system(815abf9efdec70808b2f2e38e47476ca)\\\": rpc error: code = Unknown desc = failed to create a sandbox for pod \\\"kube-controller-manager-docker-desktop\\\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \\\"xxx.slice\\\"\"" pod="kube-system/kube-controller-manager-docker-desktop" podUID="815abf9efdec70808b2f2e38e47476ca"
E0804 13:46:51.099488 444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"etcd-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\""
E0804 13:46:51.099519 444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"etcd-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"" pod="kube-system/etcd-docker-desktop"
E0804 13:46:51.099530 444 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"etcd-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"" pod="kube-system/etcd-docker-desktop"
E0804 13:46:51.099572 444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-docker-desktop_kube-system(a7259c8a6f480a66649ce97631b20e6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-docker-desktop_kube-system(a7259c8a6f480a66649ce97631b20e6f)\\\": rpc error: code = Unknown desc = failed to create a sandbox for pod \\\"etcd-docker-desktop\\\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \\\"xxx.slice\\\"\"" pod="kube-system/etcd-docker-desktop" podUID="a7259c8a6f480a66649ce97631b20e6f"
E0804 13:46:51.099600 444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-scheduler-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\""
E0804 13:46:51.099628 444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-scheduler-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"" pod="kube-system/kube-scheduler-docker-desktop"
E0804 13:46:51.099651 444 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-scheduler-docker-desktop\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"" pod="kube-system/kube-scheduler-docker-desktop"
E0804 13:46:51.099783 444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-docker-desktop_kube-system(a2aef464e32c9d92c9c87ecd4c049741)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-docker-desktop_kube-system(a2aef464e32c9d92c9c87ecd4c049741)\\\": rpc error: code = Unknown desc = failed to create a sandbox for pod \\\"kube-scheduler-docker-desktop\\\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \\\"xxx.slice\\\"\"" pod="kube-system/kube-scheduler-docker-desktop" podUID="a2aef464e32c9d92c9c87ecd4c049741"
E0804 13:46:52.442423 444 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://kubernetes.docker.internal:6443/api/v1/namespaces/default/events\": dial tcp 192.168.65.3:6443: connect: connection refused" event="&Event{ObjectMeta:{docker-desktop.17e88a838d292b98 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:docker-desktop,UID:docker-desktop,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:docker-desktop,},FirstTimestamp:2024-08-04 13:45:55.082849176 +0000 UTC m=+0.063534433,LastTimestamp:2024-08-04 13:45:55.082849176 +0000 UTC m=+0.063534433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:docker-desktop,}"
E0804 13:46:55.108714 444 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"docker-desktop\" not found"
Log Path: %UserProfile%\AppData\Local\Docker\log\vm\kubelet.log
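Because a dump like the one above repeats the same failure for every control-plane pod, it can help to collapse it to its distinct err= reasons before reading further. A minimal sed/sort sketch (it takes the first quoted err= value per line and does not handle nested escaped quotes, which is enough for a quick triage):

```shell
# Sketch: list the distinct err="..." reasons in a klog-style log file.
uniq_err_reasons() {
  sed -n 's/.*err="\([^"]*\)".*/\1/p' "$1" | sort -u
}

# Usage (path as mentioned above):
# uniq_err_reasons "$USERPROFILE/AppData/Local/Docker/log/vm/kubelet.log"
```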
Description
Kubernetes fails to start after being enabled in the settings.
These are the workaround steps I have already tried.
Even with these steps, Kubernetes still cannot start.
Reproduce
Expected behavior
Kubernetes should start normally after being enabled.
docker version
docker info
Diagnostics ID
2BF7B3DA-D02A-4320-880F-5832AA381A9E/20231129074838
Additional Info
Running kubectl get nodes gives: couldn't get current server API group list: Get "https://kubernetes.docker.internal:6443/api?timeout=32s": EOF
Running kubectl config get-contexts gives a normal result:
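For what it's worth, the two failure modes seen in this thread look different on the wire: "connection refused" means nothing is listening on port 6443 at all, while "EOF" means something accepted the TCP connection and then dropped it, which is consistent with an API server that starts and immediately dies. A hypothetical helper that classifies the error text (heuristic, names my own):

```shell
# Sketch: classify a kubectl connection error message by substring.
classify_kubectl_err() {
  case "$1" in
    *"connect: connection refused"*) echo "no listener on the port (API server not running)" ;;
    *EOF*)                           echo "listener accepted then closed the connection (API server crashing?)" ;;
    *"i/o timeout"*)                 echo "network path blocked or host unreachable" ;;
    *)                               echo "unclassified" ;;
  esac
}

# Example with the error from this issue:
classify_kubectl_err 'Get "https://kubernetes.docker.internal:6443/api?timeout=32s": EOF'
```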