kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Exiting due to K8S_KUBELET_NOT_RUNNING #15579

Closed · rahul-satal closed this issue 1 year ago

rahul-satal commented 1 year ago

Unable to run the command `minikube start`. Below are the logs:

Rahul_Satal@EPINPUNW05AE rahul % minikube start
😄  minikube v1.28.0 on Darwin 13.1
✨  Automatically selected the docker driver. Other choices: virtualbox, ssh
📌  Using Docker Desktop driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.25.3 preload ...
    > gcr.io/k8s-minikube/kicbase:  0 B [________________________] ?% ? p/s 29s
    > gcr.io/k8s-minikube/kicbase:  0 B [________________________] ?% ? p/s 29s
    > preloaded-images-k8s-v18-v1...:  385.44 MiB / 385.44 MiB  100.00% 5.08 Mi
    > index.docker.io/kicbase/sta...:  386.27 MiB / 386.27 MiB  100.00% 4.31 Mi
    > index.docker.io/kicbase/sta...:  0 B [___________________] ?% ? p/s 1m19s
โ—  minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.36, but successfully downloaded docker.io/kicbase/stable:v0.0.36 as a fallback image
๐Ÿ”ฅ  Creating docker container (CPUs=2, Memory=7812MB) ...

🧯  Docker is nearly out of disk space, which may cause deployments to fail! (86% of capacity). You can pass '--force' to skip this check.
💡  Suggestion: 

    Try one or more of the following to free up space on the device:

    1. Run "docker system prune" to remove unused Docker data (optionally with "-a")
    2. Increase the storage allocated to Docker for Desktop by clicking on:
    Docker icon > Preferences > Resources > Disk Image Size
    3. Run "minikube ssh -- docker system prune" if using the Docker container runtime
๐Ÿฟ  Related issue: https://github.com/kubernetes/minikube/issues/9024

โ—  This container is having trouble accessing https://registry.k8s.io
๐Ÿ’ก  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
๐Ÿณ  Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
    โ–ช Generating certificates and keys ...
    โ–ช Booting up control plane ...
๐Ÿ’ข  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0103 04:34:19.997278     998 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
    [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
    [WARNING SystemVerification]: missing optional cgroups: blkio
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...

💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0103 04:38:26.366585    3482 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
    [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
    [WARNING SystemVerification]: missing optional cgroups: blkio
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    😿  If the above advice does not help, please let us know:                               │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                             │
│                                                                                             │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.      │
│                                                                                             │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

โŒ  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0103 04:38:26.366585    3482 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
    [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
    [WARNING SystemVerification]: missing optional cgroups: blkio
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
🍿  Related issue: https://github.com/kubernetes/minikube/issues/4172

Attach the log file

The log file was generated with the following error:

Rahul_Satal@EPINPUNW05AE rahul % minikube logs --file=log.txt
E0103 10:29:06.276302   17838 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"

โ—  unable to fetch logs for: describe nodes

Log file contents

* 
* ==> Audit <==
* |---------|------|----------|-------------|---------|---------------------|----------|
| Command | Args | Profile  |    User     | Version |     Start Time      | End Time |
|---------|------|----------|-------------|---------|---------------------|----------|
| start   |      | minikube | Rahul_Satal | v1.28.0 | 03 Jan 23 09:59 IST |          |
|---------|------|----------|-------------|---------|---------------------|----------|

* 
* ==> Last Start <==
* Log file created at: 2023/01/03 09:59:00
Running on machine: EPINPUNW05AE
Binary: Built with gc go1.19.3 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0103 09:59:00.588580   15255 out.go:296] Setting OutFile to fd 1 ...
I0103 09:59:00.588864   15255 out.go:348] isatty.IsTerminal(1) = true
I0103 09:59:00.588867   15255 out.go:309] Setting ErrFile to fd 2...
I0103 09:59:00.588872   15255 out.go:348] isatty.IsTerminal(2) = true
I0103 09:59:00.589032   15255 root.go:334] Updating PATH: /Users/Rahul_Satal/.minikube/bin
W0103 09:59:00.589194   15255 root.go:311] Error reading config file at /Users/Rahul_Satal/.minikube/config/config.json: open /Users/Rahul_Satal/.minikube/config/config.json: no such file or directory
I0103 09:59:00.590269   15255 out.go:303] Setting JSON to false
I0103 09:59:00.660006   15255 start.go:116] hostinfo: {"hostname":"EPINPUNW05AE","uptime":6520,"bootTime":1672713620,"procs":682,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.1","kernelVersion":"22.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"24f858e4-502d-55b0-9b78-3812730e28c0"}
W0103 09:59:00.660156   15255 start.go:124] gopshost.Virtualization returned error: not implemented yet
I0103 09:59:00.683262   15255 out.go:177] 😄  minikube v1.28.0 on Darwin 13.1
I0103 09:59:00.702389   15255 notify.go:220] Checking for updates...
W0103 09:59:00.702427   15255 preload.go:295] Failed to list preload files: open /Users/Rahul_Satal/.minikube/cache/preloaded-tarball: no such file or directory
I0103 09:59:00.702926   15255 driver.go:365] Setting default libvirt URI to qemu:///system
I0103 09:59:00.703106   15255 global.go:111] Querying for installed drivers using PATH=/Users/Rahul_Satal/.minikube/bin:/usr/local/sbin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/bin
I0103 09:59:00.703439   15255 global.go:119] podman default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/ Version:}
I0103 09:59:00.703667   15255 global.go:119] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/ Version:}
I0103 09:59:00.703716   15255 global.go:119] vmwarefusion default: false priority: 1, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:the 'vmwarefusion' driver is no longer available Reason: Fix:Switch to the newer 'vmware' driver by using '--driver=vmware'. This may require first deleting your existing cluster Doc:https://minikube.sigs.k8s.io/docs/drivers/vmware/ Version:}
I0103 09:59:01.162308   15255 docker.go:137] docker version: linux-20.10.17
I0103 09:59:01.162493   15255 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0103 09:59:02.288496   15255 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.125980001s)
I0103 09:59:02.289027   15255 info.go:266] docker info: {ID:OXNR:ZZZN:BSXE:6HTN:2E4J:6JD7:TXFZ:2BLV:45HO:KSN2:XROK:ZYRG Containers:25 ContainersRunning:20 ContainersPaused:0 ContainersStopped:5 Images:13 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:192 OomKillDisable:false NGoroutines:172 SystemTime:2023-01-03 04:29:01.25783734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:8242126848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0103 09:59:02.289153   15255 global.go:119] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0103 09:59:02.289312   15255 global.go:119] hyperkit default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "hyperkit": executable file not found in $PATH Reason: Fix:Run 'brew install hyperkit' Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/ Version:}
I0103 09:59:02.289355   15255 global.go:119] parallels default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "prlctl": executable file not found in $PATH Reason: Fix:Install Parallels Desktop for Mac Doc:https://minikube.sigs.k8s.io/docs/drivers/parallels/ Version:}
I0103 09:59:02.289428   15255 global.go:119] qemu2 default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "qemu-system-x86_64": executable file not found in $PATH Reason: Fix:Install qemu-system Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/qemu/ Version:}
I0103 09:59:02.289433   15255 global.go:119] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0103 09:59:02.957028   15255 virtualbox.go:136] virtual box version: 6.1.36r152435
I0103 09:59:02.957063   15255 global.go:119] virtualbox default: true priority: 6, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:6.1.36r152435
}
I0103 09:59:02.957103   15255 driver.go:300] not recommending "ssh" due to default: false
I0103 09:59:02.957146   15255 driver.go:335] Picked: docker
I0103 09:59:02.957156   15255 driver.go:336] Alternatives: [virtualbox ssh]
I0103 09:59:02.957163   15255 driver.go:337] Rejects: [podman vmware vmwarefusion hyperkit parallels qemu2]
I0103 09:59:03.001569   15255 out.go:177] ✨  Automatically selected the docker driver. Other choices: virtualbox, ssh
I0103 09:59:03.021863   15255 start.go:282] selected driver: docker
I0103 09:59:03.021886   15255 start.go:808] validating driver "docker" against <nil>
I0103 09:59:03.021968   15255 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0103 09:59:03.022357   15255 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0103 09:59:03.260900   15255 info.go:266] docker info: {ID:OXNR:ZZZN:BSXE:6HTN:2E4J:6JD7:TXFZ:2BLV:45HO:KSN2:XROK:ZYRG Containers:25 ContainersRunning:20 ContainersPaused:0 ContainersStopped:5 Images:13 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:192 OomKillDisable:false NGoroutines:172 SystemTime:2023-01-03 04:29:03.129826703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:8242126848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0103 09:59:03.261061   15255 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
I0103 09:59:03.261315   15255 start_flags.go:384] Using suggested 7812MB memory alloc based on sys=32768MB, container=7860MB
I0103 09:59:03.261425   15255 start_flags.go:883] Wait components to verify : map[apiserver:true system_pods:true]
I0103 09:59:03.280265   15255 out.go:177] 📌  Using Docker Desktop driver with root privileges
I0103 09:59:03.299166   15255 cni.go:95] Creating CNI manager for ""
I0103 09:59:03.299233   15255 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0103 09:59:03.299261   15255 start_flags.go:317] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:7812 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0103 09:59:03.319612   15255 out.go:177] 👍  Starting control plane node minikube in cluster minikube
I0103 09:59:03.343050   15255 cache.go:120] Beginning downloading kic base image for docker with docker
I0103 09:59:03.363391   15255 out.go:177] 🚜  Pulling base image ...
I0103 09:59:03.402937   15255 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0103 09:59:03.402971   15255 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I0103 09:59:03.513632   15255 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
I0103 09:59:03.513996   15255 image.go:60] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory
I0103 09:59:03.514307   15255 image.go:120] Writing gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
I0103 09:59:05.968732   15255 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I0103 09:59:05.968774   15255 cache.go:57] Caching tarball of preloaded images
I0103 09:59:05.969710   15255 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0103 09:59:05.991213   15255 out.go:177] 💾  Downloading Kubernetes v1.25.3 preload ...
I0103 09:59:06.011614   15255 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
I0103 09:59:13.061955   15255 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4?checksum=md5:624cb874287e7e3d793b79e4205a7f98 -> /Users/Rahul_Satal/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I0103 09:59:36.913472   15255 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 from local cache
I0103 09:59:36.913703   15255 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local daemon
I0103 09:59:36.913942   15255 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I0103 09:59:37.042012   15255 image.go:244] Writing gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local daemon
I0103 10:00:07.917923   15255 cache.go:182] failed to download gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456, will try fallback image if available: writing daemon image: error loading image: error during connect: Post "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.41/images/load?quiet=0": unable to calculate manifest: Get "https://storage.googleapis.com/artifacts.k8s-minikube.appspot.com/containers/images/sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40": dial tcp: lookup storage.googleapis.com: i/o timeout
I0103 10:00:07.917942   15255 image.go:76] Checking for docker.io/kicbase/stable:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I0103 10:00:08.454455   15255 cache.go:147] Downloading docker.io/kicbase/stable:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
I0103 10:00:08.455291   15255 image.go:60] Checking for docker.io/kicbase/stable:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory
I0103 10:00:08.455358   15255 image.go:120] Writing docker.io/kicbase/stable:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
I0103 10:00:40.589338   15255 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
I0103 10:00:40.611750   15255 preload.go:256] verifying checksum of /Users/Rahul_Satal/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
I0103 10:00:41.575026   15255 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
I0103 10:00:41.575406   15255 profile.go:148] Saving config to /Users/Rahul_Satal/.minikube/profiles/minikube/config.json ...
I0103 10:00:41.575436   15255 lock.go:35] WriteFile acquiring /Users/Rahul_Satal/.minikube/profiles/minikube/config.json: {Name:mk1b83ad1aeef83fc49c27bfde088fb3a3cb2d74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 10:01:46.121082   15255 cache.go:150] successfully saved docker.io/kicbase/stable:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 as a tarball
I0103 10:01:46.121093   15255 cache.go:161] Loading docker.io/kicbase/stable:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 from local cache
I0103 10:02:14.940049   15255 cache.go:164] successfully loaded docker.io/kicbase/stable:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 from cached tarball
I0103 10:02:14.940058   15255 cache.go:170] Downloading docker.io/kicbase/stable:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local daemon
I0103 10:02:14.941256   15255 image.go:76] Checking for docker.io/kicbase/stable:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I0103 10:02:15.479667   15255 image.go:244] Writing docker.io/kicbase/stable:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local daemon
I0103 10:03:44.930478   15255 cache.go:173] successfully downloaded docker.io/kicbase/stable:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
W0103 10:03:44.930549   15255 out.go:239] ❗  minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.36, but successfully downloaded docker.io/kicbase/stable:v0.0.36 as a fallback image
I0103 10:03:44.930597   15255 cache.go:208] Successfully downloaded all kic artifacts
I0103 10:03:44.930670   15255 start.go:364] acquiring machines lock for minikube: {Name:mkf8d1cd9371112e88ccea4e49bb8e1b96651fc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0103 10:03:44.931370   15255 start.go:368] acquired machines lock for "minikube" in 681.107µs
I0103 10:03:44.931780   15255 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:docker.io/kicbase/stable:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:7812 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0103 10:03:44.931943   15255 start.go:125] createHost starting for "" (driver="docker")
I0103 10:03:44.975537   15255 out.go:204] 🔥  Creating docker container (CPUs=2, Memory=7812MB) ...
I0103 10:03:44.976156   15255 start.go:159] libmachine.API.Create for "minikube" (driver="docker")
I0103 10:03:44.976213   15255 client.go:168] LocalClient.Create starting
I0103 10:03:44.977751   15255 main.go:134] libmachine: Creating CA: /Users/Rahul_Satal/.minikube/certs/ca.pem
I0103 10:03:45.265847   15255 main.go:134] libmachine: Creating client certificate: /Users/Rahul_Satal/.minikube/certs/cert.pem
I0103 10:03:45.359095   15255 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0103 10:03:45.462851   15255 cli_runner.go:211] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0103 10:03:45.463024   15255 network_create.go:272] running [docker network inspect minikube] to gather additional debugging logs...
I0103 10:03:45.463052   15255 cli_runner.go:164] Run: docker network inspect minikube
W0103 10:03:45.574929   15255 cli_runner.go:211] docker network inspect minikube returned with exit code 1
I0103 10:03:45.574962   15255 network_create.go:275] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I0103 10:03:45.574988   15255 network_create.go:277] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: minikube

** /stderr **
I0103 10:03:45.575109   15255 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0103 10:03:45.683474   15255 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005ea380] misses:0}
I0103 10:03:45.683519   15255 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0103 10:03:45.683538   15255 network_create.go:115] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0103 10:03:45.683674   15255 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=minikube minikube
I0103 10:03:45.840636   15255 network_create.go:99] docker network minikube 192.168.49.0/24 created
I0103 10:03:45.840688   15255 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0103 10:03:45.840828   15255 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0103 10:03:45.958661   15255 cli_runner.go:164] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0103 10:03:46.091013   15255 oci.go:103] Successfully created a docker volume minikube
I0103 10:03:46.091253   15255 cli_runner.go:164] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var docker.io/kicbase/stable:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
I0103 10:03:46.849154   15255 oci.go:107] Successfully prepared a docker volume minikube
I0103 10:03:46.849197   15255 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0103 10:03:46.849212   15255 kic.go:179] Starting extracting preloaded images to volume ...
I0103 10:03:46.849367   15255 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/Rahul_Satal/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir docker.io/kicbase/stable:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
I0103 10:03:55.167379   15255 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/Rahul_Satal/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir docker.io/kicbase/stable:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (8.317986827s)
I0103 10:03:55.167437   15255 kic.go:188] duration metric: took 8.318330 seconds to extract preloaded images to volume
I0103 10:03:55.167597   15255 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0103 10:03:56.555724   15255 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.388103259s)
I0103 10:03:56.556016   15255 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=7812mb --memory-swap=7812mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 docker.io/kicbase/stable:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
I0103 10:03:57.194567   15255 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Running}}
I0103 10:03:57.336383   15255 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0103 10:03:57.463155   15255 cli_runner.go:164] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0103 10:03:57.696200   15255 oci.go:144] the created container "minikube" has a running status.
I0103 10:03:57.696244   15255 kic.go:210] Creating ssh key for kic: /Users/Rahul_Satal/.minikube/machines/minikube/id_rsa...
I0103 10:03:58.055033   15255 kic_runner.go:191] docker (temp): /Users/Rahul_Satal/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0103 10:03:58.264708   15255 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0103 10:03:58.377741   15255 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0103 10:03:58.377764   15255 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0103 10:03:58.553843   15255 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0103 10:03:58.656498   15255 machine.go:88] provisioning docker machine ...
I0103 10:03:58.656558   15255 ubuntu.go:169] provisioning hostname "minikube"
I0103 10:03:58.656709   15255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0103 10:03:58.763076   15255 main.go:134] libmachine: Using SSH client type: native
I0103 10:03:58.763392   15255 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003edea0] 0x1003f1020 <nil>  [] 0s} 127.0.0.1 53402 <nil> <nil>}
I0103 10:03:58.763402   15255 main.go:134] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0103 10:03:58.923687   15255 main.go:134] libmachine: SSH cmd err, output: <nil>: minikube

I0103 10:03:58.923803   15255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0103 10:03:59.035952   15255 main.go:134] libmachine: Using SSH client type: native
I0103 10:03:59.037540   15255 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003edea0] 0x1003f1020 <nil>  [] 0s} 127.0.0.1 53402 <nil> <nil>}
I0103 10:03:59.037625   15255 main.go:134] libmachine: About to run SSH command:

        if ! grep -xq '.*\sminikube' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
            else 
                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
            fi
        fi
I0103 10:03:59.168101   15255 main.go:134] libmachine: SSH cmd err, output: <nil>: 
I0103 10:03:59.168123   15255 ubuntu.go:175] set auth options {CertDir:/Users/Rahul_Satal/.minikube CaCertPath:/Users/Rahul_Satal/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/Rahul_Satal/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/Rahul_Satal/.minikube/machines/server.pem ServerKeyPath:/Users/Rahul_Satal/.minikube/machines/server-key.pem ClientKeyPath:/Users/Rahul_Satal/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/Rahul_Satal/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/Rahul_Satal/.minikube}
I0103 10:03:59.168172   15255 ubuntu.go:177] setting up certificates
I0103 10:03:59.168185   15255 provision.go:83] configureAuth start
I0103 10:03:59.168306   15255 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0103 10:03:59.277852   15255 provision.go:138] copyHostCerts
I0103 10:03:59.278062   15255 exec_runner.go:151] cp: /Users/Rahul_Satal/.minikube/certs/key.pem --> /Users/Rahul_Satal/.minikube/key.pem (1675 bytes)
I0103 10:03:59.279150   15255 exec_runner.go:151] cp: /Users/Rahul_Satal/.minikube/certs/ca.pem --> /Users/Rahul_Satal/.minikube/ca.pem (1090 bytes)
I0103 10:03:59.279815   15255 exec_runner.go:151] cp: /Users/Rahul_Satal/.minikube/certs/cert.pem --> /Users/Rahul_Satal/.minikube/cert.pem (1135 bytes)
I0103 10:03:59.280261   15255 provision.go:112] generating server cert: /Users/Rahul_Satal/.minikube/machines/server.pem ca-key=/Users/Rahul_Satal/.minikube/certs/ca.pem private-key=/Users/Rahul_Satal/.minikube/certs/ca-key.pem org=Rahul_Satal.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0103 10:03:59.484737   15255 provision.go:172] copyRemoteCerts
I0103 10:03:59.485161   15255 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0103 10:03:59.485257   15255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0103 10:03:59.591931   15255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53402 SSHKeyPath:/Users/Rahul_Satal/.minikube/machines/minikube/id_rsa Username:docker}
I0103 10:03:59.697066   15255 ssh_runner.go:362] scp /Users/Rahul_Satal/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1090 bytes)
I0103 10:03:59.723388   15255 ssh_runner.go:362] scp /Users/Rahul_Satal/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
I0103 10:03:59.750932   15255 ssh_runner.go:362] scp /Users/Rahul_Satal/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0103 10:03:59.785813   15255 provision.go:86] duration metric: configureAuth took 617.609245ms
I0103 10:03:59.785835   15255 ubuntu.go:193] setting minikube options for container-runtime
I0103 10:03:59.787029   15255 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0103 10:03:59.787142   15255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0103 10:03:59.907525   15255 main.go:134] libmachine: Using SSH client type: native
I0103 10:03:59.907797   15255 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003edea0] 0x1003f1020 <nil>  [] 0s} 127.0.0.1 53402 <nil> <nil>}
I0103 10:03:59.907805   15255 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0103 10:04:00.050236   15255 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay

I0103 10:04:00.050254   15255 ubuntu.go:71] root file system type: overlay
I0103 10:04:00.050519   15255 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0103 10:04:00.050644   15255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0103 10:04:00.164042   15255 main.go:134] libmachine: Using SSH client type: native
I0103 10:04:00.164281   15255 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003edea0] 0x1003f1020 <nil>  [] 0s} 127.0.0.1 53402 <nil> <nil>}
I0103 10:04:00.164344   15255 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0103 10:04:00.315691   15255 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0103 10:04:00.315852   15255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0103 10:04:00.419419   15255 main.go:134] libmachine: Using SSH client type: native
I0103 10:04:00.419703   15255 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003edea0] 0x1003f1020 <nil>  [] 0s} 127.0.0.1 53402 <nil> <nil>}
I0103 10:04:00.419715   15255 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0103 10:04:01.275299   15255 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service   2022-10-18 18:18:12.000000000 +0000
+++ /lib/systemd/system/docker.service.new  2023-01-03 04:34:00.314553877 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP $MAINPID

 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0

 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500

 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0103 10:04:01.275320   15255 machine.go:91] provisioned docker machine in 2.618833799s
I0103 10:04:01.275326   15255 client.go:171] LocalClient.Create took 16.299330539s
I0103 10:04:01.275353   15255 start.go:167] duration metric: libmachine.API.Create for "minikube" took 16.299422732s
I0103 10:04:01.275358   15255 start.go:300] post-start starting for "minikube" (driver="docker")
I0103 10:04:01.275366   15255 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0103 10:04:01.275500   15255 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0103 10:04:01.275545   15255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0103 10:04:01.385913   15255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53402 SSHKeyPath:/Users/Rahul_Satal/.minikube/machines/minikube/id_rsa Username:docker}
I0103 10:04:01.482643   15255 ssh_runner.go:195] Run: cat /etc/os-release
I0103 10:04:01.490174   15255 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0103 10:04:01.490188   15255 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0103 10:04:01.490194   15255 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0103 10:04:01.490198   15255 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0103 10:04:01.490212   15255 filesync.go:126] Scanning /Users/Rahul_Satal/.minikube/addons for local assets ...
I0103 10:04:01.490465   15255 filesync.go:126] Scanning /Users/Rahul_Satal/.minikube/files for local assets ...
I0103 10:04:01.490560   15255 start.go:303] post-start completed in 215.197479ms
I0103 10:04:01.491192   15255 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0103 10:04:01.591931   15255 profile.go:148] Saving config to /Users/Rahul_Satal/.minikube/profiles/minikube/config.json ...
I0103 10:04:01.595271   15255 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0103 10:04:01.595342   15255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0103 10:04:01.752564   15255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53402 SSHKeyPath:/Users/Rahul_Satal/.minikube/machines/minikube/id_rsa Username:docker}
I0103 10:04:01.848072   15255 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0103 10:04:01.878118   15255 out.go:177] 
W0103 10:04:01.898132   15255 out.go:239] 🧯  Docker is nearly out of disk space, which may cause deployments to fail! (86%!o(MISSING)f capacity). You can pass '--force' to skip this check.
W0103 10:04:01.898288   15255 out.go:239] 💡  Suggestion: 

    Try one or more of the following to free up space on the device:

    1. Run "docker system prune" to remove unused Docker data (optionally with "-a")
    2. Increase the storage allocated to Docker for Desktop by clicking on:
    Docker icon > Preferences > Resources > Disk Image Size
    3. Run "minikube ssh -- docker system prune" if using the Docker container runtime
W0103 10:04:01.898358   15255 out.go:239] 🍿  Related issue: https://github.com/kubernetes/minikube/issues/9024
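(Editor's note: the 86% disk-capacity warning above is worth clearing before retrying, since disk pressure on the Docker Desktop VM can aggravate the kubelet failure seen later in this log. A minimal cleanup sketch based on minikube's own suggestions above, assuming the docker driver on Docker Desktop for macOS; adjust or skip steps as needed.)

    # reclaim space in the host-side Docker Desktop VM (removes unused images, containers, networks)
    docker system prune -a
    # reclaim space inside the minikube node itself (docker container runtime)
    minikube ssh -- docker system prune
    # optionally start from a clean profile before retrying
    minikube delete
    minikube start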
I0103 10:04:01.917706   15255 out.go:177] 
I0103 10:04:01.956319   15255 start.go:128] duration metric: createHost completed in 17.024541293s
I0103 10:04:01.956350   15255 start.go:83] releasing machines lock for "minikube", held for 17.025198986s
I0103 10:04:01.956527   15255 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0103 10:04:02.066532   15255 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0103 10:04:02.066674   15255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0103 10:04:02.068547   15255 ssh_runner.go:195] Run: systemctl --version
I0103 10:04:02.068771   15255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0103 10:04:02.180403   15255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53402 SSHKeyPath:/Users/Rahul_Satal/.minikube/machines/minikube/id_rsa Username:docker}
I0103 10:04:02.181550   15255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53402 SSHKeyPath:/Users/Rahul_Satal/.minikube/machines/minikube/id_rsa Username:docker}
I0103 10:04:02.275459   15255 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0103 10:04:17.303400   15255 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (15.236988481s)
W0103 10:04:17.303505   15255 start.go:747] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
stdout:

stderr:
curl: (28) Resolving timed out after 2000 milliseconds
I0103 10:04:17.303815   15255 ssh_runner.go:235] Completed: sudo systemctl cat docker.service: (15.028527471s)
I0103 10:04:17.303866   15255 cruntime.go:273] skipping containerd shutdown because we are bound to it
W0103 10:04:17.303910   15255 out.go:239] ❗  This container is having trouble accessing https://registry.k8s.io
W0103 10:04:17.303962   15255 out.go:239] 💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
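(Editor's note: the curl probe a few lines above timed out on DNS resolution after 2000 ms, so it may help to confirm that the node can resolve and reach registry.k8s.io before the control plane is retried below. A small check sketch; the proxy variables are only needed if the host sits behind a corporate proxy, per the linked docs, and the values shown are placeholders.)

    # resolve and probe the registry from inside the minikube node
    minikube ssh -- dig +short registry.k8s.io
    minikube ssh -- curl -sS -m 10 https://registry.k8s.io/
    # if a proxy is required, export it before starting minikube (placeholder values)
    export HTTPS_PROXY=http://proxy.example.com:8080
    export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.49.0/24
    minikube start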
I0103 10:04:17.304132   15255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0103 10:04:17.328781   15255 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0103 10:04:17.349404   15255 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0103 10:04:17.439274   15255 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0103 10:04:17.528572   15255 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0103 10:04:17.612548   15255 ssh_runner.go:195] Run: sudo systemctl restart docker
I0103 10:04:17.882628   15255 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0103 10:04:17.980378   15255 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0103 10:04:18.067189   15255 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0103 10:04:18.090517   15255 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0103 10:04:18.092293   15255 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0103 10:04:18.098135   15255 start.go:472] Will wait 60s for crictl version
I0103 10:04:18.098251   15255 ssh_runner.go:195] Run: sudo crictl version
I0103 10:04:18.227836   15255 start.go:481] Version:  0.1.0
RuntimeName:  docker
RuntimeVersion:  20.10.20
RuntimeApiVersion:  1.41.0
I0103 10:04:18.227917   15255 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0103 10:04:18.263588   15255 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0103 10:04:18.321817   15255 out.go:204] 🐳  Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
I0103 10:04:18.321978   15255 cli_runner.go:164] Run: docker exec -t minikube dig +short host.docker.internal
I0103 10:04:18.495566   15255 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0103 10:04:18.496002   15255 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0103 10:04:18.502290   15255 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0103 10:04:18.515111   15255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0103 10:04:18.612356   15255 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0103 10:04:18.612436   15255 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0103 10:04:18.638758   15255 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0103 10:04:18.638774   15255 docker.go:543] Images already preloaded, skipping extraction
I0103 10:04:18.638856   15255 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0103 10:04:18.672049   15255 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0103 10:04:18.672066   15255 cache_images.go:84] Images are preloaded, skipping loading
I0103 10:04:18.672181   15255 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0103 10:04:18.753818   15255 cni.go:95] Creating CNI manager for ""
I0103 10:04:18.753829   15255 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0103 10:04:18.753852   15255 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0103 10:04:18.753876   15255 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I0103 10:04:18.754034   15255 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s

I0103 10:04:18.754153   15255 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=minikube --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 --runtime-request-timeout=15m

[Install]
 config:
{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0103 10:04:18.754278   15255 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
I0103 10:04:18.769565   15255 binaries.go:44] Found k8s binaries, skipping transfer
I0103 10:04:18.769710   15255 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0103 10:04:18.782023   15255 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (470 bytes)
I0103 10:04:18.800673   15255 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0103 10:04:18.818728   15255 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2030 bytes)
I0103 10:04:18.837788   15255 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0103 10:04:18.845038   15255 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2    control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0103 10:04:18.859940   15255 certs.go:54] Setting up /Users/Rahul_Satal/.minikube/profiles/minikube for IP: 192.168.49.2
I0103 10:04:18.860032   15255 certs.go:187] generating minikubeCA CA: /Users/Rahul_Satal/.minikube/ca.key
I0103 10:04:18.988802   15255 crypto.go:156] Writing cert to /Users/Rahul_Satal/.minikube/ca.crt ...
I0103 10:04:18.988814   15255 lock.go:35] WriteFile acquiring /Users/Rahul_Satal/.minikube/ca.crt: {Name:mk5fffe64a2ec37ed7b07f71cec52ffc35bec1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 10:04:18.991476   15255 crypto.go:164] Writing key to /Users/Rahul_Satal/.minikube/ca.key ...
I0103 10:04:18.991505   15255 lock.go:35] WriteFile acquiring /Users/Rahul_Satal/.minikube/ca.key: {Name:mk2821b6a5e50cf661ae9487d2cec7bf5ef41248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 10:04:18.991962   15255 certs.go:187] generating proxyClientCA CA: /Users/Rahul_Satal/.minikube/proxy-client-ca.key
I0103 10:04:19.139743   15255 crypto.go:156] Writing cert to /Users/Rahul_Satal/.minikube/proxy-client-ca.crt ...
I0103 10:04:19.139763   15255 lock.go:35] WriteFile acquiring /Users/Rahul_Satal/.minikube/proxy-client-ca.crt: {Name:mk5c16ca602da8414cbf188197955ba19c1c7890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 10:04:19.140514   15255 crypto.go:164] Writing key to /Users/Rahul_Satal/.minikube/proxy-client-ca.key ...
I0103 10:04:19.140524   15255 lock.go:35] WriteFile acquiring /Users/Rahul_Satal/.minikube/proxy-client-ca.key: {Name:mk91ac69113f8a84177fc22022884dee2695228a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 10:04:19.140891   15255 certs.go:302] generating minikube-user signed cert: /Users/Rahul_Satal/.minikube/profiles/minikube/client.key
I0103 10:04:19.140909   15255 crypto.go:68] Generating cert /Users/Rahul_Satal/.minikube/profiles/minikube/client.crt with IP's: []
I0103 10:04:19.276147   15255 crypto.go:156] Writing cert to /Users/Rahul_Satal/.minikube/profiles/minikube/client.crt ...
I0103 10:04:19.276160   15255 lock.go:35] WriteFile acquiring /Users/Rahul_Satal/.minikube/profiles/minikube/client.crt: {Name:mkaa027d4a68f276c6b411e34e0ee16c5ef64b56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 10:04:19.276596   15255 crypto.go:164] Writing key to /Users/Rahul_Satal/.minikube/profiles/minikube/client.key ...
I0103 10:04:19.276603   15255 lock.go:35] WriteFile acquiring /Users/Rahul_Satal/.minikube/profiles/minikube/client.key: {Name:mk7250e28ac50d3a8e983170d6ba1906eaeb8919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 10:04:19.276848   15255 certs.go:302] generating minikube signed cert: /Users/Rahul_Satal/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0103 10:04:19.276873   15255 crypto.go:68] Generating cert /Users/Rahul_Satal/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0103 10:04:19.371420   15255 crypto.go:156] Writing cert to /Users/Rahul_Satal/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0103 10:04:19.371432   15255 lock.go:35] WriteFile acquiring /Users/Rahul_Satal/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk4873fedb9134d205717f5bfcf3e01eb474aac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 10:04:19.371863   15255 crypto.go:164] Writing key to /Users/Rahul_Satal/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0103 10:04:19.371870   15255 lock.go:35] WriteFile acquiring /Users/Rahul_Satal/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk337fecee8d1a2f25aaabf64bdfd0894ec52145 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 10:04:19.372117   15255 certs.go:320] copying /Users/Rahul_Satal/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /Users/Rahul_Satal/.minikube/profiles/minikube/apiserver.crt
I0103 10:04:19.372405   15255 certs.go:324] copying /Users/Rahul_Satal/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /Users/Rahul_Satal/.minikube/profiles/minikube/apiserver.key
I0103 10:04:19.372748   15255 certs.go:302] generating aggregator signed cert: /Users/Rahul_Satal/.minikube/profiles/minikube/proxy-client.key
I0103 10:04:19.372777   15255 crypto.go:68] Generating cert /Users/Rahul_Satal/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0103 10:04:19.552031   15255 crypto.go:156] Writing cert to /Users/Rahul_Satal/.minikube/profiles/minikube/proxy-client.crt ...
I0103 10:04:19.552045   15255 lock.go:35] WriteFile acquiring /Users/Rahul_Satal/.minikube/profiles/minikube/proxy-client.crt: {Name:mk8ae7f5f5aab1ab86047ec58a37a8c1a75dd009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 10:04:19.553194   15255 crypto.go:164] Writing key to /Users/Rahul_Satal/.minikube/profiles/minikube/proxy-client.key ...
I0103 10:04:19.553275   15255 lock.go:35] WriteFile acquiring /Users/Rahul_Satal/.minikube/profiles/minikube/proxy-client.key: {Name:mk14df2f3248c7bb0da783abd9f92ff97ec5eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 10:04:19.554484   15255 certs.go:388] found cert: /Users/Rahul_Satal/.minikube/certs/Users/Rahul_Satal/.minikube/certs/ca-key.pem (1675 bytes)
I0103 10:04:19.554672   15255 certs.go:388] found cert: /Users/Rahul_Satal/.minikube/certs/Users/Rahul_Satal/.minikube/certs/ca.pem (1090 bytes)
I0103 10:04:19.554807   15255 certs.go:388] found cert: /Users/Rahul_Satal/.minikube/certs/Users/Rahul_Satal/.minikube/certs/cert.pem (1135 bytes)
I0103 10:04:19.554929   15255 certs.go:388] found cert: /Users/Rahul_Satal/.minikube/certs/Users/Rahul_Satal/.minikube/certs/key.pem (1675 bytes)
I0103 10:04:19.555781   15255 ssh_runner.go:362] scp /Users/Rahul_Satal/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0103 10:04:19.589026   15255 ssh_runner.go:362] scp /Users/Rahul_Satal/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0103 10:04:19.615752   15255 ssh_runner.go:362] scp /Users/Rahul_Satal/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0103 10:04:19.642774   15255 ssh_runner.go:362] scp /Users/Rahul_Satal/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0103 10:04:19.670884   15255 ssh_runner.go:362] scp /Users/Rahul_Satal/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0103 10:04:19.696224   15255 ssh_runner.go:362] scp /Users/Rahul_Satal/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0103 10:04:19.722173   15255 ssh_runner.go:362] scp /Users/Rahul_Satal/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0103 10:04:19.751420   15255 ssh_runner.go:362] scp /Users/Rahul_Satal/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0103 10:04:19.778196   15255 ssh_runner.go:362] scp /Users/Rahul_Satal/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0103 10:04:19.803468   15255 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0103 10:04:19.823386   15255 ssh_runner.go:195] Run: openssl version
I0103 10:04:19.831816   15255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0103 10:04:19.843312   15255 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0103 10:04:19.849898   15255 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  3 04:34 /usr/share/ca-certificates/minikubeCA.pem
I0103 10:04:19.850014   15255 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0103 10:04:19.858206   15255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0103 10:04:19.869839   15255 kubeadm.go:396] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:docker.io/kicbase/stable:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:7812 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0103 10:04:19.869966   15255 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0103 10:04:19.898978   15255 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0103 10:04:19.910687   15255 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0103 10:04:19.921291   15255 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0103 10:04:19.921400   15255 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0103 10:04:19.932880   15255 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0103 10:04:19.932914   15255 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0103 10:04:19.990177   15255 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
I0103 10:04:19.990297   15255 kubeadm.go:317] [preflight] Running pre-flight checks
I0103 10:04:20.123569   15255 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I0103 10:04:20.123670   15255 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0103 10:04:20.123775   15255 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0103 10:04:20.283687   15255 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0103 10:04:20.304305   15255 out.go:204]     ▪ Generating certificates and keys ...
I0103 10:04:20.304510   15255 kubeadm.go:317] [certs] Using existing ca certificate authority
I0103 10:04:20.304604   15255 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I0103 10:04:20.609060   15255 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
I0103 10:04:20.799322   15255 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
I0103 10:04:20.907554   15255 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
I0103 10:04:21.023724   15255 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
I0103 10:04:21.167214   15255 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
I0103 10:04:21.167403   15255 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
I0103 10:04:21.513072   15255 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
I0103 10:04:21.513238   15255 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
I0103 10:04:21.797121   15255 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
I0103 10:04:21.893287   15255 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
I0103 10:04:22.063263   15255 kubeadm.go:317] [certs] Generating "sa" key and public key
I0103 10:04:22.063430   15255 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0103 10:04:22.246085   15255 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I0103 10:04:22.334683   15255 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0103 10:04:22.406845   15255 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0103 10:04:22.558729   15255 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0103 10:04:22.572581   15255 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0103 10:04:22.573974   15255 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0103 10:04:22.574049   15255 kubeadm.go:317] [kubelet-start] Starting the kubelet
I0103 10:04:22.679288   15255 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0103 10:04:22.700664   15255 out.go:204]     ▪ Booting up control plane ...
I0103 10:04:22.700925   15255 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0103 10:04:22.701075   15255 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0103 10:04:22.701255   15255 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0103 10:04:22.701403   15255 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0103 10:04:22.701584   15255 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0103 10:05:02.665004   15255 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
I0103 10:08:22.509660   15255 kubeadm.go:317] 
I0103 10:08:22.509733   15255 kubeadm.go:317] Unfortunately, an error has occurred:
I0103 10:08:22.509817   15255 kubeadm.go:317]   timed out waiting for the condition
I0103 10:08:22.509823   15255 kubeadm.go:317] 
I0103 10:08:22.509869   15255 kubeadm.go:317] This error is likely caused by:
I0103 10:08:22.509915   15255 kubeadm.go:317]   - The kubelet is not running
I0103 10:08:22.510072   15255 kubeadm.go:317]   - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0103 10:08:22.510087   15255 kubeadm.go:317] 
I0103 10:08:22.510293   15255 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0103 10:08:22.510377   15255 kubeadm.go:317]   - 'systemctl status kubelet'
I0103 10:08:22.510472   15255 kubeadm.go:317]   - 'journalctl -xeu kubelet'
I0103 10:08:22.510486   15255 kubeadm.go:317] 
I0103 10:08:22.510661   15255 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0103 10:08:22.510739   15255 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0103 10:08:22.510859   15255 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
I0103 10:08:22.510992   15255 kubeadm.go:317]   - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
I0103 10:08:22.511089   15255 kubeadm.go:317]   Once you have found the failing container, you can inspect its logs with:
I0103 10:08:22.511174   15255 kubeadm.go:317]   - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
I0103 10:08:22.515046   15255 kubeadm.go:317] W0103 04:34:19.997278     998 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0103 10:08:22.515259   15255 kubeadm.go:317]   [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
I0103 10:08:22.515385   15255 kubeadm.go:317]   [WARNING SystemVerification]: missing optional cgroups: blkio
I0103 10:08:22.515560   15255 kubeadm.go:317]   [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0103 10:08:22.515683   15255 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0103 10:08:22.515734   15255 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
W0103 10:08:22.516034   15255 out.go:239] 💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0103 04:34:19.997278     998 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
    [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
    [WARNING SystemVerification]: missing optional cgroups: blkio
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
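(Editor's note: the kubeadm suggestions above have to be run inside the minikube node, not on the macOS host. One way to gather the same information from the host, mirroring the commands kubeadm prints, to see why the kubelet is not coming up:)

    # kubelet service state and recent logs inside the node
    minikube ssh -- sudo systemctl status kubelet
    minikube ssh -- sudo journalctl -xeu kubelet --no-pager
    # list any control-plane containers the runtime managed to start
    minikube ssh -- sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a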

I0103 10:08:22.516203   15255 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
I0103 10:08:26.276844   15255 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (3.760677168s)
I0103 10:08:26.277581   15255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0103 10:08:26.290634   15255 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0103 10:08:26.290739   15255 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0103 10:08:26.301865   15255 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0103 10:08:26.301889   15255 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0103 10:08:26.346405   15255 kubeadm.go:317] W0103 04:38:26.366585    3482 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0103 10:08:26.386564   15255 kubeadm.go:317]   [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
I0103 10:08:26.394973   15255 kubeadm.go:317]   [WARNING SystemVerification]: missing optional cgroups: blkio
I0103 10:08:26.491933   15255 kubeadm.go:317]   [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0103 10:12:27.798700   15255 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0103 10:12:27.798921   15255 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I0103 10:12:27.806799   15255 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
I0103 10:12:27.806880   15255 kubeadm.go:317] [preflight] Running pre-flight checks
I0103 10:12:27.807010   15255 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I0103 10:12:27.807273   15255 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0103 10:12:27.807578   15255 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0103 10:12:27.807748   15255 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0103 10:12:27.849144   15255 out.go:204]     ▪ Generating certificates and keys ...
I0103 10:12:27.849416   15255 kubeadm.go:317] [certs] Using existing ca certificate authority
I0103 10:12:27.849558   15255 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I0103 10:12:27.849774   15255 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0103 10:12:27.849922   15255 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
I0103 10:12:27.850142   15255 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
I0103 10:12:27.850273   15255 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
I0103 10:12:27.850428   15255 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
I0103 10:12:27.850595   15255 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
I0103 10:12:27.850829   15255 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0103 10:12:27.851023   15255 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0103 10:12:27.851191   15255 kubeadm.go:317] [certs] Using the existing "sa" key
I0103 10:12:27.851335   15255 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0103 10:12:27.851513   15255 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I0103 10:12:27.851689   15255 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0103 10:12:27.851870   15255 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0103 10:12:27.852036   15255 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0103 10:12:27.852284   15255 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0103 10:12:27.852533   15255 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0103 10:12:27.852615   15255 kubeadm.go:317] [kubelet-start] Starting the kubelet
I0103 10:12:27.852781   15255 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0103 10:12:27.872393   15255 out.go:204]     ▪ Booting up control plane ...
I0103 10:12:27.872927   15255 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0103 10:12:27.873247   15255 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0103 10:12:27.873519   15255 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0103 10:12:27.874008   15255 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0103 10:12:27.874751   15255 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0103 10:12:27.874965   15255 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
I0103 10:12:27.874975   15255 kubeadm.go:317] 
I0103 10:12:27.875099   15255 kubeadm.go:317] Unfortunately, an error has occurred:
I0103 10:12:27.875222   15255 kubeadm.go:317]   timed out waiting for the condition
I0103 10:12:27.875242   15255 kubeadm.go:317] 
I0103 10:12:27.875346   15255 kubeadm.go:317] This error is likely caused by:
I0103 10:12:27.875431   15255 kubeadm.go:317]   - The kubelet is not running
I0103 10:12:27.875719   15255 kubeadm.go:317]   - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0103 10:12:27.875739   15255 kubeadm.go:317] 
I0103 10:12:27.876037   15255 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0103 10:12:27.876116   15255 kubeadm.go:317]   - 'systemctl status kubelet'
I0103 10:12:27.876230   15255 kubeadm.go:317]   - 'journalctl -xeu kubelet'
I0103 10:12:27.876250   15255 kubeadm.go:317] 
I0103 10:12:27.876522   15255 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0103 10:12:27.876809   15255 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0103 10:12:27.877171   15255 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
I0103 10:12:27.877433   15255 kubeadm.go:317]   - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
I0103 10:12:27.877707   15255 kubeadm.go:317]   Once you have found the failing container, you can inspect its logs with:
I0103 10:12:27.878006   15255 kubeadm.go:317]   - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
I0103 10:12:27.878088   15255 kubeadm.go:398] StartCluster complete in 8m8.029438149s
I0103 10:12:27.878172   15255 cri.go:52] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0103 10:12:27.879017   15255 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0103 10:12:27.924822   15255 cri.go:87] found id: ""
I0103 10:12:27.924838   15255 logs.go:274] 0 containers: []
W0103 10:12:27.924845   15255 logs.go:276] No container was found matching "kube-apiserver"
I0103 10:12:27.924851   15255 cri.go:52] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0103 10:12:27.924949   15255 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0103 10:12:27.961177   15255 cri.go:87] found id: ""
I0103 10:12:27.961191   15255 logs.go:274] 0 containers: []
W0103 10:12:27.961197   15255 logs.go:276] No container was found matching "etcd"
I0103 10:12:27.961203   15255 cri.go:52] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0103 10:12:27.961478   15255 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0103 10:12:27.992107   15255 cri.go:87] found id: ""
I0103 10:12:27.992118   15255 logs.go:274] 0 containers: []
W0103 10:12:27.992122   15255 logs.go:276] No container was found matching "coredns"
I0103 10:12:27.992126   15255 cri.go:52] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0103 10:12:27.992217   15255 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0103 10:12:28.021729   15255 cri.go:87] found id: ""
I0103 10:12:28.021739   15255 logs.go:274] 0 containers: []
W0103 10:12:28.021743   15255 logs.go:276] No container was found matching "kube-scheduler"
I0103 10:12:28.021747   15255 cri.go:52] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0103 10:12:28.021836   15255 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0103 10:12:28.052531   15255 cri.go:87] found id: ""
I0103 10:12:28.052541   15255 logs.go:274] 0 containers: []
W0103 10:12:28.052545   15255 logs.go:276] No container was found matching "kube-proxy"
I0103 10:12:28.052550   15255 cri.go:52] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
I0103 10:12:28.052644   15255 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0103 10:12:28.084166   15255 cri.go:87] found id: ""
I0103 10:12:28.084177   15255 logs.go:274] 0 containers: []
W0103 10:12:28.084182   15255 logs.go:276] No container was found matching "kubernetes-dashboard"
I0103 10:12:28.084188   15255 cri.go:52] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
I0103 10:12:28.084297   15255 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0103 10:12:28.112020   15255 cri.go:87] found id: ""
I0103 10:12:28.112031   15255 logs.go:274] 0 containers: []
W0103 10:12:28.112035   15255 logs.go:276] No container was found matching "storage-provisioner"
I0103 10:12:28.112039   15255 cri.go:52] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0103 10:12:28.112130   15255 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0103 10:12:28.139424   15255 cri.go:87] found id: ""
I0103 10:12:28.139453   15255 logs.go:274] 0 containers: []
W0103 10:12:28.139462   15255 logs.go:276] No container was found matching "kube-controller-manager"
I0103 10:12:28.139471   15255 logs.go:123] Gathering logs for dmesg ...
I0103 10:12:28.139480   15255 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0103 10:12:28.152088   15255 logs.go:123] Gathering logs for describe nodes ...
I0103 10:12:28.152100   15255 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0103 10:12:28.212604   15255 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: 
** stderr ** 
The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
I0103 10:12:28.212617   15255 logs.go:123] Gathering logs for Docker ...
I0103 10:12:28.212623   15255 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0103 10:12:28.237461   15255 logs.go:123] Gathering logs for container status ...
I0103 10:12:28.237476   15255 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0103 10:12:28.271307   15255 logs.go:123] Gathering logs for kubelet ...
I0103 10:12:28.271320   15255 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0103 10:12:28.325148   15255 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0103 04:38:26.366585    3482 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
    [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
    [WARNING SystemVerification]: missing optional cgroups: blkio
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0103 10:12:28.325176   15255 out.go:239] 
W0103 10:12:28.325439   15255 out.go:239] ๐Ÿ’ฃ  Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0103 04:38:26.366585    3482 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
    [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
    [WARNING SystemVerification]: missing optional cgroups: blkio
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W0103 10:12:28.326079   15255 out.go:239] 
W0103 10:12:28.327075   15255 out.go:239] โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ
โ”‚                                                                                           โ”‚
โ”‚    ๐Ÿ˜ฟ  If the above advice does not help, please let us know:                             โ”‚
โ”‚    ๐Ÿ‘‰  https://github.com/kubernetes/minikube/issues/new/choose                           โ”‚
โ”‚                                                                                           โ”‚
โ”‚    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    โ”‚
โ”‚                                                                                           โ”‚
โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ
I0103 10:12:28.382473   15255 out.go:177] 
W0103 10:12:28.421754   15255 out.go:239] โŒ  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0103 04:38:26.366585    3482 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
    [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
    [WARNING SystemVerification]: missing optional cgroups: blkio
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W0103 10:12:28.422253   15255 out.go:239] ๐Ÿ’ก  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0103 10:12:28.422457   15255 out.go:239] ๐Ÿฟ  Related issue: https://github.com/kubernetes/minikube/issues/4172
I0103 10:12:28.478409   15255 out.go:177] 

* 
* ==> Docker <==
* -- Logs begin at Tue 2023-01-03 04:33:57 UTC, end at Tue 2023-01-03 04:59:05 UTC. --
Jan 03 04:49:14 minikube dockerd[588]: time="2023-01-03T04:49:14.092610176Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:49:14 minikube dockerd[588]: time="2023-01-03T04:49:14.093540756Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:49:14 minikube dockerd[588]: time="2023-01-03T04:49:14.101713882Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:49:34 minikube dockerd[588]: time="2023-01-03T04:49:34.074358448Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:49:34 minikube dockerd[588]: time="2023-01-03T04:49:34.074568763Z" level=error msg="Not continuing with pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:49:34 minikube dockerd[588]: time="2023-01-03T04:49:34.074885281Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:50:34 minikube dockerd[588]: time="2023-01-03T04:50:34.036338056Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:50:34 minikube dockerd[588]: time="2023-01-03T04:50:34.036375563Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:50:34 minikube dockerd[588]: time="2023-01-03T04:50:34.038729310Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:50:54 minikube dockerd[588]: time="2023-01-03T04:50:54.039682803Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:50:54 minikube dockerd[588]: time="2023-01-03T04:50:54.039720087Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:50:54 minikube dockerd[588]: time="2023-01-03T04:50:54.043465093Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:51:14 minikube dockerd[588]: time="2023-01-03T04:51:14.024917428Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:51:14 minikube dockerd[588]: time="2023-01-03T04:51:14.024981316Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:51:14 minikube dockerd[588]: time="2023-01-03T04:51:14.028536229Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:51:34 minikube dockerd[588]: time="2023-01-03T04:51:34.006210095Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:51:34 minikube dockerd[588]: time="2023-01-03T04:51:34.006270336Z" level=error msg="Not continuing with pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:51:34 minikube dockerd[588]: time="2023-01-03T04:51:34.006490178Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:52:24 minikube dockerd[588]: time="2023-01-03T04:52:24.000837742Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:52:24 minikube dockerd[588]: time="2023-01-03T04:52:24.000911702Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:52:24 minikube dockerd[588]: time="2023-01-03T04:52:24.003749509Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:52:43 minikube dockerd[588]: time="2023-01-03T04:52:43.986203656Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:52:43 minikube dockerd[588]: time="2023-01-03T04:52:43.987425918Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:52:43 minikube dockerd[588]: time="2023-01-03T04:52:43.994763659Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:53:13 minikube dockerd[588]: time="2023-01-03T04:53:13.970349787Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:53:13 minikube dockerd[588]: time="2023-01-03T04:53:13.970551163Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:53:13 minikube dockerd[588]: time="2023-01-03T04:53:13.977214648Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:53:43 minikube dockerd[588]: time="2023-01-03T04:53:43.957463086Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:53:43 minikube dockerd[588]: time="2023-01-03T04:53:43.957882773Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:53:43 minikube dockerd[588]: time="2023-01-03T04:53:43.968023460Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:54:13 minikube dockerd[588]: time="2023-01-03T04:54:13.942208893Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:54:13 minikube dockerd[588]: time="2023-01-03T04:54:13.942781803Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:54:13 minikube dockerd[588]: time="2023-01-03T04:54:13.947447658Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:54:43 minikube dockerd[588]: time="2023-01-03T04:54:43.927407784Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:54:43 minikube dockerd[588]: time="2023-01-03T04:54:43.927650366Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:54:43 minikube dockerd[588]: time="2023-01-03T04:54:43.935259062Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:55:13 minikube dockerd[588]: time="2023-01-03T04:55:13.912284691Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:55:13 minikube dockerd[588]: time="2023-01-03T04:55:13.912481656Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:55:13 minikube dockerd[588]: time="2023-01-03T04:55:13.919397596Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:55:43 minikube dockerd[588]: time="2023-01-03T04:55:43.896676715Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:55:43 minikube dockerd[588]: time="2023-01-03T04:55:43.896976223Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:55:43 minikube dockerd[588]: time="2023-01-03T04:55:43.903052152Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:56:13 minikube dockerd[588]: time="2023-01-03T04:56:13.879896753Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:56:13 minikube dockerd[588]: time="2023-01-03T04:56:13.880008841Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:56:13 minikube dockerd[588]: time="2023-01-03T04:56:13.885512104Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:56:43 minikube dockerd[588]: time="2023-01-03T04:56:43.866845245Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:56:43 minikube dockerd[588]: time="2023-01-03T04:56:43.866971662Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:56:43 minikube dockerd[588]: time="2023-01-03T04:56:43.874578927Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:57:13 minikube dockerd[588]: time="2023-01-03T04:57:13.851592983Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:57:13 minikube dockerd[588]: time="2023-01-03T04:57:13.851673582Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:57:13 minikube dockerd[588]: time="2023-01-03T04:57:13.855074656Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:57:43 minikube dockerd[588]: time="2023-01-03T04:57:43.836718787Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:57:43 minikube dockerd[588]: time="2023-01-03T04:57:43.836797221Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:57:43 minikube dockerd[588]: time="2023-01-03T04:57:43.842089718Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:58:13 minikube dockerd[588]: time="2023-01-03T04:58:13.820761568Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:58:13 minikube dockerd[588]: time="2023-01-03T04:58:13.821432642Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:58:13 minikube dockerd[588]: time="2023-01-03T04:58:13.827385572Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:58:43 minikube dockerd[588]: time="2023-01-03T04:58:43.805513790Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:58:43 minikube dockerd[588]: time="2023-01-03T04:58:43.805709664Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"
Jan 03 04:58:43 minikube dockerd[588]: time="2023-01-03T04:58:43.812625065Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp [2404:6800:4003:c05::52]:443: connect: cannot assign requested address"

* 
* ==> container status <==
* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID

* 
* ==> describe nodes <==
* 
* ==> dmesg <==
* [Jan 3 02:41] ERROR: earlyprintk= earlyser already used
[  +0.000000] ERROR: earlyprintk= earlyser already used
[Jan 3 02:42] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0x7E, should be 0xDB (20200925/tbprint-173)
[  +0.004674] ACPI: setting ELCR to 0200 (from 06e0)
[  +0.212752]  #2
[  +0.068885]  #3
[  +4.644259] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[  +0.021348] the cryptoloop driver has been deprecated and will be removed in in Linux 5.16
[  +0.050661] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
[  +0.002949] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
[  +9.390960] grpcfuse: loading out-of-tree module taints kernel.
[Jan 3 04:19] hrtimer: interrupt took 1518429 ns

* 
* ==> kernel <==
*  04:59:06 up  2:17,  0 users,  load average: 0.25, 0.46, 0.55
Linux minikube 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"

* 
* ==> kubelet <==
* -- Logs begin at Tue 2023-01-03 04:33:57 UTC, end at Tue 2023-01-03 04:59:06 UTC. --
Jan 03 04:59:01 minikube kubelet[3611]: E0103 04:59:01.175866    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:01 minikube kubelet[3611]: E0103 04:59:01.276548    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:01 minikube kubelet[3611]: E0103 04:59:01.377042    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:01 minikube kubelet[3611]: E0103 04:59:01.477463    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:01 minikube kubelet[3611]: E0103 04:59:01.578178    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:01 minikube kubelet[3611]: E0103 04:59:01.678840    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:01 minikube kubelet[3611]: I0103 04:59:01.757149    3611 kubelet_node_status.go:70] "Attempting to register node" node="minikube"
Jan 03 04:59:01 minikube kubelet[3611]: E0103 04:59:01.757608    3611 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="minikube"
Jan 03 04:59:01 minikube kubelet[3611]: E0103 04:59:01.779583    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:01 minikube kubelet[3611]: E0103 04:59:01.879786    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:01 minikube kubelet[3611]: E0103 04:59:01.979939    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:02 minikube kubelet[3611]: E0103 04:59:02.080653    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:02 minikube kubelet[3611]: E0103 04:59:02.181278    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:02 minikube kubelet[3611]: E0103 04:59:02.281862    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:02 minikube kubelet[3611]: E0103 04:59:02.383308    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:02 minikube kubelet[3611]: E0103 04:59:02.483661    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:02 minikube kubelet[3611]: E0103 04:59:02.585124    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:02 minikube kubelet[3611]: E0103 04:59:02.686718    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:02 minikube kubelet[3611]: E0103 04:59:02.787786    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:02 minikube kubelet[3611]: E0103 04:59:02.879827    3611 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"minikube\" not found"
Jan 03 04:59:02 minikube kubelet[3611]: E0103 04:59:02.889421    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:02 minikube kubelet[3611]: E0103 04:59:02.990790    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:03 minikube kubelet[3611]: E0103 04:59:03.091767    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:03 minikube kubelet[3611]: E0103 04:59:03.192608    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:03 minikube kubelet[3611]: E0103 04:59:03.293076    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:03 minikube kubelet[3611]: E0103 04:59:03.394786    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:03 minikube kubelet[3611]: E0103 04:59:03.495792    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:03 minikube kubelet[3611]: E0103 04:59:03.598751    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:03 minikube kubelet[3611]: E0103 04:59:03.699748    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:03 minikube kubelet[3611]: E0103 04:59:03.800827    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:03 minikube kubelet[3611]: E0103 04:59:03.901154    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:04 minikube kubelet[3611]: E0103 04:59:04.002557    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:04 minikube kubelet[3611]: E0103 04:59:04.103635    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:04 minikube kubelet[3611]: E0103 04:59:04.204005    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:04 minikube kubelet[3611]: E0103 04:59:04.304637    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:04 minikube kubelet[3611]: E0103 04:59:04.406159    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:04 minikube kubelet[3611]: E0103 04:59:04.507013    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:04 minikube kubelet[3611]: E0103 04:59:04.607645    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:04 minikube kubelet[3611]: E0103 04:59:04.708019    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:04 minikube kubelet[3611]: E0103 04:59:04.809110    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:04 minikube kubelet[3611]: E0103 04:59:04.909981    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:05 minikube kubelet[3611]: E0103 04:59:05.010773    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:05 minikube kubelet[3611]: E0103 04:59:05.111524    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:05 minikube kubelet[3611]: E0103 04:59:05.211838    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:05 minikube kubelet[3611]: E0103 04:59:05.313123    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:05 minikube kubelet[3611]: E0103 04:59:05.414700    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:05 minikube kubelet[3611]: E0103 04:59:05.516130    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:05 minikube kubelet[3611]: E0103 04:59:05.617008    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:05 minikube kubelet[3611]: E0103 04:59:05.718114    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:05 minikube kubelet[3611]: E0103 04:59:05.766357    3611 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1736b29f3b7b437b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:time.Date(2023, time.January, 3, 4, 38, 28, 582196091, time.Local), LastTimestamp:time.Date(2023, time.January, 3, 4, 38, 28, 840669457, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events/minikube.1736b29f3b7b437b": dial tcp 192.168.49.2:8443: connect: connection refused'(may retry after sleeping)
Jan 03 04:59:05 minikube kubelet[3611]: E0103 04:59:05.807387    3611 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused
Jan 03 04:59:05 minikube kubelet[3611]: E0103 04:59:05.819129    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:05 minikube kubelet[3611]: E0103 04:59:05.919391    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:06 minikube kubelet[3611]: E0103 04:59:06.020101    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:06 minikube kubelet[3611]: W0103 04:59:06.055031    3611 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jan 03 04:59:06 minikube kubelet[3611]: E0103 04:59:06.055086    3611 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jan 03 04:59:06 minikube kubelet[3611]: E0103 04:59:06.121129    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:06 minikube kubelet[3611]: E0103 04:59:06.221316    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:06 minikube kubelet[3611]: E0103 04:59:06.322327    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"
Jan 03 04:59:06 minikube kubelet[3611]: E0103 04:59:06.424421    3611 kubelet.go:2448] "Error getting node" err="node \"minikube\" not found"

Operating System: macOS (Default)

Driver: Docker

rahul-satal commented 1 year ago

I was getting this error because I was logged in via tsh (the Teleport client). Running 'tsh logout' fixed it for me.
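
For anyone else who hits this, a rough recovery sequence, assuming an active Teleport session (tsh) is what is blocking the container's access to the image registries, which would match the repeated "cannot assign requested address" pull errors in the Docker logs above:

    tsh status        # check whether a Teleport session is still active
    tsh logout        # end the session
    minikube delete   # discard the half-initialized cluster
    minikube start    # retry cluster creation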