Closed JohnGossett closed 2 years ago
Hi @JohnGossett, thanks for reporting your issue with minikube!
Just curious if you're able to open that file yourself in Notepad or another app? It's possible that file has a weird permission on it. Let me know the result, thanks!
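For reference, a quick way to verify access is to test read/write permission on the file directly. This is only an illustrative sketch: on Windows you'd point it at `C:\Users\<you>\.kube\config` (or run `icacls "%USERPROFILE%\.kube\config"` to inspect the ACLs); a temp file stands in here so the sketch is runnable as-is.

```shell
# Illustrative access check; substitute your real kubeconfig path.
# (A temp file is used here only so the example is self-contained.)
f=$(mktemp)
if [ -r "$f" ] && [ -w "$f" ]; then
  echo "accessible"
else
  echo "not accessible: $f"
fi
rm -f "$f"
```

If this prints "not accessible" for your real kubeconfig, the permissions (not minikube) are the likely culprit.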
Hi @JohnGossett, were you able to try @spowelljr's suggestion above?
Nope! Totally missed the message, sorry. I'll try that tonight or tomorrow asap

@JohnGossett any update?
File permissions didn't seem to be the issue: the file opened fine in Notepad++ and the permissions looked correct on manual inspection. Ultimately I ended up completely reinstalling, and the issue didn't recur; I just forgot to update this thread (sorry!)
Cool, I'll go ahead and close this issue then. Feel free to reopen if you ever run into it again.
What Happened?
While trying to set up a minikube cluster following this tutorial, I received the titular error (Exiting due to GUEST_START: Failed kubeconfig update: writing kubeconfig: Error writing file C:\Users\John\.kube\config: open C:\Users\John\.kube\config: Access is denied.). There was no proposed solution above, so I tried setting my KUBECONFIG environment variable to my .kube config file (etc/kube/config), which seemed to help with a similar issue, but it did not affect my problem. I am running: minikube v1.24.0 docker:
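For anyone trying the same workaround: setting the variable looks like this (path illustrative; the POSIX form is shown, with the Windows equivalents in comments since the reporter is on Windows).

```shell
# Point minikube/kubectl at an explicit kubeconfig (illustrative path).
# PowerShell equivalent (current session):  $env:KUBECONFIG = "$HOME\.kube\config"
# cmd.exe, persistent across sessions:      setx KUBECONFIG "%USERPROFILE%\.kube\config"
#   (setx only affects NEW shells, so reopen the terminal afterwards)
export KUBECONFIG="$HOME/.kube/config"
echo "KUBECONFIG=$KUBECONFIG"
```

Note that if the underlying problem is an access-denied error on the file itself, changing which path KUBECONFIG points at won't help unless the new path is actually writable.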
Attach the log file
==> Audit <==
==> Last Start <==
Log file created at: 2022/01/09 18:02:48
Running on machine: LAPTOP-MTL4M75F
Binary: Built with gc go1.17.2 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0109 18:02:48.586345 23808 out.go:297] Setting OutFile to fd 84 ...
I0109 18:02:48.586345 23808 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0109 18:02:48.586345 23808 out.go:310] Setting ErrFile to fd 88...
I0109 18:02:48.586345 23808 out.go:344] TERM=,COLORTERM=, which probably does not support color
W0109 18:02:48.597938 23808 root.go:291] Error reading config file at C:\Users\John.minikube\config\config.json: open C:\Users\John.minikube\config\config.json: The system cannot find the file specified.
I0109 18:02:48.598514 23808 out.go:304] Setting JSON to false
I0109 18:02:48.604179 23808 start.go:112] hostinfo: {"hostname":"LAPTOP-MTL4M75F","uptime":1914,"bootTime":1641767454,"procs":296,"os":"windows","platform":"Microsoft Windows 10 Home","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"cbc9ce39-d767-4b46-bf17-3c86df36da74"}
W0109 18:02:48.604179 23808 start.go:120] gopshost.Virtualization returned error: not implemented yet
I0109 18:02:48.605219 23808 out.go:176] minikube v1.24.0 on Microsoft Windows 10 Home 10.0.19042 Build 19042
I0109 18:02:48.605219 23808 notify.go:174] Checking for updates...
I0109 18:02:48.606801 23808 out.go:176] - KUBECONFIG=C:\Users\John.kube\config
I0109 18:02:48.607317 23808 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I0109 18:02:48.607317 23808 driver.go:343] Setting default libvirt URI to qemu:///system
I0109 18:02:48.852537 23808 docker.go:132] docker version: linux-20.10.11
I0109 18:02:48.857198 23808 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0109 18:02:49.322834 23808 info.go:263] docker info: {ID:G3JI:WJ2E:XMV6:DG3M:H6Z7:LCUJ:2Q6Q:O2NA:KT4O:6BBP:WN27:T425 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:51 SystemTime:2022-01-09 23:02:49.072711351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:12898390016 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.11 ClusterStore: ClusterAdvertise: 
Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.1] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.14.0]] Warnings:}}
I0109 18:02:49.324952 23808 out.go:176] Using the docker driver based on existing profile
I0109 18:02:49.324952 23808 start.go:280] selected driver: docker
I0109 18:02:49.324952 23808 start.go:762] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2500 CPUs:3 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\John:/minikube-host}
I0109 18:02:49.325467 23808 start.go:773] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0109 18:02:49.337417 23808 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0109 18:02:49.738306 23808 info.go:263] docker info: {ID:G3JI:WJ2E:XMV6:DG3M:H6Z7:LCUJ:2Q6Q:O2NA:KT4O:6BBP:WN27:T425 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:51 SystemTime:2022-01-09 23:02:49.530777191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:12898390016 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.11 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No 
blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.1] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.14.0]] Warnings:}}
I0109 18:02:49.777886 23808 cni.go:93] Creating CNI manager for ""
I0109 18:02:49.777886 23808 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0109 18:02:49.777886 23808 start_flags.go:282] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2500 CPUs:3 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\John:/minikube-host}
I0109 18:02:49.778926 23808 out.go:176] Starting control plane node minikube in cluster minikube
I0109 18:02:49.778926 23808 cache.go:118] Beginning downloading kic base image for docker with docker
I0109 18:02:49.780494 23808 out.go:176] Pulling base image ...
I0109 18:02:49.780494 23808 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I0109 18:02:49.780494 23808 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
I0109 18:02:49.780494 23808 preload.go:148] Found local preload: C:\Users\John.minikube\cache\preloaded-tarball\preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4
I0109 18:02:49.780494 23808 cache.go:57] Caching tarball of preloaded images
I0109 18:02:49.781018 23808 preload.go:174] Found C:\Users\John.minikube\cache\preloaded-tarball\preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0109 18:02:49.781018 23808 cache.go:60] Finished verifying existence of preloaded tar for v1.22.3 on docker
I0109 18:02:49.781018 23808 profile.go:147] Saving config to C:\Users\John.minikube\profiles\minikube\config.json ...
I0109 18:02:50.038168 23808 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
I0109 18:02:50.038168 23808 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
I0109 18:02:50.038168 23808 cache.go:206] Successfully downloaded all kic artifacts
I0109 18:02:50.038168 23808 start.go:313] acquiring machines lock for minikube: {Name:mk58a5189f63c75d481eb1d5166fb2cfbfe822d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0109 18:02:50.038700 23808 start.go:317] acquired machines lock for "minikube" in 532.3µs
I0109 18:02:50.038700 23808 start.go:93] Skipping create...Using existing machine configuration
I0109 18:02:50.038700 23808 fix.go:55] fixHost starting:
I0109 18:02:50.047541 23808 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0109 18:02:50.287481 23808 fix.go:108] recreateIfNeeded on minikube: state=Running err=
W0109 18:02:50.287481 23808 fix.go:134] unexpected machine state, will restart:
I0109 18:02:50.288530 23808 out.go:176] * Updating the running docker "minikube" container ...
I0109 18:02:50.288530 23808 machine.go:88] provisioning docker machine ...
I0109 18:02:50.288530 23808 ubuntu.go:169] provisioning hostname "minikube"
I0109 18:02:50.292686 23808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0109 18:02:50.549913 23808 main.go:130] libmachine: Using SSH client type: native
I0109 18:02:50.550431 23808 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x539f80] 0x53ce40 [] 0s} 127.0.0.1 60818 }
I0109 18:02:50.550431 23808 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0109 18:02:50.611147 23808 main.go:130] libmachine: SSH cmd err, output: : minikube
I0109 18:02:50.615318 23808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0109 18:02:50.873494 23808 main.go:130] libmachine: Using SSH client type: native
I0109 18:02:50.873494 23808 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x539f80] 0x53ce40 [] 0s} 127.0.0.1 60818 }
I0109 18:02:50.873494 23808 main.go:130] libmachine: About to run SSH command:
I0109 18:02:50.928703 23808 main.go:130] libmachine: SSH cmd err, output::
I0109 18:02:50.928703 23808 ubuntu.go:175] set auth options {CertDir:C:\Users\John.minikube CaCertPath:C:\Users\John.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\John.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\John.minikube\machines\server.pem ServerKeyPath:C:\Users\John.minikube\machines\server-key.pem ClientKeyPath:C:\Users\John.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\John.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\John.minikube}
I0109 18:02:50.928703 23808 ubuntu.go:177] setting up certificates
I0109 18:02:50.928703 23808 provision.go:83] configureAuth start
I0109 18:02:50.934388 23808 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0109 18:02:51.214845 23808 provision.go:138] copyHostCerts
I0109 18:02:51.214845 23808 exec_runner.go:144] found C:\Users\John.minikube/key.pem, removing ...
I0109 18:02:51.214845 23808 exec_runner.go:207] rm: C:\Users\John.minikube\key.pem
I0109 18:02:51.215317 23808 exec_runner.go:151] cp: C:\Users\John.minikube\certs\key.pem --> C:\Users\John.minikube/key.pem (1675 bytes)
I0109 18:02:51.215864 23808 exec_runner.go:144] found C:\Users\John.minikube/ca.pem, removing ...
I0109 18:02:51.216367 23808 exec_runner.go:207] rm: C:\Users\John.minikube\ca.pem
I0109 18:02:51.216389 23808 exec_runner.go:151] cp: C:\Users\John.minikube\certs\ca.pem --> C:\Users\John.minikube/ca.pem (1070 bytes)
I0109 18:02:51.216913 23808 exec_runner.go:144] found C:\Users\John.minikube/cert.pem, removing ...
I0109 18:02:51.216913 23808 exec_runner.go:207] rm: C:\Users\John.minikube\cert.pem
I0109 18:02:51.216913 23808 exec_runner.go:151] cp: C:\Users\John.minikube\certs\cert.pem --> C:\Users\John.minikube/cert.pem (1115 bytes)
I0109 18:02:51.217455 23808 provision.go:112] generating server cert: C:\Users\John.minikube\machines\server.pem ca-key=C:\Users\John.minikube\certs\ca.pem private-key=C:\Users\John.minikube\certs\ca-key.pem org=John.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0109 18:02:51.287452 23808 provision.go:172] copyRemoteCerts
I0109 18:02:51.297503 23808 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0109 18:02:51.297503 23808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0109 18:02:51.572054 23808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60818 SSHKeyPath:C:\Users\John.minikube\machines\minikube\id_rsa Username:docker}
I0109 18:02:51.650818 23808 ssh_runner.go:319] scp C:\Users\John.minikube\certs\ca.pem --> /etc/docker/ca.pem (1070 bytes)
I0109 18:02:51.662252 23808 ssh_runner.go:319] scp C:\Users\John.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
I0109 18:02:51.673194 23808 ssh_runner.go:319] scp C:\Users\John.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0109 18:02:51.684145 23808 provision.go:86] duration metric: configureAuth took 755.4421ms
I0109 18:02:51.684145 23808 ubuntu.go:193] setting minikube options for container-runtime
I0109 18:02:51.684677 23808 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I0109 18:02:51.689320 23808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0109 18:02:51.970461 23808 main.go:130] libmachine: Using SSH client type: native
I0109 18:02:51.970976 23808 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x539f80] 0x53ce40 [] 0s} 127.0.0.1 60818 }
I0109 18:02:51.970976 23808 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0109 18:02:52.027543 23808 main.go:130] libmachine: SSH cmd err, output: : overlay
I0109 18:02:52.027543 23808 ubuntu.go:71] root file system type: overlay
I0109 18:02:52.027543 23808 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0109 18:02:52.032250 23808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0109 18:02:52.320467 23808 main.go:130] libmachine: Using SSH client type: native
I0109 18:02:52.320467 23808 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x539f80] 0x53ce40 [] 0s} 127.0.0.1 60818 }
I0109 18:02:52.320467 23808 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0109 18:02:52.381814 23808 main.go:130] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0109 18:02:52.386477 23808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0109 18:02:52.668075 23808 main.go:130] libmachine: Using SSH client type: native
I0109 18:02:52.668401 23808 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x539f80] 0x53ce40 [] 0s} 127.0.0.1 60818 }
I0109 18:02:52.668401 23808 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0109 18:02:52.727214 23808 main.go:130] libmachine: SSH cmd err, output: :
I0109 18:02:52.727227 23808 machine.go:91] provisioned docker machine in 2.4386845s
I0109 18:02:52.727233 23808 start.go:267] post-start starting for "minikube" (driver="docker")
I0109 18:02:52.727233 23808 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0109 18:02:52.732980 23808 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0109 18:02:52.736652 23808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0109 18:02:53.039928 23808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60818 SSHKeyPath:C:\Users\John.minikube\machines\minikube\id_rsa Username:docker}
I0109 18:02:53.136168 23808 ssh_runner.go:152] Run: cat /etc/os-release
I0109 18:02:53.138802 23808 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0109 18:02:53.138802 23808 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0109 18:02:53.138802 23808 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0109 18:02:53.138802 23808 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0109 18:02:53.138802 23808 filesync.go:126] Scanning C:\Users\John.minikube\addons for local assets ...
I0109 18:02:53.138802 23808 filesync.go:126] Scanning C:\Users\John.minikube\files for local assets ...
I0109 18:02:53.138802 23808 start.go:270] post-start completed in 411.5693ms
I0109 18:02:53.144515 23808 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0109 18:02:53.148657 23808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0109 18:02:53.445608 23808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60818 SSHKeyPath:C:\Users\John.minikube\machines\minikube\id_rsa Username:docker}
I0109 18:02:53.474994 23808 fix.go:57] fixHost completed within 3.4362934s
I0109 18:02:53.474994 23808 start.go:80] releasing machines lock for "minikube", held for 3.4362934s
I0109 18:02:53.479701 23808 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0109 18:02:53.745774 23808 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
I0109 18:02:53.750388 23808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0109 18:02:53.751424 23808 ssh_runner.go:152] Run: systemctl --version
I0109 18:02:53.755545 23808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0109 18:02:54.031882 23808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60818 SSHKeyPath:C:\Users\John.minikube\machines\minikube\id_rsa Username:docker}
I0109 18:02:54.041288 23808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60818 SSHKeyPath:C:\Users\John.minikube\machines\minikube\id_rsa Username:docker}
I0109 18:02:54.343572 23808 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
I0109 18:02:54.356604 23808 ssh_runner.go:152] Run: sudo systemctl cat docker.service
I0109 18:02:54.364444 23808 cruntime.go:255] skipping containerd shutdown because we are bound to it
I0109 18:02:54.370722 23808 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
I0109 18:02:54.378080 23808 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0109 18:02:54.392724 23808 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
I0109 18:02:54.478605 23808 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
I0109 18:02:54.576204 23808 ssh_runner.go:152] Run: sudo systemctl cat docker.service
I0109 18:02:54.588977 23808 ssh_runner.go:152] Run: sudo systemctl daemon-reload
I0109 18:02:54.676937 23808 ssh_runner.go:152] Run: sudo systemctl start docker
I0109 18:02:54.687886 23808 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I0109 18:02:54.725263 23808 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I0109 18:02:54.749378 23808 out.go:203] * Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
I0109 18:02:54.754040 23808 cli_runner.go:115] Run: docker exec -t minikube dig +short host.docker.internal
I0109 18:02:55.115311 23808 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0109 18:02:55.121031 23808 ssh_runner.go:152] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0109 18:02:55.127803 23808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0109 18:02:55.364608 23808 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I0109 18:02:55.368763 23808 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
I0109 18:02:55.389182 23808 docker.go:558] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.22.3
k8s.gcr.io/kube-controller-manager:v1.22.3
k8s.gcr.io/kube-scheduler:v1.22.3
k8s.gcr.io/kube-proxy:v1.22.3
kubernetesui/dashboard:v2.3.1
k8s.gcr.io/etcd:3.5.0-0
kubernetesui/metrics-scraper:v1.0.7
k8s.gcr.io/coredns/coredns:v1.8.4
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.5
-- /stdout --
I0109 18:02:55.389182 23808 docker.go:489] Images already preloaded, skipping extraction
I0109 18:02:55.393346 23808 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
I0109 18:02:55.413599 23808 docker.go:558] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.22.3
k8s.gcr.io/kube-scheduler:v1.22.3
k8s.gcr.io/kube-controller-manager:v1.22.3
k8s.gcr.io/kube-proxy:v1.22.3
kubernetesui/dashboard:v2.3.1
k8s.gcr.io/etcd:3.5.0-0
kubernetesui/metrics-scraper:v1.0.7
k8s.gcr.io/coredns/coredns:v1.8.4
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.5
-- /stdout --
I0109 18:02:55.413599 23808 cache_images.go:79] Images are preloaded, skipping loading
I0109 18:02:55.418296 23808 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
I0109 18:02:55.467221 23808 cni.go:93] Creating CNI manager for ""
I0109 18:02:55.467221 23808 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0109 18:02:55.467221 23808 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0109 18:02:55.467221 23808 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0109 18:02:55.467741 23808 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.22.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0109 18:02:55.467741 23808 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.22.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0109 18:02:55.473578 23808 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.3
I0109 18:02:55.478258 23808 binaries.go:44] Found k8s binaries, skipping transfer
I0109 18:02:55.484472 23808 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0109 18:02:55.488664 23808 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0109 18:02:55.496988 23808 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0109 18:02:55.505354 23808 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2051 bytes)
I0109 18:02:55.519435 23808 ssh_runner.go:152] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0109 18:02:55.521521 23808 certs.go:54] Setting up C:\Users\John\.minikube\profiles\minikube for IP: 192.168.49.2
I0109 18:02:55.522044 23808 certs.go:182] skipping minikubeCA CA generation: C:\Users\John\.minikube\ca.key
I0109 18:02:55.522044 23808 certs.go:182] skipping proxyClientCA CA generation: C:\Users\John\.minikube\proxy-client-ca.key
I0109 18:02:55.522044 23808 certs.go:298] skipping minikube-user signed cert generation: C:\Users\John\.minikube\profiles\minikube\client.key
I0109 18:02:55.522559 23808 certs.go:298] skipping minikube signed cert generation: C:\Users\John\.minikube\profiles\minikube\apiserver.key.dd3b5fb2
I0109 18:02:55.522559 23808 certs.go:298] skipping aggregator signed cert generation: C:\Users\John\.minikube\profiles\minikube\proxy-client.key
I0109 18:02:55.523081 23808 certs.go:388] found cert: C:\Users\John\.minikube\certs\C:\Users\John\.minikube\certs\ca-key.pem (1675 bytes)
I0109 18:02:55.523081 23808 certs.go:388] found cert: C:\Users\John\.minikube\certs\C:\Users\John\.minikube\certs\ca.pem (1070 bytes)
I0109 18:02:55.523081 23808 certs.go:388] found cert: C:\Users\John\.minikube\certs\C:\Users\John\.minikube\certs\cert.pem (1115 bytes)
I0109 18:02:55.523081 23808 certs.go:388] found cert: C:\Users\John\.minikube\certs\C:\Users\John\.minikube\certs\key.pem (1675 bytes)
I0109 18:02:55.524117 23808 ssh_runner.go:319] scp C:\Users\John\.minikube\profiles\minikube\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0109 18:02:55.535710 23808 ssh_runner.go:319] scp C:\Users\John\.minikube\profiles\minikube\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0109 18:02:55.546645 23808 ssh_runner.go:319] scp C:\Users\John\.minikube\profiles\minikube\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0109 18:02:55.557605 23808 ssh_runner.go:319] scp C:\Users\John\.minikube\profiles\minikube\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0109 18:02:55.567976 23808 ssh_runner.go:319] scp C:\Users\John\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0109 18:02:55.578904 23808 ssh_runner.go:319] scp C:\Users\John\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0109 18:02:55.589847 23808 ssh_runner.go:319] scp C:\Users\John\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0109 18:02:55.600927 23808 ssh_runner.go:319] scp C:\Users\John\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0109 18:02:55.612387 23808 ssh_runner.go:319] scp C:\Users\John\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0109 18:02:55.623460 23808 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0109 18:02:55.637529 23808 ssh_runner.go:152] Run: openssl version
I0109 18:02:55.646883 23808 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0109 18:02:55.657808 23808 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0109 18:02:55.660421 23808 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 9 22:39 /usr/share/ca-certificates/minikubeCA.pem
I0109 18:02:55.666121 23808 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0109 18:02:55.675450 23808 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0109 18:02:55.680147 23808 kubeadm.go:390] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2500 CPUs:3 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\John:/minikube-host}
I0109 18:02:55.684797 23808 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system) --format={{.ID}}
I0109 18:02:55.709465 23808 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0109 18:02:55.714711 23808 kubeadm.go:401] found existing configuration files, will attempt cluster restart
I0109 18:02:55.714711 23808 kubeadm.go:600] restartCluster start
I0109 18:02:55.720954 23808 ssh_runner.go:152] Run: sudo test -d /data/minikube
I0109 18:02:55.725550 23808 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0109 18:02:55.729671 23808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0109 18:02:55.969227 23808 kubeconfig.go:116] verify returned: extract IP: "minikube" does not appear in C:\Users\John\.kube\config
I0109 18:02:55.969227 23808 kubeconfig.go:127] "minikube" context is missing from C:\Users\John\.kube\config - will repair!
I0109 18:02:55.969746 23808 lock.go:35] WriteFile acquiring C:\Users\John\.kube\config: {Name:mk323df93072bb1fdcb54a5cfaf36a282ef18e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
W0109 18:02:55.970268 23808 kubeadm.go:636] unable to update kubeconfig (cluster will likely require a reset): write: Error writing file C:\Users\John\.kube\config: open C:\Users\John\.kube\config: Access is denied.
I0109 18:02:55.970790 23808 kubeadm.go:604] restartCluster took 256.0793ms
W0109 18:02:55.970790 23808 out.go:241] ! Unable to restart cluster, will reset it: getting k8s client: client config: client config: context "minikube" does not exist
I0109 18:02:55.971310 23808 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0109 18:03:28.193126 23808 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (32.2218162s)
I0109 18:03:28.199545 23808 ssh_runner.go:152] Run: sudo systemctl stop -f kubelet
I0109 18:03:28.211517 23808 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_.*_(kube-system) --format={{.ID}}
I0109 18:03:28.237256 23808 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0109 18:03:28.242468 23808 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0109 18:03:28.248243 23808 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0109 18:03:28.253477 23808 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0109 18:03:28.253477 23808 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0109 18:03:36.028484 23808 out.go:203] - Generating certificates and keys ...
I0109 18:03:36.031130 23808 out.go:203] - Booting up control plane ...
I0109 18:03:36.034123 23808 out.go:203] - Configuring RBAC rules ...
I0109 18:03:36.035691 23808 cni.go:93] Creating CNI manager for ""
I0109 18:03:36.035691 23808 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0109 18:03:36.035691 23808 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0109 18:03:36.044187 23808 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0109 18:03:36.044187 23808 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2022_01_09T18_03_36_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0109 18:03:36.048773 23808 ops.go:34] apiserver oom_adj: -16
I0109 18:03:36.135344 23808 kubeadm.go:985] duration metric: took 99.6523ms to wait for elevateKubeSystemPrivileges.
I0109 18:03:36.135344 23808 kubeadm.go:392] StartCluster complete in 40.4551966s
I0109 18:03:36.135344 23808 settings.go:142] acquiring lock: {Name:mkdba036b97918f39b5e40b845a11780e1043b53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0109 18:03:36.135344 23808 settings.go:150] Updating kubeconfig: C:\Users\John\.kube\config
I0109 18:03:36.136921 23808 lock.go:35] WriteFile acquiring C:\Users\John\.kube\config: {Name:mk323df93072bb1fdcb54a5cfaf36a282ef18e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0109 18:03:36.138492 23808 out.go:176]
W0109 18:03:36.139019 23808 out.go:241] X Exiting due to GUEST_START: Failed kubeconfig update: writing kubeconfig: Error writing file C:\Users\John\.kube\config: open C:\Users\John\.kube\config: Access is denied.
W0109 18:03:36.139559 23808 out.go:241]
W0109 18:03:36.140086 23808 out.go:241] ╭──────────────────────────────────────────────────────────────────────╮
│                                                                      │
│    If the above advice does not help, please let us know:            │
│    https://github.com/kubernetes/minikube/issues/new/choose          │
│                                                                      │
│    * Please run `minikube logs --file=logs.txt` and attach           │
│      logs.txt to the GitHub issue.                                   │
│                                                                      │
╰──────────────────────────────────────────────────────────────────────╯

==> Docker <==
-- Logs begin at Sun 2022-01-09 22:39:35 UTC, end at Sun 2022-01-09 23:08:04 UTC. --
Jan 09 22:49:26 minikube dockerd[479]: time="2022-01-09T22:49:26.085236733Z" level=info msg="ignoring event" container=5f47cea0c9ddd1f8bd98ce19729d010d81e7c3e26a43a31b8b07a8e70655a9d4 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:49:26 minikube dockerd[479]: time="2022-01-09T22:49:26.247599008Z" level=info msg="ignoring event" container=c7eebe2d397af7e1f3f6d8ec6471e902bdeeb16101315712579cf801ca62c22f module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:49:26 minikube dockerd[479]: time="2022-01-09T22:49:26.374871997Z" level=info msg="ignoring event" container=a5dc465feeb4379e1f4db02b9e65e4eea98053efd02979e6dc764416272a9e18 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:49:26 minikube dockerd[479]: time="2022-01-09T22:49:26.547747459Z" level=info msg="ignoring event" container=4839868ade8eea7fc9b28a299c4ab3f84f7db4cfc3cb1031e0beaed785cd4edf module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:49:36 minikube dockerd[479]: time="2022-01-09T22:49:36.613109245Z" level=info msg="Container 62f6a3e2e5884d3bf0cfaf5b4b26f912b534e99940f0779e221cf8ab7bde6f0e failed to exit within 10 seconds of signal 15 - using the force"
Jan 09 22:49:36 minikube dockerd[479]: time="2022-01-09T22:49:36.639077705Z" level=info msg="ignoring event" container=62f6a3e2e5884d3bf0cfaf5b4b26f912b534e99940f0779e221cf8ab7bde6f0e module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:49:46 minikube dockerd[479]: time="2022-01-09T22:49:46.707053487Z" level=info msg="Container 5ea28158968011c2f1bbe317aa543946669c075da5b7fce8a9ed205ede9ca774 failed to exit within 10 seconds of signal 15 - using the force"
Jan 09 22:49:46 minikube dockerd[479]: time="2022-01-09T22:49:46.729678706Z" level=info msg="ignoring event" container=5ea28158968011c2f1bbe317aa543946669c075da5b7fce8a9ed205ede9ca774 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:49:46 minikube dockerd[479]: time="2022-01-09T22:49:46.887421577Z" level=info msg="ignoring event" container=9ae8b55f0018f35379d0f5be67e7b4365384279f23a701d0baa6a3d6ff3b8697 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:49:47 minikube dockerd[479]: time="2022-01-09T22:49:47.058161771Z" level=info msg="ignoring event" container=0b9361d8b648ca2c5f62a8a0ef64ea99104a56a20dadc6c137180220f099f91f module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:49:47 minikube dockerd[479]: time="2022-01-09T22:49:47.238228430Z" level=info msg="ignoring event" container=df05c204cdec3e474291d58c6c6a57b46433497adb55bb5447743fa621cc9f76 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:49:47 minikube dockerd[479]: time="2022-01-09T22:49:47.408286566Z" level=info msg="ignoring event" container=97dcdbee51d2a19710d17d948e0a2c88fafd9fde31ee2d1d4519a8dd51fabab7 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:55:10 minikube dockerd[479]: time="2022-01-09T22:55:10.755450996Z" level=info msg="ignoring event" container=08135ca3dcce084038568985c3b1cd88b213ca000c6546df5590f041fa123622 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:55:15 minikube dockerd[479]: time="2022-01-09T22:55:15.837175627Z" level=info msg="ignoring event" container=240b0894b15a483197c867f2d99efc6a03c18969b592b4f4cb9ac18bd5519c2b module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:55:15 minikube dockerd[479]: time="2022-01-09T22:55:15.910824011Z" level=info msg="ignoring event" container=03b6b6a23547b8606c7db0610089991e5cb050afec2d346f2e4a3a0da7759db5 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:55:16 minikube dockerd[479]: time="2022-01-09T22:55:16.239046397Z" level=info msg="ignoring event" container=46e20700f7baed0eac51b01ec585db08fc731cf5edac7d8258350262823888eb module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:55:16 minikube dockerd[479]: time="2022-01-09T22:55:16.724936619Z" level=info msg="ignoring event" container=0016f6fd6f96a245cdefad2de90fe621f4de6c0788fb5c19effb0ba23b19b005 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:55:16 minikube dockerd[479]: time="2022-01-09T22:55:16.798300609Z" level=info msg="ignoring event" container=7dcaa4552980bd5d54e80c8a0753e4b1769af71ff8c01780642ea9c26ea11cc6 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:55:16 minikube dockerd[479]: time="2022-01-09T22:55:16.927353648Z" level=info msg="ignoring event" container=a6094412cd453e0117636c96c8648626d267df149bb4fe52784baddcdacb7d8b module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:55:26 minikube dockerd[479]: time="2022-01-09T22:55:26.994472647Z" level=info msg="Container c94af811c6dfb0cf0542c263170e0b4cebdb983259da94e61ca0f02c5db55bfa failed to exit within 10 seconds of signal 15 - using the force"
Jan 09 22:55:27 minikube dockerd[479]: time="2022-01-09T22:55:27.019978885Z" level=info msg="ignoring event" container=c94af811c6dfb0cf0542c263170e0b4cebdb983259da94e61ca0f02c5db55bfa module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:55:27 minikube dockerd[479]: time="2022-01-09T22:55:27.105345574Z" level=info msg="ignoring event" container=d6f46a23ceb0beae3d2474748ddf6c4e5131cf368415365cbb41bf8caedd780c module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:55:37 minikube dockerd[479]: time="2022-01-09T22:55:37.173434726Z" level=info msg="Container 3c0e5a4eee3ff7ff32946c2f508dada3b32163f7ed72576baa26be555c5713da failed to exit within 10 seconds of signal 15 - using the force"
Jan 09 22:55:37 minikube dockerd[479]: time="2022-01-09T22:55:37.201794866Z" level=info msg="ignoring event" container=3c0e5a4eee3ff7ff32946c2f508dada3b32163f7ed72576baa26be555c5713da module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:55:37 minikube dockerd[479]: time="2022-01-09T22:55:37.387673053Z" level=info msg="ignoring event" container=61dc52bf9cf052a7a6c204fb883dfe1f1198a7701f06399c6784b005cc8e301b module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:55:37 minikube dockerd[479]: time="2022-01-09T22:55:37.473613038Z" level=info msg="ignoring event" container=412c64b849a3e2e9fa7f8979c6114c53e679952b48825d1738c1049366cd5fe2 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:55:37 minikube dockerd[479]: time="2022-01-09T22:55:37.598201626Z" level=info msg="ignoring event" container=60e4f96facf7537df7bd3a8370e3b3a82e3fa782b0a15ee0e05815091889bd6c module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:55:37 minikube dockerd[479]: time="2022-01-09T22:55:37.683545384Z" level=info msg="ignoring event" container=791ec6240e7a9ff6c1b10f72f0a58b3b7dd0a0746e00e39456496605160d2923 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:59:25 minikube dockerd[479]: time="2022-01-09T22:59:25.894519314Z" level=info msg="ignoring event" container=b28a032d98f7f5bf403dcf76edc21ef1a7552493f9914bbe95b28ce102be770b module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:59:30 minikube dockerd[479]: time="2022-01-09T22:59:30.968846569Z" level=info msg="ignoring event" container=b91d3fd90ee51bb74d5d0ba51682c4392b113561315bef6fff7ab7ff7f5410d1 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:59:31 minikube dockerd[479]: time="2022-01-09T22:59:31.128295667Z" level=info msg="ignoring event" container=932856a2c660730e70f061fc334409e574a29bd992b73d7b93d27ccc81443e06 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:59:31 minikube dockerd[479]: time="2022-01-09T22:59:31.478584195Z" level=info msg="ignoring event" container=dd4ec63b9558967e31386f70e3eef4eec4647b217b79f45ccff405816fa00023 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:59:31 minikube dockerd[479]: time="2022-01-09T22:59:31.885489828Z" level=info msg="ignoring event" container=26552c5e10b76e30da118b07dfa1312272918d74a2bdab582c5cbcf55c098880 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:59:32 minikube dockerd[479]: time="2022-01-09T22:59:32.068865519Z" level=info msg="ignoring event" container=91f23a6d4c9da048fc12acca06d4f51466f21945b230b153bb71718a11bca326 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:59:32 minikube dockerd[479]: time="2022-01-09T22:59:32.165064855Z" level=info msg="ignoring event" container=40e08d3749311f8e8f0af5c4e471844cc81ada2058b514bdf946718d7d392100 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:59:42 minikube dockerd[479]: time="2022-01-09T22:59:42.235988287Z" level=info msg="Container 496554ef6899068c6f959f4aafebc9445ed639df09269bad50d0bd2fbc848cd7 failed to exit within 10 seconds of signal 15 - using the force"
Jan 09 22:59:42 minikube dockerd[479]: time="2022-01-09T22:59:42.258804130Z" level=info msg="ignoring event" container=496554ef6899068c6f959f4aafebc9445ed639df09269bad50d0bd2fbc848cd7 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:59:42 minikube dockerd[479]: time="2022-01-09T22:59:42.340751672Z" level=info msg="ignoring event" container=5eff06f685866907970028198bda7bef07fd4770072f20cadf30c0f221803354 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:59:52 minikube dockerd[479]: time="2022-01-09T22:59:52.408826292Z" level=info msg="Container 7d64c54a99e458647c964636ca0831014c5ff523a36d25c4f34c74c2345f2ee9 failed to exit within 10 seconds of signal 15 - using the force"
Jan 09 22:59:52 minikube dockerd[479]: time="2022-01-09T22:59:52.432645193Z" level=info msg="ignoring event" container=7d64c54a99e458647c964636ca0831014c5ff523a36d25c4f34c74c2345f2ee9 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:59:52 minikube dockerd[479]: time="2022-01-09T22:59:52.505884146Z" level=info msg="ignoring event" container=21f77d2f311752c7d34d5a8a292b877ae059e5cdca9950ae4bc3abf9ebf9d940 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:59:52 minikube dockerd[479]: time="2022-01-09T22:59:52.607499643Z" level=info msg="ignoring event" container=f9ee9e81cc59b820ec6242efc99283a01002eb0e0c06040557ccbb905292c6e7 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:59:52 minikube dockerd[479]: time="2022-01-09T22:59:52.692401776Z" level=info msg="ignoring event" container=dececf30437270c7ade207f71533b27e6b0ee4a1dc6c5d4f2ab27b70aec76f0b module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 22:59:52 minikube dockerd[479]: time="2022-01-09T22:59:52.781141046Z" level=info msg="ignoring event" container=83f88c5948afc477af7aa0c9008795b67a0918e7543a4faa660807ef828fe3bf module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 23:03:01 minikube dockerd[479]: time="2022-01-09T23:03:01.213794515Z" level=info msg="ignoring event" container=cab000b7b82f4625a42f23d52cf0ad75fce49bf1644d776ab493a9341c00bd06 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 23:03:06 minikube dockerd[479]: time="2022-01-09T23:03:06.293751571Z" level=info msg="ignoring event" container=abc8fd0fd9a3a905e297cf58cda89d440fb5c301248b78be9a65594d975e94a2 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 23:03:06 minikube dockerd[479]: time="2022-01-09T23:03:06.457699700Z" level=info msg="ignoring event" container=f69280fff9c1f6d300ba910d477c30fff43884c14783fef79b658c5d13b8aafe module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 23:03:06 minikube dockerd[479]: time="2022-01-09T23:03:06.795728911Z" level=info msg="ignoring event" container=1f74ac9f902c939454e4734efb8905349dccaebe56c769cb49a33a840d03967e module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 23:03:07 minikube dockerd[479]: time="2022-01-09T23:03:07.175273901Z" level=info msg="ignoring event" container=09cfa8200cc14e81b932166cf0db6d60267ca43f07d9ac54b7f59f73bf4a5d2a module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 23:03:07 minikube dockerd[479]: time="2022-01-09T23:03:07.327765554Z" level=info msg="ignoring event" container=f04fff6c5a15cb5a3aa474c1a44b0e2690dcc85a8f36406963988e2161e9279e module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 23:03:07 minikube dockerd[479]: time="2022-01-09T23:03:07.467321381Z" level=info msg="ignoring event" container=d9bae27c380ee03726368c5e21c5382ba46e4ef9a4e7c8e0ba9da9ad81830e89 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 23:03:17 minikube dockerd[479]: time="2022-01-09T23:03:17.534800774Z" level=info msg="Container e3f9a9f6852c6e61ebf9096e9c3f8cef09fe244ac7e511ac8985516371800287 failed to exit within 10 seconds of signal 15 - using the force"
Jan 09 23:03:17 minikube dockerd[479]: time="2022-01-09T23:03:17.562738260Z" level=info msg="ignoring event" container=e3f9a9f6852c6e61ebf9096e9c3f8cef09fe244ac7e511ac8985516371800287 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 23:03:27 minikube dockerd[479]: time="2022-01-09T23:03:27.630981788Z" level=info msg="Container b7ad20c850818b24a9c17e30c3d15f2a80c94ef9f9d449032f095683666bba81 failed to exit within 10 seconds of signal 15 - using the force"
Jan 09 23:03:27 minikube dockerd[479]: time="2022-01-09T23:03:27.658114641Z" level=info msg="ignoring event" container=b7ad20c850818b24a9c17e30c3d15f2a80c94ef9f9d449032f095683666bba81 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 23:03:27 minikube dockerd[479]: time="2022-01-09T23:03:27.744024596Z" level=info msg="ignoring event" container=a12527f48c904a9ffeb9d7e3e0548eb2fee3dd91a42f7c406638074b4dc016a9 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 23:03:27 minikube dockerd[479]: time="2022-01-09T23:03:27.847654758Z" level=info msg="ignoring event" container=33f4097791f30aabb5fd5a72d67a386cfe3c512e61ea5e543f4f02675ffd5d4e module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 23:03:27 minikube dockerd[479]: time="2022-01-09T23:03:27.940084872Z" level=info msg="ignoring event" container=2c5771e7a3839aa0982f2f5e086a1a451368ce5241534198f518af0e4b32baaa module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 23:03:28 minikube dockerd[479]: time="2022-01-09T23:03:28.058128024Z" level=info msg="ignoring event" container=cf549f83801881889c20138305573d5404173474fa5226ab6ca1a4cecddfaff0 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Jan 09 23:03:28 minikube dockerd[479]: time="2022-01-09T23:03:28.143499551Z" level=info msg="ignoring event" container=cf5a86eb3512a5982b0490248d1fe383d026ec22cd160f0cd6d1eabdb14d7901 module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
==> container status <==
CONTAINER       IMAGE          CREATED        STATE    NAME                      ATTEMPT  POD ID
2446a043e503b   8d147537fb7d1  4 minutes ago  Running  coredns                   0        95fd182ac7fb8
a60c4de4df4d9   8d147537fb7d1  4 minutes ago  Running  coredns                   0        f91470c8be36b
e41e2c0df053b   6120bd723dced  4 minutes ago  Running  kube-proxy                0        3b3823cf5925a
4e2a2d1fd3c05   0048118155842  4 minutes ago  Running  etcd                      4        dc47f151df4eb
d3a274017c693   53224b502ea4d  4 minutes ago  Running  kube-apiserver            4        e1ea56a8abac0
3cc33ddbb1c71   05c905cef780c  4 minutes ago  Running  kube-controller-manager   4        7c0366e5d1938
b9d2c547df651   0aa9c7e31d307  4 minutes ago  Running  kube-scheduler            4        19a261aac0a78
==> coredns [2446a043e503] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5
==> coredns [a60c4de4df4d] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5
==> describe nodes <==
Name:               minikube
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2022_01_09T18_03_36_0700
                    minikube.k8s.io/version=v1.24.0
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 09 Jan 2022 23:03:32 +0000
Taints:
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime:
RenewTime: Sun, 09 Jan 2022 23:08:01 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
  MemoryPressure   False   Sun, 09 Jan 2022 23:03:36 +0000   Sun, 09 Jan 2022 23:03:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 09 Jan 2022 23:03:36 +0000   Sun, 09 Jan 2022 23:03:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sun, 09 Jan 2022 23:03:36 +0000   Sun, 09 Jan 2022 23:03:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sun, 09 Jan 2022 23:03:36 +0000   Sun, 09 Jan 2022 23:03:32 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                16
  ephemeral-storage:  263174212Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             12596084Ki
  pods:               110
Allocatable:
  cpu:                16
  ephemeral-storage:  263174212Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             12596084Ki
  pods:               110
System Info:
  Machine ID:                 bba0be70c47c400ea3cf7733f1c0b4c1
  System UUID:                bba0be70c47c400ea3cf7733f1c0b4c1
  Boot ID:                    48fea21b-f96b-4588-8f82-61f4a2fcb79c
  Kernel Version:             5.10.16.3-microsoft-standard-WSL2
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.8
  Kubelet Version:            v1.22.3
  Kube-Proxy Version:         v1.22.3
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (7 in total)
  Namespace    Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  kube-system  coredns-78fcd69978-jnn72          100m (0%)     0 (0%)      70Mi (0%)        170Mi (1%)     4m16s
  kube-system  coredns-78fcd69978-rx4mq          100m (0%)     0 (0%)      70Mi (0%)        170Mi (1%)     4m16s
  kube-system  etcd-minikube                     100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         4m31s
  kube-system  kube-apiserver-minikube           250m (1%)     0 (0%)      0 (0%)           0 (0%)         4m28s
  kube-system  kube-controller-manager-minikube  200m (1%)     0 (0%)      0 (0%)           0 (0%)         4m28s
  kube-system  kube-proxy-r6dwk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
  kube-system  kube-scheduler-minikube           100m (0%)     0 (0%)      0 (0%)           0 (0%)         4m28s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  cpu                850m (5%)   0 (0%)
  memory             240Mi (1%)  340Mi (2%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                    From        Message
  Normal  Starting                 4m15s                  kube-proxy
  Normal  Starting                 4m35s                  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  4m35s (x3 over 4m35s)  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m35s (x3 over 4m35s)  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m35s (x2 over 4m35s)  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  4m35s                  kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 4m29s                  kubelet     Starting kubelet.
  Normal  NodeAllocatableEnforced  4m28s                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  4m28s                  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m28s                  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m28s                  kubelet     Node minikube status is now: NodeHasSufficientPID
==> dmesg <==
[ +0.000001] kvm: no hardware support
[ +0.011008] hv_utils: cannot register PTP clock: 0
[Jan 9 22:36] FS-Cache: Duplicate cookie detected
[ +0.000002] FS-Cache: O-cookie c=00000000f802b728 [p=0000000034faf1f9 fl=222 nc=0 na=1]
[ +0.000001] FS-Cache: O-cookie d=00000000af0b6750 n=00000000b77712ae
[ +0.000000] FS-Cache: O-key=[10] '34323934393532343434'
[ +0.000003] FS-Cache: N-cookie c=00000000a1c92ae8 [p=0000000034faf1f9 fl=2 nc=0 na=1]
[ +0.000000] FS-Cache: N-cookie d=00000000af0b6750 n=0000000052acc172
[ +0.000000] FS-Cache: N-key=[10] '34323934393532343434'
[ +0.000161] init: (1) ERROR: ConfigApplyWindowsLibPath:2129: open /etc/ld.so.conf.d/ld.wsl.conf
[ +0.000001] failed 2
[ +0.069430] init: (1) ERROR: UpdateTimezone:97: America/New_York timezone not found. Is the tzdata package installed?
[ +0.000004] init: (1) ERROR: InitEntryUtilityVm:2434: UpdateTimezone failed
[ +0.314779] FS-Cache: Duplicate cookie detected
[ +0.000002] FS-Cache: O-cookie c=00000000a1c92ae8 [p=0000000034faf1f9 fl=222 nc=0 na=1]
[ +0.000001] FS-Cache: O-cookie d=00000000af0b6750 n=00000000c39e434a
[ +0.000000] FS-Cache: O-key=[10] '34323934393532343832'
[ +0.000002] FS-Cache: N-cookie c=00000000e8e1bd41 [p=0000000034faf1f9 fl=2 nc=0 na=1]
[ +0.000001] FS-Cache: N-cookie d=00000000af0b6750 n=0000000005d9901a
[ +0.000000] FS-Cache: N-key=[10] '34323934393532343832'
[ +0.000143] init: (1) ERROR: ConfigApplyWindowsLibPath:2129: open /etc/ld.so.conf.d/ld.wsl.conf
[ +0.000001] failed 2
[ +0.000490] init: (2) ERROR: UtilCreateProcessAndWait:486: /bin/mount failed with 2
[ +0.000044] init: (1) ERROR: UtilCreateProcessAndWait:501: /bin/mount failed with status 0x
[ +0.000002] ff00
[ +0.000004] init: (1) ERROR: ConfigMountFsTab:2184: Processing fstab with mount -a failed.
[ +0.000333] init: (3) ERROR: UtilCreateProcessAndWait:486: /bin/mount failed with 2
[ +0.000037] init: (1) ERROR: UtilCreateProcessAndWait:501: /bin/mount failed with status 0x
[ +0.000001] ff00
[ +0.000004] init: (1) ERROR: MountPlan9:493: mount cache=mmap,noatime,trans=fd,rfdno=8,wfdno=8,msize=65536,aname=drvfs;path=C:\;uid=0;gid=0;symlinkroot=/mnt/
[Jan 9 22:37] WSL2: Performing memory compaction.
[Jan 9 22:39] WSL2: Performing memory compaction.
[Jan 9 22:40] WSL2: Performing memory compaction.
[Jan 9 22:41] WSL2: Performing memory compaction.
[Jan 9 22:42] WSL2: Performing memory compaction.
[Jan 9 22:43] WSL2: Performing memory compaction.
[Jan 9 22:44] WSL2: Performing memory compaction.
[Jan 9 22:45] WSL2: Performing memory compaction.
[Jan 9 22:46] WSL2: Performing memory compaction.
[Jan 9 22:47] WSL2: Performing memory compaction.
[Jan 9 22:48] WSL2: Performing memory compaction.
[Jan 9 22:49] WSL2: Performing memory compaction.
[Jan 9 22:50] WSL2: Performing memory compaction.
[Jan 9 22:51] WSL2: Performing memory compaction.
[Jan 9 22:52] WSL2: Performing memory compaction.
[Jan 9 22:53] WSL2: Performing memory compaction.
[Jan 9 22:54] WSL2: Performing memory compaction.
[Jan 9 22:55] WSL2: Performing memory compaction.
[Jan 9 22:56] WSL2: Performing memory compaction.
[Jan 9 22:57] WSL2: Performing memory compaction.
[Jan 9 22:58] WSL2: Performing memory compaction.
[Jan 9 22:59] WSL2: Performing memory compaction.
[Jan 9 23:00] WSL2: Performing memory compaction.
[Jan 9 23:01] WSL2: Performing memory compaction.
[Jan 9 23:02] WSL2: Performing memory compaction.
[Jan 9 23:03] WSL2: Performing memory compaction.
[Jan 9 23:04] WSL2: Performing memory compaction.
[Jan 9 23:05] WSL2: Performing memory compaction.
[Jan 9 23:07] WSL2: Performing memory compaction.
[Jan 9 23:08] WSL2: Performing memory compaction.
==> etcd [4e2a2d1fd3c0] <==
{"level":"info","ts":"2022-01-09T23:03:30.351Z","caller":"etcdmain/etcd.go:72","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=minikube=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"info","ts":"2022-01-09T23:03:30.351Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2022-01-09T23:03:30.351Z","caller":"embed/etcd.go:478","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-01-09T23:03:30.351Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]}
{"level":"info","ts":"2022-01-09T23:03:30.351Z","caller":"embed/etcd.go:307","msg":"starting an etcd server","etcd-version":"3.5.0","git-sha":"946a5a6f2","go-version":"go1.16.3","go-os":"linux","go-arch":"amd64","max-cpu-set":16,"max-cpu-available":16,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.49.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2022-01-09T23:03:30.354Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"2.660833ms"}
{"level":"info","ts":"2022-01-09T23:03:30.359Z","caller":"etcdserver/raft.go:448","msg":"starting local member","local-member-id":"aec36adc501070cc","cluster-id":"fa54960ea34d58be"}
{"level":"info","ts":"2022-01-09T23:03:30.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"}
{"level":"info","ts":"2022-01-09T23:03:30.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 0"}
{"level":"info","ts":"2022-01-09T23:03:30.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2022-01-09T23:03:30.360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 1"}
{"level":"info","ts":"2022-01-09T23:03:30.360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"warn","ts":"2022-01-09T23:03:30.363Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2022-01-09T23:03:30.365Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2022-01-09T23:03:30.367Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2022-01-09T23:03:30.369Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.0","cluster-version":"to_be_decided"}
{"level":"info","ts":"2022-01-09T23:03:30.369Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2022-01-09T23:03:30.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2022-01-09T23:03:30.370Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2022-01-09T23:03:30.370Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-01-09T23:03:30.370Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-01-09T23:03:30.370Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-01-09T23:03:30.370Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-01-09T23:03:30.371Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-01-09T23:03:31.161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2022-01-09T23:03:31.161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2022-01-09T23:03:31.161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2022-01-09T23:03:31.161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2022-01-09T23:03:31.161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2022-01-09T23:03:31.161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2022-01-09T23:03:31.161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2022-01-09T23:03:31.162Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2022-01-09T23:03:31.162Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2022-01-09T23:03:31.162Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-01-09T23:03:31.162Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-01-09T23:03:31.162Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2022-01-09T23:03:31.162Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-01-09T23:03:31.163Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2022-01-09T23:03:31.163Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-01-09T23:03:31.163Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2022-01-09T23:03:31.165Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-01-09T23:03:31.165Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
==> kernel <==
23:08:04 up 34 min, 0 users, load average: 0.05, 0.08, 0.07
Linux minikube 5.10.16.3-microsoft-standard-WSL2 #1 SMP Fri Apr 2 22:23:49 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
==> kube-apiserver [d3a274017c69] <==
W0109 23:03:31.701045       1 genericapiserver.go:455] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0109 23:03:31.704465       1 genericapiserver.go:455] Skipping API apps/v1beta2 because it has no resources.
W0109 23:03:31.704485       1 genericapiserver.go:455] Skipping API apps/v1beta1 because it has no resources.
W0109 23:03:31.705394       1 genericapiserver.go:455] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
I0109 23:03:31.707765       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0109 23:03:31.707781       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0109 23:03:31.722961       1 genericapiserver.go:455] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I0109 23:03:32.823081       1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0109 23:03:32.823187       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0109 23:03:32.823194       1 dynamic_serving_content.go:129] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
I0109 23:03:32.823344       1 secure_serving.go:266] Serving securely on [::]:8443
I0109 23:03:32.823392       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0109 23:03:32.823496       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0109 23:03:32.823510       1 dynamic_serving_content.go:129] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I0109 23:03:32.823555       1 available_controller.go:491] Starting AvailableConditionController
I0109 23:03:32.823559       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0109 23:03:32.823514       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0109 23:03:32.823567       1 apf_controller.go:312] Starting API Priority and Fairness config controller
I0109 23:03:32.823691       1 controller.go:83] Starting OpenAPI AggregationController
I0109 23:03:32.823748       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0109 23:03:32.823757       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0109 23:03:32.823869       1 autoregister_controller.go:141] Starting autoregister controller
I0109 23:03:32.823894       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0109 23:03:32.823911       1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0109 23:03:32.823900       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0109 23:03:32.823930       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0109 23:03:32.823915       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0109 23:03:32.824143       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0109 23:03:32.824322       1 controller.go:85] Starting OpenAPI controller
I0109 23:03:32.824339       1 naming_controller.go:291] Starting NamingConditionController
I0109 23:03:32.824353       1 establishing_controller.go:76] Starting EstablishingController
I0109 23:03:32.824368       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0109 23:03:32.824378       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0109 23:03:32.824388       1 crd_finalizer.go:266] Starting CRDFinalizer
E0109 23:03:32.824837       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg:
I0109 23:03:32.842696       1 shared_informer.go:247] Caches are synced for node_authorizer
I0109 23:03:32.850374       1 controller.go:611] quota admission added evaluator for: namespaces
I0109 23:03:32.887551       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0109 23:03:32.926546       1 cache.go:39] Caches are synced for autoregister controller
I0109 23:03:32.926588       1 apf_controller.go:317] Running API Priority and Fairness config worker
I0109 23:03:32.926609       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0109 23:03:32.926632       1 shared_informer.go:247] Caches are synced for crd-autoregister
I0109 23:03:32.926634       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0109 23:03:32.926560       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0109 23:03:33.823337       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0109 23:03:33.823390       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0109 23:03:33.826929       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0109 23:03:33.828705       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0109 23:03:33.828723       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0109 23:03:34.070291       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0109 23:03:34.091357       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0109 23:03:34.164315       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I0109 23:03:34.164913       1 controller.go:611] quota admission added evaluator for: endpoints
I0109 23:03:34.167100       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0109 23:03:34.840694       1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0109 23:03:35.878652       1 controller.go:611] quota admission added evaluator for: deployments.apps
I0109 23:03:35.897674       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0109 23:03:48.195690       1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0109 23:03:48.545615       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0109 23:03:49.178169       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
==> kube-controller-manager [3cc33ddbb1c7] <==
I0109 23:03:47.541469       1 gc_controller.go:89] Starting GC controller
I0109 23:03:47.541476       1 shared_informer.go:240] Waiting for caches to sync for GC
I0109 23:03:47.593745       1 controllermanager.go:577] Started "disruption"
I0109 23:03:47.593885       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0109 23:03:47.594157       1 disruption.go:363] Starting disruption controller
I0109 23:03:47.594190       1 shared_informer.go:240] Waiting for caches to sync for disruption
W0109 23:03:47.599739       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0109 23:03:47.601785       1 shared_informer.go:247] Caches are synced for node
I0109 23:03:47.601810       1 range_allocator.go:172] Starting range CIDR allocator
I0109 23:03:47.601812       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0109 23:03:47.601817       1 shared_informer.go:247] Caches are synced for cidrallocator
I0109 23:03:47.602964       1 shared_informer.go:247] Caches are synced for taint
I0109 23:03:47.603035       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone:
I0109 23:03:47.603078       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
W0109 23:03:47.603104       1 node_lifecycle_controller.go:1013] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0109 23:03:47.603130       1 node_lifecycle_controller.go:1214] Controller detected that zone is now in state Normal.
I0109 23:03:47.603329       1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0109 23:03:47.605083       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0109 23:03:47.612681       1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
I0109 23:03:47.620299       1 shared_informer.go:247] Caches are synced for stateful set
I0109 23:03:47.631557       1 shared_informer.go:247] Caches are synced for ephemeral
I0109 23:03:47.636755       1 shared_informer.go:247] Caches are synced for endpoint
I0109 23:03:47.639866       1 shared_informer.go:247] Caches are synced for HPA
I0109 23:03:47.640973       1 shared_informer.go:247] Caches are synced for deployment
I0109 23:03:47.640992       1 shared_informer.go:247] Caches are synced for TTL
I0109 23:03:47.641007       1 shared_informer.go:247] Caches are synced for crt configmap
I0109 23:03:47.641165       1 shared_informer.go:247] Caches are synced for expand
I0109 23:03:47.641195       1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0109 23:03:47.641401       1 shared_informer.go:247] Caches are synced for bootstrap_signer
I0109 23:03:47.641626       1 shared_informer.go:247] Caches are synced for GC
I0109 23:03:47.641922       1 shared_informer.go:247] Caches are synced for PVC protection
I0109 23:03:47.645121       1 shared_informer.go:247] Caches are synced for service account
I0109 23:03:47.646217       1 shared_informer.go:247] Caches are synced for namespace
I0109 23:03:47.648371       1 shared_informer.go:247] Caches are synced for job
I0109 23:03:47.652560       1 shared_informer.go:247] Caches are synced for PV protection
I0109 23:03:47.654776       1 shared_informer.go:247] Caches are synced for endpoint_slice
I0109 23:03:47.657925       1 shared_informer.go:247] Caches are synced for daemon sets
I0109 23:03:47.663294       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0109 23:03:47.663319       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0109 23:03:47.663376       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0109 23:03:47.663377       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0109 23:03:47.667538       1 shared_informer.go:247] Caches are synced for attach detach
I0109 23:03:47.690999       1 shared_informer.go:247] Caches are synced for TTL after finished
I0109 23:03:47.691027       1 shared_informer.go:247] Caches are synced for ReplicationController
I0109 23:03:47.691064       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0109 23:03:47.692141       1 shared_informer.go:247] Caches are synced for ReplicaSet
I0109 23:03:47.694319       1 shared_informer.go:247] Caches are synced for disruption
I0109 23:03:47.694333       1 disruption.go:371] Sending events to api server.
I0109 23:03:47.760647       1 shared_informer.go:247] Caches are synced for cronjob
I0109 23:03:47.806279       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0109 23:03:47.841836       1 shared_informer.go:247] Caches are synced for persistent volume
I0109 23:03:47.894051       1 shared_informer.go:247] Caches are synced for resource quota
I0109 23:03:47.898458       1 shared_informer.go:247] Caches are synced for resource quota
I0109 23:03:48.197345       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
I0109 23:03:48.305551       1 shared_informer.go:247] Caches are synced for garbage collector
I0109 23:03:48.316823       1 shared_informer.go:247] Caches are synced for garbage collector
I0109 23:03:48.316847       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0109 23:03:48.549013       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-r6dwk"
I0109 23:03:48.696923       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-jnn72"
I0109 23:03:48.699504       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-rx4mq"
==> kube-proxy [e41e2c0df053] <==
E0109 23:03:49.145100 1 proxier.go:649] "Failed to read builtin modules file. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.16.3-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.16.3-microsoft-standard-WSL2/modules.builtin"
I0109 23:03:49.146434 1 proxier.go:659] "Failed to load kernel module with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0109 23:03:49.147422 1 proxier.go:659] "Failed to load kernel module with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0109 23:03:49.148555 1 proxier.go:659] "Failed to load kernel module with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0109 23:03:49.149477 1 proxier.go:659] "Failed to load kernel module with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0109 23:03:49.150251 1 proxier.go:659] "Failed to load kernel module with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
I0109 23:03:49.157476 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I0109 23:03:49.157505 1 server_others.go:140] Detected node IP 192.168.49.2
W0109 23:03:49.157521 1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
I0109 23:03:49.176069 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0109 23:03:49.176090 1 server_others.go:212] Using iptables Proxier.
I0109 23:03:49.176096 1 server_others.go:219] creating dualStackProxier for iptables.
W0109 23:03:49.176107 1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0109 23:03:49.176337 1 server.go:649] Version: v1.22.3
I0109 23:03:49.176621 1 config.go:224] Starting endpoint slice config controller
I0109 23:03:49.176681 1 config.go:315] Starting service config controller
I0109 23:03:49.176680 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0109 23:03:49.176730 1 shared_informer.go:240] Waiting for caches to sync for service config
I0109 23:03:49.277541 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0109 23:03:49.277573 1 shared_informer.go:247] Caches are synced for service config
==> kube-scheduler [b9d2c547df65] <==
I0109 23:03:30.625313 1 serving.go:347] Generated self-signed cert in-memory
W0109 23:03:32.833833 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0109 23:03:32.833906 1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0109 23:03:32.833931 1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
W0109 23:03:32.833943 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0109 23:03:32.929428 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0109 23:03:32.929478 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0109 23:03:32.929484 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I0109 23:03:32.929613 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E0109 23:03:32.933583 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.PersistentVolume: failed to list v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0109 23:03:32.933615 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.PodDisruptionBudget: failed to list v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0109 23:03:32.933623 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.ReplicationController: failed to list v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0109 23:03:32.933799 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Namespace: failed to list v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0109 23:03:32.933886 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch v1.ConfigMap: failed to list v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0109 23:03:32.933954 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Node: failed to list v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0109 23:03:32.934080 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.PersistentVolumeClaim: failed to list v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0109 23:03:32.934153 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.StorageClass: failed to list v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0109 23:03:32.934243 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Service: failed to list v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0109 23:03:32.934362 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.CSIDriver: failed to list v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0109 23:03:32.934403 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.StatefulSet: failed to list v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0109 23:03:32.934476 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Pod: failed to list v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0109 23:03:32.934510 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.CSINode: failed to list v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0109 23:03:32.934521 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1beta1.CSIStorageCapacity: failed to list v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0109 23:03:32.934601 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.ReplicaSet: failed to list v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0109 23:03:33.840740 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.ReplicaSet: failed to list v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0109 23:03:33.906617 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.PersistentVolume: failed to list v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0109 23:03:33.927294 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Pod: failed to list v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0109 23:03:33.935276 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Node: failed to list v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0109 23:03:34.004393 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch v1.ConfigMap: failed to list v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0109 23:03:35.930397 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Sun 2022-01-09 22:39:35 UTC, end at Sun 2022-01-09 23:08:04 UTC. --
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.138209 19615 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.138212 19615 policy_none.go:49] "None policy: Start"
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.140088 19615 memory_manager.go:168] "Starting memorymanager" policy="None"
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.140126 19615 state_mem.go:35] "Initializing new in-memory state store"
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.140272 19615 state_mem.go:75] "Updated machine memory state"
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.141009 19615 manager.go:607] "Failed to retrieve checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.141232 19615 plugin_manager.go:114] "Starting Kubelet Plugin Manager"
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.150145 19615 kubelet_node_status.go:71] "Attempting to register node" node="minikube"
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.156768 19615 kubelet_node_status.go:109] "Node was previously registered" node="minikube"
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.156856 19615 kubelet_node_status.go:74] "Successfully registered node" node="minikube"
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.327804 19615 topology_manager.go:200] "Topology Admit Handler"
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.327933 19615 topology_manager.go:200] "Topology Admit Handler"
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.327971 19615 topology_manager.go:200] "Topology Admit Handler"
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.327993 19615 topology_manager.go:200] "Topology Admit Handler"
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.428074 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/08a3871e1baa241b73e5af01a6d01393-etcd-data\") pod \"etcd-minikube\" (UID: \"08a3871e1baa241b73e5af01a6d01393\") "
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.428114 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a60ad17d917e03c0e9b4ca796aa9460-ca-certs\") pod \"kube-apiserver-minikube\" (UID: \"5a60ad17d917e03c0e9b4ca796aa9460\") "
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.428136 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a60ad17d917e03c0e9b4ca796aa9460-usr-local-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"5a60ad17d917e03c0e9b4ca796aa9460\") "
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.428153 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b8f48de5a060759b091c9bd8713f19c-usr-local-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"8b8f48de5a060759b091c9bd8713f19c\") "
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.428168 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8b8f48de5a060759b091c9bd8713f19c-flexvolume-dir\") pod \"kube-controller-manager-minikube\" (UID: \"8b8f48de5a060759b091c9bd8713f19c\") "
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.428180 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b8f48de5a060759b091c9bd8713f19c-kubeconfig\") pod \"kube-controller-manager-minikube\" (UID: \"8b8f48de5a060759b091c9bd8713f19c\") "
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.428195 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b8f48de5a060759b091c9bd8713f19c-usr-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"8b8f48de5a060759b091c9bd8713f19c\") "
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.428207 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eee9e2da42102bf0a05e1e7b00e318bf-kubeconfig\") pod \"kube-scheduler-minikube\" (UID: \"eee9e2da42102bf0a05e1e7b00e318bf\") "
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.428218 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a60ad17d917e03c0e9b4ca796aa9460-etc-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"5a60ad17d917e03c0e9b4ca796aa9460\") "
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.428230 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a60ad17d917e03c0e9b4ca796aa9460-usr-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"5a60ad17d917e03c0e9b4ca796aa9460\") "
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.428243 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b8f48de5a060759b091c9bd8713f19c-etc-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"8b8f48de5a060759b091c9bd8713f19c\") "
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.428253 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b8f48de5a060759b091c9bd8713f19c-k8s-certs\") pod \"kube-controller-manager-minikube\" (UID: \"8b8f48de5a060759b091c9bd8713f19c\") "
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.428264 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/08a3871e1baa241b73e5af01a6d01393-etcd-certs\") pod \"etcd-minikube\" (UID: \"08a3871e1baa241b73e5af01a6d01393\") "
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.428275 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a60ad17d917e03c0e9b4ca796aa9460-k8s-certs\") pod \"kube-apiserver-minikube\" (UID: \"5a60ad17d917e03c0e9b4ca796aa9460\") "
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.428287 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b8f48de5a060759b091c9bd8713f19c-ca-certs\") pod \"kube-controller-manager-minikube\" (UID: \"8b8f48de5a060759b091c9bd8713f19c\") "
Jan 09 23:03:36 minikube kubelet[19615]: E0109 23:03:36.583822 19615 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"etcd-minikube\" already exists" pod="kube-system/etcd-minikube"
Jan 09 23:03:36 minikube kubelet[19615]: I0109 23:03:36.977813 19615 apiserver.go:52] "Watching apiserver"
Jan 09 23:03:37 minikube kubelet[19615]: I0109 23:03:37.233731 19615 reconciler.go:157] "Reconciler: start to sync state"
Jan 09 23:03:37 minikube kubelet[19615]: E0109 23:03:37.584050 19615 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-minikube\" already exists" pod="kube-system/kube-controller-manager-minikube"
Jan 09 23:03:37 minikube kubelet[19615]: E0109 23:03:37.783659 19615 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"etcd-minikube\" already exists" pod="kube-system/etcd-minikube"
Jan 09 23:03:37 minikube kubelet[19615]: E0109 23:03:37.984226 19615 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
Jan 09 23:03:38 minikube kubelet[19615]: I0109 23:03:38.178662 19615 request.go:665] Waited for 1.140479126s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
Jan 09 23:03:38 minikube kubelet[19615]: E0109 23:03:38.184368 19615 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube"
Jan 09 23:03:47 minikube kubelet[19615]: I0109 23:03:47.701337 19615 kuberuntime_manager.go:1078] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jan 09 23:03:47 minikube kubelet[19615]: I0109 23:03:47.701686 19615 docker_service.go:359] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Jan 09 23:03:47 minikube kubelet[19615]: I0109 23:03:47.701823 19615 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jan 09 23:03:48 minikube kubelet[19615]: I0109 23:03:48.551775 19615 topology_manager.go:200] "Topology Admit Handler"
Jan 09 23:03:48 minikube kubelet[19615]: I0109 23:03:48.606597 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f783704-a591-40ad-b875-f5832911971f-lib-modules\") pod \"kube-proxy-r6dwk\" (UID: \"7f783704-a591-40ad-b875-f5832911971f\") "
Jan 09 23:03:48 minikube kubelet[19615]: I0109 23:03:48.606639 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2pdb\" (UniqueName: \"kubernetes.io/projected/7f783704-a591-40ad-b875-f5832911971f-kube-api-access-j2pdb\") pod \"kube-proxy-r6dwk\" (UID: \"7f783704-a591-40ad-b875-f5832911971f\") "
Jan 09 23:03:48 minikube kubelet[19615]: I0109 23:03:48.606655 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f783704-a591-40ad-b875-f5832911971f-xtables-lock\") pod \"kube-proxy-r6dwk\" (UID: \"7f783704-a591-40ad-b875-f5832911971f\") "
Jan 09 23:03:48 minikube kubelet[19615]: I0109 23:03:48.606688 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7f783704-a591-40ad-b875-f5832911971f-kube-proxy\") pod \"kube-proxy-r6dwk\" (UID: \"7f783704-a591-40ad-b875-f5832911971f\") "
Jan 09 23:03:48 minikube kubelet[19615]: I0109 23:03:48.700041 19615 topology_manager.go:200] "Topology Admit Handler"
Jan 09 23:03:48 minikube kubelet[19615]: I0109 23:03:48.702407 19615 topology_manager.go:200] "Topology Admit Handler"
Jan 09 23:03:48 minikube kubelet[19615]: W0109 23:03:48.709828 19615 container.go:586] Failed to update stats for container "/kubepods/burstable/podcc519c50-dcbb-4f32-b9dd-cd263f3c7997": /sys/fs/cgroup/cpuset/kubepods/burstable/podcc519c50-dcbb-4f32-b9dd-cd263f3c7997/cpuset.mems found to be empty, continuing to push stats
Jan 09 23:03:48 minikube kubelet[19615]: I0109 23:03:48.808134 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc519c50-dcbb-4f32-b9dd-cd263f3c7997-config-volume\") pod \"coredns-78fcd69978-jnn72\" (UID: \"cc519c50-dcbb-4f32-b9dd-cd263f3c7997\") "
Jan 09 23:03:48 minikube kubelet[19615]: I0109 23:03:48.808182 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e8cee26-0fa3-4762-aa4a-f96862861eb6-config-volume\") pod \"coredns-78fcd69978-rx4mq\" (UID: \"4e8cee26-0fa3-4762-aa4a-f96862861eb6\") "
Jan 09 23:03:48 minikube kubelet[19615]: I0109 23:03:48.808201 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jtd9\" (UniqueName: \"kubernetes.io/projected/cc519c50-dcbb-4f32-b9dd-cd263f3c7997-kube-api-access-9jtd9\") pod \"coredns-78fcd69978-jnn72\" (UID: \"cc519c50-dcbb-4f32-b9dd-cd263f3c7997\") "
Jan 09 23:03:48 minikube kubelet[19615]: I0109 23:03:48.808218 19615 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sldx4\" (UniqueName: \"kubernetes.io/projected/4e8cee26-0fa3-4762-aa4a-f96862861eb6-kube-api-access-sldx4\") pod \"coredns-78fcd69978-rx4mq\" (UID: \"4e8cee26-0fa3-4762-aa4a-f96862861eb6\") "
Jan 09 23:03:49 minikube kubelet[19615]: I0109 23:03:49.697075 19615 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="f91470c8be36bdb3f29a4a98d2eb513be2147300e3652d9075cbb25deb80ac48"
Jan 09 23:03:49 minikube kubelet[19615]: I0109 23:03:49.697111 19615 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-jnn72 through plugin: invalid network status for"
Jan 09 23:03:49 minikube kubelet[19615]: I0109 23:03:49.697172 19615 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-rx4mq through plugin: invalid network status for"
Jan 09 23:03:49 minikube kubelet[19615]: I0109 23:03:49.697961 19615 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-jnn72 through plugin: invalid network status for"
Jan 09 23:03:49 minikube kubelet[19615]: I0109 23:03:49.698393 19615 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="95fd182ac7fb811c5c69d662cc43f461e54294adbee8c013c41806ba41bf6d68"
Jan 09 23:03:50 minikube kubelet[19615]: I0109 23:03:50.707674 19615 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-rx4mq through plugin: invalid network status for"
Jan 09 23:03:50 minikube kubelet[19615]: I0109 23:03:50.710196 19615 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-jnn72 through plugin: invalid network status for"
Jan 09 23:03:56 minikube kubelet[19615]: E0109 23:03:56.165460 19615 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/podcc519c50-dcbb-4f32-b9dd-cd263f3c7997\": RecentStats: unable to find data in memory cache]"
Operating System: Windows
Driver: Docker
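The logs above show the cluster components coming up normally, which is consistent with the failure being local to writing `C:\Users\John\.kube\config` ("Access is denied") rather than anything inside the VM. As a quick sanity check before digging into ACLs, a small cross-platform script can tell whether the kubeconfig path is writable by the current user. This is a hypothetical helper for diagnosis, not part of minikube; the path default mirrors minikube's standard kubeconfig location.

```python
import os
from pathlib import Path

def kubeconfig_writable(path=None):
    """Return True if the kubeconfig file can be written by this user.

    If the file does not exist yet (minikube would create it), fall back
    to checking that the parent directory exists and is writable.
    """
    p = Path(path) if path else Path.home() / ".kube" / "config"
    if p.exists():
        return os.access(p, os.W_OK)
    return p.parent.exists() and os.access(p.parent, os.W_OK)
```

If this returns False on the affected machine, the fix is at the filesystem-permission level (e.g. inspecting the file's ACL in Explorer or with `icacls` on Windows), matching the suggestion earlier in the thread to try opening the file in Notepad.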