kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

storage-provisioner failing with errors. #11513

Closed · 3goats closed this issue 3 years ago

3goats commented 3 years ago

Hi, I've installed Minikube on Ubuntu 21.04; however, the storage-provisioner keeps failing. The root of the problem seems to be a connectivity issue, but I can't work out why it's failing. For example:

```
kubectl logs -f storage-provisioner -n kube-system
```

```
I0526 17:06:19.958373       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0526 17:06:49.963657       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
```
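The fatal line is a timeout dialing `10.96.0.1:443`, i.e. the ClusterIP of the `kubernetes` API service, so it looks like in-cluster service traffic is being dropped somewhere. A quick way to test whether anything else in the cluster can reach that IP (a minimal sketch on my part; the `curlimages/curl` image and flags are just one way to do it):

```
# Hypothetical check, separate from the provisioner: run a throwaway pod
# and try the same /version endpoint the provisioner times out on.
kubectl run nettest --rm -it --restart=Never --image=curlimages/curl -- \
  curl -k -m 5 https://10.96.0.1:443/version
```

If that also times out, the problem is cluster-wide service routing rather than anything specific to the storage-provisioner.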

Steps to reproduce the issue:

  1. `minikube start`
  2. `kubectl get po -A`
  3. `kubectl logs -f storage-provisioner -n kube-system`
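After step 2, a couple of follow-up checks may help narrow things down (a sketch; it assumes the Docker driver and kube-proxy running in its default iptables mode):

```
# Is the 10.96.0.1 ClusterIP actually programmed by kube-proxy's NAT rules?
minikube ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1
# Are kube-proxy and coredns healthy alongside the failing provisioner?
kubectl get pods -n kube-system -o wide
```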

Full output of the `minikube logs` command:

```
* ==> Audit <==
|---------|----------------|----------|---------|---------|-------------------------------|-------------------------------|
| Command | Args           | Profile  | User    | Version | Start Time                    | End Time                      |
|---------|----------------|----------|---------|---------|-------------------------------|-------------------------------|
| start   |                | minikube | cbourne | v1.20.0 | Wed, 26 May 2021 10:42:22 BST | Wed, 26 May 2021 10:45:27 BST |
| service | hello-minikube | minikube | cbourne | v1.20.0 | Wed, 26 May 2021 13:29:11 BST | Wed, 26 May 2021 13:29:11 BST |
| tunnel  |                | minikube | cbourne | v1.20.0 | Wed, 26 May 2021 13:31:53 BST | Wed, 26 May 2021 13:35:40 BST |
| delete  |                | minikube | cbourne | v1.20.0 | Wed, 26 May 2021 13:36:27 BST | Wed, 26 May 2021 13:36:31 BST |
| start   |                | minikube | cbourne | v1.20.0 | Wed, 26 May 2021 17:39:27 BST | Wed, 26 May 2021 17:40:21 BST |
| logs    |                | minikube | cbourne | v1.20.0 | Wed, 26 May 2021 17:43:55 BST | Wed, 26 May 2021 17:43:57 BST |
|---------|----------------|----------|---------|---------|-------------------------------|-------------------------------|

* ==> Last Start <==
Log file created at: 2021/05/26 17:39:27
Running on machine: hirsute-hippo
Binary: Built with gc go1.16.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0526 17:39:27.437238 515401 out.go:291] Setting OutFile to fd 1 ...
I0526 17:39:27.437474 515401 out.go:343] isatty.IsTerminal(1) = true
I0526 17:39:27.437484 515401 out.go:304] Setting ErrFile to fd 2...
I0526 17:39:27.437494 515401 out.go:343] isatty.IsTerminal(2) = true
I0526 17:39:27.437769 515401 root.go:316] Updating PATH: /home/cbourne/.minikube/bin
I0526 17:39:27.438228 515401 out.go:298] Setting JSON to false
I0526 17:39:27.460470 515401 start.go:108] hostinfo: {"hostname":"hirsute-hippo","uptime":105644,"bootTime":1621941524,"procs":443,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"21.04","kernelVersion":"5.11.0-17-generic","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"a2b3df03-17bf-4242-9fe8-a7e3f540c14d"}
I0526 17:39:27.460623 515401 start.go:118] virtualization: kvm host
I0526 17:39:27.469845 515401 out.go:170] 😄 minikube v1.20.0 on Ubuntu 21.04
I0526 17:39:27.470163 515401 notify.go:169] Checking for updates...
I0526 17:39:27.470243 515401 driver.go:322] Setting default libvirt URI to qemu:///system I0526 17:39:27.470293 515401 global.go:103] Querying for installed drivers using PATH=/home/cbourne/.minikube/bin:/home/cbourne/.vscode-server/bin/e713fe9b05fc24facbec8f34fb1017133858842b/bin:/home/cbourne/.vscode-server/bin/e713fe9b05fc24facbec8f34fb1017133858842b/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin I0526 17:39:27.470385 515401 global.go:111] kvm2 default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "virsh": executable file not found in $PATH Reason: Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/} I0526 17:39:27.480312 515401 global.go:111] none default: false priority: 4, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:running the 'none' driver as a regular user requires sudo permissions Reason: Fix: Doc:} I0526 17:39:27.480387 515401 global.go:111] podman default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/} I0526 17:39:27.480410 515401 global.go:111] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0526 17:39:27.480517 515401 global.go:111] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/} I0526 17:39:27.480569 515401 global.go:111] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/} I0526 17:39:27.538652 515401 docker.go:119] docker version: linux-20.10.6 I0526 17:39:27.538734 515401 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0526 17:39:27.673270 515401 info.go:261] docker info: {ID:XS5B:IEEM:LSGG:TKLS:DTG6:UBKX:QGR3:YRDW:BUNG:DV4A:PIID:TPLB Containers:12 ContainersRunning:3 ContainersPaused:0 ContainersStopped:9 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:52 SystemTime:2021-05-26 17:39:27.586364053 +0100 BST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-17-generic OperatingSystem:Ubuntu 21.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:24 
MemTotal:67406245888 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:hirsute-hippo Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.7.0]] Warnings:}} I0526 17:39:27.673418 515401 docker.go:225] overlay module found I0526 17:39:27.673432 515401 global.go:111] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0526 17:39:27.673454 515401 driver.go:258] not recommending "ssh" due to default: false I0526 17:39:27.673477 515401 driver.go:292] Picked: docker I0526 17:39:27.673484 515401 driver.go:293] Alternatives: [ssh] I0526 17:39:27.673490 515401 driver.go:294] Rejects: [none podman virtualbox vmware kvm2] I0526 17:39:27.683301 515401 out.go:170] โœจ Automatically selected the docker driver I0526 17:39:27.683343 515401 start.go:276] selected driver: docker I0526 17:39:27.683351 515401 start.go:718] validating driver "docker" against I0526 17:39:27.683370 515401 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0526 17:39:27.683477 515401 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0526 17:39:27.816679 515401 info.go:261] docker info: {ID:XS5B:IEEM:LSGG:TKLS:DTG6:UBKX:QGR3:YRDW:BUNG:DV4A:PIID:TPLB Containers:12 ContainersRunning:3 ContainersPaused:0 ContainersStopped:9 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:52 SystemTime:2021-05-26 17:39:27.731984207 +0100 BST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-17-generic OperatingSystem:Ubuntu 21.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] 
Secure:true Official:true}} Mirrors:[]} NCPU:24 MemTotal:67406245888 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:hirsute-hippo Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.7.0]] Warnings:}} I0526 17:39:27.816960 515401 start_flags.go:259] no existing cluster config was found, will generate one from the flags I0526 17:39:27.840516 515401 start_flags.go:314] Using suggested 16000MB memory alloc based on sys=64283MB, container=64283MB I0526 17:39:27.840818 515401 start_flags.go:715] Wait components to verify : map[apiserver:true system_pods:true] I0526 17:39:27.840849 515401 cni.go:93] Creating CNI manager for "" I0526 17:39:27.840861 515401 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0526 17:39:27.840881 515401 start_flags.go:273] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:16000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0526 17:39:27.853758 515401 out.go:170] ๐Ÿ‘ Starting control plane node minikube in cluster minikube I0526 17:39:27.853828 515401 cache.go:111] Beginning downloading kic base image for docker with docker W0526 17:39:27.853842 
515401 out.go:424] no arguments passed for "๐Ÿšœ Pulling base image ...\n" - returning raw string W0526 17:39:27.853876 515401 out.go:424] no arguments passed for "๐Ÿšœ Pulling base image ...\n" - returning raw string I0526 17:39:27.861917 515401 out.go:170] ๐Ÿšœ Pulling base image ... I0526 17:39:27.861997 515401 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker I0526 17:39:27.862088 515401 preload.go:106] Found local preload: /home/cbourne/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 I0526 17:39:27.862112 515401 cache.go:54] Caching tarball of preloaded images I0526 17:39:27.862135 515401 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory I0526 17:39:27.862163 515401 preload.go:132] Found /home/cbourne/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0526 17:39:27.862182 515401 image.go:119] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull I0526 17:39:27.862182 515401 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on docker I0526 17:39:27.862200 515401 cache.go:131] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull I0526 17:39:27.862301 515401 image.go:130] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon I0526 17:39:27.863073 515401 profile.go:148] Saving config to /home/cbourne/.minikube/profiles/minikube/config.json ... 
I0526 17:39:27.863119 515401 lock.go:36] WriteFile acquiring /home/cbourne/.minikube/profiles/minikube/config.json: {Name:mkaef8a4bfbdf16780297bee1b1a282e3cd8fdca Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0526 17:39:27.936233 515401 image.go:134] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon, skipping pull I0526 17:39:27.936259 515401 cache.go:155] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in daemon, skipping pull I0526 17:39:27.936319 515401 cache.go:194] Successfully downloaded all kic artifacts I0526 17:39:27.936379 515401 start.go:313] acquiring machines lock for minikube: {Name:mk27799b760a7f35ef5b69e0d242d97f87dbbc84 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0526 17:39:27.936592 515401 start.go:317] acquired machines lock for "minikube" in 178.691ยตs I0526 17:39:27.936627 515401 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:16000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} I0526 17:39:27.936744 515401 start.go:126] createHost starting for "" (driver="docker") I0526 17:39:27.954796 515401 out.go:197] ๐Ÿ”ฅ Creating docker container (CPUs=2, Memory=16000MB) ... I0526 17:39:27.955321 515401 start.go:160] libmachine.API.Create for "minikube" (driver="docker") I0526 17:39:27.955372 515401 client.go:168] LocalClient.Create starting I0526 17:39:27.955534 515401 main.go:128] libmachine: Reading certificate data from /home/cbourne/.minikube/certs/ca.pem I0526 17:39:27.955628 515401 main.go:128] libmachine: Decoding PEM data... I0526 17:39:27.955666 515401 main.go:128] libmachine: Parsing certificate... I0526 17:39:27.955905 515401 main.go:128] libmachine: Reading certificate data from /home/cbourne/.minikube/certs/cert.pem I0526 17:39:27.955960 515401 main.go:128] libmachine: Decoding PEM data... I0526 17:39:27.955988 515401 main.go:128] libmachine: Parsing certificate... 
I0526 17:39:27.956918 515401 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0526 17:39:28.025849 515401 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0526 17:39:28.025994 515401 network_create.go:249] running [docker network inspect minikube] to gather additional debugging logs... I0526 17:39:28.026032 515401 cli_runner.go:115] Run: docker network inspect minikube W0526 17:39:28.091466 515401 cli_runner.go:162] docker network inspect minikube returned with exit code 1 I0526 17:39:28.091507 515401 network_create.go:252] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1 stdout: [] stderr: Error: No such network: minikube I0526 17:39:28.091528 515401 network_create.go:254] output of [docker network inspect minikube]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: minikube ** /stderr ** I0526 17:39:28.091677 515401 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0526 17:39:28.142963 515401 network.go:263] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000c561a8] misses:0} I0526 17:39:28.143055 515401 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0526 17:39:28.143094 515401 network_create.go:100] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ... 
I0526 17:39:28.143218 515401 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube I0526 17:39:28.269361 515401 network_create.go:84] docker network minikube 192.168.49.0/24 created I0526 17:39:28.269388 515401 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container I0526 17:39:28.269468 515401 cli_runner.go:115] Run: docker ps -a --format {{.Names}} I0526 17:39:28.325788 515401 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true I0526 17:39:28.399527 515401 oci.go:102] Successfully created a docker volume minikube I0526 17:39:28.399627 515401 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib I0526 17:39:29.553889 515401 cli_runner.go:168] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib: (1.154225153s) I0526 17:39:29.553908 515401 oci.go:106] Successfully prepared a docker volume minikube W0526 17:39:29.553945 515401 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0526 17:39:29.553953 515401 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. I0526 17:39:29.554009 515401 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker I0526 17:39:29.554011 515401 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'" I0526 17:39:29.554045 515401 preload.go:106] Found local preload: /home/cbourne/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 I0526 17:39:29.554049 515401 kic.go:179] Starting extracting preloaded images to volume ... 
I0526 17:39:29.554112 515401 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/cbourne/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir I0526 17:39:29.675820 515401 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e I0526 17:39:30.538795 515401 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}} I0526 17:39:30.586340 515401 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I0526 17:39:30.636889 515401 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables I0526 17:39:30.818328 515401 oci.go:278] the created container "minikube" has a running status. I0526 17:39:30.818354 515401 kic.go:210] Creating ssh key for kic: /home/cbourne/.minikube/machines/minikube/id_rsa... I0526 17:39:31.140682 515401 kic_runner.go:188] docker (temp): /home/cbourne/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0526 17:39:31.257436 515401 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I0526 17:39:31.300622 515401 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0526 17:39:31.300636 515401 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys] I0526 17:39:36.452679 515401 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/cbourne/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir: (6.898518668s) I0526 17:39:36.452705 515401 kic.go:188] duration metric: took 6.898653 seconds to extract preloaded images to volume I0526 17:39:36.452801 515401 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I0526 17:39:36.501039 515401 machine.go:88] provisioning docker machine ... 
I0526 17:39:36.501084 515401 ubuntu.go:169] provisioning hostname "minikube" I0526 17:39:36.501198 515401 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0526 17:39:36.558084 515401 main.go:128] libmachine: Using SSH client type: native I0526 17:39:36.558463 515401 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 49197 } I0526 17:39:36.558491 515401 main.go:128] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I0526 17:39:36.718985 515401 main.go:128] libmachine: SSH cmd err, output: : minikube I0526 17:39:36.719083 515401 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0526 17:39:36.774432 515401 main.go:128] libmachine: Using SSH client type: native I0526 17:39:36.774813 515401 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 49197 } I0526 17:39:36.774856 515401 main.go:128] libmachine: About to run SSH command: if ! grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi I0526 17:39:36.917585 515401 main.go:128] libmachine: SSH cmd err, output: : I0526 17:39:36.917602 515401 ubuntu.go:175] set auth options {CertDir:/home/cbourne/.minikube CaCertPath:/home/cbourne/.minikube/certs/ca.pem CaPrivateKeyPath:/home/cbourne/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/cbourne/.minikube/machines/server.pem ServerKeyPath:/home/cbourne/.minikube/machines/server-key.pem ClientKeyPath:/home/cbourne/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/cbourne/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/cbourne/.minikube} I0526 17:39:36.917618 515401 ubuntu.go:177] setting up certificates I0526 17:39:36.917626 515401 provision.go:83] configureAuth start I0526 17:39:36.917681 515401 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0526 17:39:36.974417 515401 provision.go:137] copyHostCerts I0526 17:39:36.974520 515401 exec_runner.go:145] found /home/cbourne/.minikube/ca.pem, removing ... I0526 17:39:36.974536 515401 exec_runner.go:190] rm: /home/cbourne/.minikube/ca.pem I0526 17:39:36.974699 515401 exec_runner.go:152] cp: /home/cbourne/.minikube/certs/ca.pem --> /home/cbourne/.minikube/ca.pem (1082 bytes) I0526 17:39:36.974933 515401 exec_runner.go:145] found /home/cbourne/.minikube/cert.pem, removing ... I0526 17:39:36.974944 515401 exec_runner.go:190] rm: /home/cbourne/.minikube/cert.pem I0526 17:39:36.975034 515401 exec_runner.go:152] cp: /home/cbourne/.minikube/certs/cert.pem --> /home/cbourne/.minikube/cert.pem (1123 bytes) I0526 17:39:36.975205 515401 exec_runner.go:145] found /home/cbourne/.minikube/key.pem, removing ... 
I0526 17:39:36.975217 515401 exec_runner.go:190] rm: /home/cbourne/.minikube/key.pem I0526 17:39:36.975297 515401 exec_runner.go:152] cp: /home/cbourne/.minikube/certs/key.pem --> /home/cbourne/.minikube/key.pem (1679 bytes) I0526 17:39:36.975436 515401 provision.go:111] generating server cert: /home/cbourne/.minikube/machines/server.pem ca-key=/home/cbourne/.minikube/certs/ca.pem private-key=/home/cbourne/.minikube/certs/ca-key.pem org=cbourne.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube] I0526 17:39:37.868243 515401 provision.go:165] copyRemoteCerts I0526 17:39:37.868354 515401 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0526 17:39:37.868451 515401 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0526 17:39:37.922491 515401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49197 SSHKeyPath:/home/cbourne/.minikube/machines/minikube/id_rsa Username:docker} I0526 17:39:38.029425 515401 ssh_runner.go:316] scp /home/cbourne/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0526 17:39:38.053109 515401 ssh_runner.go:316] scp /home/cbourne/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes) I0526 17:39:38.083481 515401 ssh_runner.go:316] scp /home/cbourne/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0526 17:39:38.106123 515401 provision.go:86] duration metric: configureAuth took 1.188489976s I0526 17:39:38.106137 515401 ubuntu.go:193] setting minikube options for container-runtime I0526 17:39:38.106360 515401 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0526 17:39:38.154347 515401 main.go:128] libmachine: Using SSH client type: native I0526 17:39:38.154732 515401 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 49197 } I0526 17:39:38.154757 515401 main.go:128] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0526 17:39:38.317910 515401 main.go:128] libmachine: SSH cmd err, output: : overlay I0526 17:39:38.317925 515401 ubuntu.go:71] root file system type: overlay I0526 17:39:38.318160 515401 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ... I0526 17:39:38.318241 515401 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0526 17:39:38.367082 515401 main.go:128] libmachine: Using SSH client type: native I0526 17:39:38.367381 515401 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 49197 } I0526 17:39:38.367532 515401 main.go:128] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. 
Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0526 17:39:38.530197 515401 main.go:128] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0526 17:39:38.530259 515401 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0526 17:39:38.583545 515401 main.go:128] libmachine: Using SSH client type: native I0526 17:39:38.583938 515401 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 49197 } I0526 17:39:38.583980 515401 main.go:128] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0526 17:39:39.848516 515401 main.go:128] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-04-09 22:45:28.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2021-05-26 16:39:38.527713760 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 +Restart=on-failure -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
+ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I0526 17:39:39.848552 515401 machine.go:91] provisioned docker machine in 3.34749551s I0526 17:39:39.848570 515401 client.go:171] LocalClient.Create took 11.893189939s I0526 17:39:39.848619 515401 start.go:168] duration metric: libmachine.API.Create for "minikube" took 11.893282801s I0526 17:39:39.848635 515401 start.go:267] post-start starting for "minikube" (driver="docker") I0526 17:39:39.848644 515401 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0526 17:39:39.848762 515401 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0526 17:39:39.848856 515401 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0526 17:39:39.907365 515401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49197 SSHKeyPath:/home/cbourne/.minikube/machines/minikube/id_rsa Username:docker} I0526 17:39:40.021633 515401 ssh_runner.go:149] Run: cat /etc/os-release I0526 17:39:40.025065 515401 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0526 17:39:40.025092 515401 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0526 17:39:40.025105 515401 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0526 17:39:40.025110 515401 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0526 17:39:40.025120 515401 filesync.go:118] Scanning /home/cbourne/.minikube/addons for local assets ... I0526 17:39:40.025186 515401 filesync.go:118] Scanning /home/cbourne/.minikube/files for local assets ... 
I0526 17:39:40.025219 515401 start.go:270] post-start completed in 176.576463ms I0526 17:39:40.025636 515401 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0526 17:39:40.074670 515401 profile.go:148] Saving config to /home/cbourne/.minikube/profiles/minikube/config.json ... I0526 17:39:40.074985 515401 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0526 17:39:40.075028 515401 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0526 17:39:40.121001 515401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49197 SSHKeyPath:/home/cbourne/.minikube/machines/minikube/id_rsa Username:docker} I0526 17:39:40.234166 515401 start.go:129] duration metric: createHost completed in 12.297405703s I0526 17:39:40.234179 515401 start.go:80] releasing machines lock for "minikube", held for 12.297571876s I0526 17:39:40.234269 515401 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0526 17:39:40.292107 515401 ssh_runner.go:149] Run: systemctl --version I0526 17:39:40.292185 515401 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0526 17:39:40.292189 515401 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/ I0526 17:39:40.292343 515401 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0526 17:39:40.350870 515401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49197 SSHKeyPath:/home/cbourne/.minikube/machines/minikube/id_rsa Username:docker} I0526 17:39:40.351275 515401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49197 SSHKeyPath:/home/cbourne/.minikube/machines/minikube/id_rsa Username:docker} I0526 17:39:40.445719 515401 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd I0526 17:39:40.559657 515401 ssh_runner.go:149] Run: sudo systemctl cat docker.service I0526 17:39:40.573145 515401 cruntime.go:225] skipping containerd shutdown because we are bound to it I0526 17:39:40.573191 515401 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio I0526 17:39:40.585746 515401 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0526 17:39:40.602412 515401 ssh_runner.go:149] Run: sudo systemctl unmask docker.service I0526 17:39:40.704624 515401 ssh_runner.go:149] Run: sudo systemctl enable docker.socket I0526 17:39:40.830731 515401 ssh_runner.go:149] Run: sudo systemctl cat docker.service I0526 17:39:40.844899 515401 ssh_runner.go:149] Run: sudo systemctl daemon-reload I0526 17:39:40.986656 515401 ssh_runner.go:149] Run: sudo systemctl start docker I0526 17:39:41.005881 515401 ssh_runner.go:149] Run: docker version --format {{.Server.Version}} I0526 17:39:41.081233 515401 out.go:197] ๐Ÿณ Preparing Kubernetes v1.20.2 on Docker 20.10.6 ... 
I0526 17:39:41.081354 515401 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0526 17:39:41.138112 515401 ssh_runner.go:149] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts I0526 17:39:41.146045 515401 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0526 17:39:41.163116 515401 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker I0526 17:39:41.163142 515401 preload.go:106] Found local preload: /home/cbourne/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 I0526 17:39:41.163186 515401 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}} I0526 17:39:41.235413 515401 docker.go:528] Got preloaded images: -- stdout -- gcr.io/k8s-minikube/storage-provisioner:v5 k8s.gcr.io/kube-proxy:v1.20.2 k8s.gcr.io/kube-controller-manager:v1.20.2 k8s.gcr.io/kube-apiserver:v1.20.2 k8s.gcr.io/kube-scheduler:v1.20.2 kubernetesui/dashboard:v2.1.0 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/pause:3.2 -- /stdout -- I0526 17:39:41.235444 515401 docker.go:465] Images already preloaded, skipping extraction I0526 17:39:41.235508 515401 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}} I0526 17:39:41.293058 515401 docker.go:528] Got preloaded images: -- stdout -- gcr.io/k8s-minikube/storage-provisioner:v5 k8s.gcr.io/kube-proxy:v1.20.2 k8s.gcr.io/kube-apiserver:v1.20.2 k8s.gcr.io/kube-controller-manager:v1.20.2 k8s.gcr.io/kube-scheduler:v1.20.2 kubernetesui/dashboard:v2.1.0 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/pause:3.2 -- /stdout -- I0526 17:39:41.293084 515401 cache_images.go:74] Images are preloaded, skipping loading I0526 17:39:41.293193 515401 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}} I0526 17:39:41.422787 515401 cni.go:93] Creating CNI manager for "" I0526 17:39:41.422805 515401 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0526 17:39:41.422818 515401 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0526 17:39:41.422838 515401 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 
CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0526 17:39:41.423500 515401 kubeadm.go:157] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta2 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "minikube" kubeletExtraArgs: node-ip: 192.168.49.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 dns: type: CoreDNS etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.20.2 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 I0526 17:39:41.423696 515401 kubeadm.go:901] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0526 17:39:41.423791 515401 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2 I0526 17:39:41.435768 515401 binaries.go:44] Found k8s binaries, skipping transfer I0526 17:39:41.435870 515401 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0526 17:39:41.451585 515401 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes) I0526 17:39:41.472601 515401 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0526 17:39:41.505886 515401 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1840 bytes) I0526 17:39:41.536130 515401 ssh_runner.go:149] Run: grep 
192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0526 17:39:41.543092 515401 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0526 17:39:41.556332 515401 certs.go:52] Setting up /home/cbourne/.minikube/profiles/minikube for IP: 192.168.49.2 I0526 17:39:41.556389 515401 certs.go:171] skipping minikubeCA CA generation: /home/cbourne/.minikube/ca.key I0526 17:39:41.556410 515401 certs.go:171] skipping proxyClientCA CA generation: /home/cbourne/.minikube/proxy-client-ca.key I0526 17:39:41.556463 515401 certs.go:286] generating minikube-user signed cert: /home/cbourne/.minikube/profiles/minikube/client.key I0526 17:39:41.556468 515401 crypto.go:69] Generating cert /home/cbourne/.minikube/profiles/minikube/client.crt with IP's: [] I0526 17:39:42.193956 515401 crypto.go:157] Writing cert to /home/cbourne/.minikube/profiles/minikube/client.crt ... I0526 17:39:42.193978 515401 lock.go:36] WriteFile acquiring /home/cbourne/.minikube/profiles/minikube/client.crt: {Name:mkbf6df79f440ac416cfcfeab9f8837326531646 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0526 17:39:42.194306 515401 crypto.go:165] Writing key to /home/cbourne/.minikube/profiles/minikube/client.key ... I0526 17:39:42.194325 515401 lock.go:36] WriteFile acquiring /home/cbourne/.minikube/profiles/minikube/client.key: {Name:mke74bcc9b8aa47e6fb149462dbcec37afb25675 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0526 17:39:42.194581 515401 certs.go:286] generating minikube signed cert: /home/cbourne/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 I0526 17:39:42.194592 515401 crypto.go:69] Generating cert /home/cbourne/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] I0526 17:39:42.388691 515401 crypto.go:157] Writing cert to /home/cbourne/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ... I0526 17:39:42.388703 515401 lock.go:36] WriteFile acquiring /home/cbourne/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mkb02b28b84dde8842902aef15a3a4a665f066fd Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0526 17:39:42.388854 515401 crypto.go:165] Writing key to /home/cbourne/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ... I0526 17:39:42.388865 515401 lock.go:36] WriteFile acquiring /home/cbourne/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk09767e705a7ffa7443f86bb4aa46ccc53d7434 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0526 17:39:42.388997 515401 certs.go:297] copying /home/cbourne/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/cbourne/.minikube/profiles/minikube/apiserver.crt I0526 17:39:42.389092 515401 certs.go:301] copying /home/cbourne/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/cbourne/.minikube/profiles/minikube/apiserver.key I0526 17:39:42.389179 515401 certs.go:286] generating aggregator signed cert: /home/cbourne/.minikube/profiles/minikube/proxy-client.key I0526 17:39:42.389184 515401 crypto.go:69] Generating cert /home/cbourne/.minikube/profiles/minikube/proxy-client.crt with IP's: [] I0526 17:39:42.909510 515401 crypto.go:157] Writing cert to /home/cbourne/.minikube/profiles/minikube/proxy-client.crt ... 
I0526 17:39:42.909522 515401 lock.go:36] WriteFile acquiring /home/cbourne/.minikube/profiles/minikube/proxy-client.crt: {Name:mkc00da10425bea244479d88e299de74877338dc Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0526 17:39:42.909668 515401 crypto.go:165] Writing key to /home/cbourne/.minikube/profiles/minikube/proxy-client.key ...
I0526 17:39:42.909678 515401 lock.go:36] WriteFile acquiring /home/cbourne/.minikube/profiles/minikube/proxy-client.key: {Name:mkc0dd44ad382680217cd86c078bba2282ef5c56 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0526 17:39:42.909942 515401 certs.go:361] found cert: /home/cbourne/.minikube/certs/home/cbourne/.minikube/certs/ca-key.pem (1679 bytes)
I0526 17:39:42.909987 515401 certs.go:361] found cert: /home/cbourne/.minikube/certs/home/cbourne/.minikube/certs/ca.pem (1082 bytes)
I0526 17:39:42.910022 515401 certs.go:361] found cert: /home/cbourne/.minikube/certs/home/cbourne/.minikube/certs/cert.pem (1123 bytes)
I0526 17:39:42.910056 515401 certs.go:361] found cert: /home/cbourne/.minikube/certs/home/cbourne/.minikube/certs/key.pem (1679 bytes)
I0526 17:39:42.911457 515401 ssh_runner.go:316] scp /home/cbourne/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0526 17:39:42.937952 515401 ssh_runner.go:316] scp /home/cbourne/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0526 17:39:42.962523 515401 ssh_runner.go:316] scp /home/cbourne/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0526 17:39:42.986119 515401 ssh_runner.go:316] scp /home/cbourne/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0526 17:39:43.017425 515401 ssh_runner.go:316] scp /home/cbourne/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0526 17:39:43.048422 515401 ssh_runner.go:316] scp /home/cbourne/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0526 17:39:43.076673 515401 ssh_runner.go:316] scp /home/cbourne/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0526 17:39:43.120816 515401 ssh_runner.go:316] scp /home/cbourne/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0526 17:39:43.149657 515401 ssh_runner.go:316] scp /home/cbourne/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0526 17:39:43.180655 515401 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0526 17:39:43.213463 515401 ssh_runner.go:149] Run: openssl version
I0526 17:39:43.224806 515401 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0526 17:39:43.235388 515401 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0526 17:39:43.241973 515401 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 May 26 09:44 /usr/share/ca-certificates/minikubeCA.pem
I0526 17:39:43.242055 515401 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0526 17:39:43.253597 515401 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0526 17:39:43.268473 515401 kubeadm.go:381] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:16000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0526 17:39:43.268575 515401 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0526 17:39:43.320756 515401 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0526 17:39:43.334699 515401 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0526 17:39:43.348134 515401 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0526 17:39:43.348199 515401 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0526 17:39:43.361484 515401 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0526 17:39:43.361531 515401 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
W0526 17:40:13.909577 515401 out.go:424] no arguments passed for " ▪ Generating certificates and keys ..." - returning raw string
W0526 17:40:13.909614 515401 out.go:424] no arguments passed for " ▪ Generating certificates and keys ..." - returning raw string
I0526 17:40:13.923010 515401 out.go:197] ▪ Generating certificates and keys ...
W0526 17:40:13.925681 515401 out.go:424] no arguments passed for " ▪ Booting up control plane ..." - returning raw string
W0526 17:40:13.925703 515401 out.go:424] no arguments passed for " ▪ Booting up control plane ..." - returning raw string
I0526 17:40:13.933387 515401 out.go:197] ▪ Booting up control plane ...
W0526 17:40:13.934843 515401 out.go:424] no arguments passed for " ▪ Configuring RBAC rules ..." - returning raw string
W0526 17:40:13.934860 515401 out.go:424] no arguments passed for " ▪ Configuring RBAC rules ..." - returning raw string
I0526 17:40:13.945356 515401 out.go:197] ▪ Configuring RBAC rules ...
I0526 17:40:13.947935 515401 cni.go:93] Creating CNI manager for ""
I0526 17:40:13.947945 515401 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0526 17:40:13.947967 515401 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0526 17:40:13.948041 515401 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c61663e942ec43b20e8e70839dcca52e44cd85ae minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_05_26T17_40_13_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0526 17:40:13.948045 515401 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0526 17:40:13.990696 515401 ops.go:34] apiserver oom_adj: -16
I0526 17:40:14.210298 515401 kubeadm.go:977] duration metric: took 262.31261ms to wait for elevateKubeSystemPrivileges.
I0526 17:40:15.056001 515401 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c61663e942ec43b20e8e70839dcca52e44cd85ae minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_05_26T17_40_13_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1.107922122s)
I0526 17:40:15.056048 515401 kubeadm.go:383] StartCluster complete in 31.78757632s
I0526 17:40:15.056076 515401 settings.go:142] acquiring lock: {Name:mk028bb8f5c25ad84cf48f34edeed3ffa06fa595 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0526 17:40:15.056246 515401 settings.go:150] Updating kubeconfig: /home/cbourne/.kube/config
I0526 17:40:15.057964 515401 lock.go:36] WriteFile acquiring /home/cbourne/.kube/config: {Name:mke313a0e935a38e4b09887d15d923cafe923deb Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0526 17:40:15.589068 515401 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0526 17:40:15.589125 515401 start.go:201] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
W0526 17:40:15.589166 515401 out.go:424] no arguments passed for "🔎 Verifying Kubernetes components...\n" - returning raw string
W0526 17:40:15.589190 515401 out.go:424] no arguments passed for "🔎 Verifying Kubernetes components...\n" - returning raw string
I0526 17:40:15.599313 515401 out.go:170] 🔎 Verifying Kubernetes components...
I0526 17:40:15.589341 515401 addons.go:328] enableAddons start: toEnable=map[], additional=[]
I0526 17:40:15.599478 515401 addons.go:55] Setting storage-provisioner=true in profile "minikube"
I0526 17:40:15.599498 515401 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0526 17:40:15.599507 515401 addons.go:131] Setting addon storage-provisioner=true in "minikube"
W0526 17:40:15.599518 515401 addons.go:140] addon storage-provisioner should already be in state true
I0526 17:40:15.599526 515401 addons.go:55] Setting default-storageclass=true in profile "minikube"
I0526 17:40:15.599544 515401 host.go:66] Checking if "minikube" exists ...
I0526 17:40:15.599556 515401 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0526 17:40:15.600228 515401 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0526 17:40:15.600675 515401 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0526 17:40:15.617094 515401 api_server.go:50] waiting for apiserver process to appear ...
I0526 17:40:15.617135 515401 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0526 17:40:15.640337 515401 api_server.go:70] duration metric: took 51.169816ms to wait for apiserver process to appear ...
I0526 17:40:15.640357 515401 api_server.go:86] waiting for apiserver healthz status ...
I0526 17:40:15.640371 515401 api_server.go:223] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0526 17:40:15.661935 515401 out.go:170] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0526 17:40:15.662158 515401 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0526 17:40:15.662171 515401 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0526 17:40:15.662282 515401 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0526 17:40:15.665448 515401 api_server.go:249] https://192.168.49.2:8443/healthz returned 200: ok
I0526 17:40:15.666368 515401 api_server.go:139] control plane version: v1.20.2
I0526 17:40:15.666380 515401 api_server.go:129] duration metric: took 26.01706ms to wait for apiserver health ...
I0526 17:40:15.666388 515401 system_pods.go:43] waiting for kube-system pods to appear ...
I0526 17:40:15.666694 515401 addons.go:131] Setting addon default-storageclass=true in "minikube"
W0526 17:40:15.666705 515401 addons.go:140] addon default-storageclass should already be in state true
I0526 17:40:15.666720 515401 host.go:66] Checking if "minikube" exists ...
I0526 17:40:15.667276 515401 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0526 17:40:15.675397 515401 system_pods.go:59] 0 kube-system pods found
I0526 17:40:15.675414 515401 retry.go:31] will retry after 263.082536ms: only 0 pod(s) have shown up
I0526 17:40:15.710055 515401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49197 SSHKeyPath:/home/cbourne/.minikube/machines/minikube/id_rsa Username:docker}
I0526 17:40:15.713506 515401 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml
I0526 17:40:15.713522 515401 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0526 17:40:15.713606 515401 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0526 17:40:15.778789 515401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49197 SSHKeyPath:/home/cbourne/.minikube/machines/minikube/id_rsa Username:docker}
I0526 17:40:15.831576 515401 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0526 17:40:15.900325 515401 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0526 17:40:15.942369 515401 system_pods.go:59] 0 kube-system pods found
I0526 17:40:15.942394 515401 retry.go:31] will retry after 381.329545ms: only 0 pod(s) have shown up
I0526 17:40:16.327227 515401 system_pods.go:59] 0 kube-system pods found
I0526 17:40:16.327244 515401 retry.go:31] will retry after 422.765636ms: only 0 pod(s) have shown up
I0526 17:40:16.520327 515401 out.go:170] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0526 17:40:16.520353 515401 addons.go:330] enableAddons completed in 931.133639ms
I0526 17:40:16.755736 515401 system_pods.go:59] 1 kube-system pods found
I0526 17:40:16.755773 515401 system_pods.go:61] "storage-provisioner" [544cd6ef-9dc0-4221-86cc-6f77c54823b3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0526 17:40:16.755787 515401 retry.go:31] will retry after 473.074753ms: only 1 pod(s) have shown up
I0526 17:40:17.234755 515401 system_pods.go:59] 1 kube-system pods found
I0526 17:40:17.234783 515401 system_pods.go:61] "storage-provisioner" [544cd6ef-9dc0-4221-86cc-6f77c54823b3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0526 17:40:17.234801 515401 retry.go:31] will retry after 587.352751ms: only 1 pod(s) have shown up
I0526 17:40:17.827546 515401 system_pods.go:59] 1 kube-system pods found
I0526 17:40:17.827574 515401 system_pods.go:61] "storage-provisioner" [544cd6ef-9dc0-4221-86cc-6f77c54823b3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0526 17:40:17.827587 515401 retry.go:31] will retry after 834.206799ms: only 1 pod(s) have shown up
I0526 17:40:18.667756 515401 system_pods.go:59] 1 kube-system pods found
I0526 17:40:18.667785 515401 system_pods.go:61] "storage-provisioner" [544cd6ef-9dc0-4221-86cc-6f77c54823b3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0526 17:40:18.667798 515401 retry.go:31] will retry after 746.553905ms: only 1 pod(s) have shown up
I0526 17:40:19.420850 515401 system_pods.go:59] 1 kube-system pods found
I0526 17:40:19.420879 515401 system_pods.go:61] "storage-provisioner" [544cd6ef-9dc0-4221-86cc-6f77c54823b3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0526 17:40:19.420893 515401 retry.go:31] will retry after 987.362415ms: only 1 pod(s) have shown up
I0526 17:40:20.412663 515401 system_pods.go:59] 1 kube-system pods found
I0526 17:40:20.412689 515401 system_pods.go:61] "storage-provisioner" [544cd6ef-9dc0-4221-86cc-6f77c54823b3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0526 17:40:20.412702 515401 retry.go:31] will retry after 1.189835008s: only 1 pod(s) have shown up
I0526 17:40:21.611668 515401 system_pods.go:59] 5 kube-system pods found
I0526 17:40:21.611701 515401 system_pods.go:61] "etcd-minikube" [84298ecf-f407-4dca-9926-43c3754c21e5] Pending
I0526 17:40:21.611712 515401 system_pods.go:61] "kube-apiserver-minikube" [b0703936-a24b-42d6-b9b0-803b5f372047] Pending
I0526 17:40:21.611721 515401 system_pods.go:61] "kube-controller-manager-minikube" [0a6ff09f-2d80-48ef-a34b-0917cee4a06f] Pending
I0526 17:40:21.611740 515401 system_pods.go:61] "kube-scheduler-minikube" [55030b69-7e3a-4c79-b4a0-56510c3c35fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0526 17:40:21.611753 515401 system_pods.go:61] "storage-provisioner" [544cd6ef-9dc0-4221-86cc-6f77c54823b3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0526 17:40:21.611765 515401 system_pods.go:74] duration metric: took 5.945369873s to wait for pod list to return data ...
I0526 17:40:21.611779 515401 kubeadm.go:538] duration metric: took 6.02261818s to wait for : map[apiserver:true system_pods:true] ...
I0526 17:40:21.611805 515401 node_conditions.go:102] verifying NodePressure condition ...
I0526 17:40:21.619029 515401 node_conditions.go:122] node storage ephemeral capacity is 960202784Ki
I0526 17:40:21.619064 515401 node_conditions.go:123] node cpu capacity is 24
I0526 17:40:21.619109 515401 node_conditions.go:105] duration metric: took 7.293477ms to run NodePressure ...
I0526 17:40:21.619125 515401 start.go:206] waiting for startup goroutines ...
I0526 17:40:21.697003 515401 start.go:460] kubectl: 1.21.1, cluster: 1.20.2 (minor skew: 1)
I0526 17:40:21.704718 515401 out.go:170] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
*
* ==> Docker <==
*
-- Logs begin at Wed 2021-05-26 16:39:31 UTC, end at Wed 2021-05-26 16:44:50 UTC. --
May 26 16:39:31 minikube systemd[1]: Starting Docker Application Container Engine...
May 26 16:39:31 minikube dockerd[219]: time="2021-05-26T16:39:31.321595976Z" level=info msg="Starting up"
May 26 16:39:31 minikube dockerd[219]: time="2021-05-26T16:39:31.323419026Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 26 16:39:31 minikube dockerd[219]: time="2021-05-26T16:39:31.323451710Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 26 16:39:31 minikube dockerd[219]: time="2021-05-26T16:39:31.323485460Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
May 26 16:39:31 minikube dockerd[219]: time="2021-05-26T16:39:31.323511054Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 26 16:39:31 minikube dockerd[219]: time="2021-05-26T16:39:31.325235296Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 26 16:39:31 minikube dockerd[219]: time="2021-05-26T16:39:31.325270526Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 26 16:39:31 minikube dockerd[219]: time="2021-05-26T16:39:31.325298739Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
May 26 16:39:31 minikube dockerd[219]: time="2021-05-26T16:39:31.325319576Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 26 16:39:31 minikube dockerd[219]: time="2021-05-26T16:39:31.622030019Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
May 26 16:39:31 minikube dockerd[219]: time="2021-05-26T16:39:31.708561014Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
May 26 16:39:31 minikube dockerd[219]: time="2021-05-26T16:39:31.708628224Z" level=warning msg="Your kernel does not support cgroup blkio weight"
May 26 16:39:31 minikube dockerd[219]: time="2021-05-26T16:39:31.708651516Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
May 26 16:39:31 minikube dockerd[219]: time="2021-05-26T16:39:31.709054006Z" level=info msg="Loading containers: start."
May 26 16:39:31 minikube dockerd[219]: time="2021-05-26T16:39:31.882193342Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 26 16:39:31 minikube dockerd[219]: time="2021-05-26T16:39:31.986259221Z" level=info msg="Loading containers: done."
May 26 16:39:32 minikube dockerd[219]: time="2021-05-26T16:39:32.114422629Z" level=info msg="Docker daemon" commit=8728dd2 graphdriver(s)=overlay2 version=20.10.6
May 26 16:39:32 minikube dockerd[219]: time="2021-05-26T16:39:32.114742256Z" level=info msg="Daemon has completed initialization"
May 26 16:39:32 minikube systemd[1]: Started Docker Application Container Engine.
May 26 16:39:32 minikube dockerd[219]: time="2021-05-26T16:39:32.192215230Z" level=info msg="API listen on /run/docker.sock"
May 26 16:39:38 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
May 26 16:39:39 minikube systemd[1]: Stopping Docker Application Container Engine...
May 26 16:39:39 minikube dockerd[219]: time="2021-05-26T16:39:39.298528904Z" level=info msg="Processing signal 'terminated'"
May 26 16:39:39 minikube dockerd[219]: time="2021-05-26T16:39:39.301539750Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
May 26 16:39:39 minikube dockerd[219]: time="2021-05-26T16:39:39.303172729Z" level=info msg="Daemon shutdown complete"
May 26 16:39:39 minikube dockerd[219]: time="2021-05-26T16:39:39.303283899Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
May 26 16:39:39 minikube systemd[1]: docker.service: Succeeded.
May 26 16:39:39 minikube systemd[1]: Stopped Docker Application Container Engine.
May 26 16:39:39 minikube systemd[1]: Starting Docker Application Container Engine...
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.370843960Z" level=info msg="Starting up"
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.373461060Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.373508996Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.373561976Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.373593519Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.374978804Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.375019979Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.375062144Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.375095779Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.391069639Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.420090919Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.420141279Z" level=warning msg="Your kernel does not support cgroup blkio weight"
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.420162112Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.420627335Z" level=info msg="Loading containers: start."
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.701246556Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.789000076Z" level=info msg="Loading containers: done."
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.818458821Z" level=info msg="Docker daemon" commit=8728dd2 graphdriver(s)=overlay2 version=20.10.6
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.818514270Z" level=info msg="Daemon has completed initialization"
May 26 16:39:39 minikube systemd[1]: Started Docker Application Container Engine.
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.857524005Z" level=info msg="API listen on [::]:2376"
May 26 16:39:39 minikube dockerd[482]: time="2021-05-26T16:39:39.869902577Z" level=info msg="API listen on /var/run/docker.sock"
May 26 16:41:08 minikube dockerd[482]: time="2021-05-26T16:41:08.939180146Z" level=info msg="ignoring event" container=14239b9c8cd1149c58b78b9700ddb3e29a34d9ea683a8c0b2bb5d70de9dc4c89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 26 16:41:39 minikube dockerd[482]: time="2021-05-26T16:41:39.780203823Z" level=info msg="ignoring event" container=fe6dd471a101259a54b77d668bb9aac6e281428798900263386dff32626ab34d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 26 16:42:24 minikube dockerd[482]: time="2021-05-26T16:42:24.055040031Z" level=info msg="ignoring event" container=74e3f89eff044ed457926fef684fbf1cd75c82f48d9f9414989626cebad1b1e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 26 16:43:18 minikube dockerd[482]: time="2021-05-26T16:43:18.038744733Z" level=info msg="ignoring event" container=e4fa853c1accfa4bda46916a30f63765fc0dc82ff3feeb0c7942cd62359cda1c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 26 16:44:39 minikube dockerd[482]: time="2021-05-26T16:44:39.062146139Z" level=info msg="ignoring event" container=b969586e02132b9c6b8c179fad76d023ad4dedc272c5c13539b2443684f76928 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
*
CONTAINER       IMAGE           CREATED          STATE     NAME                      ATTEMPT   POD ID
b969586e02132   6e38f40d628db   42 seconds ago   Exited    storage-provisioner       4         c9c72ef1d0a2c
81af33b546e06   bfe3a36ebd252   4 minutes ago    Running   coredns                   0         75440db569e73
b8044999a36bd   43154ddb57a83   4 minutes ago    Running   kube-proxy                0         c17aa14b530c5
bae3136fc343b   a8c2fdb8bf76e   4 minutes ago    Running   kube-apiserver            0         bec39652128c9
cb39790321d55   a27166429d98e   4 minutes ago    Running   kube-controller-manager   0         ad3bfb21a6ba6
7f34d7ef18a37   0369cf4303ffd   4 minutes ago    Running   etcd                      0         7c1b82233e015
e4f8d3f66ccb9   ed2c44fbdd78b   4 minutes ago    Running   kube-scheduler            0         ac4177255254a
*
* ==> coredns [81af33b546e0] <==
*
E0526 16:41:34.254247 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0526 16:41:34.465130 1 trace.go:116] Trace[1747278511]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-05-26 16:41:04.464432084 +0000 UTC m=+31.379090783) (total time: 30.000659741s):
Trace[1747278511]: [30.000659741s] [30.000659741s] END
E0526 16:41:34.465162 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0526 16:42:05.725385 1 trace.go:116] Trace[817455089]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-05-26 16:41:35.724457847 +0000 UTC m=+62.639116543) (total time: 30.000843127s):
Trace[817455089]: [30.000843127s] [30.000843127s] END
E0526 16:42:05.725423 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0526 16:42:06.409097 1 trace.go:116] Trace[1006933274]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-05-26 16:41:36.408184666 +0000 UTC m=+63.322843365) (total time: 30.000819289s):
Trace[1006933274]: [30.000819289s] [30.000819289s] END
E0526 16:42:06.409131 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0526 16:42:06.679578 1 trace.go:116] Trace[629431445]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-05-26 16:41:36.678665395 +0000 UTC m=+63.593324091) (total time: 30.000856307s):
Trace[629431445]: [30.000856307s] [30.000856307s] END
E0526 16:42:06.679614 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0526 16:42:39.944330 1 trace.go:116] Trace[469339106]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-05-26 16:42:09.943449139 +0000 UTC m=+96.858107842) (total time: 30.000776093s):
Trace[469339106]: [30.000776093s] [30.000776093s] END
E0526 16:42:39.944370 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0526 16:42:40.515781 1 trace.go:116] Trace[774965466]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-05-26 16:42:10.515013348 +0000 UTC m=+97.429672044) (total time: 30.000727382s):
Trace[774965466]: [30.000727382s] [30.000727382s] END
E0526 16:42:40.515816 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0526 16:42:42.053856 1 trace.go:116] Trace[1852186258]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-05-26 16:42:12.052917817 +0000 UTC m=+98.967576513) (total time: 30.000859147s):
Trace[1852186258]: [30.000859147s] [30.000859147s] END
E0526 16:42:42.053898 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0526 16:43:17.645851 1 trace.go:116] Trace[637979947]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-05-26 16:42:47.644977484 +0000 UTC m=+134.559636180) (total time: 30.000808642s):
Trace[637979947]: [30.000808642s] [30.000808642s] END
E0526 16:43:17.645896 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0526 16:43:20.330788 1 trace.go:116] Trace[443632888]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-05-26 16:42:50.329955147 +0000 UTC m=+137.244613846) (total time: 30.000744809s):
Trace[443632888]: [30.000744809s] [30.000744809s] END
E0526 16:43:20.330825 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0526 16:43:20.568966 1 trace.go:116] Trace[1496193015]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-05-26 16:42:50.568344717 +0000 UTC m=+137.483003430) (total time: 30.00057985s):
Trace[1496193015]: [30.00057985s] [30.00057985s] END
E0526 16:43:20.569000 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0526 16:44:10.075022 1 trace.go:116] Trace[60780408]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-05-26 16:43:40.074138015 +0000 UTC m=+186.988796727) (total time: 30.00083156s):
Trace[60780408]: [30.00083156s] [30.00083156s] END
E0526 16:44:10.075062 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0526 16:44:10.079621 1 trace.go:116] Trace[1304066831]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-05-26 16:43:40.07902715 +0000 UTC m=+186.993685850) (total time: 30.00054978s):
Trace[1304066831]: [30.00054978s] [30.00054978s] END
E0526 16:44:10.079656 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0526 16:44:14.207996 1 trace.go:116] Trace[170625356]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-05-26 16:43:44.207255872 +0000 UTC m=+191.121914578) (total time: 30.000690045s):
Trace[170625356]: [30.000690045s] [30.000690045s] END
E0526 16:44:14.208034 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
*
* ==> describe nodes <==
*
Name:               minikube
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=c61663e942ec43b20e8e70839dcca52e44cd85ae
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2021_05_26T17_40_13_0700
                    minikube.k8s.io/version=v1.20.0
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 26 May 2021 16:40:10 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Wed, 26 May 2021 16:44:42 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 26 May 2021 16:40:29 +0000   Wed, 26 May 2021 16:40:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 26 May 2021 16:40:29 +0000   Wed, 26 May 2021 16:40:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 26 May 2021 16:40:29 +0000   Wed, 26 May 2021 16:40:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Wed, 26 May 2021 16:40:29 +0000   Wed, 26 May 2021 16:40:29 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                24
  ephemeral-storage:  960202784Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             65826412Ki
  pods:               110
Allocatable:
  cpu:                24
  ephemeral-storage:  960202784Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             65826412Ki
  pods:               110
System Info:
  Machine ID:                 822f5ed6656e44929f6c2cc5d6881453
  System UUID:                97e18df8-f3a0-48fd-ba14-9bc569c50c85
  Boot ID:                    9c957af3-f7bc-4149-af07-d2e063386bfc
  Kernel Version:             5.11.0-17-generic
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.6
  Kubelet Version:            v1.20.2
  Kube-Proxy Version:         v1.20.2
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (7 in total)
  Namespace    Name                              CPU Requests          CPU Limits       Memory Requests        Memory Limits          AGE
  ---------    ----                              ------------          ----------       ---------------        -------------          ---
  kube-system  coredns-74ff55c5b-sdwgf           100m (0%!)(MISSING)   0 (0%!)(MISSING)  70Mi (0%!)(MISSING)   170Mi (0%!)(MISSING)   4m22s
  kube-system  etcd-minikube                     100m (0%!)(MISSING)   0 (0%!)(MISSING)  100Mi (0%!)(MISSING)  0 (0%!)(MISSING)       4m29s
  kube-system  kube-apiserver-minikube           250m (1%!)(MISSING)   0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)       4m29s
  kube-system  kube-controller-manager-minikube  200m (0%!)(MISSING)   0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)       4m29s
  kube-system  kube-proxy-hh54g                  0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)       4m22s
  kube-system  kube-scheduler-minikube           100m (0%!)(MISSING)   0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)       4m29s
  kube-system  storage-provisioner               0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)       4m34s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests               Limits
  --------           --------               ------
  cpu                750m (3%!)(MISSING)    0 (0%!)(MISSING)
  memory             170Mi (0%!)(MISSING)   170Mi (0%!)(MISSING)
  ephemeral-storage  100Mi (0%!)(MISSING)   0 (0%!)(MISSING)
  hugepages-1Gi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
Events:
  Type     Reason                   Age                    From        Message
  ----     ------                   ----                   ----        -------
  Normal   NodeHasSufficientMemory  4m52s (x5 over 4m52s)  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    4m52s (x5 over 4m52s)  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     4m52s (x5 over 4m52s)  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal   Starting                 4m30s                  kubelet     Starting kubelet.
  Normal   NodeHasSufficientMemory  4m30s                  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    4m30s                  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     4m30s                  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal   NodeNotReady             4m30s                  kubelet     Node minikube status is now: NodeNotReady
  Normal   NodeAllocatableEnforced  4m29s                  kubelet     Updated Node Allocatable limit across pods
  Normal   NodeReady                4m21s                  kubelet     Node minikube status is now: NodeReady
  Warning  readOnlySysFS            4m20s                  kube-proxy  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
  Normal   Starting                 4m20s                  kube-proxy  Starting kube-proxy.
*
* ==> dmesg <==
*
[ +0.000001] RIP: 0010:gmux_probe.cold+0xa4/0x3e0 [apple_gmux]
[ +0.000004] Code: 31 c0 b9 07 00 00 00 4c 89 f7 f3 ab be 70 00 00 00 4c 89 e7 c7 45 c4 02 00 00 00 e8 15 fa ff ff 89 45 b8 3d ff ff ff 00 7e 09 <0f> 0b c7 45 b8 ff ff ff 00 4d 89 f0 48 c7 c1 80 96 31 c0 4c 89 e2
[ +0.000003] RSP: 0018:ffffc27341353b40 EFLAGS: 00010212
[ +0.000002] RAX: 0000000004000b00 RBX: 0000000000000004 RCX: 0000000000000000
[ +0.000001] RDX: ffff9ffa8f499840 RSI: 0000000000000010 RDI: ffff9ffa81d62498
[ +0.000002] RBP: ffffc27341353b98 R08: 0000000000000010 R09: 0000000000041634
[ +0.000001] R10: ffffc27341353930 R11: ffffffff92953508 R12: ffff9ffa81d62480
[ +0.000001] R13: ffff9ffa82781000 R14: ffffc27341353b4c R15: 0000000000000400
[ +0.000002] FS: 00007f59eb70d8c0(0000) GS:ffffa009bfc00000(0000) knlGS:0000000000000000
[ +0.000002] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ +0.000001] CR2: 000055a59e6f6120 CR3: 0000000110f38003 CR4: 00000000001706e0
[ +0.000002] Call Trace:
[ +0.000003] pnp_device_probe+0xc2/0x160
[ +0.000005] ? gmux_read32+0xc0/0xc0 [apple_gmux]
[ +0.000003] really_probe+0xff/0x460
[ +0.000004] driver_probe_device+0xe9/0x160
[ +0.000003] device_driver_attach+0xab/0xb0
[ +0.000002] __driver_attach+0x8f/0x150
[ +0.000003] ? device_driver_attach+0xb0/0xb0
[ +0.000002] bus_for_each_dev+0x7e/0xc0
[ +0.000002] driver_attach+0x1e/0x20
[ +0.000002] bus_add_driver+0x135/0x1f0
[ +0.000003] driver_register+0x95/0xf0
[ +0.000002] ? 0xffffffffc031e000
[ +0.000002] pnp_register_driver+0x20/0x30
[ +0.000004] gmux_pnp_driver_init+0x15/0x1000 [apple_gmux]
[ +0.000004] do_one_initcall+0x48/0x1d0
[ +0.000004] ? kmem_cache_alloc_trace+0xf6/0x200
[ +0.000005] ? do_init_module+0x28/0x290
[ +0.000004] do_init_module+0x62/0x290
[ +0.000002] load_module+0x6fd/0x780
[ +0.000002] __do_sys_finit_module+0xc2/0x120
[ +0.000004] __x64_sys_finit_module+0x1a/0x20
[ +0.000002] do_syscall_64+0x38/0x90
[ +0.000005] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ +0.000004] RIP: 0033:0x7f59ebbc5f6d
[ +0.000002] Code: 28 0d 00 0f 05 eb a9 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d cb de 0c 00 f7 d8 64 89 01 48
[ +0.000002] RSP: 002b:00007ffc2cb3b238 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
[ +0.000002] RAX: ffffffffffffffda RBX: 000055a59e6f7a50 RCX: 00007f59ebbc5f6d
[ +0.000001] RDX: 0000000000000000 RSI: 00007f59ebd6be2d RDI: 0000000000000010
[ +0.000002] RBP: 0000000000020000 R08: 0000000000000000 R09: 000055a59cd1289d
[ +0.000001] R10: 0000000000000010 R11: 0000000000000246 R12: 00007f59ebd6be2d
[ +0.000001] R13: 0000000000000000 R14: 000055a59e6fc130 R15: 000055a59e6f7a50
[ +0.000003] ---[ end trace 5dc1969516b5fdfd ]---
[ +0.068277] at24 0-0052: supply vcc not found, using dummy regulator
[ +0.000837] at24 0-0053: supply vcc not found, using dummy regulator
[ +0.007404] mei_me 0000:00:16.0: can't derive routing for PCI INT A
[ +0.000004] mei_me 0000:00:16.0: PCI INT A: no GSI
[ +0.119988] b43-phy0 ERROR: FOUND UNSUPPORTED PHY (Analog 12, Type 11 (AC), Revision 1)
[ +0.000010] b43: probe of bcma0:1 failed with error -95
[ +0.715354] applesmc applesmc.768: hwmon_device_register() is deprecated. Please convert the driver to use hwmon_device_register_with_info().
[ +4.105198] CRAT table not found
[ +4.732266] kauditd_printk_skb: 23 callbacks suppressed
[May25 11:19] hid-generic 0005:046D:B338.0007: unknown main item tag 0x0
[ +5.461496] radeon_dp_aux_transfer_native: 2090 callbacks suppressed
[ +2.129722] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[May25 13:22] hid-generic 0005:046D:B338.0008: unknown main item tag 0x0
[May25 16:49] hid-generic 0005:046D:B338.0009: unknown main item tag 0x0
[May26 13:59] radeon_dp_aux_transfer_native: 452 callbacks suppressed
[May26 14:00] hid-generic 0005:046D:B338.000A: unknown main item tag 0x0
*
* ==> etcd [7f34d7ef18a3] <==
*
raft2021/05/26 16:40:00 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2021/05/26 16:40:00 INFO: aec36adc501070cc became follower at term 1
raft2021/05/26 16:40:00 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
2021-05-26 16:40:00.087748 W | auth: simple token is not cryptographically signed
2021-05-26 16:40:00.111569 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
2021-05-26 16:40:00.111670 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2021/05/26 16:40:00 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
2021-05-26 16:40:00.112438 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
2021-05-26 16:40:00.114259 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2021-05-26 16:40:00.114321 I | embed: listening for peers on 192.168.49.2:2380
2021-05-26 16:40:00.114426 I | embed: listening for metrics on http://127.0.0.1:2381
raft2021/05/26 16:40:00 INFO: aec36adc501070cc is starting a new election at term 1
raft2021/05/26 16:40:00 INFO: aec36adc501070cc became candidate at term 2
raft2021/05/26 16:40:00 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
raft2021/05/26 16:40:00 INFO: aec36adc501070cc became leader at term 2
raft2021/05/26 16:40:00 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
2021-05-26 16:40:00.633016 I | etcdserver: setting up the initial cluster version to 3.4
2021-05-26 16:40:00.638709 N | etcdserver/membership: set the initial cluster version to 3.4
2021-05-26 16:40:00.638810 I | etcdserver/api: enabled capabilities for version 3.4
2021-05-26 16:40:00.638830 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
2021-05-26 16:40:00.638846 I | embed: ready to serve client requests
2021-05-26 16:40:00.638867 I | embed: ready to serve client requests
2021-05-26 16:40:00.678021 I | embed: serving client requests on 127.0.0.1:2379
2021-05-26 16:40:00.678564 I | embed: serving client requests on 192.168.49.2:2379
2021-05-26 16:40:10.699129 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" " with result "range_response_count:0 size:4" took too long (111.878362ms) to execute
2021-05-26 16:40:10.699195 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" " with result "range_response_count:0 size:4" took too long (111.941775ms) to execute
2021-05-26 16:40:10.699343 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-public\" " with result "range_response_count:0 size:4" took too long (103.677415ms) to execute
2021-05-26 16:40:10.699458 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" " with result "range_response_count:0 size:4" took too long (103.815026ms) to execute
2021-05-26 16:40:10.795648 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:40:24.364765 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:40:29.006301 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:729" took too long (127.652714ms) to execute
2021-05-26 16:40:29.007156 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3575" took too long (120.851379ms) to execute
2021-05-26 16:40:29.283882 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:16" took too long (101.623394ms) to execute
2021-05-26 16:40:30.221070 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:40:40.221373 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:40:50.221430 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:41:00.221410 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:41:10.221435 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:41:20.221283 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:41:30.221368 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:41:40.221332 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:41:50.221227 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:42:00.221430 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:42:10.221165 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:42:20.221229 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:42:30.221313 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:42:40.221403 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:42:50.221215 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:43:00.221439 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:43:10.221416 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:43:20.221208 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:43:30.221346 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:43:40.221312 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:43:50.221221 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:44:00.221476 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:44:10.221305 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:44:20.221191 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:44:30.221362 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:44:40.221263 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-26 16:44:50.221629 I | etcdserver/api/etcdhttp: /health OK (status code 200)
*
* ==> kernel <==
*
16:44:50 up 1 day, 5:26, 0 users, load average: 0.60, 0.62, 0.30
Linux minikube 5.11.0-17-generic #18-Ubuntu SMP Thu May 6 20:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
*
* ==> kube-apiserver [bae3136fc343] <==
*
I0526 16:40:10.075383 1 controller.go:86] Starting OpenAPI controller
I0526 16:40:10.075553 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0526 16:40:10.075609 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0526 16:40:10.075618 1 naming_controller.go:291] Starting NamingConditionController
I0526 16:40:10.075708 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0526 16:40:10.075797 1 crd_finalizer.go:266] Starting CRDFinalizer
I0526 16:40:10.075863 1 apf_controller.go:261] Starting API Priority and Fairness config controller
I0526 16:40:10.076282 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0526 16:40:10.076307 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0526 16:40:10.076347 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0526 16:40:10.076391 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
E0526 16:40:10.077295 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg:
I0526 16:40:10.292172 1 shared_informer.go:247] Caches are synced for node_authorizer
I0526 16:40:10.377758 1 cache.go:39] Caches are synced for autoregister controller
I0526 16:40:10.377859 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0526 16:40:10.377924 1 apf_controller.go:266] Running API Priority and Fairness config worker
I0526 16:40:10.378058 1 shared_informer.go:247] Caches are synced for crd-autoregister
I0526 16:40:10.378116 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0526 16:40:10.378265 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0526 16:40:10.480432 1 controller.go:609] quota admission added evaluator for: namespaces
I0526 16:40:11.072963 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0526 16:40:11.073016 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0526 16:40:11.081969 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0526 16:40:11.090640 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0526 16:40:11.090677 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0526 16:40:12.025937 1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0526 16:40:12.102028 1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0526 16:40:12.245372 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I0526 16:40:12.247002 1 controller.go:609] quota admission added evaluator for: endpoints
I0526 16:40:12.255112 1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0526 16:40:12.758521 1 controller.go:609] quota admission added evaluator for: serviceaccounts
I0526 16:40:13.716597 1 controller.go:609] quota admission added evaluator for: deployments.apps
I0526 16:40:13.827758 1 controller.go:609] quota admission added evaluator for: daemonsets.apps
I0526 16:40:20.698425 1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
I0526 16:40:28.780756 1 controller.go:609] quota admission added evaluator for: replicasets.apps
I0526 16:40:28.803107 1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
I0526 16:40:40.773781 1 client.go:360] parsed scheme: "passthrough"
I0526 16:40:40.773863 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0526 16:40:40.773884 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0526 16:41:11.363574 1 client.go:360] parsed scheme: "passthrough"
I0526 16:41:11.363623 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0526 16:41:11.363637 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0526 16:41:53.907251 1 client.go:360] parsed scheme: "passthrough"
I0526 16:41:53.907335 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0526 16:41:53.907357 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0526 16:42:25.032801 1 trace.go:205] Trace[1976878119]: "Get" url:/api/v1/namespaces/kube-system/pods/storage-provisioner/log,user-agent:kubectl/v1.21.1 (linux/amd64) kubernetes/5e58841,client:192.168.49.1 (26-May-2021 16:41:55.732) (total time: 29299ms):
Trace[1976878119]: ---"Transformed response object" 29296ms (16:42:00.032)
Trace[1976878119]: [29.299998413s] [29.299998413s] END
I0526 16:42:33.878489 1 client.go:360] parsed scheme: "passthrough"
I0526 16:42:33.878567 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0526 16:42:33.878587 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0526 16:43:18.132540 1 client.go:360] parsed scheme: "passthrough"
I0526 16:43:18.132620 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0526 16:43:18.132641 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0526 16:43:50.481903 1 client.go:360] parsed scheme: "passthrough"
I0526 16:43:50.481983 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0526 16:43:50.482003 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0526 16:44:32.898375 1 client.go:360] parsed scheme: "passthrough"
I0526 16:44:32.898473 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0526 16:44:32.898507 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
*
* ==> kube-controller-manager [cb39790321d5] <==
*
I0526 16:40:28.370746 1 controllermanager.go:554] Started "endpointslice"
I0526 16:40:28.370849 1 endpointslice_controller.go:237] Starting endpoint slice controller
I0526 16:40:28.370878 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
I0526 16:40:28.620586 1 controllermanager.go:554] Started "persistentvolume-binder"
I0526 16:40:28.621024 1 pv_controller_base.go:307] Starting persistent volume controller
I0526 16:40:28.621049 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0526 16:40:28.621063 1 shared_informer.go:240] Waiting for caches to sync for persistent volume
W0526 16:40:28.632116 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0526 16:40:28.682222 1 shared_informer.go:247] Caches are synced for TTL
I0526 16:40:28.700202 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0526 16:40:28.702206 1 shared_informer.go:247] Caches are synced for attach detach
I0526 16:40:28.702300 1 shared_informer.go:247] Caches are synced for service account
I0526 16:40:28.703636 1 shared_informer.go:247] Caches are synced for ReplicationController
I0526 16:40:28.707238 1 shared_informer.go:247] Caches are synced for job
I0526 16:40:28.716384 1 shared_informer.go:247] Caches are synced for node
I0526 16:40:28.716411 1 range_allocator.go:172] Starting range CIDR allocator
I0526 16:40:28.716418 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0526 16:40:28.716423 1 shared_informer.go:247] Caches are synced for cidrallocator
I0526 16:40:28.720341 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0526 16:40:28.721705 1 shared_informer.go:247] Caches are synced for deployment
I0526 16:40:28.722533 1 shared_informer.go:247] Caches are synced for persistent volume
I0526 16:40:28.722699 1 shared_informer.go:247] Caches are synced for endpoint
I0526 16:40:28.724964 1 shared_informer.go:247] Caches are synced for namespace
I0526 16:40:28.725004 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0526 16:40:28.745222 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0526 16:40:28.745302 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0526 16:40:28.745341 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0526 16:40:28.745388 1 shared_informer.go:247] Caches are synced for expand
I0526 16:40:28.745433 1 shared_informer.go:247] Caches are synced for ReplicaSet
I0526 16:40:28.745455 1 shared_informer.go:247] Caches are synced for taint
I0526 16:40:28.745512 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
W0526 16:40:28.745578 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0526 16:40:28.745633 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0526 16:40:28.745948 1 taint_manager.go:187] Starting NoExecuteTaintManager I0526 16:40:28.746879 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller" I0526 16:40:28.765495 1 shared_informer.go:247] Caches are synced for GC I0526 16:40:28.775563 1 shared_informer.go:247] Caches are synced for daemon sets I0526 16:40:28.777776 1 shared_informer.go:247] Caches are synced for PVC protection I0526 16:40:28.777787 1 shared_informer.go:247] Caches are synced for PV protection I0526 16:40:28.777829 1 shared_informer.go:247] Caches are synced for stateful set I0526 16:40:28.777849 1 shared_informer.go:247] Caches are synced for endpoint_slice I0526 16:40:28.777996 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I0526 16:40:28.779668 1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24] I0526 16:40:28.804729 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 1" I0526 16:40:28.878205 1 shared_informer.go:247] Caches are synced for HPA I0526 16:40:28.879863 1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-minikube" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready" I0526 16:40:28.880548 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-sdwgf" I0526 16:40:28.977920 1 shared_informer.go:247] Caches are synced for crt configmap I0526 16:40:28.978575 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-minikube" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready" I0526 16:40:28.978745 1 shared_informer.go:247] Caches are synced for bootstrap_signer I0526 16:40:28.978913 1 shared_informer.go:247] Caches are synced for resource quota I0526 16:40:28.978926 1 shared_informer.go:247] Caches are synced for disruption I0526 16:40:28.979294 1 disruption.go:339] Sending events to api server. I0526 16:40:28.980594 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hh54g" I0526 16:40:28.981587 1 shared_informer.go:247] Caches are synced for resource quota I0526 16:40:29.084656 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0526 16:40:29.384858 1 shared_informer.go:247] Caches are synced for garbage collector I0526 16:40:29.419853 1 shared_informer.go:247] Caches are synced for garbage collector I0526 16:40:29.419906 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0526 16:40:33.745963 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode. * * ==> kube-proxy [b8044999a36b] <== * I0526 16:40:30.292039 1 node.go:172] Successfully retrieved node IP: 192.168.49.2 I0526 16:40:30.292210 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation W0526 16:40:30.340788 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy I0526 16:40:30.340987 1 server_others.go:185] Using iptables Proxier. 
I0526 16:40:30.341647 1 server.go:650] Version: v1.20.2 I0526 16:40:30.342494 1 conntrack.go:52] Setting nf_conntrack_max to 786432 E0526 16:40:30.343183 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime]) I0526 16:40:30.343425 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I0526 16:40:30.343595 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I0526 16:40:30.343941 1 config.go:315] Starting service config controller I0526 16:40:30.343977 1 shared_informer.go:240] Waiting for caches to sync for service config I0526 16:40:30.344070 1 config.go:224] Starting endpoint slice config controller I0526 16:40:30.344090 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0526 16:40:30.444342 1 shared_informer.go:247] Caches are synced for service config I0526 16:40:30.444257 1 shared_informer.go:247] Caches are synced for endpoint slice config * * ==> kube-scheduler [e4f8d3f66ccb] <== * I0526 16:40:00.714515 1 serving.go:331] Generated self-signed cert in-memory W0526 16:40:10.283453 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0526 16:40:10.283489 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0526 16:40:10.283519 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous. 
W0526 16:40:10.283529 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0526 16:40:10.486982 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0526 16:40:10.487016 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0526 16:40:10.487506 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 I0526 16:40:10.487748 1 tlsconfig.go:240] Starting DynamicServingCertificateController E0526 16:40:10.489031 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0526 16:40:10.490512 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0526 16:40:10.490631 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0526 16:40:10.490809 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0526 16:40:10.490815 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0526 16:40:10.491758 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0526 16:40:10.491903 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0526 16:40:10.492020 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0526 16:40:10.492140 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0526 16:40:10.492618 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the 
cluster scope E0526 16:40:10.492825 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0526 16:40:10.492837 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0526 16:40:11.310326 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0526 16:40:11.559656 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0526 16:40:11.559708 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0526 16:40:11.611300 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0526 16:40:11.685491 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0526 16:40:11.723227 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope I0526 16:40:14.287154 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Wed 2021-05-26 16:39:31 UTC, end at Wed 2021-05-26 16:44:51 UTC. 
-- May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.413400 2608 topology_manager.go:187] [topologymanager] Topology Admit Handler May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.413513 2608 topology_manager.go:187] [topologymanager] Topology Admit Handler May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.413603 2608 topology_manager.go:187] [topologymanager] Topology Admit Handler May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.488956 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7") May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.489023 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-k8s-certs") pod "kube-controller-manager-minikube" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7") May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.489060 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-kubeconfig") pod "kube-controller-manager-minikube" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7") May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.489094 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-usr-local-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7") May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.489171 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/c767dbeb9ddd2d01964c2fc02c621c4e-ca-certs") pod "kube-apiserver-minikube" (UID: "c767dbeb9ddd2d01964c2fc02c621c4e") May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.489221 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/c767dbeb9ddd2d01964c2fc02c621c4e-k8s-certs") pod "kube-apiserver-minikube" (UID: "c767dbeb9ddd2d01964c2fc02c621c4e") May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.489278 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c767dbeb9ddd2d01964c2fc02c621c4e-usr-local-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "c767dbeb9ddd2d01964c2fc02c621c4e") May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.489326 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-etc-ca-certificates") pod "kube-controller-manager-minikube" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7") May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.489360 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6b4a0ee8b3d15a1c2e47c15d32e6eb0d-kubeconfig") pod "kube-scheduler-minikube" (UID: "6b4a0ee8b3d15a1c2e47c15d32e6eb0d") May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.489437 2608 reconciler.go:224] 
operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/c31fe6a5afdd142cf3450ac972274b36-etcd-certs") pod "etcd-minikube" (UID: "c31fe6a5afdd142cf3450ac972274b36") May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.489502 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/c31fe6a5afdd142cf3450ac972274b36-etcd-data") pod "etcd-minikube" (UID: "c31fe6a5afdd142cf3450ac972274b36") May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.489550 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/c767dbeb9ddd2d01964c2fc02c621c4e-etc-ca-certificates") pod "kube-apiserver-minikube" (UID: "c767dbeb9ddd2d01964c2fc02c621c4e") May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.489581 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c767dbeb9ddd2d01964c2fc02c621c4e-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "c767dbeb9ddd2d01964c2fc02c621c4e") May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.489633 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-ca-certs") pod "kube-controller-manager-minikube" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7") May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.489751 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7") May 26 16:40:21 minikube kubelet[2608]: I0526 16:40:21.489815 2608 reconciler.go:157] Reconciler: start to sync state May 26 16:40:28 minikube kubelet[2608]: I0526 16:40:28.802136 2608 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24 May 26 16:40:28 minikube kubelet[2608]: I0526 16:40:28.802559 2608 docker_service.go:353] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},} May 26 16:40:28 minikube kubelet[2608]: I0526 16:40:28.802728 2608 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.0.0/24 May 26 16:40:29 minikube kubelet[2608]: I0526 16:40:29.016057 2608 topology_manager.go:187] [topologymanager] Topology Admit Handler May 26 16:40:29 minikube kubelet[2608]: I0526 16:40:29.179937 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/37146854-1f04-4d0a-ab02-2e1d039f269c-kube-proxy") pod "kube-proxy-hh54g" (UID: "37146854-1f04-4d0a-ab02-2e1d039f269c") May 26 16:40:29 minikube kubelet[2608]: I0526 16:40:29.180331 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/37146854-1f04-4d0a-ab02-2e1d039f269c-xtables-lock") pod "kube-proxy-hh54g" (UID: "37146854-1f04-4d0a-ab02-2e1d039f269c") May 26 16:40:29 minikube kubelet[2608]: I0526 16:40:29.180487 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: 
"kubernetes.io/host-path/37146854-1f04-4d0a-ab02-2e1d039f269c-lib-modules") pod "kube-proxy-hh54g" (UID: "37146854-1f04-4d0a-ab02-2e1d039f269c") May 26 16:40:29 minikube kubelet[2608]: I0526 16:40:29.180647 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-fntjg" (UniqueName: "kubernetes.io/secret/37146854-1f04-4d0a-ab02-2e1d039f269c-kube-proxy-token-fntjg") pod "kube-proxy-hh54g" (UID: "37146854-1f04-4d0a-ab02-2e1d039f269c") May 26 16:40:31 minikube kubelet[2608]: I0526 16:40:31.805003 2608 topology_manager.go:187] [topologymanager] Topology Admit Handler May 26 16:40:31 minikube kubelet[2608]: I0526 16:40:31.986893 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-hxvdr" (UniqueName: "kubernetes.io/secret/341d6262-8c96-4bba-b08f-cec0981a9420-coredns-token-hxvdr") pod "coredns-74ff55c5b-sdwgf" (UID: "341d6262-8c96-4bba-b08f-cec0981a9420") May 26 16:40:31 minikube kubelet[2608]: I0526 16:40:31.986946 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/341d6262-8c96-4bba-b08f-cec0981a9420-config-volume") pod "coredns-74ff55c5b-sdwgf" (UID: "341d6262-8c96-4bba-b08f-cec0981a9420") May 26 16:40:32 minikube kubelet[2608]: W0526 16:40:32.832110 2608 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-sdwgf through plugin: invalid network status for May 26 16:40:32 minikube kubelet[2608]: W0526 16:40:32.925493 2608 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-sdwgf through plugin: invalid network status for May 26 16:40:34 minikube kubelet[2608]: W0526 16:40:34.119742 2608 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-sdwgf through plugin: invalid network status for May 26 16:40:37 minikube kubelet[2608]: I0526 16:40:37.814875 2608 topology_manager.go:187] [topologymanager] Topology Admit Handler May 26 16:40:38 minikube kubelet[2608]: I0526 16:40:38.001807 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/544cd6ef-9dc0-4221-86cc-6f77c54823b3-tmp") pod "storage-provisioner" (UID: "544cd6ef-9dc0-4221-86cc-6f77c54823b3") May 26 16:40:38 minikube kubelet[2608]: I0526 16:40:38.001887 2608 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-pznrd" (UniqueName: "kubernetes.io/secret/544cd6ef-9dc0-4221-86cc-6f77c54823b3-storage-provisioner-token-pznrd") pod "storage-provisioner" (UID: "544cd6ef-9dc0-4221-86cc-6f77c54823b3") May 26 16:41:09 minikube kubelet[2608]: I0526 16:41:09.436697 2608 scope.go:95] [topologymanager] RemoveContainer - Container ID: 14239b9c8cd1149c58b78b9700ddb3e29a34d9ea683a8c0b2bb5d70de9dc4c89 May 26 16:41:40 minikube kubelet[2608]: I0526 16:41:40.742188 2608 scope.go:95] [topologymanager] RemoveContainer - Container ID: 14239b9c8cd1149c58b78b9700ddb3e29a34d9ea683a8c0b2bb5d70de9dc4c89 May 26 16:41:40 minikube kubelet[2608]: I0526 16:41:40.742709 2608 scope.go:95] [topologymanager] RemoveContainer - Container ID: fe6dd471a101259a54b77d668bb9aac6e281428798900263386dff32626ab34d May 26 16:41:40 minikube kubelet[2608]: E0526 16:41:40.743152 2608 pod_workers.go:191] Error syncing pod 
544cd6ef-9dc0-4221-86cc-6f77c54823b3 ("storage-provisioner_kube-system(544cd6ef-9dc0-4221-86cc-6f77c54823b3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(544cd6ef-9dc0-4221-86cc-6f77c54823b3)" May 26 16:41:53 minikube kubelet[2608]: I0526 16:41:53.712952 2608 scope.go:95] [topologymanager] RemoveContainer - Container ID: fe6dd471a101259a54b77d668bb9aac6e281428798900263386dff32626ab34d May 26 16:42:24 minikube kubelet[2608]: I0526 16:42:24.306307 2608 scope.go:95] [topologymanager] RemoveContainer - Container ID: fe6dd471a101259a54b77d668bb9aac6e281428798900263386dff32626ab34d May 26 16:42:24 minikube kubelet[2608]: I0526 16:42:24.306945 2608 scope.go:95] [topologymanager] RemoveContainer - Container ID: 74e3f89eff044ed457926fef684fbf1cd75c82f48d9f9414989626cebad1b1e2 May 26 16:42:24 minikube kubelet[2608]: E0526 16:42:24.307453 2608 pod_workers.go:191] Error syncing pod 544cd6ef-9dc0-4221-86cc-6f77c54823b3 ("storage-provisioner_kube-system(544cd6ef-9dc0-4221-86cc-6f77c54823b3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(544cd6ef-9dc0-4221-86cc-6f77c54823b3)" May 26 16:42:36 minikube kubelet[2608]: I0526 16:42:36.712949 2608 scope.go:95] [topologymanager] RemoveContainer - Container ID: 74e3f89eff044ed457926fef684fbf1cd75c82f48d9f9414989626cebad1b1e2 May 26 16:42:36 minikube kubelet[2608]: E0526 16:42:36.713457 2608 pod_workers.go:191] Error syncing pod 544cd6ef-9dc0-4221-86cc-6f77c54823b3 ("storage-provisioner_kube-system(544cd6ef-9dc0-4221-86cc-6f77c54823b3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(544cd6ef-9dc0-4221-86cc-6f77c54823b3)" May 26 16:42:47 minikube kubelet[2608]: I0526 16:42:47.712732 2608 scope.go:95] [topologymanager] RemoveContainer - Container ID: 74e3f89eff044ed457926fef684fbf1cd75c82f48d9f9414989626cebad1b1e2 May 26 16:43:18 minikube kubelet[2608]: I0526 16:43:18.840180 2608 scope.go:95] [topologymanager] RemoveContainer - Container ID: 74e3f89eff044ed457926fef684fbf1cd75c82f48d9f9414989626cebad1b1e2 May 26 16:43:18 minikube kubelet[2608]: I0526 16:43:18.840761 2608 scope.go:95] [topologymanager] RemoveContainer - Container ID: e4fa853c1accfa4bda46916a30f63765fc0dc82ff3feeb0c7942cd62359cda1c May 26 16:43:18 minikube kubelet[2608]: E0526 16:43:18.841258 2608 pod_workers.go:191] Error syncing pod 544cd6ef-9dc0-4221-86cc-6f77c54823b3 ("storage-provisioner_kube-system(544cd6ef-9dc0-4221-86cc-6f77c54823b3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(544cd6ef-9dc0-4221-86cc-6f77c54823b3)" May 26 16:43:33 minikube kubelet[2608]: I0526 16:43:33.712792 2608 scope.go:95] [topologymanager] RemoveContainer - Container ID: e4fa853c1accfa4bda46916a30f63765fc0dc82ff3feeb0c7942cd62359cda1c May 26 16:43:33 minikube kubelet[2608]: E0526 16:43:33.713313 2608 pod_workers.go:191] Error syncing pod 544cd6ef-9dc0-4221-86cc-6f77c54823b3 ("storage-provisioner_kube-system(544cd6ef-9dc0-4221-86cc-6f77c54823b3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed 
container=storage-provisioner pod=storage-provisioner_kube-system(544cd6ef-9dc0-4221-86cc-6f77c54823b3)" May 26 16:43:44 minikube kubelet[2608]: I0526 16:43:44.712935 2608 scope.go:95] [topologymanager] RemoveContainer - Container ID: e4fa853c1accfa4bda46916a30f63765fc0dc82ff3feeb0c7942cd62359cda1c May 26 16:43:44 minikube kubelet[2608]: E0526 16:43:44.714997 2608 pod_workers.go:191] Error syncing pod 544cd6ef-9dc0-4221-86cc-6f77c54823b3 ("storage-provisioner_kube-system(544cd6ef-9dc0-4221-86cc-6f77c54823b3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(544cd6ef-9dc0-4221-86cc-6f77c54823b3)" May 26 16:43:56 minikube kubelet[2608]: I0526 16:43:56.712983 2608 scope.go:95] [topologymanager] RemoveContainer - Container ID: e4fa853c1accfa4bda46916a30f63765fc0dc82ff3feeb0c7942cd62359cda1c May 26 16:43:56 minikube kubelet[2608]: E0526 16:43:56.713486 2608 pod_workers.go:191] Error syncing pod 544cd6ef-9dc0-4221-86cc-6f77c54823b3 ("storage-provisioner_kube-system(544cd6ef-9dc0-4221-86cc-6f77c54823b3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(544cd6ef-9dc0-4221-86cc-6f77c54823b3)" May 26 16:44:08 minikube kubelet[2608]: I0526 16:44:08.713060 2608 scope.go:95] [topologymanager] RemoveContainer - Container ID: e4fa853c1accfa4bda46916a30f63765fc0dc82ff3feeb0c7942cd62359cda1c May 26 16:44:39 minikube kubelet[2608]: I0526 16:44:39.589960 2608 scope.go:95] [topologymanager] RemoveContainer - Container ID: e4fa853c1accfa4bda46916a30f63765fc0dc82ff3feeb0c7942cd62359cda1c May 26 16:44:39 minikube kubelet[2608]: I0526 16:44:39.590497 2608 scope.go:95] [topologymanager] RemoveContainer - Container ID: b969586e02132b9c6b8c179fad76d023ad4dedc272c5c13539b2443684f76928 May 26 16:44:39 minikube kubelet[2608]: E0526 16:44:39.590922 2608 pod_workers.go:191] Error syncing pod 544cd6ef-9dc0-4221-86cc-6f77c54823b3 ("storage-provisioner_kube-system(544cd6ef-9dc0-4221-86cc-6f77c54823b3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(544cd6ef-9dc0-4221-86cc-6f77c54823b3)" * * ==> storage-provisioner [b969586e0213] <== * I0526 16:44:09.013870 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... F0526 16:44:39.018611 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout ```

Full output of failed command:

kubectl get po -A

```
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-74ff55c5b-sdwgf            0/1     Running   0          6m18s
kube-system   etcd-minikube                      1/1     Running   0          6m25s
kube-system   kube-apiserver-minikube            1/1     Running   0          6m25s
kube-system   kube-controller-manager-minikube   1/1     Running   0          6m25s
kube-system   kube-proxy-hh54g                   1/1     Running   0          6m18s
kube-system   kube-scheduler-minikube            1/1     Running   0          6m25s
kube-system   storage-provisioner                0/1     Error     5          6m30s
```

kubectl logs -f storage-provisioner -n kube-system

```
I0526 16:49:19.992225       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0526 16:49:49.995341       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
```

kubectl describe pod storage-provisioner -n kube-system

```
Name:         storage-provisioner
Namespace:    kube-system
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Wed, 26 May 2021 17:40:37 +0100
Labels:       addonmanager.kubernetes.io/mode=Reconcile
              integration-test=storage-provisioner
Annotations:  <none>
Status:       Running
IP:           192.168.49.2
IPs:
  IP:  192.168.49.2
Containers:
  storage-provisioner:
    Container ID:  docker://13e7202aff53ef86d412fc8c469038c21bf899b674370a97ee249596ac53a631
    Image:         gcr.io/k8s-minikube/storage-provisioner:v5
    Image ID:      docker-pullable://gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
    Port:          <none>
    Host Port:     <none>
    Command:
      /storage-provisioner
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 26 May 2021 17:49:19 +0100
      Finished:     Wed, 26 May 2021 17:49:50 +0100
    Ready:          False
    Restart Count:  6
    Environment:    <none>
    Mounts:
      /tmp from tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from storage-provisioner-token-pznrd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  tmp:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp
    HostPathType:  Directory
  storage-provisioner-token-pznrd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  storage-provisioner-token-pznrd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  11m (x4 over 11m)    default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
  Normal   Scheduled         11m                  default-scheduler  Successfully assigned kube-system/storage-provisioner to minikube
  Normal   Pulled            7m33s (x5 over 11m)  kubelet            Container image "gcr.io/k8s-minikube/storage-provisioner:v5" already present on machine
  Normal   Created           7m33s (x5 over 11m)  kubelet            Created container storage-provisioner
  Normal   Started           7m32s (x5 over 11m)  kubelet            Started container storage-provisioner
  Warning  BackOff           59s (x32 over 10m)   kubelet            Back-off restarting failed container
```
simao commented 3 years ago

It seems to be related to this: https://github.com/kubernetes-sigs/kind/issues/2240

The workaround for now is to run `sudo sysctl net/netfilter/nf_conntrack_max=524288` on the host before kube-proxy tries to set it itself.
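A minimal sketch of applying that on the host before starting minikube (the persistence step and the `/etc/sysctl.d` path are assumptions; adjust for your distro):

```
# Raise nf_conntrack_max on the host before starting minikube, so kube-proxy's
# own attempt to raise it inside the node (which fails there because /sys is
# mounted read-only, per the conntrack.go error in the logs above) is a no-op.
sudo sysctl net/netfilter/nf_conntrack_max=524288

# Optional: persist across reboots (assumed path; adjust for your distro).
echo 'net.netfilter.nf_conntrack_max = 524288' | sudo tee /etc/sysctl.d/99-conntrack.conf

minikube start
```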

sharifelgamal commented 3 years ago

This can most likely be fixed by doing something similar to the kind fix (https://github.com/kubernetes-sigs/kind/issues/2240) in our kubeadm configs.
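For context, the kind fix stops kube-proxy from writing the sysctl at all by setting `conntrack.maxPerCore: 0` in its KubeProxyConfiguration. A hedged sketch of applying the same idea by hand to a running cluster, assuming the standard kubeadm layout where kube-proxy reads its config from the `kube-proxy` ConfigMap:

```
# Set conntrack.maxPerCore to 0 so kube-proxy leaves nf_conntrack_max alone
# instead of trying to write it through a read-only /sys mount.
kubectl -n kube-system edit configmap kube-proxy
#   ...in the editor, under the config.conf key:
#   conntrack:
#     maxPerCore: 0

# Restart kube-proxy so the new config is picked up.
kubectl -n kube-system rollout restart daemonset kube-proxy
```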

medyagh commented 3 years ago

@carlskii @simao there is a WIP PR that might fix this. Would you mind giving the binary from this PR a try and seeing if it fixes the issue? https://github.com/kubernetes/minikube/pull/11957

http://storage.googleapis.com/minikube-builds/11957/minikube-linux-amd64
http://storage.googleapis.com/minikube-builds/11957/minikube-darwin-amd64
http://storage.googleapis.com/minikube-builds/11957/minikube-windows-amd64.exe
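For example, on Linux the PR build can be tried without replacing an installed minikube (URLs from the list above; deleting the profile first ensures the cluster is recreated by the new binary):

```
curl -LO http://storage.googleapis.com/minikube-builds/11957/minikube-linux-amd64
chmod +x minikube-linux-amd64
./minikube-linux-amd64 delete   # drop the old profile so the fix applies cleanly
./minikube-linux-amd64 start
```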

Airblader commented 3 years ago

@medyagh I'm running into this issue as well. I built minikube from source just now (1.22.0.r584.g769ee3287-1), but unfortunately the issue persists for me:

```
$ k -n kube-system logs storage-provisioner
I0901 11:06:09.955718       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0901 11:06:09.968639       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: network is unreachable
```

Edit: However, after downgrading to 1.22 and doing a kernel update plus a reboot, it seems to work now.
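For anyone else triaging this, a few quick checks can show whether the conntrack/sysfs problem discussed above applies (a sketch; the `k8s-app=kube-proxy` label is the standard kubeadm one and is an assumption here):

```
# Compare nf_conntrack_max on the host and inside the minikube node.
sysctl net.netfilter.nf_conntrack_max
minikube ssh -- sudo sysctl net.netfilter.nf_conntrack_max

# Look for "sysfs is not writable" / conntrack errors from kube-proxy.
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i conntrack
```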

sharifelgamal commented 2 years ago

@sudocraft which version of minikube are you using?

simao commented 2 years ago

I am using 1.23.1 and this still happens.

JElgar commented 8 months ago

I'm using minikube 1.32.0 and getting the same issue. Logs attached: logs.txt