kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

failed to start node: controlPlane never updated to v1.18.0 (x509: certificate mismatch) #8998

Status: Closed (closed by a-dyakov-mercuryo 4 years ago)

a-dyakov-mercuryo commented 4 years ago

Steps to reproduce the issue:

  1. minikube start
  2. kubectl get pods

     Unable to connect to the server: x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206
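This error means the apiserver certificate was minted for the VM's earlier DHCP lease (192.168.0.241) and does not cover the address the VM received after the restart (192.168.0.206). As a minimal sketch of the failing SAN check — the IPs come from the error message above, the file names under /tmp are arbitrary, and OpenSSL 1.1.1+ is assumed for `-addext`/`-ext`:

```shell
# Create a throwaway cert whose IP SANs mirror the ones in the error message.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/apiserver.key \
  -out /tmp/apiserver.crt -days 1 -subj "/CN=minikubeCA" \
  -addext "subjectAltName=IP:192.168.0.241,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1"

# List the SANs baked into the cert; 192.168.0.206 is not among them.
openssl x509 -in /tmp/apiserver.crt -noout -ext subjectAltName
```

Any client that dials 192.168.0.206 and verifies against such a certificate rejects it, because that IP is absent from the SAN list — which is exactly what kubectl reports.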

Full output of failed command:

```
😄 minikube v1.12.1 on Microsoft Windows 10 Enterprise 10.0.19041 Build 19041
🆕 Kubernetes 1.18.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.18.3
✨ Using the hyperv driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing hyperv VM for "minikube" ...
🐳 Preparing Kubernetes v1.18.0 on Docker 19.03.12 ...
🤦 Unable to restart cluster, will reset it: getting k8s client: client config: client config: context "minikube" does not exist
🔎 Verifying Kubernetes components...
❗ Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://192.168.0.206:8443/apis/storage.k8s.io/v1/storageclasses": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206]
🌟 Enabled addons: dashboard, default-storageclass, storage-provisioner

💣 failed to start node: startup failed: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.18.0

😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
```

Config:

$ minikube config view

$ minikube ip
192.168.0.206

Hyper-V Virtual Switch: External network
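Because the VM sits on an external virtual switch, it takes its address from the LAN's DHCP server, so the IP can change between restarts while the certificates keep the old address. A commonly suggested workaround (not confirmed in this thread, and destructive — it discards the cluster's state) is to recreate the profile so the certificates are reissued for the current IP; the switch name `kube` below is taken from the profile config in the log:

```shell
minikube delete
minikube start --driver=hyperv --hyperv-virtual-switch=kube
```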

Optional: full output of the failed command (`minikube start --alsologtostderr`):

```
I0814 12:11:30.250767 14356 out.go:170] Setting JSON to false I0814 12:11:30.256771 14356 start.go:101] hostinfo: {"hostname":"user-001","uptime":6372,"bootTime":1597389918,"procs":294,"os":"windows","platform":"Microsoft Windows 10 Enterprise","platformFamily":"Standalone Workstation","platformVersion":"10.0.19041 Build 19041","kernelVersion":"","virtualizationSystem":"","virtualizationRole":"","hostid":"f2091411-1955-44ce-a886-5ea04a0f27e0"} W0814 12:11:30.256771 14356 start.go:109] gopshost.Virtualization returned error: not implemented yet 😄 minikube v1.12.1 on Microsoft Windows 10 Enterprise 10.0.19041 Build 19041 🆕 Kubernetes 1.18.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.18.3 I0814 12:11:30.264768 14356 driver.go:257] Setting default libvirt URI to qemu:///system ✨ Using the hyperv driver based on existing profile I0814 12:11:30.544788 14356 start.go:217] selected driver: hyperv I0814 12:11:30.544788 14356 start.go:621] validating driver "hyperv" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.12.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch:kube HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: 
ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.0.206 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[ambassador:false dashboard:true default-storageclass:true efk:false freshpod:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false] VerifyComponents:map[apiserver:true system_pods:true]} I0814 12:11:30.545781 14356 start.go:632] status for hyperv: {Installed:true Healthy:true NeedsImprovement:false Error: Fix: Doc:} I0814 12:11:30.545781 14356 start_flags.go:340] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.12.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch:kube HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 
ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.0.206 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[ambassador:false dashboard:true default-storageclass:true efk:false freshpod:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false] VerifyComponents:map[apiserver:true system_pods:true]} I0814 12:11:30.547766 14356 iso.go:118] acquiring lock: {Name:mkd4899cc4b1121dbe13b15e9fd512c4bd31edc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:} �� Starting control plane node minikube in cluster minikube I0814 12:11:30.551794 14356 preload.go:95] Checking if preload exists for k8s version v1.18.0 and runtime docker I0814 12:11:30.551794 14356 preload.go:103] Found local preload: C:\Users\User\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v4-v1.18.0-docker-overlay2-amd64.tar.lz4 I0814 12:11:30.551794 14356 cache.go:51] Caching tarball of preloaded images I0814 12:11:30.552771 14356 preload.go:129] Found C:\Users\User\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v4-v1.18.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0814 12:11:30.552771 14356 cache.go:54] Finished verifying existence of preloaded tar for v1.18.0 on docker I0814 12:11:30.552771 14356 profile.go:150] Saving config to C:\Users\User\.minikube\profiles\minikube\config.json ...I0814 12:11:30.557771 14356 cache.go:178] Successfully downloaded all kic artifacts I0814 12:11:30.558768 14356 start.go:241] acquiring machines lock for minikube: {Name:mk805e27b471fc37c3c36ec9967a8205fe8ac63b Clock:{} 
Delay:500ms Timeout:15m0s Cancel:} I0814 12:11:30.558768 14356 start.go:245] acquired machines lock for "minikube" in 0s I0814 12:11:30.558768 14356 start.go:89] Skipping create...Using existing machine configuration I0814 12:11:30.558768 14356 fix.go:53] fixHost starting: I0814 12:11:30.559767 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:31.095125 14356 main.go:115] libmachine: [stdout =====>] : Off I0814 12:11:31.095125 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:31.095125 14356 fix.go:105] recreateIfNeeded on minikube: state=Stopped err= W0814 12:11:31.095125 14356 fix.go:131] unexpected machine state, will restart: 🔄 Restarting existing hyperv VM for "minikube" ... I0814 12:11:31.101065 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM minikube I0814 12:11:32.567640 14356 main.go:115] libmachine: [stdout =====>] : I0814 12:11:32.568642 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:32.568642 14356 main.go:115] libmachine: Waiting for host to start... 
I0814 12:11:32.568642 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:33.281637 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:11:33.281637 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:33.281637 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0814 12:11:34.249639 14356 main.go:115] libmachine: [stdout =====>] : I0814 12:11:34.249639 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:35.259676 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:35.865638 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:11:35.865638 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:35.865638 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0814 12:11:36.732638 14356 main.go:115] libmachine: [stdout =====>] : I0814 12:11:36.732638 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:37.733364 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:38.513415 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:11:38.513415 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:38.513415 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0814 12:11:39.545458 14356 
main.go:115] libmachine: [stdout =====>] : I0814 12:11:39.545458 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:40.545500 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:41.145388 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:11:41.145388 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:41.145388 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0814 12:11:41.932319 14356 main.go:115] libmachine: [stdout =====>] : 192.168.0.206 I0814 12:11:41.932319 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:41.937285 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:42.486655 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:11:42.486655 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:42.486655 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0814 12:11:43.295528 14356 main.go:115] libmachine: [stdout =====>] : 192.168.0.206 I0814 12:11:43.295528 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:43.295528 14356 profile.go:150] Saving config to C:\Users\User\.minikube\profiles\minikube\config.json ...I0814 12:11:43.299541 14356 machine.go:88] provisioning docker machine ... 
I0814 12:11:43.300542 14356 buildroot.go:163] provisioning hostname "minikube" I0814 12:11:43.300542 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:43.861505 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:11:43.861505 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:43.861505 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0814 12:11:44.665079 14356 main.go:115] libmachine: [stdout =====>] : 192.168.0.206 I0814 12:11:44.665079 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:44.668077 14356 main.go:115] libmachine: Using SSH client type: native I0814 12:11:44.668077 14356 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x7b7070] 0x7b7040 [] 0s} 192.168.0.206 22 } I0814 12:11:44.669079 14356 main.go:115] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I0814 12:11:44.798431 14356 main.go:115] libmachine: SSH cmd err, output: : minikube I0814 12:11:44.799433 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:45.355265 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:11:45.355265 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:45.355265 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0814 12:11:46.126540 14356 main.go:115] libmachine: [stdout =====>] : 192.168.0.206 I0814 12:11:46.126540 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:46.129445 14356 main.go:115] 
libmachine: Using SSH client type: native I0814 12:11:46.130556 14356 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x7b7070] 0x7b7040 [] 0s} 192.168.0.206 22 } I0814 12:11:46.130556 14356 main.go:115] libmachine: About to run SSH command: if ! grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi I0814 12:11:46.248740 14356 main.go:115] libmachine: SSH cmd err, output: : I0814 12:11:46.248740 14356 buildroot.go:169] set auth options {CertDir:C:\Users\User\.minikube CaCertPath:C:\Users\User\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\User\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\User\.minikube\machines\server.pem ServerKeyPath:C:\Users\User\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\User\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\User\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\User\.minikube} I0814 12:11:46.249755 14356 buildroot.go:171] setting up certificates I0814 12:11:46.249755 14356 provision.go:82] configureAuth start I0814 12:11:46.249755 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:46.795855 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:11:46.795855 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:46.795855 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0814 12:11:47.595807 14356 main.go:115] libmachine: [stdout =====>] : 192.168.0.206 I0814 12:11:47.595807 14356 main.go:115] libmachine: [stderr =====>] : 
I0814 12:11:47.595807 14356 provision.go:131] copyHostCerts I0814 12:11:47.596705 14356 exec_runner.go:91] found C:\Users\User\.minikube/ca.pem, removing ... I0814 12:11:47.597715 14356 exec_runner.go:98] cp: C:\Users\User\.minikube\certs\ca.pem --> C:\Users\User\.minikube/ca.pem (1029 bytes) I0814 12:11:47.599710 14356 exec_runner.go:91] found C:\Users\User\.minikube/cert.pem, removing ... I0814 12:11:47.599710 14356 exec_runner.go:98] cp: C:\Users\User\.minikube\certs\cert.pem --> C:\Users\User\.minikube/cert.pem (1070 bytes) I0814 12:11:47.602707 14356 exec_runner.go:91] found C:\Users\User\.minikube/key.pem, removing ... I0814 12:11:47.602707 14356 exec_runner.go:98] cp: C:\Users\User\.minikube\certs\key.pem --> C:\Users\User\.minikube/key.pem (1675 bytes) I0814 12:11:47.605702 14356 provision.go:105] generating server cert: C:\Users\User\.minikube\machines\server.pem ca-key=C:\Users\User\.minikube\certs\ca.pem private-key=C:\Users\User\.minikube\certs\ca-key.pem org=User.minikube san=[192.168.0.206 localhost 127.0.0.1] I0814 12:11:47.726704 14356 provision.go:159] copyRemoteCerts I0814 12:11:47.762700 14356 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0814 12:11:47.762700 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:48.336724 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:11:48.336724 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:48.336724 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0814 12:11:49.109980 14356 main.go:115] libmachine: [stdout =====>] : 192.168.0.206 I0814 12:11:49.109980 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:49.109980 14356 sshutil.go:44] new ssh client: &{IP:192.168.0.206 
Port:22 SSHKeyPath:C:\Users\User\.minikube\machines\minikube\id_rsa Username:docker} I0814 12:11:49.193953 14356 ssh_runner.go:188] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4312532s) I0814 12:11:49.194944 14356 ssh_runner.go:215] scp C:\Users\User\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1029 bytes) I0814 12:11:49.209948 14356 ssh_runner.go:215] scp C:\Users\User\.minikube\machines\server.pem --> /etc/docker/server.pem (1115 bytes) I0814 12:11:49.225947 14356 ssh_runner.go:215] scp C:\Users\User\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0814 12:11:49.238510 14356 provision.go:85] duration metric: configureAuth took 2.9887546s I0814 12:11:49.238510 14356 buildroot.go:186] setting minikube options for container-runtime I0814 12:11:49.239502 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:49.798025 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:11:49.798025 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:49.798025 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0814 12:11:50.581399 14356 main.go:115] libmachine: [stdout =====>] : 192.168.0.206 I0814 12:11:50.581399 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:50.584298 14356 main.go:115] libmachine: Using SSH client type: native I0814 12:11:50.585302 14356 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x7b7070] 0x7b7040 [] 0s} 192.168.0.206 22 } I0814 12:11:50.585302 14356 main.go:115] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0814 12:11:50.709998 14356 main.go:115] libmachine: SSH cmd err, output: : tmpfs I0814 12:11:50.709998 14356 buildroot.go:70] root file system type: tmpfs I0814 
12:11:50.709998 14356 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ... I0814 12:11:50.710998 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:51.263411 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:11:51.263411 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:51.263411 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0814 12:11:52.052632 14356 main.go:115] libmachine: [stdout =====>] : 192.168.0.206 I0814 12:11:52.052632 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:52.055527 14356 main.go:115] libmachine: Using SSH client type: native I0814 12:11:52.056527 14356 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x7b7070] 0x7b7040 [] 0s} 192.168.0.206 22 } I0814 12:11:52.056527 14356 main.go:115] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %s "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com After=network.target minikube-automount.service docker.socket Requires= minikube-automount.service docker.socket [Service] Type=notify # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. 
Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0814 12:11:52.195727 14356 main.go:115] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com After=network.target minikube-automount.service docker.socket Requires= minikube-automount.service docker.socket [Service] Type=notify # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. 
The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0814 12:11:52.200724 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:52.778675 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:11:52.778793 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:52.778793 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0814 12:11:53.570193 14356 main.go:115] libmachine: [stdout =====>] : 192.168.0.206 I0814 12:11:53.570193 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:53.573193 14356 main.go:115] libmachine: Using SSH client type: native I0814 12:11:53.574271 14356 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x7b7070] 0x7b7040 [] 0s} 192.168.0.206 22 } I0814 12:11:53.574271 14356 main.go:115] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0814 12:11:54.918494 14356 main.go:115] libmachine: SSH cmd err, output: : diff: can't stat '/lib/systemd/system/docker.service': No such file or directory Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service. 
I0814 12:11:54.919489 14356 machine.go:91] provisioned docker machine in 11.6189469s I0814 12:11:54.919489 14356 start.go:204] post-start starting for "minikube" (driver="hyperv") I0814 12:11:54.919489 14356 start.go:214] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0814 12:11:54.962523 14356 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0814 12:11:54.963498 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:55.508738 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:11:55.508738 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:55.508738 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0814 12:11:56.310039 14356 main.go:115] libmachine: [stdout =====>] : 192.168.0.206 I0814 12:11:56.310039 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:56.310039 14356 sshutil.go:44] new ssh client: &{IP:192.168.0.206 Port:22 SSHKeyPath:C:\Users\User\.minikube\machines\minikube\id_rsa Username:docker} I0814 12:11:56.398209 14356 ssh_runner.go:188] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.4347112s) I0814 12:11:56.401211 14356 ssh_runner.go:148] Run: cat 
/etc/os-release I0814 12:11:56.406209 14356 info.go:96] Remote host: Buildroot 2019.02.11 I0814 12:11:56.406209 14356 filesync.go:118] Scanning C:\Users\User\.minikube\addons for local assets ... I0814 12:11:56.407209 14356 filesync.go:118] Scanning C:\Users\User\.minikube\files for local assets ... I0814 12:11:56.407209 14356 start.go:207] post-start completed in 1.48772s I0814 12:11:56.407209 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:56.973003 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:11:56.973003 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:56.973003 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0814 12:11:57.742401 14356 main.go:115] libmachine: [stdout =====>] : 192.168.0.206 I0814 12:11:57.742401 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:57.745399 14356 main.go:115] libmachine: Using SSH client type: native I0814 12:11:57.746365 14356 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x7b7070] 0x7b7040 [] 0s} 192.168.0.206 22 } I0814 12:11:57.746365 14356 main.go:115] libmachine: About to run SSH command: date +%s.%N I0814 12:11:57.863423 14356 main.go:115] libmachine: SSH cmd err, output: : 1597396318.455966171 I0814 12:11:57.863423 14356 fix.go:209] guest clock: 1597396318.455966171 I0814 12:11:57.863423 14356 fix.go:222] Guest: 2020-08-14 12:11:58.455966171 +0300 MSK Remote: 2020-08-14 12:11:56.4072093 +0300 MSK m=+26.284441901 (delta=2.048756871s) I0814 12:11:57.864442 14356 fix.go:193] guest clock delta is within tolerance: 2.048756871s I0814 12:11:57.864442 14356 fix.go:55] fixHost completed within 27.3056738s I0814 12:11:57.864442 14356 start.go:76] releasing machines lock for "minikube", held for 27.3056738s 
I0814 12:11:57.865443 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:58.420727 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:11:58.420727 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:58.420727 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0814 12:11:59.201936 14356 main.go:115] libmachine: [stdout =====>] : 192.168.0.206 I0814 12:11:59.201936 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:59.203844 14356 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/ I0814 12:11:59.203844 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:59.262840 14356 ssh_runner.go:148] Run: systemctl --version I0814 12:11:59.264858 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:11:59.908839 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:11:59.908839 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:11:59.908839 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0814 12:12:00.061837 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:12:00.061837 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:12:00.061837 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0814 
12:12:00.920836 14356 main.go:115] libmachine: [stdout =====>] : 192.168.0.206 I0814 12:12:00.920836 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:12:00.920836 14356 sshutil.go:44] new ssh client: &{IP:192.168.0.206 Port:22 SSHKeyPath:C:\Users\User\.minikube\machines\minikube\id_rsa Username:docker} I0814 12:12:00.979845 14356 main.go:115] libmachine: [stdout =====>] : 192.168.0.206 I0814 12:12:00.979845 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:12:00.979845 14356 sshutil.go:44] new ssh client: &{IP:192.168.0.206 Port:22 SSHKeyPath:C:\Users\User\.minikube\machines\minikube\id_rsa Username:docker} I0814 12:12:01.069839 14356 ssh_runner.go:188] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.8659955s) I0814 12:12:01.069839 14356 ssh_runner.go:188] Completed: systemctl --version: (1.8049815s) I0814 12:12:01.070838 14356 preload.go:95] Checking if preload exists for k8s version v1.18.0 and runtime docker I0814 12:12:01.070838 14356 preload.go:103] Found local preload: C:\Users\User\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v4-v1.18.0-docker-overlay2-amd64.tar.lz4 I0814 12:12:01.096869 14356 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}} I0814 12:12:01.139876 14356 docker.go:381] Got preloaded images: -- stdout -- nginx:latest nginx: 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_ng:develop-246 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_et:develop-427 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_pp:develop-1369 nginx: debian:latest 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_rd:develop-66 bitnami/minideb:stretch 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_ud-api:develop-204 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_ui:develop-226 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_ts:develop-99 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_sk:develop-117 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_er:develop-56 busybox:1.32 
gcr.io/kubernetes-helm/tiller:v2.16.8 mysql:5.7.30 kubernetesui/dashboard:v2.0.1 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_et-gambling:develop-122 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_et-tg:develop-38 minio/minio:RELEASE.2020-04-10T03-34-42Z kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/kube-proxy:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0 k8s.gcr.io/kube-apiserver:v1.18.0 bitnami/rabbitmq:3.8.2-debian-10-r30 bitnami/redis:5.0.7-debian-10-r32 bitnami/rabbitmq-exporter:0.29.0-debian-10-r28 k8s.gcr.io/pause:3.2 k8s.gcr.io/coredns:1.6.7 memcached:1.5.20 openresty/openresty:1.15.8.2-5-alpine k8s.gcr.io/etcd:3.4.3-0 bitnami/postgresql:11.5.0-debian-9-r60 k8s.gcr.io/defaultbackend-amd64:1.5 gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -- /stdout -- I0814 12:12:01.142879 14356 docker.go:319] Images already preloaded, skipping extraction I0814 12:12:01.179908 14356 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd I0814 12:12:01.225909 14356 ssh_runner.go:148] Run: sudo systemctl cat docker.service I0814 12:12:01.271874 14356 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd I0814 12:12:01.316874 14356 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio I0814 12:12:01.361870 14356 ssh_runner.go:148] Run: sudo systemctl daemon-reload I0814 12:12:01.479665 14356 ssh_runner.go:148] Run: sudo systemctl start docker I0814 12:12:01.511679 14356 ssh_runner.go:148] Run: docker version --format {{.Server.Version}} 🐳 Preparing Kubernetes v1.18.0 on Docker 19.03.12 ... 
I0814 12:12:01.577914 14356 ssh_runner.go:148] Run: grep 192.168.0.242 host.minikube.internal$ /etc/hosts I0814 12:12:01.580882 14356 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.0.242 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts" I0814 12:12:01.589880 14356 preload.go:95] Checking if preload exists for k8s version v1.18.0 and runtime docker I0814 12:12:01.589880 14356 preload.go:103] Found local preload: C:\Users\User\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v4-v1.18.0-docker-overlay2-amd64.tar.lz4 I0814 12:12:01.612878 14356 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}} I0814 12:12:01.647879 14356 docker.go:381] Got preloaded images: -- stdout -- nginx:latest nginx: 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_ng:develop-246 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_et:develop-427 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_pp:develop-1369 nginx: debian:latest 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_rd:develop-66 bitnami/minideb:stretch 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_ud-api:develop-204 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_ui:develop-226 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_ts:develop-99 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_sk:develop-117 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_er:develop-56 busybox:1.32 gcr.io/kubernetes-helm/tiller:v2.16.8 mysql:5.7.30 kubernetesui/dashboard:v2.0.1 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_et-gambling:develop-122 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_et-tg:develop-38 minio/minio:RELEASE.2020-04-10T03-34-42Z kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/kube-proxy:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0 k8s.gcr.io/kube-apiserver:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0 
bitnami/rabbitmq:3.8.2-debian-10-r30 bitnami/redis:5.0.7-debian-10-r32 bitnami/rabbitmq-exporter:0.29.0-debian-10-r28 k8s.gcr.io/pause:3.2 k8s.gcr.io/coredns:1.6.7 memcached:1.5.20 openresty/openresty:1.15.8.2-5-alpine k8s.gcr.io/etcd:3.4.3-0 bitnami/postgresql:11.5.0-debian-9-r60 k8s.gcr.io/defaultbackend-amd64:1.5 gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -- /stdout -- I0814 12:12:01.649877 14356 docker.go:319] Images already preloaded, skipping extraction I0814 12:12:01.670911 14356 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}} I0814 12:12:01.707392 14356 docker.go:381] Got preloaded images: -- stdout -- nginx:latest nginx: 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_ng:develop-246 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_et:develop-427 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_pp:develop-1369 nginx: debian:latest 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_rd:develop-66 bitnami/minideb:stretch 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_ud-api:develop-204 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_ui:develop-226 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_ts:develop-99 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_sk:develop-117 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_er:develop-56 busybox:1.32 gcr.io/kubernetes-helm/tiller:v2.16.8 mysql:5.7.30 kubernetesui/dashboard:v2.0.1 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_et-gambling:develop-122 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0 9855256464654.dkr.ecr.eu-central-1.amazonaws.com/сont_et-tg:develop-38 minio/minio:RELEASE.2020-04-10T03-34-42Z kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/kube-proxy:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-apiserver:v1.18.0 bitnami/rabbitmq:3.8.2-debian-10-r30 bitnami/redis:5.0.7-debian-10-r32 bitnami/rabbitmq-exporter:0.29.0-debian-10-r28 
k8s.gcr.io/pause:3.2 k8s.gcr.io/coredns:1.6.7 memcached:1.5.20 openresty/openresty:1.15.8.2-5-alpine k8s.gcr.io/etcd:3.4.3-0 bitnami/postgresql:11.5.0-debian-9-r60 k8s.gcr.io/defaultbackend-amd64:1.5 gcr.io/k8s-minikube/storage-provisioner:v1.8.1
-- /stdout --
I0814 12:12:01.710393 14356 cache_images.go:69] Images are preloaded, skipping loading
I0814 12:12:01.733430 14356 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0814 12:12:01.770989 14356 cni.go:74] Creating CNI manager for ""
I0814 12:12:01.770989 14356 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0814 12:12:01.770989 14356 kubeadm.go:84] Using pod CIDR:
I0814 12:12:01.770989 14356 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:192.168.0.206 APIServerPort:8443 KubernetesVersion:v1.18.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.0.206"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.0.206 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0814 12:12:01.771986 14356 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.206
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.0.206
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.0.206"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
controllerManager:
  extraArgs:
    "leader-elect": "false"
scheduler:
  extraArgs:
    "leader-elect": "false"
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 192.168.0.206:10249
I0814 12:12:01.775988 14356 kubeadm.go:787] kubelet
[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.0.206
[Install]
config: {KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[]
ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0814 12:12:01.811987 14356 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.0 I0814 12:12:01.819995 14356 binaries.go:43] Found k8s binaries, skipping transfer I0814 12:12:01.856019 14356 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0814 12:12:01.862024 14356 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (335 bytes) I0814 12:12:01.873850 14356 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes) I0814 12:12:01.885862 14356 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1741 bytes) I0814 12:12:01.901742 14356 ssh_runner.go:148] Run: grep 192.168.0.206 control-plane.minikube.internal$ /etc/hosts I0814 12:12:01.905742 14356 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.0.206 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts" I0814 12:12:01.948672 14356 ssh_runner.go:148] Run: sudo systemctl daemon-reload I0814 12:12:02.077165 14356 ssh_runner.go:148] Run: sudo systemctl start kubelet I0814 12:12:02.090161 14356 certs.go:52] Setting up C:\Users\User\.minikube\profiles\minikube for IP: 192.168.0.206 I0814 12:12:02.090161 14356 certs.go:169] skipping minikubeCA CA generation: C:\Users\User\.minikube\ca.key I0814 12:12:02.091188 14356 certs.go:169] skipping proxyClientCA CA generation: C:\Users\User\.minikube\proxy-client-ca.key I0814 12:12:02.093187 14356 certs.go:269] skipping minikube-user signed cert generation: C:\Users\User\.minikube\profiles\minikube\client.key I0814 12:12:02.094161 14356 certs.go:269] skipping minikube signed cert generation: C:\Users\User\.minikube\profiles\minikube\apiserver.key.f7957b51 I0814 12:12:02.094161 14356 certs.go:269] skipping aggregator signed cert generation: 
C:\Users\User\.minikube\profiles\minikube\proxy-client.key I0814 12:12:02.095204 14356 certs.go:348] found cert: C:\Users\User\.minikube\certs\C:\Users\User\.minikube\certs\ca-key.pem (1679 bytes) I0814 12:12:02.096166 14356 certs.go:348] found cert: C:\Users\User\.minikube\certs\C:\Users\User\.minikube\certs\ca.pem (1029 bytes) I0814 12:12:02.097162 14356 certs.go:348] found cert: C:\Users\User\.minikube\certs\C:\Users\User\.minikube\certs\cert.pem (1070 bytes) I0814 12:12:02.098164 14356 certs.go:348] found cert: C:\Users\User\.minikube\certs\C:\Users\User\.minikube\certs\key.pem (1675 bytes) I0814 12:12:02.099166 14356 ssh_runner.go:215] scp C:\Users\User\.minikube\profiles\minikube\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes) I0814 12:12:02.119164 14356 ssh_runner.go:215] scp C:\Users\User\.minikube\profiles\minikube\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I0814 12:12:02.139165 14356 ssh_runner.go:215] scp C:\Users\User\.minikube\profiles\minikube\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes) I0814 12:12:02.152167 14356 ssh_runner.go:215] scp C:\Users\User\.minikube\profiles\minikube\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0814 12:12:02.167162 14356 ssh_runner.go:215] scp C:\Users\User\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes) I0814 12:12:02.181164 14356 ssh_runner.go:215] scp C:\Users\User\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes) I0814 12:12:02.200161 14356 ssh_runner.go:215] scp C:\Users\User\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes) I0814 12:12:02.216161 14356 ssh_runner.go:215] scp C:\Users\User\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes) I0814 12:12:02.231159 14356 ssh_runner.go:215] scp C:\Users\User\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes) I0814 12:12:02.244169 
14356 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes) I0814 12:12:02.260161 14356 ssh_runner.go:148] Run: openssl version I0814 12:12:02.312162 14356 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0814 12:12:02.324164 14356 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0814 12:12:02.330167 14356 certs.go:389] hashing: -rw-r--r-- 1 root root 1066 May 28 17:01 /usr/share/ca-certificates/minikubeCA.pem I0814 12:12:02.332165 14356 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0814 12:12:02.380164 14356 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0814 12:12:02.386162 14356 kubeadm.go:327] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.12.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch:kube HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: 
ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.0.206 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[ambassador:false dashboard:true default-storageclass:true efk:false freshpod:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false] VerifyComponents:map[apiserver:true system_pods:true]} I0814 12:12:02.413173 14356 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0814 12:12:02.482683 14356 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0814 12:12:02.489679 14356 kubeadm.go:338] found existing configuration files, will attempt cluster restart I0814 12:12:02.489679 14356 kubeadm.go:512] restartCluster start I0814 12:12:02.531683 14356 ssh_runner.go:148] Run: sudo test -d /data/minikube I0814 12:12:02.538684 14356 kubeadm.go:122] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1 stdout: stderr: I0814 12:12:02.593680 14356 ssh_runner.go:148] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new I0814 12:12:02.598680 14356 api_server.go:146] Checking apiserver status ... 
I0814 12:12:02.634708 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0814 12:12:02.642686 14356 api_server.go:150] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0814 12:12:02.642686 14356 kubeadm.go:491] needs reconfigure: apiserver in state Stopped
I0814 12:12:02.678714 14356 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0814 12:12:02.684682 14356 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0814 12:12:02.720716 14356 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0814 12:12:02.726682 14356 kubeadm.go:573] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0814 12:12:02.726682 14356 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0814 12:12:02.937245 14356 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0814 12:12:03.690249 14356 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0814 12:12:03.756187 14356
ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0814 12:12:03.811176 14356 api_server.go:48] waiting for apiserver process to appear ...
I0814 12:12:03.847176 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:04.398390 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:04.921727 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:05.397461 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:05.937750 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:06.426284 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:06.930614 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:07.444933 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:07.920272 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:08.427144 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:08.903551 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:09.424489 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:09.910214 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:10.397779 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:10.932629 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:11.420156 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:11.923210 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:12.409217 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0814 12:12:12.419210 14356
api_server.go:68] duration metric: took 8.6080348s to wait for apiserver process to appear ... I0814 12:12:12.419210 14356 api_server.go:84] waiting for apiserver healthz status ... I0814 12:12:12.419210 14356 api_server.go:221] Checking apiserver healthz at https://192.168.0.206:8443/healthz ... I0814 12:12:17.795264 14356 api_server.go:241] https://192.168.0.206:8443/healthz returned 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} W0814 12:12:17.796261 14356 api_server.go:99] status: https://192.168.0.206:8443/healthz returned error 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} I0814 12:12:18.296327 14356 api_server.go:221] Checking apiserver healthz at https://192.168.0.206:8443/healthz ... 
I0814 12:12:18.316326 14356 api_server.go:241] https://192.168.0.206:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0814 12:12:18.323323 14356 api_server.go:99] status: https://192.168.0.206:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0814 12:12:18.808041 14356 api_server.go:221] Checking apiserver healthz at https://192.168.0.206:8443/healthz ...
I0814 12:12:18.814083 14356 api_server.go:241] https://192.168.0.206:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0814 12:12:18.816051 14356 api_server.go:99] status: https://192.168.0.206:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0814 12:12:19.310212 14356 api_server.go:221] Checking
apiserver healthz at https://192.168.0.206:8443/healthz ...
I0814 12:12:19.331122 14356 api_server.go:241] https://192.168.0.206:8443/healthz returned 200: ok
W0814 12:12:19.340124 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206
W0814 12:12:19.846391 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206
W0814 12:12:20.346238 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206
W0814 12:12:20.856208 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206
W0814 12:12:21.346058 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206
W0814 12:12:21.852110 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206
W0814 12:12:22.346971 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206
W0814 12:12:22.848628 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206
...
W0814 12:16:10.345936 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206
W0814 12:16:10.850541 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206
W0814 12:16:11.345146 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206
W0814 12:16:11.854333 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206
W0814 12:16:12.353952 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206
I0814 12:16:12.842401 14356 kubeadm.go:516] restartCluster took 4m10.3527222s
🤦 Unable to restart cluster, will reset it: apiserver health: controlPlane never updated to v1.18.0
I0814 12:16:12.842401 14356 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm reset --cri-socket npipe:////./pipe/docker_engine --force"
I0814 12:16:53.012627 14356 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm reset --cri-socket npipe:////./pipe/docker_engine --force": (40.1702256s)
I0814 12:16:53.050624 14356
ssh_runner.go:148] Run: sudo systemctl stop -f kubelet I0814 12:16:53.082624 14356 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} W0814 12:16:53.128154 14356 kubeadm.go:719] found 27 kube-system containers to stop I0814 12:16:53.128154 14356 docker.go:229] Stopping containers: [8b3ce993f951 3a0b432dd015 5c5be6418986 82a7903e6005 b4df3b53a91a 7b694073cb0c 21969d68cfea 1fa88023a868 ad134b71b093 331675f115be e59c515c9533 669f9ea3dae5 25e664e8b06f 7a9d13378166 e337c5476195 26ee3d095f7e 846079254d0a 4c579e2ab7b7 93cd4c8f8d2d 6ededab25202 e3ca506df8c3 85f88f113959 fe5acb914580 dd183ce12375 cbe9b89737bc 083772a0f24c 3ef6d2e73343] I0814 12:16:53.159151 14356 ssh_runner.go:148] Run: docker stop 8b3ce993f951 3a0b432dd015 5c5be6418986 82a7903e6005 b4df3b53a91a 7b694073cb0c 21969d68cfea 1fa88023a868 ad134b71b093 331675f115be e59c515c9533 669f9ea3dae5 25e664e8b06f 7a9d13378166 e337c5476195 26ee3d095f7e 846079254d0a 4c579e2ab7b7 93cd4c8f8d2d 6ededab25202 e3ca506df8c3 85f88f113959 fe5acb914580 dd183ce12375 cbe9b89737bc 083772a0f24c 3ef6d2e73343 I0814 12:17:03.467300 14356 ssh_runner.go:188] Completed: docker stop 8b3ce993f951 3a0b432dd015 5c5be6418986 82a7903e6005 b4df3b53a91a 7b694073cb0c 21969d68cfea 1fa88023a868 ad134b71b093 331675f115be e59c515c9533 669f9ea3dae5 25e664e8b06f 7a9d13378166 e337c5476195 26ee3d095f7e 846079254d0a 4c579e2ab7b7 93cd4c8f8d2d 6ededab25202 e3ca506df8c3 85f88f113959 fe5acb914580 dd183ce12375 cbe9b89737bc 083772a0f24c 3ef6d2e73343: (10.3071489s) I0814 12:17:03.507326 14356 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0814 12:17:03.550298 14356 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0814 12:17:03.557303 14356 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf 
/etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0814 12:17:03.557303 14356 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap" I0814 12:17:21.474189 14356 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": (17.9158943s) I0814 12:17:21.475187 14356 cni.go:74] Creating CNI manager for "" I0814 12:17:21.476030 14356 cni.go:117] CNI unnecessary in this configuration, recommending no CNI I0814 12:17:21.476188 14356 ssh_runner.go:148] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0814 12:17:21.502197 14356 ops.go:35] apiserver oom_adj: -16 I0814 12:17:21.549189 14356 ssh_runner.go:148] Run: sudo 
/var/lib/minikube/binaries/v1.18.0/kubectl label nodes minikube.k8s.io/version=v1.12.1 minikube.k8s.io/commit=5664228288552de9f3a446ea4f51c6f29bbdd0e0-dirty minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_08_14T12_17_21_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0814 12:17:21.555191 14356 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.18.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0814 12:17:21.812727 14356 kubeadm.go:863] duration metric: took 336.5391ms to wait for elevateKubeSystemPrivileges. I0814 12:17:21.812727 14356 kubeadm.go:329] StartCluster complete in 5m19.4265652s I0814 12:17:21.812727 14356 settings.go:123] acquiring lock: {Name:mkbb158d68fd483b3bca5b8a92ada72bb65452d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0814 12:17:21.812727 14356 settings.go:131] Updating kubeconfig: C:\Users\User/.kube/config I0814 12:17:21.813729 14356 lock.go:35] WriteFile acquiring C:\Users\User/.kube/config: {Name:mk48588edd353fb36eae8f0f3a4cfcd2f9ebeded Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0814 12:17:21.816728 14356 start.go:195] Will wait wait-timeout for node ... 🔎 Verifying Kubernetes components... 
I0814 12:17:21.816728 14356 addons.go:347] enableAddons start: toEnable=map[ambassador:false dashboard:true default-storageclass:true efk:false freshpod:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false], additional=[] I0814 12:17:21.819730 14356 addons.go:53] Setting storage-provisioner=true in profile "minikube" I0814 12:17:21.820729 14356 addons.go:129] Setting addon storage-provisioner=true in "minikube" W0814 12:17:21.820729 14356 addons.go:138] addon storage-provisioner should already be in state true I0814 12:17:21.820729 14356 addons.go:53] Setting dashboard=true in profile "minikube" I0814 12:17:21.821730 14356 addons.go:129] Setting addon dashboard=true in "minikube" I0814 12:17:21.819730 14356 api_server.go:48] waiting for apiserver process to appear ... I0814 12:17:21.820729 14356 addons.go:53] Setting default-storageclass=true in profile "minikube" I0814 12:17:21.820729 14356 host.go:65] Checking if "minikube" exists ... I0814 12:17:21.822727 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state W0814 12:17:21.821730 14356 addons.go:138] addon dashboard should already be in state true I0814 12:17:21.821730 14356 addons.go:269] enableOrDisableStorageClasses default-storageclass=true on "minikube" I0814 12:17:21.824727 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:17:21.827737 14356 host.go:65] Checking if "minikube" exists ... 
I0814 12:17:21.831744 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:17:21.938727 14356 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0814 12:17:21.958728 14356 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl scale deployment --replicas=1 coredns -n=kube-system I0814 12:17:22.013730 14356 api_server.go:68] duration metric: took 197.0019ms to wait for apiserver process to appear ... I0814 12:17:22.024728 14356 api_server.go:84] waiting for apiserver healthz status ... I0814 12:17:22.030728 14356 api_server.go:221] Checking apiserver healthz at https://192.168.0.206:8443/healthz ... I0814 12:17:22.129731 14356 api_server.go:241] https://192.168.0.206:8443/healthz returned 200: ok W0814 12:17:22.157727 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206 I0814 12:17:22.378774 14356 start.go:548] successfully scaled coredns replicas to 1 W0814 12:17:22.679776 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206 I0814 12:17:23.027292 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:17:23.027292 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:17:23.027292 14356 addons.go:236] installing /etc/kubernetes/addons/dashboard-ns.yaml I0814 12:17:23.028290 14356 ssh_runner.go:215] scp deploy/addons/dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes) I0814 12:17:23.029304 14356 main.go:115] libmachine: [executing ==>] : 
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:17:23.076294 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:17:23.079294 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:17:23.079294 14356 addons.go:236] installing /etc/kubernetes/addons/storage-provisioner.yaml I0814 12:17:23.080302 14356 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (1709 bytes) I0814 12:17:23.081299 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0814 12:17:23.168292 14356 main.go:115] libmachine: [stdout =====>] : Running W0814 12:17:23.173296 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206 I0814 12:17:23.173296 14356 main.go:115] libmachine: [stderr =====>] : ❗ Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://192.168.0.206:8443/apis/storage.k8s.io/v1/storageclasses": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206] W0814 12:17:23.693293 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206 I0814 12:17:23.978296 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:17:23.979292 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:17:23.980292 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube 
).networkadapters[0]).ipaddresses[0] I0814 12:17:24.019291 14356 main.go:115] libmachine: [stdout =====>] : Running I0814 12:17:24.021304 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:17:24.022311 14356 main.go:115] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] W0814 12:17:24.179293 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206 W0814 12:17:24.668291 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206 I0814 12:17:25.065289 14356 main.go:115] libmachine: [stdout =====>] : 192.168.0.206 I0814 12:17:25.065289 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:17:25.065289 14356 sshutil.go:44] new ssh client: &{IP:192.168.0.206 Port:22 SSHKeyPath:C:\Users\User\.minikube\machines\minikube\id_rsa Username:docker} I0814 12:17:25.104288 14356 main.go:115] libmachine: [stdout =====>] : 192.168.0.206 I0814 12:17:25.104288 14356 main.go:115] libmachine: [stderr =====>] : I0814 12:17:25.104288 14356 sshutil.go:44] new ssh client: &{IP:192.168.0.206 Port:22 SSHKeyPath:C:\Users\User\.minikube\machines\minikube\id_rsa Username:docker} W0814 12:17:25.167290 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206 I0814 12:17:25.175291 14356 addons.go:236] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml I0814 12:17:25.175291 14356 ssh_runner.go:215] scp deploy/addons/dashboard/dashboard-clusterrole.yaml --> 
/etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes) I0814 12:17:25.200291 14356 addons.go:236] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml I0814 12:17:25.201290 14356 ssh_runner.go:215] scp deploy/addons/dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes) I0814 12:17:25.224290 14356 addons.go:236] installing /etc/kubernetes/addons/dashboard-configmap.yaml I0814 12:17:25.224799 14356 ssh_runner.go:215] scp deploy/addons/dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes) I0814 12:17:25.242837 14356 addons.go:236] installing /etc/kubernetes/addons/dashboard-dp.yaml I0814 12:17:25.243809 14356 ssh_runner.go:215] scp deploy/addons/dashboard/dashboard-dp.yaml --> /etc/kubernetes/addons/dashboard-dp.yaml (4097 bytes) I0814 12:17:25.255475 14356 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0814 12:17:25.262039 14356 addons.go:236] installing /etc/kubernetes/addons/dashboard-role.yaml I0814 12:17:25.262039 14356 ssh_runner.go:215] scp deploy/addons/dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes) I0814 12:17:25.279039 14356 addons.go:236] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml I0814 12:17:25.279039 14356 ssh_runner.go:215] scp deploy/addons/dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes) I0814 12:17:25.301027 14356 addons.go:236] installing /etc/kubernetes/addons/dashboard-sa.yaml I0814 12:17:25.301027 14356 ssh_runner.go:215] scp deploy/addons/dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes) I0814 12:17:25.319027 14356 addons.go:236] installing /etc/kubernetes/addons/dashboard-secret.yaml I0814 12:17:25.320027 14356 ssh_runner.go:215] scp 
deploy/addons/dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1401 bytes) I0814 12:17:25.340042 14356 addons.go:236] installing /etc/kubernetes/addons/dashboard-svc.yaml I0814 12:17:25.341039 14356 ssh_runner.go:215] scp deploy/addons/dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes) I0814 12:17:25.407032 14356 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml W0814 12:17:25.665030 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206 🌟 Enabled addons: dashboard, default-storageclass, storage-provisioner I0814 12:17:25.797542 14356 addons.go:349] enableAddons completed in 3.9808146s W0814 12:17:26.164054 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206 W0814 12:17:26.663776 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206 W0814 12:17:27.168830 14356 api_server.go:117] api server version match failed: server version: Get 
"https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206 W0814 12:17:27.667133 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206 ... W0814 12:21:21.176939 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206 W0814 12:21:21.665237 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206 W0814 12:21:22.170331 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206 W0814 12:21:22.178333 14356 api_server.go:117] api server version match failed: server version: Get "https://192.168.0.206:8443/version?timeout=32s": x509: certificate is valid for 192.168.0.241, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.0.206 I0814 12:21:22.180341 14356 exit.go:58] WithError(failed to start node)=startup failed: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.18.0 called from: goroutine 1 [running]: runtime/debug.Stack(0x0, 0x0, 0x0) /usr/local/go/src/runtime/debug/stack.go:24 +0xa4 k8s.io/minikube/pkg/minikube/exit.WithError(0x1bf1416, 0x14, 0x1f07aa0, 0xc000d7bba0) /app/pkg/minikube/exit/exit.go:58 +0x3b k8s.io/minikube/cmd/minikube/cmd.runStart(0x2d1d7e0, 0xc0009e4bb0, 0x0, 0x1) /app/cmd/minikube/cmd/start.go:206 +0x4ff github.com/spf13/cobra.(*Command).execute(0x2d1d7e0, 0xc0009e4ba0, 0x1, 0x1, 
0x2d1d7e0, 0xc0009e4ba0) /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846 +0x2a4 github.com/spf13/cobra.(*Command).ExecuteC(0x2d1c820, 0x0, 0x20, 0xc0005574c0) /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x350 github.com/spf13/cobra.(*Command).Execute(...) /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887 k8s.io/minikube/cmd/minikube/cmd.Execute() /app/cmd/minikube/cmd/root.go:106 +0x6ba main.main() /app/cmd/minikube/main.go:71 +0x126 W0814 12:21:22.185328 14356 out.go:232] failed to start node: startup failed: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.18.0 💣 failed to start node: startup failed: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.18.0 😿 minikube is exiting due to an error. If the above message is not useful, open an issue: 👉 https://github.com/kubernetes/minikube/issues/new/choose ```
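The repeated `x509: certificate is valid for 192.168.0.241, ... not 192.168.0.206` lines above suggest the apiserver serving certificate was issued for the VM's previous DHCP address (192.168.0.241), while the restarted VM picked up 192.168.0.206 from the external Hyper-V switch. As a hypothetical diagnostic (not part of the original report), the IP SANs actually baked into the serving certificate can be listed with `openssl`; the address below is the current `minikube ip`, and `-ext` requires OpenSSL 1.1.1 or newer:

```shell
# Hypothetical check, assuming the cluster from this report: dump the
# subjectAltName entries of the certificate the apiserver is serving.
# 192.168.0.206 is the current `minikube ip`; adjust for your cluster.
openssl s_client -connect 192.168.0.206:8443 </dev/null 2>/dev/null \
  | openssl x509 -noout -ext subjectAltName
```

If 192.168.0.206 does not appear in the output, the certificate predates the IP change; regenerating the cluster certificates (e.g. `minikube delete && minikube start`) or giving the VM a stable address (an internal switch or a DHCP reservation) would avoid the mismatch on the next restart.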

$ minikube logs

```==> Docker <== -- Logs begin at Fri 2020-08-14 09:11:36 UTC, end at Fri 2020-08-14 09:23:18 UTC. -- Aug 14 09:12:54 minikube dockerd[2352]: time="2020-08-14T09:12:54.840783455Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:12:54 minikube dockerd[2352]: time="2020-08-14T09:12:54.979078356Z" level=info msg="shim reaped" id=82a7903e60059151846d412b9bec52ff5a6cdcc67158ae7ae8205d16dc1c19fa Aug 14 09:12:54 minikube dockerd[2352]: time="2020-08-14T09:12:54.989405056Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:13:07 minikube dockerd[2352]: time="2020-08-14T09:13:07.415215949Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8b3ce993f951d5c07920b3361e78ca1639d23a6caf016a312a79fd797a2c079e/shim.sock" debug=false pid=5288 Aug 14 09:13:11 minikube dockerd[2352]: time="2020-08-14T09:13:11.400245466Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fbd1903899fa36a2177de4a22cab3c1f420d030854f00dae9be5ddfe452cf32a/shim.sock" debug=false pid=5374 Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.161233040Z" level=info msg="shim reaped" id=8b3ce993f951d5c07920b3361e78ca1639d23a6caf016a312a79fd797a2c079e Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.171631165Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.187605203Z" level=info msg="shim reaped" id=21969d68cfea625f32639faf5a23481f8b096f9b0b783a87846fb341721077b2 Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.197779427Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.206935949Z" level=info msg="shim 
reaped" id=669f9ea3dae5e0c41e03ab79792b8e7f432da328ac7f6d084d57c1bc0404b86b Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.221596684Z" level=info msg="shim reaped" id=5c5be64189860b8ca2f6420f462bd5045813538ef733430a06a7412ecec95abf Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.224342591Z" level=info msg="shim reaped" id=ad134b71b09349a0480293bbf6b9904f26f6d5c175553264493f1c562e10f902 Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.230330405Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.230375105Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.236876720Z" level=info msg="shim reaped" id=7b694073cb0c3c56b0216f765f3707c284ad18cc2dc08153ad0ef103550c4711 Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.240595129Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.249407850Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.250543953Z" level=info msg="shim reaped" id=1fa88023a868dba331ad39730d8953aa9f1cbc9927ed7a3f403712f68b404738 Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.254868463Z" level=info msg="shim reaped" id=e337c5476195b5bc7c17d00e614e60bdaae3c5e0cb850798d732f942280d29e8 Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.256341867Z" level=info msg="shim reaped" id=331675f115be6032b52efedc8f7754b4a80f307805921326b08585acff5765eb Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.258739973Z" level=info msg="shim 
reaped" id=25e664e8b06fb07334e83aceda495a87a71fcc6629e9e69a8d92eec5bd330cb0 Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.261040178Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.273610908Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.274035509Z" level=info msg="shim reaped" id=7a9d133781668a0c135f8d0426a1c84c725e61cf27177ac78be93f7517290f58 Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.274183610Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.275144412Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.283463132Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.291907952Z" level=info msg="shim reaped" id=b4df3b53a91a7c3f03bac7d642bf8eefd0010769139c26b7c2f9535a14bb292c Aug 14 09:16:54 minikube dockerd[2352]: time="2020-08-14T09:16:54.302333077Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:16:58 minikube dockerd[2352]: time="2020-08-14T09:16:58.986716250Z" level=info msg="shim reaped" id=3a0b432dd0150f3685ed1a275de9b56df67feb12934085fedc8c8ea55deeaf56 Aug 14 09:16:58 minikube dockerd[2352]: time="2020-08-14T09:16:58.996951074Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:17:03 minikube dockerd[2352]: 
time="2020-08-14T09:17:03.927590514Z" level=info msg="Container e59c515c953340dd8c271c97c36c05a2c550233680642cecf72b6868929a9acf failed to exit within 10 seconds of signal 15 - using the force" Aug 14 09:17:04 minikube dockerd[2352]: time="2020-08-14T09:17:04.018584531Z" level=info msg="shim reaped" id=e59c515c953340dd8c271c97c36c05a2c550233680642cecf72b6868929a9acf Aug 14 09:17:04 minikube dockerd[2352]: time="2020-08-14T09:17:04.028896055Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:17:12 minikube dockerd[2352]: time="2020-08-14T09:17:12.545714593Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4e266028eff5218034f2ed341b5801cd8a0ae43347e526ef8fff8686a9386cd3/shim.sock" debug=false pid=7340 Aug 14 09:17:12 minikube dockerd[2352]: time="2020-08-14T09:17:12.678567109Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cfc8651c26c84ab4057f5cdeeb7c66f86311dcd9a4634fa7412a1943c363495a/shim.sock" debug=false pid=7412 Aug 14 09:17:12 minikube dockerd[2352]: time="2020-08-14T09:17:12.735383144Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/571701ee53fa3cf3ef4f9b2d1aa69d8d9bb6d1c9fda1f088b29a4c232d18f2d5/shim.sock" debug=false pid=7437 Aug 14 09:17:12 minikube dockerd[2352]: time="2020-08-14T09:17:12.789479472Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0ad3325fe3b81fd99a04b44fa5599bba15e71e74dcafd206584b859bcfbdcfc5/shim.sock" debug=false pid=7464 Aug 14 09:17:12 minikube dockerd[2352]: time="2020-08-14T09:17:12.944689340Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/607cdeada0ca1619a214d1d460b6661bfe7d7a632e6a06a160bd5a6d24d5eebf/shim.sock" debug=false pid=7514 Aug 14 09:17:13 minikube dockerd[2352]: time="2020-08-14T09:17:13.100384310Z" level=info msg="shim containerd-shim started" 
address="/containerd-shim/moby/6deda4887cce2fc4a2409b637aada25cca7f8c94fe2ba6cb642efde1a88b9d65/shim.sock" debug=false pid=7583 Aug 14 09:17:13 minikube dockerd[2352]: time="2020-08-14T09:17:13.168881073Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/67a122256fbd29c979073795c130adb745cf480fcc17249aeaca2cb34b03a099/shim.sock" debug=false pid=7610 Aug 14 09:17:13 minikube dockerd[2352]: time="2020-08-14T09:17:13.170760677Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c8ef9015f80e0d162710881c9cafed7fdad4007ca6e44406fcb3c33f04025b32/shim.sock" debug=false pid=7617 Aug 14 09:17:19 minikube dockerd[2352]: time="2020-08-14T09:17:19.904529847Z" level=info msg="shim reaped" id=2c703de0cf28a3a3e72fa722a22fbab32125070d1f97e2000a96450ac89d36fb Aug 14 09:17:19 minikube dockerd[2352]: time="2020-08-14T09:17:19.905831250Z" level=info msg="shim reaped" id=fbd1903899fa36a2177de4a22cab3c1f420d030854f00dae9be5ddfe452cf32a Aug 14 09:17:19 minikube dockerd[2352]: time="2020-08-14T09:17:19.915255672Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:17:19 minikube dockerd[2352]: time="2020-08-14T09:17:19.916382675Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:17:19 minikube dockerd[2352]: time="2020-08-14T09:17:19.971581906Z" level=info msg="shim reaped" id=a5817be2a2294c9eea9b331cdcc87b97d30c60566baa6def78c38dcca9033e77 Aug 14 09:17:19 minikube dockerd[2352]: time="2020-08-14T09:17:19.972172907Z" level=info msg="shim reaped" id=fe77b658b012b1c1b2e7d8ad099154a1fa14c81e83036bb7f9827e01c6ed8d52 Aug 14 09:17:19 minikube dockerd[2352]: time="2020-08-14T09:17:19.996408565Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 14 09:17:19 minikube dockerd[2352]: time="2020-08-14T09:17:19.996470165Z" level=info 
msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 14 09:17:30 minikube dockerd[2352]: time="2020-08-14T09:17:30.778584096Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/692a6e0b61ae0406048fa19ae23299886b005834d52d0f7760339124d1e8ba09/shim.sock" debug=false pid=8455
Aug 14 09:17:31 minikube dockerd[2352]: time="2020-08-14T09:17:31.399494065Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/58f95d4d33fbc753298a0d20ab9462234237671aca9f95c0b5a665e1f4df47fa/shim.sock" debug=false pid=8509
Aug 14 09:17:31 minikube dockerd[2352]: time="2020-08-14T09:17:31.446602176Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/022c95a08ddaf17aaca5802adfbe2dce368d7484cdf560b462736176c4b44ef9/shim.sock" debug=false pid=8527
Aug 14 09:17:31 minikube dockerd[2352]: time="2020-08-14T09:17:31.906377364Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fcc7971f0f91168b84fe5b39037236d9d490f9d7b756b953c48d4958a637f271/shim.sock" debug=false pid=8626
Aug 14 09:17:31 minikube dockerd[2352]: time="2020-08-14T09:17:31.926888912Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6b6f9d1b44502976354ea9f20a01eb5affb70b448cd822f3fe4ded6d69e23717/shim.sock" debug=false pid=8638
Aug 14 09:17:32 minikube dockerd[2352]: time="2020-08-14T09:17:32.363099544Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d659ea6c81b131b20bd468ad01763e75a13b1a2dfadc48ced457ec5107095df4/shim.sock" debug=false pid=8733
Aug 14 09:17:32 minikube dockerd[2352]: time="2020-08-14T09:17:32.712297870Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/60b0a5acd028dc807cf9e6c2d2fecc7b99e994063a77f20e8756ebbc62297f89/shim.sock" debug=false pid=8794
Aug 14 09:17:33 minikube dockerd[2352]: time="2020-08-14T09:17:33.795480132Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fcebfe7549abde3184102c932bbb2bc6c65497e6a49572d789319a79be92fe83/shim.sock" debug=false pid=8838
Aug 14 09:17:34 minikube dockerd[2352]: time="2020-08-14T09:17:34.326625689Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3bd8666d0cdd1d03ae3ed5b8d810deeef3308d2ca2ed4a927e70a5b884799a70/shim.sock" debug=false pid=8873
Aug 14 09:17:35 minikube dockerd[2352]: time="2020-08-14T09:17:35.078823067Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e45905a28e417f50b02210eb022d253c84fd093f9b9409bb7cb0f65974b98fd0/shim.sock" debug=false pid=8938

==> container status <==
CONTAINER       IMAGE           CREATED          STATE    NAME                        ATTEMPT  POD ID
e45905a28e417   4689081edb103   5 minutes ago    Running  storage-provisioner         0        3bd8666d0cdd1
fcebfe7549abd   86262685d9abb   5 minutes ago    Running  dashboard-metrics-scraper   0        6b6f9d1b44502
60b0a5acd028d   85d666cddd043   5 minutes ago    Running  kubernetes-dashboard        0        fcc7971f0f911
d659ea6c81b13   67da37a9a360e   5 minutes ago    Running  coredns                     0        022c95a08ddaf
58f95d4d33fbc   43940c34f24f3   5 minutes ago    Running  kube-proxy                  0        692a6e0b61ae0
c8ef9015f80e0   a31f78c7c8ce1   6 minutes ago    Running  kube-scheduler              19       571701ee53fa3
67a122256fbd2   303ce5db0e90d   6 minutes ago    Running  etcd                        8        0ad3325fe3b81
6deda4887cce2   d3e55153f52fb   6 minutes ago    Running  kube-controller-manager     19       cfc8651c26c84
607cdeada0ca1   74060cea7f704   6 minutes ago    Running  kube-apiserver              8        4e266028eff52
1fa88023a868d   a31f78c7c8ce1   11 minutes ago   Exited   kube-scheduler              18       669f9ea3dae5e
ad134b71b0934   303ce5db0e90d   11 minutes ago   Exited   etcd                        7        e337c5476195b
331675f115be6   d3e55153f52fb   11 minutes ago   Exited   kube-controller-manager     18       25e664e8b06fb
e59c515c95334   74060cea7f704   11 minutes ago   Exited   kube-apiserver              7        7a9d133781668

==> coredns [d659ea6c81b1] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

==> describe nodes <==
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=5664228288552de9f3a446ea4f51c6f29bbdd0e0-dirty
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_08_14T12_17_21_0700
                    minikube.k8s.io/version=v1.12.1
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 14 Aug 2020 09:17:18 +0000
Taints:
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:
  RenewTime:       Fri, 14 Aug 2020 09:23:18 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 14 Aug 2020 09:22:31 +0000   Fri, 14 Aug 2020 09:17:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 14 Aug 2020 09:22:31 +0000   Fri, 14 Aug 2020 09:17:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 14 Aug 2020 09:22:31 +0000   Fri, 14 Aug 2020 09:17:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Fri, 14 Aug 2020 09:22:31 +0000   Fri, 14 Aug 2020 09:17:29 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.0.206
  Hostname:    minikube
Capacity:
  cpu:                2
  ephemeral-storage:  17784752Ki
  hugepages-2Mi:      0
  memory:             3066816Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  17784752Ki
  hugepages-2Mi:      0
  memory:             3066816Ki
  pods:               110
System Info:
  Machine ID:                 c86e212c69594445b1633c53722d0443
  System UUID:                c513b50e-5400-834e-934f-ca0b5ad17b8b
  Boot ID:                    bf262f84-f161-4c8b-93f7-50328336ef77
  Kernel Version:             4.19.114
  OS Image:                   Buildroot 2019.02.11
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.12
  Kubelet Version:            v1.18.0
  Kube-Proxy Version:         v1.18.0
Non-terminated Pods:          (9 in total)
  Namespace             Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------             ----                                        ------------  ----------  ---------------  -------------  ---
  kube-system           coredns-66bff467f8-95nmn                    100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     5m51s
  kube-system           etcd-minikube                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
  kube-system           kube-apiserver-minikube                     250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m49s
  kube-system           kube-controller-manager-minikube            200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m49s
  kube-system           kube-proxy-v5bg7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
  kube-system           kube-scheduler-minikube                     100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m49s
  kube-system           storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
  kubernetes-dashboard  dashboard-metrics-scraper-dc6947fbf-m8gpw   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
  kubernetes-dashboard  kubernetes-dashboard-6dbb54fd95-vhfss       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (32%)  0 (0%)
  memory             70Mi (2%)   170Mi (5%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                  From                  Message
  ----    ------                   ----                 ----                  -------
  Normal  NodeHasSufficientMemory  6m7s (x5 over 6m8s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    6m7s (x5 over 6m8s)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m7s (x5 over 6m8s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 5m51s                kubelet, minikube     Starting kubelet.
  Normal  NodeHasSufficientMemory  5m50s                kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    5m50s                kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     5m50s                kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeNotReady             5m50s                kubelet, minikube     Node minikube status is now: NodeNotReady
  Normal  NodeAllocatableEnforced  5m50s                kubelet, minikube     Updated Node Allocatable limit across pods
  Normal  NodeReady                5m50s                kubelet, minikube     Node minikube status is now: NodeReady
  Normal  Starting                 5m48s                kube-proxy, minikube  Starting kube-proxy.

==> dmesg <==
[Aug14 09:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.039962] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[ +0.043902] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +0.011371] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug, * this clock source is slow. Consider trying other clock sources
[ +3.039513] Unstable clock detected, switching default tracing clock to "global" If you want to keep using the local clock, then add: "trace_clock=local" on the kernel command line
[ +0.000052] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.538335] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
[ +0.765294] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ +0.010302] systemd-fstab-generator[1229]: Ignoring "noauto" for root device
[ +0.013869] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000004] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +2.101826] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +0.175705] vboxguest: loading out-of-tree module taints kernel.
[ +0.005371] vboxguest: PCI device not found, probably running on physical hardware.
[ +13.130020] systemd-fstab-generator[2332]: Ignoring "noauto" for root device
[ +0.098001] systemd-fstab-generator[2342]: Ignoring "noauto" for root device
[Aug14 09:12] systemd-fstab-generator[2655]: Ignoring "noauto" for root device
[ +0.591720] systemd-fstab-generator[2726]: Ignoring "noauto" for root device
[ +8.674913] kauditd_printk_skb: 107 callbacks suppressed
[ +11.846084] kauditd_printk_skb: 32 callbacks suppressed
[ +32.151591] kauditd_printk_skb: 71 callbacks suppressed
[Aug14 09:13] kauditd_printk_skb: 2 callbacks suppressed
[ +31.375385] NFSD: Unable to end grace period: -110
[Aug14 09:17] systemd-fstab-generator[6670]: Ignoring "noauto" for root device
[ +17.069205] systemd-fstab-generator[8089]: Ignoring "noauto" for root device
[ +14.648882] kauditd_printk_skb: 17 callbacks suppressed

==> etcd [67a122256fbd] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-08-14 09:17:13.439568 I | etcdmain: etcd Version: 3.4.3
2020-08-14 09:17:13.439612 I | etcdmain: Git SHA: 3cf2f69b5
2020-08-14 09:17:13.439617 I | etcdmain: Go Version: go1.12.12
2020-08-14 09:17:13.439623 I | etcdmain: Go OS/Arch: linux/amd64
2020-08-14 09:17:13.439663 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-08-14 09:17:13.439752 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-08-14 09:17:13.445664 I | embed: name = minikube
2020-08-14 09:17:13.445682 I | embed: data dir = /var/lib/minikube/etcd
2020-08-14 09:17:13.445689 I | embed: member dir = /var/lib/minikube/etcd/member
2020-08-14 09:17:13.445694 I | embed: heartbeat = 100ms
2020-08-14 09:17:13.445700 I | embed: election = 1000ms
2020-08-14 09:17:13.445705 I | embed: snapshot count = 10000
2020-08-14 09:17:13.445716 I | embed: advertise client URLs = https://192.168.0.206:2379
2020-08-14 09:17:13.549235 I | etcdserver: starting member dcae48a1b4fed254 in cluster 873b6578f27a03ff
raft2020/08/14 09:17:13 INFO: dcae48a1b4fed254 switched to configuration voters=()
raft2020/08/14 09:17:13 INFO: dcae48a1b4fed254 became follower at term 0
raft2020/08/14 09:17:13 INFO: newRaft dcae48a1b4fed254 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/08/14 09:17:13 INFO: dcae48a1b4fed254 became follower at term 1
raft2020/08/14 09:17:13 INFO: dcae48a1b4fed254 switched to configuration voters=(15901727193655333460)
2020-08-14 09:17:13.575461 W | auth: simple token is not cryptographically signed
2020-08-14 09:17:13.591743 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-08-14 09:17:13.602323 I | etcdserver: dcae48a1b4fed254 as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-08-14 09:17:13.605445 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-08-14 09:17:13.605740 I | embed: listening for metrics on http://127.0.0.1:2381
2020-08-14 09:17:13.605895 I | embed: listening for peers on 192.168.0.206:2380
raft2020/08/14 09:17:13 INFO: dcae48a1b4fed254 switched to configuration voters=(15901727193655333460)
2020-08-14 09:17:13.606315 I | etcdserver/membership: added member dcae48a1b4fed254 [https://192.168.0.206:2380] to cluster 873b6578f27a03ff
raft2020/08/14 09:17:14 INFO: dcae48a1b4fed254 is starting a new election at term 1
raft2020/08/14 09:17:14 INFO: dcae48a1b4fed254 became candidate at term 2
raft2020/08/14 09:17:14 INFO: dcae48a1b4fed254 received MsgVoteResp from dcae48a1b4fed254 at term 2
raft2020/08/14 09:17:14 INFO: dcae48a1b4fed254 became leader at term 2
raft2020/08/14 09:17:14 INFO: raft.node: dcae48a1b4fed254 elected leader dcae48a1b4fed254 at term 2
2020-08-14 09:17:14.351604 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.0.206:2379]} to cluster 873b6578f27a03ff
2020-08-14 09:17:14.351675 I | embed: ready to serve client requests
2020-08-14 09:17:14.351921 I | embed: ready to serve client requests
2020-08-14 09:17:14.354279 I | embed: serving client requests on 192.168.0.206:2379
2020-08-14 09:17:14.354483 I | etcdserver: setting up the initial cluster version to 3.4
2020-08-14 09:17:14.355482 I | embed: serving client requests on 127.0.0.1:2379
2020-08-14 09:17:14.357120 N | etcdserver/membership: set the initial cluster version to 3.4
2020-08-14 09:17:14.357269 I | etcdserver/api: enabled capabilities for version 3.4
2020-08-14 09:17:29.457418 W | etcdserver: read-only range request "key:\"/registry/endpointslices/kubernetes-dashboard/dashboard-metrics-scraper-qlzxw\" " with result "range_response_count:1 size:952" took too long (181.503629ms) to execute
2020-08-14 09:17:29.458959 W | etcdserver: read-only range request "key:\"/registry/endpointslices/kube-system/kube-dns-7d22n\" " with result "range_response_count:1 size:849" took too long (192.853156ms) to execute
2020-08-14 09:17:29.461331 W | etcdserver: read-only range request "key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" " with result "range_response_count:1 size:4809" took too long (104.593348ms) to execute
2020-08-14 09:17:29.468931 W | etcdserver: read-only range request "key:\"/registry/minions/minikube\" " with result "range_response_count:1 size:9678" took too long (111.988465ms) to execute
2020-08-14 09:17:29.810980 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:16" took too long (135.27932ms) to execute

==> etcd [ad134b71b093] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-08-14 09:12:13.202708 I | etcdmain: etcd Version: 3.4.3
2020-08-14 09:12:13.202741 I | etcdmain: Git SHA: 3cf2f69b5
2020-08-14 09:12:13.202745 I | etcdmain: Go Version: go1.12.12
2020-08-14 09:12:13.202750 I | etcdmain: Go OS/Arch: linux/amd64
2020-08-14 09:12:13.202754 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2020-08-14 09:12:13.202794 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-08-14 09:12:13.202819 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-08-14 09:12:13.214433 I | embed: name = minikube
2020-08-14 09:12:13.214466 I | embed: data dir = /var/lib/minikube/etcd
2020-08-14 09:12:13.214473 I | embed: member dir = /var/lib/minikube/etcd/member
2020-08-14 09:12:13.214478 I | embed: heartbeat = 100ms
2020-08-14 09:12:13.214481 I | embed: election = 1000ms
2020-08-14 09:12:13.214485 I | embed: snapshot count = 10000
2020-08-14 09:12:13.214493 I | embed: advertise client URLs = https://192.168.0.206:2379
2020-08-14 09:12:13.214497 I | embed: initial advertise peer URLs = https://192.168.0.206:2380
2020-08-14 09:12:13.214502 I | embed: initial cluster =
2020-08-14 09:12:13.255116 I | etcdserver: restarting member dcae48a1b4fed254 in cluster 873b6578f27a03ff at commit index 1469
raft2020/08/14 09:12:13 INFO: dcae48a1b4fed254 switched to configuration voters=()
raft2020/08/14 09:12:13 INFO: dcae48a1b4fed254 became follower at term 3
raft2020/08/14 09:12:13 INFO: newRaft dcae48a1b4fed254 [peers: [], term: 3, commit: 1469, applied: 0, lastindex: 1469, lastterm: 3]
2020-08-14 09:12:13.269543 I | mvcc: restore compact to 831
2020-08-14 09:12:13.273283 W | auth: simple token is not cryptographically signed
2020-08-14 09:12:13.279773 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
raft2020/08/14 09:12:13 INFO: dcae48a1b4fed254 switched to configuration voters=(15901727193655333460)
2020-08-14 09:12:13.289410 I | etcdserver/membership: added member dcae48a1b4fed254 [https://192.168.0.206:2380] to cluster 873b6578f27a03ff
2020-08-14 09:12:13.289511 N | etcdserver/membership: set the initial cluster version to 3.4
2020-08-14 09:12:13.289554 I | etcdserver/api: enabled capabilities for version 3.4
2020-08-14 09:12:13.291276 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-08-14 09:12:13.291508 I | embed: listening for metrics on http://127.0.0.1:2381
2020-08-14 09:12:13.295691 I | embed: listening for peers on 192.168.0.206:2380
raft2020/08/14 09:12:15 INFO: dcae48a1b4fed254 is starting a new election at term 3
raft2020/08/14 09:12:15 INFO: dcae48a1b4fed254 became candidate at term 4
raft2020/08/14 09:12:15 INFO: dcae48a1b4fed254 received MsgVoteResp from dcae48a1b4fed254 at term 4
raft2020/08/14 09:12:15 INFO: dcae48a1b4fed254 became leader at term 4
raft2020/08/14 09:12:15 INFO: raft.node: dcae48a1b4fed254 elected leader dcae48a1b4fed254 at term 4
2020-08-14 09:12:15.359610 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.0.206:2379]} to cluster 873b6578f27a03ff
2020-08-14 09:12:15.359712 I | embed: ready to serve client requests
2020-08-14 09:12:15.360271 I | embed: ready to serve client requests
2020-08-14 09:12:15.362895 I | embed: serving client requests on 192.168.0.206:2379
2020-08-14 09:12:15.364919 I | embed: serving client requests on 127.0.0.1:2379
2020-08-14 09:16:13.548389 I | embed: rejected connection from "192.168.0.206:43116" (error "remote error: tls: bad certificate", ServerName "")
2020-08-14 09:16:14.557244 I | embed: rejected connection from "192.168.0.206:43122" (error "remote error: tls: bad certificate", ServerName "")
2020-08-14 09:16:16.191759 I | embed: rejected connection from "192.168.0.206:43128" (error "remote error: tls: bad certificate", ServerName "")
2020-08-14 09:16:18.320126 I | embed: rejected connection from "192.168.0.206:43140" (error "remote error: tls: bad certificate", ServerName "")
2020-08-14 09:16:22.660926 I | embed: rejected connection from "192.168.0.206:43160" (error "remote error: tls: bad certificate", ServerName "")
2020-08-14 09:16:27.990776 I | embed: rejected connection from "192.168.0.206:43180" (error "remote error: tls: bad certificate", ServerName "")
2020-08-14 09:16:38.401840 I | embed: rejected connection from "192.168.0.206:43220" (error "remote error: tls: bad certificate", ServerName "")
2020-08-14 09:16:53.863232 N | pkg/osutil: received terminated signal, shutting down...
2020-08-14 09:16:53.876691 I | etcdserver: skipped leadership transfer for single voting member cluster

==> kernel <==
09:23:19 up 11 min, 0 users, load average: 0.07, 0.30, 0.24
Linux minikube 4.19.114 #1 SMP Mon Jul 6 11:11:02 PDT 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.11"

==> kube-apiserver [607cdeada0ca] <==
I0814 09:20:53.759493 1 log.go:172] http: TLS handshake error from 192.168.0.242:56147: remote error: tls: bad certificate
I0814 09:20:54.265138 1 log.go:172] http: TLS handshake error from 192.168.0.242:56148: remote error: tls: bad certificate
I0814 09:20:54.758890 1 log.go:172] http: TLS handshake error from 192.168.0.242:56149: remote error: tls: bad certificate
I0814 09:20:55.276333 1 log.go:172] http: TLS handshake error from 192.168.0.242:56150: remote error: tls: bad certificate
...
I0814 09:21:21.266906 1 log.go:172] http: TLS handshake error from 192.168.0.242:56203: remote error: tls: bad certificate
I0814 09:21:21.772123 1 log.go:172] http: TLS handshake error from 192.168.0.242:56204: remote error: tls: bad certificate
I0814 09:21:22.260276 1 log.go:172] http: TLS handshake error from 192.168.0.242:56205: remote error: tls: bad certificate
I0814 09:21:22.765684 1 log.go:172] http: TLS handshake error from 192.168.0.242:56206: remote error: tls: bad certificate
I0814 09:21:22.773244 1 log.go:172] http: TLS handshake error from 192.168.0.242:56207: remote error: tls: bad certificate

==> kube-apiserver [e59c515c9533] <==
W0814 09:17:02.218936 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0814 09:17:02.240224 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0814 09:17:02.305294 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0814 09:17:02.342744 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
...
W0814 09:17:03.785965 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0814 09:17:03.811896 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0814 09:17:03.815280 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0814 09:17:03.895816 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0814 09:17:03.943475 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...

==> kube-controller-manager [331675f115be] <==
I0814 09:12:23.841611 1 core.go:239] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0814 09:12:23.841615 1 controllermanager.go:525] Skipping "route"
I0814 09:12:23.841667 1 node_lifecycle_controller.go:546] Starting node controller
I0814 09:12:23.841678 1 shared_informer.go:223] Waiting for caches to sync for taint
I0814 09:12:23.905178 1 controllermanager.go:533] Started "persistentvolume-binder"
I0814 09:12:23.905219 1 pv_controller_base.go:295] Starting persistent volume controller
I0814 09:12:23.905224 1 shared_informer.go:223] Waiting for caches to sync for persistent volume
I0814 09:12:24.055734 1 controllermanager.go:533] Started "serviceaccount"
I0814 09:12:24.055784 1 serviceaccounts_controller.go:117] Starting service account controller
I0814 09:12:24.055790 1 shared_informer.go:223] Waiting for caches to sync for service account
I0814 09:12:24.207370 1 controllermanager.go:533] Started "daemonset"
I0814 09:12:24.207574 1 daemon_controller.go:257] Starting daemon sets controller
I0814 09:12:24.207585 1 shared_informer.go:223] Waiting for caches to sync for daemon sets
I0814 09:12:24.356139 1 controllermanager.go:533] Started "job"
I0814 09:12:24.356223 1 job_controller.go:144] Starting job controller
I0814 09:12:24.356233 1 shared_informer.go:223] Waiting for caches to sync for job
I0814 09:12:24.506117 1 controllermanager.go:533] Started "csrsigning"
I0814 09:12:24.506168 1 certificate_controller.go:119] Starting certificate controller "csrsigning"
I0814 09:12:24.506176 1 shared_informer.go:223] Waiting for caches to sync for certificate-csrsigning
I0814 09:12:24.506204 1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
I0814 09:12:24.656573 1 controllermanager.go:533] Started "bootstrapsigner"
I0814 09:12:24.656936 1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0814 09:12:24.657006 1 shared_informer.go:223] Waiting for caches to sync for bootstrap_signer
W0814 09:12:24.693469 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0814 09:12:24.706181 1 shared_informer.go:230] Caches are synced for TTL
I0814 09:12:24.708364 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
I0814 09:12:24.711105 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
I0814 09:12:24.755250 1 shared_informer.go:230] Caches are synced for PV protection
I0814 09:12:24.755988 1 shared_informer.go:230] Caches are synced for service account
I0814 09:12:24.757124 1 shared_informer.go:230] Caches are synced for bootstrap_signer
I0814 09:12:24.759225 1 shared_informer.go:230] Caches are synced for namespace
I0814 09:12:25.031667 1 shared_informer.go:230] Caches are synced for disruption
I0814 09:12:25.031687 1 disruption.go:339] Sending events to api server.
I0814 09:12:25.031743 1 shared_informer.go:230] Caches are synced for GC
I0814 09:12:25.041807 1 shared_informer.go:230] Caches are synced for taint
I0814 09:12:25.041889 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
I0814 09:12:25.042003 1 taint_manager.go:187] Starting NoExecuteTaintManager
W0814 09:12:25.042942 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0814 09:12:25.042976 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal.
I0814 09:12:25.043132 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"16bd26ba-5a1f-4009-b62f-5eb8a5d3e0a5", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0814 09:12:25.055603 1 shared_informer.go:230] Caches are synced for HPA
I0814 09:12:25.056346 1 shared_informer.go:230] Caches are synced for job
I0814 09:12:25.056347 1 shared_informer.go:230] Caches are synced for endpoint_slice
I0814 09:12:25.070319 1 shared_informer.go:230] Caches are synced for deployment
I0814 09:12:25.104135 1 shared_informer.go:230] Caches are synced for ReplicaSet
I0814 09:12:25.108333 1 shared_informer.go:230] Caches are synced for ReplicationController
I0814 09:12:25.108562 1 shared_informer.go:230] Caches are synced for daemon sets
I0814 09:12:25.256261 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
I0814 09:12:25.262032 1 shared_informer.go:230] Caches are synced for resource quota
I0814 09:12:25.267136 1 shared_informer.go:230] Caches are synced for stateful set
I0814 09:12:25.281941 1 shared_informer.go:230] Caches are synced for PVC protection
I0814 09:12:25.296977 1 shared_informer.go:230] Caches are synced for endpoint
I0814 09:12:25.305346 1 shared_informer.go:230] Caches are synced for persistent volume
I0814 09:12:25.308111 1 shared_informer.go:230] Caches are synced for expand
I0814 09:12:25.321252 1 shared_informer.go:230] Caches are synced for garbage collector
I0814 09:12:25.321269 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0814 09:12:25.323283 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0814 09:12:25.323326 1 shared_informer.go:230] Caches are synced for garbage collector
I0814 09:12:25.325663 1 shared_informer.go:230] Caches are synced for attach detach
I0814 09:12:25.357129 1 shared_informer.go:230] Caches are synced for resource quota

==> kube-controller-manager [6deda4887cce] <==
I0814 09:17:27.693530 1 node_lifecycle_controller.go:384] Sending events to api server.
I0814 09:17:27.693924 1 taint_manager.go:163] Sending events to api server.
I0814 09:17:27.694412 1 node_lifecycle_controller.go:512] Controller will reconcile labels.
I0814 09:17:27.694461 1 controllermanager.go:533] Started "nodelifecycle"
I0814 09:17:27.694500 1 node_lifecycle_controller.go:546] Starting node controller
I0814 09:17:27.694505 1 shared_informer.go:223] Waiting for caches to sync for taint
I0814 09:17:27.843283 1 node_lifecycle_controller.go:78] Sending events to api server
E0814 09:17:27.843317 1 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided
W0814 09:17:27.843328 1 controllermanager.go:525] Skipping "cloud-node-lifecycle"
I0814 09:17:28.094029 1 controllermanager.go:533] Started "deployment"
I0814 09:17:28.094182 1 deployment_controller.go:153] Starting deployment controller
I0814 09:17:28.094236 1 shared_informer.go:223] Waiting for caches to sync for deployment
I0814 09:17:28.344664 1 controllermanager.go:533] Started "statefulset"
I0814 09:17:28.344839 1 stateful_set.go:146] Starting stateful set controller
I0814 09:17:28.344971 1 shared_informer.go:223] Waiting for caches to sync for stateful set
I0814 09:17:28.345220 1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0814 09:17:28.383719 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
W0814 09:17:28.435807 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0814 09:17:28.446482 1 shared_informer.go:230] Caches are synced for GC
I0814 09:17:28.446547 1 shared_informer.go:230] Caches are synced for TTL
I0814 09:17:28.454668 1 shared_informer.go:230] Caches are synced for stateful set
I0814 09:17:28.456031 1 shared_informer.go:230] Caches are synced for job
I0814 09:17:28.480461 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
I0814 09:17:28.481044 1 shared_informer.go:230] Caches are synced for endpoint_slice
I0814 09:17:28.500501 1 shared_informer.go:230] Caches are synced for ReplicationController
I0814 09:17:28.500551 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
I0814 09:17:28.500469 1 shared_informer.go:230] Caches are synced for PVC protection
I0814 09:17:28.503849 1 shared_informer.go:230] Caches are synced for HPA
I0814 09:17:28.503864 1 shared_informer.go:230] Caches are synced for endpoint
I0814 09:17:28.503890 1 shared_informer.go:230] Caches are synced for bootstrap_signer
I0814 09:17:28.519331 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
I0814 09:17:28.570723 1 shared_informer.go:230] Caches are synced for service account
I0814 09:17:28.579264 1 shared_informer.go:230] Caches are synced for namespace
I0814 09:17:28.695223 1 shared_informer.go:230] Caches are synced for taint
I0814 09:17:28.695291 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
W0814 09:17:28.695364 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0814 09:17:28.695411 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal.
I0814 09:17:28.695707 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0814 09:17:28.696097 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"b3e80210-93c2-46be-8523-2c97d368e0f2", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0814 09:17:28.735853 1 shared_informer.go:230] Caches are synced for daemon sets
I0814 09:17:28.745047 1 shared_informer.go:230] Caches are synced for disruption
I0814 09:17:28.745090 1 disruption.go:339] Sending events to api server.
I0814 09:17:28.783564 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"337f32bd-ec65-4ee3-887f-54704dd7b53a", APIVersion:"apps/v1", ResourceVersion:"205", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-v5bg7
I0814 09:17:28.794138 1 shared_informer.go:230] Caches are synced for ReplicaSet
I0814 09:17:28.794579 1 shared_informer.go:230] Caches are synced for deployment
I0814 09:17:28.841902 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"040edfca-8e77-458d-8db7-beb51366dfba", APIVersion:"apps/v1", ResourceVersion:"272", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-dc6947fbf to 1
I0814 09:17:28.856887 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"0a8a981c-aec9-4f90-8409-a4aa830aca03", APIVersion:"apps/v1", ResourceVersion:"273", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-6dbb54fd95 to 1
I0814 09:17:28.857345 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"ff2bed0b-0534-4692-bf7e-505f5407c70d", APIVersion:"apps/v1", ResourceVersion:"226", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
I0814 09:17:28.864302 1 shared_informer.go:230] Caches are synced for PV protection
I0814 09:17:28.864346 1 shared_informer.go:230] Caches are synced for expand
I0814 09:17:28.895097 1 shared_informer.go:230] Caches are synced for attach detach
I0814 09:17:28.897469 1 shared_informer.go:230] Caches are synced for persistent volume
I0814 09:17:28.955353 1 shared_informer.go:230] Caches are synced for garbage collector
I0814 09:17:28.955376 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0814 09:17:29.005342 1 shared_informer.go:230] Caches are synced for garbage collector
I0814 09:17:29.018324 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-dc6947fbf", UID:"89fb114b-f6d6-4fe6-952a-97d1e90562ce", APIVersion:"apps/v1", ResourceVersion:"350", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-dc6947fbf-m8gpw
I0814 09:17:29.033327 1 shared_informer.go:230] Caches are synced for resource quota
I0814 09:17:29.047302 1 shared_informer.go:230] Caches are synced for resource quota
I0814 09:17:29.184979 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6c7f0c1f-6f17-43d9-8f1f-0915edc24487", APIVersion:"apps/v1", ResourceVersion:"351", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-95nmn
I0814 09:17:29.264078 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6dbb54fd95", UID:"99089aa4-d65d-434d-a671-201f49c09071", APIVersion:"apps/v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-6dbb54fd95-vhfss

==> kube-proxy [58f95d4d33fb] <==
W0814 09:17:31.901607 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
I0814 09:17:31.918426 1 node.go:136] Successfully retrieved node IP: 192.168.0.206
I0814 09:17:31.918464 1 server_others.go:186] Using iptables Proxier.
W0814 09:17:31.918473 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0814 09:17:31.918478 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I0814 09:17:31.918750 1 server.go:583] Version: v1.18.0
I0814 09:17:31.919168 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0814 09:17:31.919398 1 config.go:315] Starting service config controller
I0814 09:17:31.919408 1 shared_informer.go:223] Waiting for caches to sync for service config
I0814 09:17:31.919426 1 config.go:133] Starting endpoints config controller
I0814 09:17:31.919443 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0814 09:17:32.029122 1 shared_informer.go:230] Caches are synced for service config
I0814 09:17:32.029308 1 shared_informer.go:230] Caches are synced for endpoints config

==> kube-scheduler [1fa88023a868] <==
I0814 09:12:13.390294 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0814 09:12:13.390362 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0814 09:12:14.492489 1 serving.go:313] Generated self-signed cert in-memory
I0814 09:12:18.513010 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0814 09:12:18.513047 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0814 09:12:18.520103 1 authorization.go:47] Authorization is disabled
W0814 09:12:18.520231 1 authentication.go:40] Authentication is disabled
I0814 09:12:18.520274 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0814 09:12:18.524962 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0814 09:12:18.525209 1
tlsconfig.go:240] Starting DynamicServingCertificateController I0814 09:12:18.525415 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0814 09:12:18.526393 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0814 09:12:18.526531 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0814 09:12:18.526919 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0814 09:12:18.626532 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0814 09:12:18.627202 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file ==> kube-scheduler [c8ef9015f80e] <== I0814 09:17:13.743494 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0814 09:17:13.743728 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0814 09:17:14.639665 1 serving.go:313] Generated self-signed cert in-memory W0814 09:17:18.314746 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0814 09:17:18.314911 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0814 09:17:18.314990 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous. 
W0814 09:17:18.315037 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0814 09:17:18.389459 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0814 09:17:18.389480 1 registry.go:150] Registering EvenPodsSpread predicate and priority function W0814 09:17:18.390758 1 authorization.go:47] Authorization is disabled W0814 09:17:18.390776 1 authentication.go:40] Authentication is disabled I0814 09:17:18.390784 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 I0814 09:17:18.404426 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 I0814 09:17:18.404667 1 tlsconfig.go:240] Starting DynamicServingCertificateController I0814 09:17:18.404696 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0814 09:17:18.407115 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file E0814 09:17:18.407717 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0814 09:17:18.407983 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0814 09:17:18.408104 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0814 09:17:18.411745 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is 
forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0814 09:17:18.415263 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0814 09:17:18.415467 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0814 09:17:18.420791 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0814 09:17:18.420966 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0814 09:17:18.421133 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0814 09:17:18.421223 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0814 09:17:18.421326 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0814 09:17:18.421485 1 
reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0814 09:17:18.421640 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0814 09:17:18.444264 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0814 09:17:18.444466 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0814 09:17:18.444678 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0814 09:17:18.444768 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0814 09:17:18.444888 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope I0814 09:17:21.007295 1 shared_informer.go:230] Caches are synced for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file ==> kubelet <== -- Logs begin at Fri 2020-08-14 09:11:36 UTC, end at Fri 2020-08-14 09:23:19 UTC. -- Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.230084 8098 topology_manager.go:233] [topologymanager] Topology Admit Handler Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.238686 8098 topology_manager.go:233] [topologymanager] Topology Admit Handler Aug 14 09:17:30 minikube kubelet[8098]: W0814 09:17:30.244933 8098 pod_container_deletor.go:77] Container "fe77b658b012b1c1b2e7d8ad099154a1fa14c81e83036bb7f9827e01c6ed8d52" not found in pod's containers Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.245503 8098 topology_manager.go:233] [topologymanager] Topology Admit Handler Aug 14 09:17:30 minikube kubelet[8098]: W0814 09:17:30.260441 8098 pod_container_deletor.go:77] Container "21969d68cfea625f32639faf5a23481f8b096f9b0b783a87846fb341721077b2" not found in pod's containers Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.260555 8098 topology_manager.go:233] [topologymanager] Topology Admit Handler Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.271437 8098 topology_manager.go:233] [topologymanager] Topology Admit Handler Aug 14 09:17:30 minikube kubelet[8098]: W0814 09:17:30.286817 8098 pod_container_deletor.go:77] Container "2c703de0cf28a3a3e72fa722a22fbab32125070d1f97e2000a96450ac89d36fb" not found in pod's containers Aug 14 09:17:30 minikube kubelet[8098]: W0814 09:17:30.286985 8098 pod_container_deletor.go:77] Container "3ceb790db75f2ca0b6952f07ed4edbd09420131570e4247de671cb9bfdef8401" not found in pod's containers Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.287116 8098 topology_manager.go:233] [topologymanager] Topology Admit Handler Aug 14 09:17:30 minikube kubelet[8098]: W0814 09:17:30.295910 8098 pod_container_deletor.go:77] Container "7b694073cb0c3c56b0216f765f3707c284ad18cc2dc08153ad0ef103550c4711" not found in pod's containers Aug 14 
09:17:30 minikube kubelet[8098]: W0814 09:17:30.296020 8098 pod_container_deletor.go:77] Container "b4df3b53a91a7c3f03bac7d642bf8eefd0010769139c26b7c2f9535a14bb292c" not found in pod's containers Aug 14 09:17:30 minikube kubelet[8098]: W0814 09:17:30.296135 8098 pod_container_deletor.go:77] Container "25e664e8b06fb07334e83aceda495a87a71fcc6629e9e69a8d92eec5bd330cb0" not found in pod's containers Aug 14 09:17:30 minikube kubelet[8098]: W0814 09:17:30.296192 8098 pod_container_deletor.go:77] Container "669f9ea3dae5e0c41e03ab79792b8e7f432da328ac7f6d084d57c1bc0404b86b" not found in pod's containers Aug 14 09:17:30 minikube kubelet[8098]: W0814 09:17:30.296253 8098 pod_container_deletor.go:77] Container "e337c5476195b5bc7c17d00e614e60bdaae3c5e0cb850798d732f942280d29e8" not found in pod's containers Aug 14 09:17:30 minikube kubelet[8098]: W0814 09:17:30.296402 8098 pod_container_deletor.go:77] Container "7a9d133781668a0c135f8d0426a1c84c725e61cf27177ac78be93f7517290f58" not found in pod's containers Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.325597 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/98502603-4bd5-48fb-995b-551dbf54c3f4-tmp-volume") pod "kubernetes-dashboard-6dbb54fd95-vhfss" (UID: "98502603-4bd5-48fb-995b-551dbf54c3f4") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.325648 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/9c6e01a66db16459ebd03a2d6c6b5e6e-etcd-certs") pod "etcd-minikube" (UID: "9c6e01a66db16459ebd03a2d6c6b5e6e") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.325687 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/9c6e01a66db16459ebd03a2d6c6b5e6e-etcd-data") pod "etcd-minikube" (UID: "9c6e01a66db16459ebd03a2d6c6b5e6e") Aug 14 09:17:30 
minikube kubelet[8098]: I0814 09:17:30.325730 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/f2549006db9945816d5ddf370853d3e0-ca-certs") pod "kube-apiserver-minikube" (UID: "f2549006db9945816d5ddf370853d3e0") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.325764 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/f2549006db9945816d5ddf370853d3e0-k8s-certs") pod "kube-apiserver-minikube" (UID: "f2549006db9945816d5ddf370853d3e0") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.325801 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/f2549006db9945816d5ddf370853d3e0-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "f2549006db9945816d5ddf370853d3e0") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.325839 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/c19d186374fe95821be9f5e93a67037f-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "c19d186374fe95821be9f5e93a67037f") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.325877 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-kwbpd" (UniqueName: "kubernetes.io/secret/e09424b6-fd7a-4adc-9794-2129b1dcd451-kubernetes-dashboard-token-kwbpd") pod "dashboard-metrics-scraper-dc6947fbf-m8gpw" (UID: "e09424b6-fd7a-4adc-9794-2129b1dcd451") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.325917 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-kwbpd" (UniqueName: 
"kubernetes.io/secret/98502603-4bd5-48fb-995b-551dbf54c3f4-kubernetes-dashboard-token-kwbpd") pod "kubernetes-dashboard-6dbb54fd95-vhfss" (UID: "98502603-4bd5-48fb-995b-551dbf54c3f4") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.325958 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/c19d186374fe95821be9f5e93a67037f-k8s-certs") pod "kube-controller-manager-minikube" (UID: "c19d186374fe95821be9f5e93a67037f") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.325993 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/c19d186374fe95821be9f5e93a67037f-kubeconfig") pod "kube-controller-manager-minikube" (UID: "c19d186374fe95821be9f5e93a67037f") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.326022 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c19d186374fe95821be9f5e93a67037f-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "c19d186374fe95821be9f5e93a67037f") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.326086 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/f1eb95ef0910a5d13f4832efbd979304-kubeconfig") pod "kube-scheduler-minikube" (UID: "f1eb95ef0910a5d13f4832efbd979304") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.326119 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/f50c4532-e97b-49dc-9eeb-271bcad0964e-kube-proxy") pod "kube-proxy-v5bg7" (UID: "f50c4532-e97b-49dc-9eeb-271bcad0964e") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.326148 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started 
for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/c19d186374fe95821be9f5e93a67037f-ca-certs") pod "kube-controller-manager-minikube" (UID: "c19d186374fe95821be9f5e93a67037f") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.326174 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/f50c4532-e97b-49dc-9eeb-271bcad0964e-lib-modules") pod "kube-proxy-v5bg7" (UID: "f50c4532-e97b-49dc-9eeb-271bcad0964e") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.326202 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-kthz4" (UniqueName: "kubernetes.io/secret/f50c4532-e97b-49dc-9eeb-271bcad0964e-kube-proxy-token-kthz4") pod "kube-proxy-v5bg7" (UID: "f50c4532-e97b-49dc-9eeb-271bcad0964e") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.326233 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6a915e26-bd7f-42aa-90f8-dc4486970ee8-config-volume") pod "coredns-66bff467f8-95nmn" (UID: "6a915e26-bd7f-42aa-90f8-dc4486970ee8") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.326272 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-wj2c2" (UniqueName: "kubernetes.io/secret/6a915e26-bd7f-42aa-90f8-dc4486970ee8-coredns-token-wj2c2") pod "coredns-66bff467f8-95nmn" (UID: "6a915e26-bd7f-42aa-90f8-dc4486970ee8") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.326306 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/e09424b6-fd7a-4adc-9794-2129b1dcd451-tmp-volume") pod "dashboard-metrics-scraper-dc6947fbf-m8gpw" (UID: "e09424b6-fd7a-4adc-9794-2129b1dcd451") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.326341 8098 reconciler.go:224] 
operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/f50c4532-e97b-49dc-9eeb-271bcad0964e-xtables-lock") pod "kube-proxy-v5bg7" (UID: "f50c4532-e97b-49dc-9eeb-271bcad0964e") Aug 14 09:17:30 minikube kubelet[8098]: I0814 09:17:30.326354 8098 reconciler.go:157] Reconciler: start to sync state Aug 14 09:17:32 minikube kubelet[8098]: W0814 09:17:32.018165 8098 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-95nmn through plugin: invalid network status for Aug 14 09:17:32 minikube kubelet[8098]: W0814 09:17:32.542208 8098 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-6dbb54fd95-vhfss through plugin: invalid network status for Aug 14 09:17:33 minikube kubelet[8098]: W0814 09:17:33.068129 8098 pod_container_deletor.go:77] Container "6b6f9d1b44502976354ea9f20a01eb5affb70b448cd822f3fe4ded6d69e23717" not found in pod's containers Aug 14 09:17:33 minikube kubelet[8098]: W0814 09:17:33.075751 8098 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-dc6947fbf-m8gpw through plugin: invalid network status for Aug 14 09:17:33 minikube kubelet[8098]: W0814 09:17:33.181099 8098 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-95nmn through plugin: invalid network status for Aug 14 09:17:33 minikube kubelet[8098]: W0814 09:17:33.222450 8098 pod_container_deletor.go:77] Container "022c95a08ddaf17aaca5802adfbe2dce368d7484cdf560b462736176c4b44ef9" not found in pod's containers Aug 14 09:17:33 minikube kubelet[8098]: W0814 09:17:33.241422 8098 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for 
kubernetes-dashboard/kubernetes-dashboard-6dbb54fd95-vhfss through plugin: invalid network status for Aug 14 09:17:33 minikube kubelet[8098]: W0814 09:17:33.254604 8098 pod_container_deletor.go:77] Container "fcc7971f0f91168b84fe5b39037236d9d490f9d7b756b953c48d4958a637f271" not found in pod's containers Aug 14 09:17:33 minikube kubelet[8098]: I0814 09:17:33.564311 8098 topology_manager.go:233] [topologymanager] Topology Admit Handler Aug 14 09:17:33 minikube kubelet[8098]: I0814 09:17:33.690449 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/56a22d59-7c25-47fd-b409-35f8879f5898-tmp") pod "storage-provisioner" (UID: "56a22d59-7c25-47fd-b409-35f8879f5898") Aug 14 09:17:33 minikube kubelet[8098]: I0814 09:17:33.690502 8098 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-x4hv6" (UniqueName: "kubernetes.io/secret/56a22d59-7c25-47fd-b409-35f8879f5898-storage-provisioner-token-x4hv6") pod "storage-provisioner" (UID: "56a22d59-7c25-47fd-b409-35f8879f5898") Aug 14 09:17:34 minikube kubelet[8098]: W0814 09:17:34.302129 8098 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-dc6947fbf-m8gpw through plugin: invalid network status for Aug 14 09:17:34 minikube kubelet[8098]: W0814 09:17:34.606505 8098 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-6dbb54fd95-vhfss through plugin: invalid network status for Aug 14 09:17:34 minikube kubelet[8098]: W0814 09:17:34.631891 8098 pod_container_deletor.go:77] Container "3bd8666d0cdd1d03ae3ed5b8d810deeef3308d2ca2ed4a927e70a5b884799a70" not found in pod's containers Aug 14 09:17:34 minikube kubelet[8098]: W0814 09:17:34.653983 8098 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't 
find network status for kube-system/coredns-66bff467f8-95nmn through plugin: invalid network status for Aug 14 09:17:35 minikube kubelet[8098]: W0814 09:17:35.714755 8098 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-dc6947fbf-m8gpw through plugin: invalid network status for Aug 14 09:18:29 minikube kubelet[8098]: I0814 09:18:29.244633 8098 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: fbd1903899fa36a2177de4a22cab3c1f420d030854f00dae9be5ddfe452cf32a Aug 14 09:18:29 minikube kubelet[8098]: I0814 09:18:29.262213 8098 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 3a0b432dd0150f3685ed1a275de9b56df67feb12934085fedc8c8ea55deeaf56 Aug 14 09:18:29 minikube kubelet[8098]: W0814 09:18:29.285087 8098 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-6dbb54fd95-lwc2m through plugin: invalid network status for Aug 14 09:18:29 minikube kubelet[8098]: I0814 09:18:29.285381 8098 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 8b3ce993f951d5c07920b3361e78ca1639d23a6caf016a312a79fd797a2c079e Aug 14 09:18:29 minikube kubelet[8098]: I0814 09:18:29.300312 8098 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a5817be2a2294c9eea9b331cdcc87b97d30c60566baa6def78c38dcca9033e77 Aug 14 09:18:29 minikube kubelet[8098]: I0814 09:18:29.315979 8098 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5c5be64189860b8ca2f6420f462bd5045813538ef733430a06a7412ecec95abf ==> kubernetes-dashboard [60b0a5acd028] <== 2020/08/14 09:17:32 Starting overwatch 2020/08/14 09:17:32 Using namespace: kubernetes-dashboard 2020/08/14 09:17:32 Using in-cluster config to connect to apiserver 2020/08/14 09:17:32 Using secret token for csrf signing 2020/08/14 09:17:32 Initializing csrf token from 
kubernetes-dashboard-csrf secret 2020/08/14 09:17:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf 2020/08/14 09:17:32 Successful initial request to the apiserver, version: v1.18.0 2020/08/14 09:17:32 Generating JWE encryption key 2020/08/14 09:17:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting 2020/08/14 09:17:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard 2020/08/14 09:17:33 Initializing JWE encryption key from synchronized object 2020/08/14 09:17:33 Creating in-cluster Sidecar client 2020/08/14 09:17:33 Serving insecurely on HTTP port: 9090 2020/08/14 09:17:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds. 2020/08/14 09:17:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds. 2020/08/14 09:18:03 Successful request to sidecar ==> storage-provisioner [e45905a28e41] <== ```
tstromberg commented 4 years ago

This issue appears to be a duplicate of #8981; do you mind if we move the conversation there?

This way we can centralize the content relating to this issue. If you feel that this issue is not in fact a duplicate, please re-open it using /reopen. If you have additional information to share, please add it to the new issue.

Thank you for reporting this!

tstromberg commented 4 years ago

If it helps, I think `minikube delete` may fix your error.
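A minimal recovery sketch, for anyone hitting the same error: deleting the cluster discards the stale VM along with its cached certificates, so the next start re-issues a certificate for the current IP. This assumes losing the old cluster's state is acceptable; the guard simply makes the script a no-op on machines without minikube.

```shell
# Tear down the stale VM and its cached certificates, then recreate the
# cluster so the new apiserver certificate is issued for the current IP.
if command -v minikube >/dev/null 2>&1; then
  minikube delete                  # discards VM, certs, and cluster state
  # minikube delete --purge        # more aggressive: also clears ~/.minikube
  minikube start --driver=hyperv   # recreate with the same driver as before
else
  echo "minikube not found; skipping"
fi
```

Note that `minikube delete --purge` also removes cached images and all profiles, so only reach for it if a plain delete does not help.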

a-dyakov-mercuryo commented 4 years ago

I tried that before; it didn't help.

a-dyakov-mercuryo commented 4 years ago

Maybe the reason is that the IP changes dynamically? With the virtual switch configured as an External network, the IP depends on an external DHCP server (router/gateway).
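One way to check this hypothesis is to compare the VM's current IP against the IP SANs baked into the apiserver certificate. This sketch assumes the default profile layout, where the serving certificate lives at `~/.minikube/profiles/minikube/apiserver.crt` (adjust the path for a non-default profile name):

```shell
# Compare the VM's current IP with the IPs the apiserver certificate covers.
CERT="$HOME/.minikube/profiles/minikube/apiserver.crt"

if command -v minikube >/dev/null 2>&1 && [ -f "$CERT" ]; then
  echo "current VM IP:"
  minikube ip
  echo "IPs the certificate is valid for:"
  openssl x509 -noout -text -in "$CERT" | grep -A1 'Subject Alternative Name'
else
  echo "minikube profile not found on this machine; nothing to check"
fi
```

If the address reported by `minikube ip` is missing from the SAN list, every kubectl call will fail with exactly the "certificate is valid for ..., not ..." error shown above, which matches a DHCP lease change (192.168.0.241 at cert-issue time vs. 192.168.0.206 now).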