kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Error starting minikube after upgrade to Fedora 34 #11542

Closed: paulopatto closed this issue 3 years ago

paulopatto commented 3 years ago

Steps to reproduce the issue:

  1. minikube start
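
If it helps with triage: the same failing start can be re-run with verbose client-side logging, and the full log bundle saved to a file. These are standard minikube/klog flags, shown as a sketch rather than commands taken from the report itself:

    # re-run the failing start, mirroring klog output to the terminal
    minikube start --alsologtostderr -v=1
    # write the complete `minikube logs` bundle to a file for attaching to the issue
    minikube logs --file=minikube.log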

Full output of minikube logs command:

* ==> Audit <==
|---------|------|---------|------|---------|------------|----------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------|---------|------|---------|------------|----------|
|---------|------|---------|------|---------|------------|----------|

* ==> Last Start <==
Log file created at: 2021/05/31 00:21:46
Running on machine: harlech-shadow-v
Binary: Built with gc go1.16.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0531 00:21:46.804160 1621073 out.go:291] Setting OutFile to fd 1 ...
I0531 00:21:46.804223 1621073 out.go:343] isatty.IsTerminal(1) = true
I0531 00:21:46.804226 1621073 out.go:304] Setting ErrFile to fd 2...
I0531 00:21:46.804229 1621073 out.go:343] isatty.IsTerminal(2) = true
I0531 00:21:46.804323 1621073 root.go:316] Updating PATH: /home/paulopatto/.minikube/bin
I0531 00:21:46.804538 1621073 out.go:298] Setting JSON to false
I0531 00:21:46.817846 1621073 start.go:108] hostinfo: {"hostname":"harlech-shadow-v","uptime":695377,"bootTime":1621735930,"procs":441,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"34","kernelVersion":"5.11.20-300.fc34.x86_64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"94779dc1-25a7-419d-9353-0925e41bb7dd"}
I0531 00:21:46.817912 1621073 start.go:118] virtualization: kvm host
I0531 00:21:46.821602 1621073 out.go:170] 😄 minikube v1.20.0 on Fedora 34
I0531 00:21:46.825695 1621073 out.go:170] 🆕 Kubernetes 1.20.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.20.2
I0531 00:21:46.825727 1621073 driver.go:322] Setting default libvirt URI to qemu:///system
I0531 00:21:46.868335 1621073 docker.go:119] docker version: linux-20.10.6
I0531 00:21:46.868410 1621073 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0531 00:21:46.951778 1621073 info.go:261] docker info: {ID:VCHD:UMYC:YBP2:YRWA:7WE4:KHKV:IZKB:HK6S:DXFZ:XXMS:HQUI:KHVS Containers:5 ContainersRunning:2 ContainersPaused:0 ContainersStopped:3 Images:59 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:45 SystemTime:2021-05-31 00:21:46.900788654 -0300 -03 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.20-300.fc34.x86_64 OperatingSystem:Fedora 34 (Workstation Edition) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:16671195136 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:harlech-shadow-v Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.]] Warnings:}}
I0531 00:21:46.951853 1621073 docker.go:225] overlay module found
I0531 00:21:46.954081 1621073 out.go:170] ✨ Using the docker driver based on existing profile
I0531 00:21:46.954100 1621073 start.go:276] selected driver: docker
I0531 00:21:46.954104 1621073 start.go:718] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0531 00:21:46.954165 1621073 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0531 00:21:46.954521 1621073 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0531 00:21:47.040414 1621073 info.go:261] docker info: {ID:VCHD:UMYC:YBP2:YRWA:7WE4:KHKV:IZKB:HK6S:DXFZ:XXMS:HQUI:KHVS Containers:5 ContainersRunning:2 ContainersPaused:0 ContainersStopped:3 Images:59 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:45 SystemTime:2021-05-31 00:21:46.985833354 -0300 -03 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.20-300.fc34.x86_64 OperatingSystem:Fedora 34 (Workstation Edition) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:16671195136 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:harlech-shadow-v Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.]] Warnings:}}
I0531 00:21:47.053577 1621073 cni.go:93] Creating CNI manager for ""
I0531 00:21:47.053590 1621073 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0531 00:21:47.053600 1621073 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0531 00:21:47.053604 1621073 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0531 00:21:47.053608 1621073 start_flags.go:273] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0531 00:21:47.055872 1621073 out.go:170] 👍 Starting control plane node minikube in cluster minikube
I0531 00:21:47.055909 1621073 cache.go:111] Beginning downloading kic base image for docker with containerd
W0531 00:21:47.055918 1621073 out.go:424] no arguments passed for "🚜 Pulling base image ...\n" - returning raw string
W0531 00:21:47.055942 1621073 out.go:424] no arguments passed for "🚜 Pulling base image ...\n" - returning raw string
I0531 00:21:47.057899 1621073 out.go:170] 🚜 Pulling base image ...
I0531 00:21:47.057958 1621073 preload.go:98] Checking if preload exists for k8s version v1.18.0 and runtime containerd
I0531 00:21:47.058092 1621073 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory
I0531 00:21:47.058116 1621073 image.go:119] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull
I0531 00:21:47.058123 1621073 cache.go:131] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull
I0531 00:21:47.058164 1621073 image.go:130] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon
I0531 00:21:47.124325 1621073 image.go:134] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon, skipping pull
I0531 00:21:47.124335 1621073 cache.go:155] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in daemon, skipping pull
W0531 00:21:47.573660 1621073 preload.go:119] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.18.0-containerd-overlay2-amd64.tar.lz4 status code: 404
I0531 00:21:47.574125 1621073 profile.go:148] Saving config to /home/paulopatto/.minikube/profiles/minikube/config.json ...
I0531 00:21:47.574470 1621073 cache.go:108] acquiring lock: {Name:mkfac591e1b57745d384d9ca146f580f47acbd7b Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0531 00:21:47.574468 1621073 cache.go:108] acquiring lock: {Name:mk5af3727879960e9ad022f91626739e861258ed Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0531 00:21:47.574678 1621073 cache.go:108] acquiring lock: {Name:mkbfa3c0e1756a1613436984e0168bc068a811fa Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0531 00:21:47.574821 1621073 cache.go:108] acquiring lock: {Name:mk7a01445eb3957031991ac3d5f25c765479bfd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0531 00:21:47.574928 1621073 cache.go:108] acquiring lock: {Name:mkea9b39615dd71da338912b667602fd15311e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0531 00:21:47.575142 1621073 cache.go:108] acquiring lock: {Name:mk52cf373271438224379f72c42380f0d1ad66d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0531 00:21:47.575209 1621073 cache.go:194] Successfully downloaded all kic artifacts
I0531 00:21:47.575295 1621073 start.go:313] acquiring machines lock for minikube: {Name:mkb403e76fc9671991eef4ee4fd95f1800353582 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0531 00:21:47.575273 1621073 cache.go:108] acquiring lock: {Name:mk17cb8a279129c0adb30184aa4a0fc75752d53b Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0531 00:21:47.575134 1621073 cache.go:108] acquiring lock: {Name:mkfdb03d2819be02680ac5bf174a831499b623b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0531 00:21:47.575414 1621073 cache.go:116] /home/paulopatto/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 exists
I0531 00:21:47.575329 1621073 cache.go:108] acquiring lock: {Name:mka9f2dae9bb798597ee7005d18bde35506e5183 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0531 00:21:47.575420 1621073 cache.go:116] /home/paulopatto/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0531 00:21:47.575484 1621073 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.7" -> "/home/paulopatto/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7" took 1.035499ms
I0531 00:21:47.575491 1621073 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/paulopatto/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 355.326µs
I0531 00:21:47.575517 1621073 cache.go:116] /home/paulopatto/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
I0531 00:21:47.575519 1621073 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.7 -> /home/paulopatto/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 succeeded
I0531 00:21:47.575528 1621073 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/paulopatto/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0531 00:21:47.575526 1621073 start.go:317] acquired machines lock for "minikube" in 190.578µs
I0531 00:21:47.575576 1621073 start.go:93] Skipping create...Using existing machine configuration
I0531 00:21:47.575574 1621073 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/paulopatto/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 317.611µs
I0531 00:21:47.575601 1621073 fix.go:55] fixHost starting: m01
I0531 00:21:47.575606 1621073 cache.go:116] /home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 exists
I0531 00:21:47.575606 1621073 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/paulopatto/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
I0531 00:21:47.575585 1621073 cache.go:108] acquiring lock: {Name:mk76ea7e7a3dcccba4dd42af0e9ed4aa4e152ee9 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0531 00:21:47.575646 1621073 cache.go:116] /home/paulopatto/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
I0531 00:21:47.575665 1621073 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "/home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0" took 1.1025ms
I0531 00:21:47.575710 1621073 cache.go:116] /home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 exists
I0531 00:21:47.575728 1621073 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> /home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 succeeded
I0531 00:21:47.575753 1621073 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/paulopatto/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 434.961µs
I0531 00:21:47.575772 1621073 cache.go:116] /home/paulopatto/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
I0531 00:21:47.575789 1621073 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/paulopatto/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
I0531 00:21:47.575774 1621073 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "/home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0" took 728.164µs
I0531 00:21:47.575811 1621073 cache.go:116] /home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 exists
I0531 00:21:47.575822 1621073 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> /home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 succeeded
I0531 00:21:47.575826 1621073 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/paulopatto/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 1.384278ms
I0531 00:21:47.575861 1621073 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/paulopatto/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
I0531 00:21:47.575863 1621073 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "/home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0" took 952.663µs
I0531 00:21:47.575883 1621073 cache.go:116] /home/paulopatto/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
I0531 00:21:47.575891 1621073 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> /home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 succeeded
I0531 00:21:47.575929 1621073 cache.go:116] /home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 exists
I0531 00:21:47.575937 1621073 cache.go:97] cache image "k8s.gcr.io/pause:3.2" -> "/home/paulopatto/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 363.862µs
I0531 00:21:47.575966 1621073 cache.go:81] save to tar file k8s.gcr.io/pause:3.2 -> /home/paulopatto/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
I0531 00:21:47.575974 1621073 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "/home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0" took 1.227158ms
I0531 00:21:47.576000 1621073 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> /home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 succeeded
I0531 00:21:47.576025 1621073 cache.go:88] Successfully saved all images to host disk.
I0531 00:21:47.576601 1621073 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0531 00:21:47.627777 1621073 fix.go:108] recreateIfNeeded on minikube: state=Running err=<nil>
W0531 00:21:47.627789 1621073 fix.go:134] unexpected machine state, will restart: <nil>
I0531 00:21:47.630213 1621073 out.go:170] 🏃 Updating the running docker "minikube" container ...
I0531 00:21:47.630240 1621073 machine.go:88] provisioning docker machine ...
I0531 00:21:47.630256 1621073 ubuntu.go:169] provisioning hostname "minikube"
I0531 00:21:47.630313 1621073 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0531 00:21:47.666592 1621073 main.go:128] libmachine: Using SSH client type: native
I0531 00:21:47.666816 1621073 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802720] 0x8026e0 <nil> [] 0s} 127.0.0.1 49155 <nil> <nil>}
I0531 00:21:47.666824 1621073 main.go:128] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0531 00:21:47.810588 1621073 main.go:128] libmachine: SSH cmd err, output: <nil>: minikube
I0531 00:21:47.810786 1621073 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0531 00:21:47.870191 1621073 main.go:128] libmachine: Using SSH client type: native
I0531 00:21:47.870364 1621073 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802720] 0x8026e0 <nil> [] 0s} 127.0.0.1 49155 <nil> <nil>}
I0531 00:21:47.870379 1621073 main.go:128] libmachine: About to run SSH command:
if ! grep -xq '.*\sminikube' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
  else
    echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
  fi
fi
I0531 00:21:48.006125 1621073 main.go:128] libmachine: SSH cmd err, output: <nil>:
I0531 00:21:48.006135 1621073 ubuntu.go:175] set auth options {CertDir:/home/paulopatto/.minikube CaCertPath:/home/paulopatto/.minikube/certs/ca.pem CaPrivateKeyPath:/home/paulopatto/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/paulopatto/.minikube/machines/server.pem ServerKeyPath:/home/paulopatto/.minikube/machines/server-key.pem ClientKeyPath:/home/paulopatto/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/paulopatto/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/paulopatto/.minikube}
I0531 00:21:48.006148 1621073 ubuntu.go:177] setting up certificates
I0531 00:21:48.006153 1621073 provision.go:83] configureAuth start
I0531 00:21:48.006221 1621073 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0531 00:21:48.041769 1621073 provision.go:137] copyHostCerts
I0531 00:21:48.041806 1621073 exec_runner.go:145] found /home/paulopatto/.minikube/ca.pem, removing ...
I0531 00:21:48.041810 1621073 exec_runner.go:190] rm: /home/paulopatto/.minikube/ca.pem
I0531 00:21:48.041859 1621073 exec_runner.go:152] cp: /home/paulopatto/.minikube/certs/ca.pem --> /home/paulopatto/.minikube/ca.pem (1046 bytes)
I0531 00:21:48.041940 1621073 exec_runner.go:145] found /home/paulopatto/.minikube/cert.pem, removing ...
I0531 00:21:48.041943 1621073 exec_runner.go:190] rm: /home/paulopatto/.minikube/cert.pem
I0531 00:21:48.041971 1621073 exec_runner.go:152] cp: /home/paulopatto/.minikube/certs/cert.pem --> /home/paulopatto/.minikube/cert.pem (1086 bytes)
I0531 00:21:48.042024 1621073 exec_runner.go:145] found /home/paulopatto/.minikube/key.pem, removing ...
I0531 00:21:48.042027 1621073 exec_runner.go:190] rm: /home/paulopatto/.minikube/key.pem
I0531 00:21:48.042052 1621073 exec_runner.go:152] cp: /home/paulopatto/.minikube/certs/key.pem --> /home/paulopatto/.minikube/key.pem (1679 bytes)
I0531 00:21:48.042088 1621073 provision.go:111] generating server cert: /home/paulopatto/.minikube/machines/server.pem ca-key=/home/paulopatto/.minikube/certs/ca.pem private-key=/home/paulopatto/.minikube/certs/ca-key.pem org=paulopatto.minikube san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0531 00:21:48.114940 1621073 provision.go:165] copyRemoteCerts
I0531 00:21:48.114984 1621073 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0531 00:21:48.115014 1621073 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0531 00:21:48.148248 1621073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49155 SSHKeyPath:/home/paulopatto/.minikube/machines/minikube/id_rsa Username:docker}
I0531 00:21:48.248059 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/machines/server.pem --> /etc/docker/server.pem (1164 bytes)
I0531 00:21:48.306863 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0531 00:21:48.329422 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1046 bytes)
I0531 00:21:48.349819 1621073 provision.go:86] duration metric: configureAuth took 343.658825ms
I0531 00:21:48.349833 1621073 ubuntu.go:193] setting minikube options for container-runtime
I0531 00:21:48.350002 1621073 machine.go:91] provisioned docker machine in 719.756474ms
I0531 00:21:48.350008 1621073 start.go:267] post-start starting for "minikube" (driver="docker")
I0531 00:21:48.350012 1621073 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0531 00:21:48.350069 1621073 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0531 00:21:48.350108 1621073 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0531 00:21:48.388085 1621073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49155 SSHKeyPath:/home/paulopatto/.minikube/machines/minikube/id_rsa Username:docker}
I0531 00:21:48.489774 1621073 ssh_runner.go:149] Run: cat /etc/os-release
I0531 00:21:48.499070 1621073 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0531 00:21:48.499113 1621073 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0531 00:21:48.499139 1621073 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0531 00:21:48.499151 1621073 info.go:137] Remote host: Ubuntu 19.10
I0531 00:21:48.499170 1621073 filesync.go:118] Scanning /home/paulopatto/.minikube/addons for local assets ...
I0531 00:21:48.499299 1621073 filesync.go:118] Scanning /home/paulopatto/.minikube/files for local assets ...
I0531 00:21:48.499359 1621073 start.go:270] post-start completed in 149.341766ms
I0531 00:21:48.499469 1621073 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0531 00:21:48.499583 1621073 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0531 00:21:48.547351 1621073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49155 SSHKeyPath:/home/paulopatto/.minikube/machines/minikube/id_rsa Username:docker}
I0531 00:21:48.637208 1621073 fix.go:57] fixHost completed within 1.061598814s
I0531 00:21:48.637236 1621073 start.go:80] releasing machines lock for "minikube", held for 1.061691308s
I0531 00:21:48.637450 1621073 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0531 00:21:48.697810 1621073 ssh_runner.go:149] Run: systemctl --version
I0531 00:21:48.697846 1621073 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0531 00:21:48.697872 1621073 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0531 00:21:48.697916 1621073 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0531 00:21:48.740298 1621073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49155 SSHKeyPath:/home/paulopatto/.minikube/machines/minikube/id_rsa Username:docker}
I0531 00:21:48.742534 1621073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49155 SSHKeyPath:/home/paulopatto/.minikube/machines/minikube/id_rsa Username:docker}
I0531 00:21:53.845756 1621073 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (5.14780375s)
W0531 00:21:53.845823 1621073 start.go:637] [curl -sS -m 2 https://k8s.gcr.io/] failed: curl -sS -m 2 https://k8s.gcr.io/: Process exited with status 28
stdout:
stderr:
curl: (28) Resolving timed out after 2000 milliseconds
I0531 00:21:53.845865 1621073 ssh_runner.go:189] Completed: systemctl --version: (5.148010797s)
I0531 00:21:53.846090 1621073 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
W0531 00:21:53.846147 1621073 out.go:235] ❗ This container is having trouble accessing https://k8s.gcr.io
W0531 00:21:53.846202 1621073 out.go:424] no arguments passed for "💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/\n" - returning raw string
W0531 00:21:53.846233 1621073 out.go:235] 💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
I0531 00:21:53.879239 1621073 ssh_runner.go:149] Run: sudo systemctl stop -f crio
I0531 00:21:53.924811 1621073 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0531 00:21:53.937537 1621073 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
I0531 00:21:53.947613 1621073 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock image-endpoint: unix:///run/containerd/containerd.sock " | sudo tee /etc/crictl.yaml"
I0531 00:21:53.960944 1621073 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3NpemUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgbm9fcGl2b3QgPSB0cnVlCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW50aW1lLnYxLmxpbnV4IgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5tayIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLmxpbnV4XQogICAgc2hpbSA9ICJjb250YWluZXJkLXNoaW0iCiAgICBydW50aW1lID0gInJ1bmMiCiAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgbm9fc2hpbSA9IGZhbHNlCiAgICBzaGltX2RlYnVnID0gZmFsc2UKICBbcGx1Z2lucy5zY2hlZHVsZXJdCiAgICBwYXVzZV90aHJlc2hvbGQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNjaGVkdWxlX2RlbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
I0531 00:21:53.975947 1621073 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0531 00:21:53.982621 1621073 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0531 00:21:53.989294 1621073 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0531 00:21:54.040922 1621073 ssh_runner.go:149] Run: sudo systemctl restart containerd
I0531 00:21:54.053643 1621073 start.go:368] Will wait 60s for socket path /run/containerd/containerd.sock
I0531 00:21:54.053716 1621073 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
I0531 00:21:54.056693 1621073 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
I0531 00:21:55.161978 1621073 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
I0531 00:21:55.173537 1621073 start.go:393] Will wait 60s for crictl version
I0531 00:21:55.173747 1621073 ssh_runner.go:149] Run: sudo crictl version
I0531 00:21:55.239947 1621073 start.go:402] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.3.2
RuntimeApiVersion: v1alpha2
I0531 00:21:55.240095 1621073 ssh_runner.go:149] Run: containerd --version
I0531 00:21:55.293963 1621073 out.go:170] 📦 Preparing Kubernetes v1.18.0 on containerd 1.3.2 ...
I0531 00:21:55.294096 1621073 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0531 00:21:55.345559 1621073 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0531 00:21:55.345611 1621073 network_create.go:249] running [docker network inspect minikube] to gather additional debugging logs...
I0531 00:21:55.345622 1621073 cli_runner.go:115] Run: docker network inspect minikube
W0531 00:21:55.381062 1621073 cli_runner.go:162] docker network inspect minikube returned with exit code 1
I0531 00:21:55.381084 1621073 network_create.go:252] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]
stderr:
Error: No such network: minikube
I0531 00:21:55.381092 1621073 network_create.go:254] output of [docker network inspect minikube]:
-- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: minikube
** /stderr **
I0531 00:21:55.381098 1621073 network.go:41] The container minikube is not attached to a network, this could be because the cluster was created by minikube
/var/lib/minikube/images/kube-proxy_v1.18.0 (48857088 bytes)
I0531 00:22:03.021874 1621073 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-scheduler_v1.18.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.18.0: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.18.0': No such file or directory
I0531 00:22:03.021885 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 --> /var/lib/minikube/images/kube-scheduler_v1.18.0 (34077696 bytes)
I0531 00:22:03.021921 1621073 ssh_runner.go:306] existence check for /var/lib/minikube/images/coredns_1.6.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_1.6.7: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/coredns_1.6.7': No such file or directory
I0531 00:22:03.021937 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 --> /var/lib/minikube/images/coredns_1.6.7 (13600256 bytes)
I0531 00:22:03.021944 1621073 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-controller-manager_v1.18.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.18.0: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.18.0': No such file or directory
I0531 00:22:03.021956 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 --> /var/lib/minikube/images/kube-controller-manager_v1.18.0 (49124864 bytes)
I0531 00:22:03.021977 1621073 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-apiserver_v1.18.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.18.0: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.18.0': No such file or directory
I0531 00:22:03.021988 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 --> /var/lib/minikube/images/kube-apiserver_v1.18.0 (51090432 bytes)
I0531 00:22:03.022035 1621073 ssh_runner.go:306] existence check for /var/lib/minikube/images/pause_3.2: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.2: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/pause_3.2': No such file or directory
I0531 00:22:03.022051 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/cache/images/k8s.gcr.io/pause_3.2 --> /var/lib/minikube/images/pause_3.2 (301056 bytes)
I0531 00:22:03.022080 1621073 ssh_runner.go:306] existence check for /var/lib/minikube/images/etcd_3.4.3-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.4.3-0: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/etcd_3.4.3-0': No such file or directory
I0531 00:22:03.022088 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes)
I0531 00:22:03.127207 1621073 containerd.go:267] Loading image: /var/lib/minikube/images/pause_3.2
I0531 00:22:03.127256 1621073 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.2
I0531 00:22:03.322679 1621073 cache_images.go:293] Transferred and loaded /home/paulopatto/.minikube/cache/images/k8s.gcr.io/pause_3.2 from cache
I0531 00:22:03.322708 1621073 containerd.go:267] Loading image: /var/lib/minikube/images/coredns_1.6.7
I0531 00:22:03.322760 1621073 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_1.6.7
I0531 00:22:04.816752 1621073 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_1.6.7: (1.493976813s)
I0531 00:22:04.816766 1621073 cache_images.go:293] Transferred and loaded /home/paulopatto/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 from cache
I0531 00:22:04.816772 1621073 containerd.go:267] Loading image: /var/lib/minikube/images/kube-scheduler_v1.18.0
I0531 00:22:04.816833 1621073 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.18.0
I0531 00:22:07.295916 1621073 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.18.0: (2.479069346s)
I0531 00:22:07.295926 1621073 cache_images.go:293] Transferred and loaded /home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 from cache
I0531 00:22:07.295935 1621073 containerd.go:267] Loading image: /var/lib/minikube/images/kube-proxy_v1.18.0
I0531 00:22:07.295984 1621073 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.18.0
I0531 00:22:09.334597 1621073 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.18.0: (2.038600897s)
I0531 00:22:09.334606 1621073 cache_images.go:293] Transferred and loaded /home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 from cache
I0531 00:22:09.334614 1621073 containerd.go:267] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.18.0
I0531 00:22:09.334654 1621073 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.18.0
I0531 00:22:11.135305 1621073 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.18.0: (1.800596224s)
I0531 00:22:11.135336 1621073 cache_images.go:293] Transferred and loaded /home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 from cache
I0531 00:22:11.135362 1621073 containerd.go:267] Loading image: /var/lib/minikube/images/kube-apiserver_v1.18.0
I0531 00:22:11.135477 1621073 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.18.0
I0531 00:22:13.115787 1621073 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.18.0: (1.980288478s)
I0531 00:22:13.115796 1621073 cache_images.go:293] Transferred and loaded /home/paulopatto/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 from cache
I0531 00:22:13.115812 1621073 containerd.go:267] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
I0531 00:22:13.115858 1621073 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.4.3-0
I0531 00:22:18.042200 1621073 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.4.3-0: (4.926305453s)
I0531 00:22:18.042229 1621073 cache_images.go:293] Transferred and loaded /home/paulopatto/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 from cache
I0531 00:22:18.042253 1621073 cache_images.go:113] Successfully loaded all cached images
I0531 00:22:18.042263 1621073 cache_images.go:82] LoadImages completed in 22.264000671s
I0531 00:22:18.042363 1621073 ssh_runner.go:149] Run: sudo crictl info
I0531 00:22:18.135766 1621073 cni.go:93] Creating CNI manager for ""
I0531 00:22:18.135789 1621073 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0531 00:22:18.135807 1621073 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0531 00:22:18.135832 1621073 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.18.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0531 00:22:18.136047 1621073 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
I0531 00:22:18.136209 1621073 kubeadm.go:901] kubelet
[Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=minikube --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 --runtime-request-timeout=15m
[Install]
config: {KubernetesVersion:v1.18.0 ClusterName:minikube Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0531 00:22:18.136335 1621073 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.18.0
I0531 00:22:18.156138 1621073 binaries.go:44] Found k8s binaries, skipping transfer
I0531 00:22:18.156276 1621073 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0531 00:22:18.176643 1621073 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
I0531 00:22:18.214215 1621073 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0531 00:22:18.250266 1621073 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1841 bytes)
I0531 00:22:18.286766 1621073 ssh_runner.go:149] Run: grep 172.17.0.2 control-plane.minikube.internal$ /etc/hosts
I0531 00:22:18.295055 1621073 certs.go:52] Setting up /home/paulopatto/.minikube/profiles/minikube for IP: 172.17.0.2
I0531 00:22:18.295149 1621073 certs.go:171] skipping minikubeCA CA generation: /home/paulopatto/.minikube/ca.key
I0531 00:22:18.295190 1621073 certs.go:171] skipping proxyClientCA CA generation: /home/paulopatto/.minikube/proxy-client-ca.key
I0531 00:22:18.295319 1621073 certs.go:282] skipping minikube-user signed cert generation: /home/paulopatto/.minikube/profiles/minikube/client.key
I0531 00:22:18.295368 1621073 certs.go:282] skipping minikube signed cert generation: /home/paulopatto/.minikube/profiles/minikube/apiserver.key.7b749c5f
I0531 00:22:18.295415 1621073 certs.go:282] skipping aggregator signed cert generation: /home/paulopatto/.minikube/profiles/minikube/proxy-client.key
I0531 00:22:18.295711 1621073 certs.go:361] found cert: /home/paulopatto/.minikube/certs/home/paulopatto/.minikube/certs/ca-key.pem (1679 bytes)
I0531 00:22:18.295814 1621073 certs.go:361] found cert: /home/paulopatto/.minikube/certs/home/paulopatto/.minikube/certs/ca.pem (1046 bytes)
I0531 00:22:18.295905 1621073 certs.go:361] found cert: /home/paulopatto/.minikube/certs/home/paulopatto/.minikube/certs/cert.pem (1086 bytes)
I0531 00:22:18.295990 1621073 certs.go:361] found cert: /home/paulopatto/.minikube/certs/home/paulopatto/.minikube/certs/key.pem (1679 bytes)
I0531 00:22:18.298718 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0531 00:22:18.349048 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0531 00:22:18.399031 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0531 00:22:18.449532 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0531 00:22:18.493282 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0531 00:22:18.512419 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0531 00:22:18.529889 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0531 00:22:18.546472 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0531 00:22:18.563330 1621073 ssh_runner.go:316] scp /home/paulopatto/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0531 00:22:18.581135 1621073 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
I0531 00:22:18.593861 1621073 ssh_runner.go:149] Run: openssl version
I0531 00:22:18.598460 1621073 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0531 00:22:18.605817 1621073 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0531 00:22:18.609041 1621073 certs.go:402] hashing: -rw-r--r-- 1 root root 1066 May 3 2020 /usr/share/ca-certificates/minikubeCA.pem
I0531 00:22:18.609083 1621073 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0531 00:22:18.613557 1621073 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0531 00:22:18.620278 1621073 kubeadm.go:381] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0531 00:22:18.620341 1621073 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0531 00:22:18.620387 1621073 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0531 00:22:18.635241 1621073 cri.go:76] found id: ""
I0531 00:22:18.635300 1621073 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0531 00:22:18.642188 1621073 kubeadm.go:392] found existing configuration files, will attempt cluster restart
I0531 00:22:18.642196 1621073 kubeadm.go:591] restartCluster start
I0531 00:22:18.642237 1621073 ssh_runner.go:149] Run: sudo test -d /data/minikube
I0531 00:22:18.648726 1621073 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0531 00:22:18.649510 1621073 kubeconfig.go:93] found "minikube" server: "https://172.17.0.2:8443"
I0531 00:22:18.653036 1621073 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0531 00:22:18.661356 1621073 kubeadm.go:559] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml	2021-05-31 03:07:32.317957641 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new	2021-05-31 03:22:18.283345965 +0000
@@ -11,7 +11,7 @@
       - signing
       - authentication
 nodeRegistration:
-  criSocket: /var/run/dockershim.sock
+  criSocket: /run/containerd/containerd.sock
   name: "minikube"
   kubeletExtraArgs:
     node-ip: 172.17.0.2
-- /stdout --
I0531 00:22:18.661370 1621073 kubeadm.go:1024] stopping kube-system containers ...
I0531 00:22:18.661377 1621073 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0531 00:22:18.661419 1621073 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0531 00:22:18.673852 1621073 cri.go:76] found id: ""
I0531 00:22:18.673902 1621073 ssh_runner.go:149] Run: sudo systemctl stop kubelet
I0531 00:22:18.684869 1621073 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0531 00:22:18.691972 1621073 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0531 00:22:18.692014 1621073 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0531 00:22:18.698986 1621073 kubeadm.go:667] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0531 00:22:18.698994 1621073 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
E0531 00:22:18.973282 1621073 kubeadm.go:671] sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml failed - will try once more: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
stderr:
W0531 03:22:18.738839 17002 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
error execution phase certs/apiserver-kubelet-client: failed to write or validate certificate "apiserver-kubelet-client": failure loading apiserver-kubelet-client certificate: failed to load certificate: the certificate has expired
To see the stack trace of this error execute with --v=5 or higher
I0531 00:22:18.973305 1621073 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0531 00:22:19.114542 1621073 kubeadm.go:595] restartCluster took 472.341824ms
W0531 00:22:19.114625 1621073 out.go:235] 🤦 Unable to restart cluster, will reset it: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
stderr:
W0531 03:22:19.014231 17014 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
error execution phase certs/apiserver-kubelet-client: failed to write or validate certificate "apiserver-kubelet-client": failure loading apiserver-kubelet-client certificate: failed to load certificate: the certificate has expired
To see the stack trace of this error execute with --v=5 or higher
I0531 00:22:19.114659 1621073 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0531 00:22:19.322001 1621073 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
I0531 00:22:19.352757 1621073 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0531 00:22:19.352912 1621073 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0531 00:22:19.379775 1621073 cri.go:76] found id: ""
I0531 00:22:19.379871 1621073 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0531 00:22:19.409058 1621073 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0531 00:22:19.409141 1621073 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0531 00:22:19.423960 1621073 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0531 00:22:19.424000 1621073 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
W0531 00:22:20.041352 1621073 out.go:424] no arguments passed for " ▪ Generating certificates and keys ..." - returning raw string
W0531 00:22:20.041374 1621073 out.go:424] no arguments passed for " ▪ Generating certificates and keys ..." - returning raw string
I0531 00:22:20.069270 1621073 out.go:197] ▪ Generating certificates and keys ...
W0531 00:22:20.114614 1621073 out.go:235] 💢 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
stderr:
W0531 03:22:19.571928 17064 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase certs/apiserver-kubelet-client: failed to write or validate certificate "apiserver-kubelet-client": failure loading apiserver-kubelet-client certificate: failed to load certificate: the certificate has expired
To see the stack trace of this error execute with --v=5 or higher
I0531 00:22:20.114671 1621073 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0531 00:22:20.165863 1621073 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
I0531 00:22:20.176581 1621073 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0531 00:22:20.176631 1621073 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0531 00:22:20.189366 1621073 cri.go:76] found id: ""
I0531 00:22:20.189393 1621073 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0531 00:22:20.189447 1621073 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0531 00:22:20.196091 1621073 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access 
'/etc/kubernetes/scheduler.conf': No such file or directory I0531 00:22:20.196112 1621073 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" W0531 00:22:20.670185 1621073 out.go:424] no arguments passed for " โ–ช Generating certificates and keys ..." - returning raw string W0531 00:22:20.670196 1621073 out.go:424] no arguments passed for " โ–ช Generating certificates and keys ..." - returning raw string I0531 00:22:20.672487 1621073 out.go:197] โ–ช Generating certificates and keys ... I0531 00:22:20.672635 1621073 kubeadm.go:383] StartCluster complete in 2.052361472s I0531 00:22:20.672649 1621073 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]} I0531 00:22:20.672699 1621073 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver I0531 00:22:20.682799 1621073 cri.go:76] found id: "" I0531 00:22:20.682807 1621073 logs.go:270] 0 containers: [] W0531 00:22:20.682811 1621073 logs.go:272] No container was found matching "kube-apiserver" I0531 00:22:20.682814 1621073 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]} I0531 00:22:20.682855 1621073 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd I0531 00:22:20.693709 1621073 cri.go:76] found id: "" I0531 00:22:20.693717 1621073 logs.go:270] 0 containers: [] W0531 00:22:20.693721 1621073 logs.go:272] No container was found matching "etcd" I0531 00:22:20.693725 1621073 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]} I0531 00:22:20.693781 1621073 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns I0531 00:22:20.712154 1621073 cri.go:76] found id: "" I0531 00:22:20.712163 1621073 logs.go:270] 0 containers: [] W0531 00:22:20.712167 1621073 logs.go:272] No container was found matching "coredns" I0531 00:22:20.712171 1621073 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]} I0531 00:22:20.712237 1621073 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler I0531 00:22:20.724281 1621073 cri.go:76] found id: "" I0531 00:22:20.724290 1621073 logs.go:270] 0 containers: [] W0531 00:22:20.724295 1621073 logs.go:272] No container was found matching "kube-scheduler" I0531 00:22:20.724299 1621073 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]} I0531 00:22:20.724345 1621073 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy I0531 00:22:20.736344 1621073 cri.go:76] found id: "" I0531 00:22:20.736353 1621073 logs.go:270] 0 containers: [] W0531 00:22:20.736358 1621073 logs.go:272] No container was found matching "kube-proxy" I0531 00:22:20.736364 1621073 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]} I0531 00:22:20.736415 1621073 ssh_runner.go:149] Run: 
sudo crictl ps -a --quiet --name=kubernetes-dashboard I0531 00:22:20.748194 1621073 cri.go:76] found id: "" I0531 00:22:20.748203 1621073 logs.go:270] 0 containers: [] W0531 00:22:20.748208 1621073 logs.go:272] No container was found matching "kubernetes-dashboard" I0531 00:22:20.748211 1621073 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]} I0531 00:22:20.748266 1621073 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner I0531 00:22:20.761383 1621073 cri.go:76] found id: "" I0531 00:22:20.761392 1621073 logs.go:270] 0 containers: [] W0531 00:22:20.761396 1621073 logs.go:272] No container was found matching "storage-provisioner" I0531 00:22:20.761400 1621073 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]} I0531 00:22:20.761452 1621073 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager I0531 00:22:20.773083 1621073 cri.go:76] found id: "" I0531 00:22:20.773092 1621073 logs.go:270] 0 containers: [] W0531 00:22:20.773098 1621073 logs.go:272] No container was found matching "kube-controller-manager" I0531 00:22:20.773107 1621073 logs.go:123] Gathering logs for container status ... I0531 00:22:20.773116 1621073 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0531 00:22:20.787212 1621073 logs.go:123] Gathering logs for kubelet ... I0531 00:22:20.787223 1621073 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0531 00:22:20.838498 1621073 logs.go:123] Gathering logs for dmesg ... I0531 00:22:20.838509 1621073 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0531 00:22:20.875886 1621073 logs.go:123] Gathering logs for describe nodes ... I0531 00:22:20.875895 1621073 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" W0531 00:22:20.979384 1621073 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1 stdout: stderr: The connection to the server localhost:8443 was refused - did you specify the right host or port? output: ** stderr ** The connection to the server localhost:8443 was refused - did you specify the right host or port? ** /stderr ** I0531 00:22:20.979391 1621073 logs.go:123] Gathering logs for containerd ... 
I0531 00:22:20.979398 1621073 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400" W0531 00:22:21.004282 1621073 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.0 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk stderr: W0531 03:22:20.235467 17224 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING Swap]: running with swap on is not supported. Please disable swap [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase certs/apiserver-kubelet-client: failed to write or validate certificate "apiserver-kubelet-client": failure loading apiserver-kubelet-client certificate: failed to load certificate: the certificate has expired To see the stack trace of this error execute with --v=5 or higher W0531 00:22:21.004292 1621073 out.go:424] no arguments passed for "\n" - returning raw string W0531 00:22:21.004305 1621073 out.go:235] W0531 00:22:21.004450 1621073 out.go:235] ๐Ÿ’ฃ Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.0 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" 
[kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk stderr: W0531 03:22:20.235467 17224 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING Swap]: running with swap on is not supported. Please disable swap [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase certs/apiserver-kubelet-client: failed to write or validate certificate "apiserver-kubelet-client": failure loading apiserver-kubelet-client certificate: failed to load certificate: the certificate has expired To see the stack trace of this error execute with --v=5 or higher W0531 00:22:21.004473 1621073 out.go:424] no arguments passed for "\n" - returning raw string W0531 00:22:21.004477 1621073 out.go:235] W0531 00:22:21.004487 1621073 out.go:424] no arguments passed for "๐Ÿ˜ฟ If the above advice does not help, please let us know:\n" - returning raw string W0531 00:22:21.004492 1621073 out.go:424] no arguments passed for "๐Ÿ‘‰ https://github.com/kubernetes/minikube/issues/new/choose\n\n" - returning raw string W0531 00:22:21.004494 1621073 out.go:424] no arguments passed for "Please attach the following file to the GitHub issue:\n" - returning raw string W0531 00:22:21.004539 1621073 out.go:424] no arguments passed for "๐Ÿ˜ฟ If the above advice does not help, please let us know:\n๐Ÿ‘‰ https://github.com/kubernetes/minikube/issues/new/choose\n\nPlease attach the following file to the GitHub issue:\n- /home/paulopatto/.minikube/logs/lastStart.txt\n\n" - returning raw string W0531 00:22:21.005396 1621073 out.go:235] โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ W0531 00:22:21.005415 1621073 out.go:235] โ”‚ โ”‚ W0531 00:22:21.005428 1621073 out.go:235] โ”‚ ๐Ÿ˜ฟ If the above advice does not help, please let us know: โ”‚ W0531 00:22:21.005434 1621073 out.go:235] โ”‚ ๐Ÿ‘‰ https://github.com/kubernetes/minikube/issues/new/choose โ”‚ W0531 00:22:21.005438 1621073 out.go:235] โ”‚ โ”‚ W0531 00:22:21.005444 1621073 out.go:235] โ”‚ Please attach the following file to the GitHub issue: โ”‚ W0531 00:22:21.005450 1621073 out.go:235] โ”‚ - /home/paulopatto/.minikube/logs/lastStart.txt โ”‚ W0531 00:22:21.005456 1621073 out.go:235] โ”‚ โ”‚ W0531 00:22:21.005463 1621073 out.go:235] โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ W0531 00:22:21.005478 1621073 out.go:235] I0531 00:22:21.011738 1621073 out.go:170] W0531 00:22:21.011941 1621073 out.go:235] โŒ Exiting due to GUEST_START: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml 
--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.18.0 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk stderr: W0531 03:22:20.235467 17224 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [WARNING Swap]: running with swap on is not supported. Please disable swap [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase certs/apiserver-kubelet-client: failed to write or validate certificate "apiserver-kubelet-client": failure loading apiserver-kubelet-client certificate: failed to load certificate: the certificate has expired To see the stack trace of this error execute with --v=5 or higher W0531 00:22:21.011982 1621073 out.go:424] no arguments passed for "\n" - returning raw string W0531 00:22:21.011991 1621073 out.go:235] W0531 00:22:21.012003 1621073 out.go:424] no arguments passed for "๐Ÿ˜ฟ If the above advice does not help, please let us know:\n" - returning raw string W0531 00:22:21.012008 1621073 out.go:424] no arguments passed for "๐Ÿ‘‰ https://github.com/kubernetes/minikube/issues/new/choose\n\n" - returning raw string W0531 00:22:21.012012 1621073 out.go:424] no arguments passed for "Please attach the following file to the GitHub issue:\n" - returning raw string W0531 00:22:21.012054 1621073 out.go:424] no arguments passed for "๐Ÿ˜ฟ If the above advice does not help, please let us know:\n๐Ÿ‘‰ https://github.com/kubernetes/minikube/issues/new/choose\n\nPlease attach the following file to the GitHub issue:\n- /home/paulopatto/.minikube/logs/lastStart.txt\n\n" - returning raw string W0531 00:22:21.012794 1621073 out.go:235] โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ W0531 00:22:21.012820 1621073 out.go:235] โ”‚ โ”‚ W0531 00:22:21.012829 1621073 out.go:235] โ”‚ ๐Ÿ˜ฟ If the above advice does not help, please let us know: โ”‚ W0531 00:22:21.012836 1621073 out.go:235] โ”‚ ๐Ÿ‘‰ https://github.com/kubernetes/minikube/issues/new/choose โ”‚ W0531 00:22:21.012847 1621073 out.go:235] โ”‚ โ”‚ W0531 00:22:21.012853 1621073 out.go:235] โ”‚ Please attach the following file to the GitHub issue: โ”‚ W0531 00:22:21.012859 1621073 out.go:235] โ”‚ - 
/home/paulopatto/.minikube/logs/lastStart.txt โ”‚ W0531 00:22:21.012865 1621073 out.go:235] โ”‚ โ”‚ W0531 00:22:21.012871 1621073 out.go:235] โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ W0531 00:22:21.012880 1621073 out.go:235] * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID * * ==> containerd <== * -- Logs begin at Mon 2021-05-31 03:07:19 UTC, end at Mon 2021-05-31 03:24:41 UTC. -- May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.107833173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.107847078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.107858664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.107868649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.107879062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.107891520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.107902440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.107912938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.107924077Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.107954904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.107967994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.107980055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.107990541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.108112691Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type:io.containerd.runtime.v1.linux Engine: PodAnnotations:[] Root: Options: PrivilegedWithoutHostDevices:false} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] Root: Options: PrivilegedWithoutHostDevices:false} Runtimes:map[runc:{Type:io.containerd.runc.v1 Engine: PodAnnotations:[] Root: Options: PrivilegedWithoutHostDevices:false}] NoPivot:true} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.mk NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[]} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SandboxImage:k8s.gcr.io/pause:3.2 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.108145169Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`" May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.108165400Z" level=info msg="Connect containerd service" May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.108273236Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.108383565Z" level=error msg="Failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.mk: cni plugin not initialized: failed to load cni config" May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.108706501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.108826198Z" level=info msg="Start subscribing containerd event" May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.108866399Z" level=info msg="Start recovering state" May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.108882536Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.108919089Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.108926881Z" level=info msg="Start event monitor" May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.108946768Z" level=info msg="Start snapshots syncer" May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.108958263Z" level=info msg="Start streaming server" May 31 03:21:54 minikube containerd[16072]: time="2021-05-31T03:21:54.108929584Z" level=info msg="containerd successfully booted in 0.028760s" May 31 03:21:58 minikube containerd[16072]: time="2021-05-31T03:21:58.363569659Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/kubernetesui/metrics-scraper:v1.0.4,Labels:map[string]string{},XXX_unrecognized:[],}" May 31 03:21:58 minikube containerd[16072]: time="2021-05-31T03:21:58.379221552Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:21:58 minikube containerd[16072]: time="2021-05-31T03:21:58.380849474Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/kubernetesui/metrics-scraper:v1.0.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:00 minikube containerd[16072]: time="2021-05-31T03:22:00.011034280Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/kubernetesui/dashboard:v2.1.0,Labels:map[string]string{},XXX_unrecognized:[],}" May 31 03:22:00 minikube containerd[16072]: time="2021-05-31T03:22:00.018759464Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:00 minikube containerd[16072]: time="2021-05-31T03:22:00.019656955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/kubernetesui/dashboard:v2.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:02 minikube containerd[16072]: time="2021-05-31T03:22:02.656121107Z" level=info msg="ImageCreate event &ImageCreate{Name:gcr.io/k8s-minikube/storage-provisioner:v5,Labels:map[string]string{},XXX_unrecognized:[],}" May 31 03:22:02 minikube containerd[16072]: time="2021-05-31T03:22:02.667445129Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:02 minikube containerd[16072]: time="2021-05-31T03:22:02.668333856Z" level=info msg="ImageUpdate event &ImageUpdate{Name:gcr.io/k8s-minikube/storage-provisioner:v5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:03 minikube containerd[16072]: time="2021-05-31T03:22:03.253295252Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/pause:3.2,Labels:map[string]string{},XXX_unrecognized:[],}" May 31 03:22:03 minikube containerd[16072]: time="2021-05-31T03:22:03.260042617Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:03 minikube containerd[16072]: time="2021-05-31T03:22:03.260381346Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:k8s.gcr.io/pause:3.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:04 minikube containerd[16072]: time="2021-05-31T03:22:04.110561351Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/coredns:1.6.7,Labels:map[string]string{},XXX_unrecognized:[],}" May 31 03:22:04 minikube containerd[16072]: time="2021-05-31T03:22:04.130566333Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:04 minikube containerd[16072]: time="2021-05-31T03:22:04.132812207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/coredns:1.6.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:06 minikube containerd[16072]: time="2021-05-31T03:22:06.038567795Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/kube-scheduler:v1.18.0,Labels:map[string]string{},XXX_unrecognized:[],}" May 31 03:22:06 minikube containerd[16072]: time="2021-05-31T03:22:06.051073381Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a31f78c7c8ce146a60cc178c528dd08ca89320f2883e7eb804d7f7b062ed6466,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:06 minikube containerd[16072]: time="2021-05-31T03:22:06.052504755Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/kube-scheduler:v1.18.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:08 minikube containerd[16072]: time="2021-05-31T03:22:08.271254144Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/kube-proxy:v1.18.0,Labels:map[string]string{},XXX_unrecognized:[],}" May 31 03:22:08 minikube containerd[16072]: time="2021-05-31T03:22:08.284227051Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:43940c34f24f39bc9a00b4f9dbcab51a3b28952a7c392c119b877fcb48fe65a3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:08 minikube containerd[16072]: time="2021-05-31T03:22:08.285909854Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/kube-proxy:v1.18.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:10 minikube containerd[16072]: time="2021-05-31T03:22:10.101522551Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/kube-controller-manager:v1.18.0,Labels:map[string]string{},XXX_unrecognized:[],}" May 31 03:22:10 minikube containerd[16072]: time="2021-05-31T03:22:10.130236541Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d3e55153f52fb62421dae9ad1a8690a3fd1b30f1b808e50a69a8e7ed5565e72e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:10 minikube containerd[16072]: time="2021-05-31T03:22:10.132053813Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/kube-controller-manager:v1.18.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:12 minikube containerd[16072]: time="2021-05-31T03:22:12.010283783Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/kube-apiserver:v1.18.0,Labels:map[string]string{},XXX_unrecognized:[],}" May 31 03:22:12 minikube containerd[16072]: time="2021-05-31T03:22:12.026139970Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:74060cea7f70476f300d9f04fe2c3b3a2e84589e0579382a8df8c82161c3735c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:12 minikube containerd[16072]: time="2021-05-31T03:22:12.028901561Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/kube-apiserver:v1.18.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:14 minikube containerd[16072]: time="2021-05-31T03:22:14.625496922Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/etcd:3.4.3-0,Labels:map[string]string{},XXX_unrecognized:[],}" May 31 03:22:14 minikube containerd[16072]: time="2021-05-31T03:22:14.638985042Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:14 minikube containerd[16072]: time="2021-05-31T03:22:14.640856734Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/etcd:3.4.3-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 31 03:22:18 minikube containerd[16072]: time="2021-05-31T03:22:18.127945999Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.mk: cni plugin not initialized: failed to load cni config" May 31 03:22:19 minikube containerd[16072]: time="2021-05-31T03:22:19.581861664Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.mk: cni plugin not initialized: failed to load cni config" May 31 03:22:20 minikube containerd[16072]: time="2021-05-31T03:22:20.246789581Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.mk: cni plugin not initialized: failed to load cni config" * * ==> describe nodes <== * * ==> dmesg <== * [ +0.000050] systemd-sysv-generator[1177902]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ +0.436082] systemd-sysv-generator[1178421]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ +0.000032] systemd-sysv-generator[1178421]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [May24 15:49] pcieport 0000:00:1c.5: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID) [ +0.000002] pcieport 0000:00:1c.5: device [8086:9d15] error status/mask=00001000/00002000 [ +0.000087] pcieport 0000:00:1c.5: [12] Timeout [May24 15:50] pcieport 0000:00:1c.5: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID) [ +0.000006] pcieport 0000:00:1c.5: device [8086:9d15] error status/mask=00001000/00002000 [ +0.000006] pcieport 0000:00:1c.5: [12] Timeout [May24 15:51] systemd-sysv-generator[1274492]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. [ +0.000047] systemd-sysv-generator[1274492]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ +0.230528] systemd-sysv-generator[1274523]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ +0.000025] systemd-sysv-generator[1274523]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [May24 17:22] pcieport 0000:00:1c.5: PCIe Bus Error: severity=Corrected, type=Physical Layer, (Receiver ID) [ +0.000006] pcieport 0000:00:1c.5: device [8086:9d15] error status/mask=00000001/00002000 [ +0.000007] pcieport 0000:00:1c.5: [ 0] RxErr [May24 17:41] pcieport 0000:00:1c.5: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID) [ +0.000007] pcieport 0000:00:1c.5: device [8086:9d15] error status/mask=00001000/00002000 [ +0.000008] pcieport 0000:00:1c.5: [12] Timeout [May24 18:49] pcieport 0000:00:1c.5: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID) [ +0.000237] pcieport 0000:00:1c.5: device [8086:9d15] error status/mask=00003000/00002000 [ +0.000007] pcieport 0000:00:1c.5: [12] Timeout [May24 19:08] pcieport 0000:00:1c.5: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID) [ +0.000008] pcieport 0000:00:1c.5: device [8086:9d15] error status/mask=00003000/00002000 [ +0.000010] pcieport 0000:00:1c.5: [12] Timeout [May24 19:37] pcieport 0000:00:1c.5: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID) [ +0.000006] pcieport 0000:00:1c.5: device [8086:9d15] error status/mask=00003000/00002000 [ +0.000007] pcieport 0000:00:1c.5: [12] Timeout [May24 20:09] pcieport 0000:00:1c.5: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID) [ +0.000007] pcieport 0000:00:1c.5: device [8086:9d15] error status/mask=00003000/00002000 [ +0.000008] pcieport 0000:00:1c.5: [12] Timeout [May24 20:26] pcieport 0000:00:1c.5: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID) [ +0.000006] pcieport 0000:00:1c.5: device [8086:9d15] error status/mask=00003000/00002000 [ +0.000009] pcieport 0000:00:1c.5: [12] Timeout [May24 22:31] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality. [May24 22:32] process 'docker/tmp/qemu-check947598620/check' started with executable stack [May24 22:33] NVRM: API mismatch: the client has the version 465.31, but NVRM: this kernel module has the version 465.27. Please NVRM: make sure that this kernel module and all NVIDIA driver NVRM: components have the same version. [May24 22:38] systemd-sysv-generator[1483629]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
[ +0.000048] systemd-sysv-generator[1483629]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [May24 22:44] systemd-sysv-generator[1496504]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ +0.000056] systemd-sysv-generator[1496504]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [May24 22:47] systemd-sysv-generator[1502909]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ +0.000049] systemd-sysv-generator[1502909]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [May24 22:51] systemd-sysv-generator[1513442]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ +0.000044] systemd-sysv-generator[1513442]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ +12.211028] systemd-sysv-generator[1514738]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ +0.000053] systemd-sysv-generator[1514738]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [May24 23:15] systemd-sysv-generator[1566993]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ +0.000029] systemd-sysv-generator[1566993]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ +1.541938] systemd-sysv-generator[1567107]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ +0.000060] systemd-sysv-generator[1567107]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [May24 23:16] systemd-sysv-generator[1572569]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ +0.000056] systemd-sysv-generator[1572569]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [May24 23:25] NVRM: API mismatch: the client has the version 465.31, but NVRM: this kernel module has the version 465.27. Please NVRM: make sure that this kernel module and all NVIDIA driver NVRM: components have the same version. * * ==> kernel <== * 03:24:41 up 8 days, 1:12, 0 users, load average: 1.25, 1.90, 2.81 Linux minikube 5.11.20-300.fc34.x86_64 #1 SMP Wed May 12 12:45:10 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 19.10" * * ==> kubelet <== * -- Logs begin at Mon 2021-05-31 03:07:19 UTC, end at Mon 2021-05-31 03:24:41 UTC. -- May 31 03:24:37 minikube systemd[1]: kubelet.service: Service RestartSec=600ms expired, scheduling restart. May 31 03:24:37 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 181. May 31 03:24:37 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent. May 31 03:24:37 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent. May 31 03:24:37 minikube kubelet[20139]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 31 03:24:37 minikube kubelet[20139]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 31 03:24:37 minikube kubelet[20139]: I0531 03:24:37.811827 20139 server.go:417] Version: v1.18.0 May 31 03:24:37 minikube kubelet[20139]: I0531 03:24:37.812061 20139 plugins.go:100] No cloud provider specified. May 31 03:24:37 minikube kubelet[20139]: I0531 03:24:37.812083 20139 server.go:837] Client rotation is on, will bootstrap in background May 31 03:24:37 minikube kubelet[20139]: F0531 03:24:37.812124 20139 server.go:274] failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory May 31 03:24:37 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION May 31 03:24:37 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'. May 31 03:24:38 minikube systemd[1]: kubelet.service: Service RestartSec=600ms expired, scheduling restart. May 31 03:24:38 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 182. May 31 03:24:38 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent. May 31 03:24:38 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent. May 31 03:24:38 minikube kubelet[20153]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 31 03:24:38 minikube kubelet[20153]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 31 03:24:38 minikube kubelet[20153]: I0531 03:24:38.526120 20153 server.go:417] Version: v1.18.0 May 31 03:24:38 minikube kubelet[20153]: I0531 03:24:38.526332 20153 plugins.go:100] No cloud provider specified. May 31 03:24:38 minikube kubelet[20153]: I0531 03:24:38.526346 20153 server.go:837] Client rotation is on, will bootstrap in background May 31 03:24:38 minikube kubelet[20153]: F0531 03:24:38.526377 20153 server.go:274] failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory May 31 03:24:38 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION May 31 03:24:38 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'. May 31 03:24:39 minikube systemd[1]: kubelet.service: Service RestartSec=600ms expired, scheduling restart. May 31 03:24:39 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 183. May 31 03:24:39 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent. May 31 03:24:39 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent. May 31 03:24:39 minikube kubelet[20167]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 31 03:24:39 minikube kubelet[20167]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 31 03:24:39 minikube kubelet[20167]: I0531 03:24:39.306374 20167 server.go:417] Version: v1.18.0 May 31 03:24:39 minikube kubelet[20167]: I0531 03:24:39.306570 20167 plugins.go:100] No cloud provider specified. May 31 03:24:39 minikube kubelet[20167]: I0531 03:24:39.306587 20167 server.go:837] Client rotation is on, will bootstrap in background May 31 03:24:39 minikube kubelet[20167]: F0531 03:24:39.306625 20167 server.go:274] failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory May 31 03:24:39 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION May 31 03:24:39 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'. May 31 03:24:39 minikube systemd[1]: kubelet.service: Service RestartSec=600ms expired, scheduling restart. May 31 03:24:39 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 184. May 31 03:24:39 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent. May 31 03:24:39 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent. May 31 03:24:40 minikube kubelet[20183]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 31 03:24:40 minikube kubelet[20183]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 31 03:24:40 minikube kubelet[20183]: I0531 03:24:40.053197 20183 server.go:417] Version: v1.18.0 May 31 03:24:40 minikube kubelet[20183]: I0531 03:24:40.053425 20183 plugins.go:100] No cloud provider specified. May 31 03:24:40 minikube kubelet[20183]: I0531 03:24:40.053438 20183 server.go:837] Client rotation is on, will bootstrap in background May 31 03:24:40 minikube kubelet[20183]: F0531 03:24:40.053469 20183 server.go:274] failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory May 31 03:24:40 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION May 31 03:24:40 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'. May 31 03:24:40 minikube systemd[1]: kubelet.service: Service RestartSec=600ms expired, scheduling restart. May 31 03:24:40 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 185. May 31 03:24:40 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent. May 31 03:24:40 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent. May 31 03:24:40 minikube kubelet[20197]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 31 03:24:40 minikube kubelet[20197]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 31 03:24:40 minikube kubelet[20197]: I0531 03:24:40.816571 20197 server.go:417] Version: v1.18.0 May 31 03:24:40 minikube kubelet[20197]: I0531 03:24:40.816807 20197 plugins.go:100] No cloud provider specified. May 31 03:24:40 minikube kubelet[20197]: I0531 03:24:40.816824 20197 server.go:837] Client rotation is on, will bootstrap in background May 31 03:24:40 minikube kubelet[20197]: F0531 03:24:40.816855 20197 server.go:274] failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory May 31 03:24:40 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION May 31 03:24:40 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
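The core failure in the log above is kubeadm aborting at the certs phase: the on-disk apiserver-kubelet-client certificate has expired, so both the cluster restart and the subsequent kubeadm init fail. The earlier config diff (criSocket changing from /var/run/dockershim.sock to /run/containerd/containerd.sock) also shows the node carrying state from a cluster created under a different container runtime. A minimal way to confirm and recover, assuming the default certificate path /var/lib/minikube/certs shown in the log (a sketch, not an official fix):

# Inspect the expiry of the certificate kubeadm complains about
$ minikube ssh -- sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt

# If it is expired, discard the stale cluster state and recreate it
# (this destroys the existing minikube cluster and its data)
$ minikube delete
$ minikube start

minikube delete removes the node container along with the stale certificates and kubeadm configs, so the next start regenerates them from scratch.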
RA489 commented 3 years ago

/kind support

RA489 commented 3 years ago

@paulopatto could you please share the exact error you got?

yann-soubeyrand commented 3 years ago

Hello @RA489,

I'm also running Minikube on Fedora 34 (Silverblue variant), and I'm unable to start it:

😄  minikube v1.21.0 on Fedora 34
✨  Using the kvm2 driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating kvm2 VM (CPUs=2, Memory=3900MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.20.7 on Docker 20.10.6...
    ▪ Generating certificates and keys
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...| E0707 12:19:16.121848   74356 start.go:135] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: timed out waiting for the condition

🔎  Verifying Kubernetes components...
❗  Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Unauthorized]
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner

โŒ  Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.7

โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ
โ”‚                                                                    โ”‚
โ”‚    ๐Ÿ˜ฟ  If the above advice does not help, please let us know:      โ”‚
โ”‚    ๐Ÿ‘‰  https://github.com/kubernetes/minikube/issues/new/choose    โ”‚
โ”‚                                                                    โ”‚
โ”‚    Please attach the following file to the GitHub issue:           โ”‚
โ”‚    - /home/yann/.minikube/logs/lastStart.txt                       โ”‚
โ”‚                                                                    โ”‚
โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ

Here are the logs: lastStart.txt
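For failures like this one, where the VM boots but the API server never reports healthy, two quick first checks (a sketch, assuming the default profile and a minikube recent enough to have the --problems flag) are the component status and the problem-filtered logs:

$ minikube status
$ minikube logs --problems

The Unauthorized error while listing StorageClasses means the API server rejected the client credentials; minikube logs --problems narrows the full log down to lines matching known failure patterns, which makes that kind of mismatch easier to spot.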

yann-soubeyrand commented 3 years ago

Today it's working for me :thinking:

paulopatto commented 3 years ago

The same error as @yann-soubeyrand

medyagh commented 3 years ago

@paulopatto Thank you for sharing your experience! If you don't mind, could you provide minikube logs?

Please attach (drag) the logs.txt file to this issue, which can be generated by running this command:

$ minikube logs --file=logs.txt

This will help us isolate the problem further. Thank you!

/triage needs-information
/kind support

spowelljr commented 3 years ago

Hi @paulopatto, we haven't heard back from you. Do you still have this issue? There isn't enough information in this issue to make it actionable, and enough time has passed that it is likely difficult to replicate.

I will close this issue for now, but feel free to reopen it when you're ready to provide more information.