kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

CrashLoopBackOff #12049

Closed: Ultimo12 closed this issue 3 years ago

Ultimo12 commented 3 years ago

Steps to reproduce the issue:

1.

Full output of minikube logs command:

Full output of failed command:

minikube_addons_3f375e07af22351aa68de6b727e38b60a1302a10_0.log
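For reference, the empty template sections above are normally filled in with output gathered like this (a sketch; the redirect file names are just examples, not part of the original report):

```shell
# Re-run the failing start with verbose logging, capturing stderr to a file
minikube start --alsologtostderr -v=4 2>start.log

# Write the full cluster logs the issue template asks for to a file
minikube logs --file=logs.txt
```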

RA489 commented 3 years ago

@Ultimo12 Do you mind adding some additional details? Here is the additional information that would be helpful:

Thank you for sharing your experience!

RA489 commented 3 years ago

/triage needs-information
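For anyone triaging a CrashLoopBackOff like this one: it means a container starts, exits, and is restarted with increasing backoff. A minimal sketch of the usual investigation (the pod name `storage-provisioner` below is only an example; substitute the pod that is actually crashing):

```shell
# List pods in all namespaces; look for CrashLoopBackOff in the STATUS column
kubectl get pods -A

# Inspect events and the last state of the crashing container
kubectl describe pod storage-provisioner -n kube-system

# Fetch logs from the previous (crashed) container instance
kubectl logs storage-provisioner -n kube-system --previous
```

The `--previous` flag matters: the current container may have just restarted and show nothing useful, while the crashed instance's logs usually contain the actual error.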

Ultimo12 commented 3 years ago

@.:~$ minikube start --alsologtostderr -v=4
I0726 10:18:32.841857 10870 out.go:291] Setting OutFile to fd 1 ...
I0726 10:18:32.841963 10870 out.go:343] isatty.IsTerminal(1) = true
I0726 10:18:32.841968 10870 out.go:304] Setting ErrFile to fd 2...
I0726 10:18:32.841974 10870 out.go:343] isatty.IsTerminal(2) = true
I0726 10:18:32.842035 10870 root.go:316] Updating PATH: /home/sergio/.minikube/bin
I0726 10:18:32.842152 10870 out.go:298] Setting JSON to false
I0726 10:18:32.861577 10870 start.go:108] hostinfo: {"hostname":"sergio-desktop","uptime":6450,"bootTime":1627281063,"procs":349,"os":"linux","platform":"linuxmint","platformFamily":"debian","platformVersion":"20.1","kernelVersion":"5.4.0-80-generic","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"93997209-5cf1-459c-8b22-cc608eff85da"}
I0726 10:18:32.861615 10870 start.go:118] virtualization: kvm host
I0726 10:18:32.862778 10870 out.go:170] 😄 minikube v1.20.0 on Linuxmint 20.1
😄 minikube v1.20.0 on Linuxmint 20.1
I0726 10:18:32.863003 10870 driver.go:322] Setting default libvirt URI to qemu:///system
I0726 10:18:32.883518 10870 docker.go:119] docker version: linux-20.10.2
I0726 10:18:32.883575 10870 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0726 10:18:32.904403 10870 info.go:261] docker info: {ID:ME64:NR5F:KZM7:XXIC:QDGH:ON3Q:NGLA:LFDZ:C4NR:WD2B:QE35:JH4P Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:34 SystemTime:2021-07-26 10:18:32.899960592 +0200 CEST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-80-generic OperatingSystem:Linux Mint 20.1 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[ 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33537769472 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:sergio-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support WARNING: No blkio weight support WARNING: No blkio weight_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I0726 10:18:32.904459 10870 docker.go:225] overlay module found
I0726 10:18:32.905287 10870 out.go:170] ✨ Using the docker driver based on existing profile
✨ Using the docker driver based on existing profile
I0726 10:18:32.905305 10870 start.go:276] selected driver: docker
I0726 10:18:32.905309 10870 start.go:718] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage: @.:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR: 192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0726 10:18:32.905361 10870 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0726 10:18:32.905505 10870 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0726 10:18:32.926000 10870 info.go:261] docker info: {ID:ME64:NR5F:KZM7:XXIC:QDGH:ON3Q:NGLA:LFDZ:C4NR:WD2B:QE35:JH4P Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:34 SystemTime:2021-07-26 10:18:32.921814522 +0200 CEST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-80-generic OperatingSystem:Linux Mint 20.1 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[ 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33537769472 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:sergio-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support WARNING: No blkio weight support WARNING: No blkio weight_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I0726 10:18:32.926535 10870 cni.go:93] Creating CNI manager for ""
I0726 10:18:32.926547 10870 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0726 10:18:32.926553 10870 start_flags.go:273] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage: @.:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR: 192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0726 10:18:32.927466 10870 out.go:170] 👍 Starting control plane node minikube in cluster minikube
👍 Starting control plane node minikube in cluster minikube
I0726 10:18:32.927478 10870 cache.go:111] Beginning downloading kic base image for docker with docker
W0726 10:18:32.927483 10870 out.go:424] no arguments passed for "🚜 Pulling base image ...\n" - returning raw string
W0726 10:18:32.927490 10870 out.go:424] no arguments passed for "🚜 Pulling base image ...\n" - returning raw string
I0726 10:18:32.928116 10870 out.go:170] 🚜 Pulling base image ...
🚜 Pulling base image ...
I0726 10:18:32.928128 10870 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0726 10:18:32.928142 10870 preload.go:106] Found local preload: /home/sergio/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0726 10:18:32.928146 10870 cache.go:54] Caching tarball of preloaded images
I0726 10:18:32.928153 10870 preload.go:132] Found /home/sergio/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0726 10:18:32.928157 10870 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on docker
I0726 10:18:32.928206 10870 image.go:116] Checking for @.:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory
I0726 10:18:32.928218 10870 image.go:119] Found @.:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull
I0726 10:18:32.928222 10870 cache.go:131] @.:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull
I0726 10:18:32.928231 10870 profile.go:148] Saving config to /home/sergio/.minikube/profiles/minikube/config.json ...
I0726 10:18:32.928238 10870 image.go:130] Checking for @.:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon
I0726 10:18:32.946267 10870 image.go:134] Found @.:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon, skipping pull
I0726 10:18:32.946282 10870 cache.go:155] @.***:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in daemon, skipping pull
I0726 10:18:32.946290 10870 cache.go:194] Successfully downloaded all kic artifacts
I0726 10:18:32.946307 10870 start.go:313] acquiring machines lock for minikube: {Name:mk8f00c895be60665f583b206e763223f58e94f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0726 10:18:32.946384 10870 start.go:317] acquired machines lock for "minikube" in 58.547µs
I0726 10:18:32.946396 10870 start.go:93] Skipping create...Using existing machine configuration
I0726 10:18:32.946400 10870 fix.go:55] fixHost starting:
I0726 10:18:32.946532 10870 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0726 10:18:32.964870 10870 fix.go:108] recreateIfNeeded on minikube: state=Stopped err=
W0726 10:18:32.964896 10870 fix.go:134] unexpected machine state, will restart:
I0726 10:18:32.965906 10870 out.go:170] 🔄 Restarting existing docker container for "minikube" ...
🔄 Restarting existing docker container for "minikube" ...
I0726 10:18:32.965934 10870 cli_runner.go:115] Run: docker start minikube
I0726 10:18:33.250835 10870 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0726 10:18:33.269906 10870 kic.go:414] container "minikube" state is running.
I0726 10:18:33.270106 10870 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0726 10:18:33.288325 10870 profile.go:148] Saving config to /home/sergio/.minikube/profiles/minikube/config.json ...
I0726 10:18:33.288440 10870 machine.go:88] provisioning docker machine ...
I0726 10:18:33.288453 10870 ubuntu.go:169] provisioning hostname "minikube"
I0726 10:18:33.288474 10870 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0726 10:18:33.306443 10870 main.go:128] libmachine: Using SSH client type: native
I0726 10:18:33.306548 10870 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 49162 }
I0726 10:18:33.306557 10870 main.go:128] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0726 10:18:33.306912 10870 main.go:128] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46208->127.0.0.1:49162: read: connection reset by peer
I0726 10:18:36.468762 10870 main.go:128] libmachine: SSH cmd err, output:

: minikube
I0726 10:18:36.468930 10870 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0726 10:18:36.491455 10870 main.go:128] libmachine: Using SSH client type: native
I0726 10:18:36.491544 10870 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 49162 }
I0726 10:18:36.491556 10870 main.go:128] libmachine: About to run SSH command: if ! grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi
I0726 10:18:36.620292 10870 main.go:128] libmachine: SSH cmd err, output: :
I0726 10:18:36.620361 10870 ubuntu.go:175] set auth options {CertDir:/home/sergio/.minikube CaCertPath:/home/sergio/.minikube/certs/ca.pem CaPrivateKeyPath:/home/sergio/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/sergio/.minikube/machines/server.pem ServerKeyPath:/home/sergio/.minikube/machines/server-key.pem ClientKeyPath:/home/sergio/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/sergio/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/sergio/.minikube}
I0726 10:18:36.620427 10870 ubuntu.go:177] setting up certificates
I0726 10:18:36.620474 10870 provision.go:83] configureAuth start
I0726 10:18:36.620568 10870 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0726 10:18:36.647677 10870 provision.go:137] copyHostCerts
I0726 10:18:36.647710 10870 vm_assets.go:96] NewFileAsset: /home/sergio/.minikube/certs/ca.pem -> /home/sergio/.minikube/ca.pem
I0726 10:18:36.647734 10870 exec_runner.go:145] found /home/sergio/.minikube/ca.pem, removing ...
I0726 10:18:36.647747 10870 exec_runner.go:190] rm: /home/sergio/.minikube/ca.pem
I0726 10:18:36.647804 10870 exec_runner.go:152] cp: /home/sergio/.minikube/certs/ca.pem --> /home/sergio/.minikube/ca.pem (1078 bytes)
I0726 10:18:36.647893 10870 vm_assets.go:96] NewFileAsset: /home/sergio/.minikube/certs/cert.pem -> /home/sergio/.minikube/cert.pem
I0726 10:18:36.647909 10870 exec_runner.go:145] found /home/sergio/.minikube/cert.pem, removing ...
I0726 10:18:36.647917 10870 exec_runner.go:190] rm: /home/sergio/.minikube/cert.pem
I0726 10:18:36.647943 10870 exec_runner.go:152] cp: /home/sergio/.minikube/certs/cert.pem --> /home/sergio/.minikube/cert.pem (1123 bytes)
I0726 10:18:36.647995 10870 vm_assets.go:96] NewFileAsset: /home/sergio/.minikube/certs/key.pem -> /home/sergio/.minikube/key.pem
I0726 10:18:36.648013 10870 exec_runner.go:145] found /home/sergio/.minikube/key.pem, removing ...
I0726 10:18:36.648019 10870 exec_runner.go:190] rm: /home/sergio/.minikube/key.pem
I0726 10:18:36.648047 10870 exec_runner.go:152] cp: /home/sergio/.minikube/certs/key.pem --> /home/sergio/.minikube/key.pem (1675 bytes)
I0726 10:18:36.648100 10870 provision.go:111] generating server cert: /home/sergio/.minikube/machines/server.pem ca-key=/home/sergio/.minikube/certs/ca.pem private-key=/home/sergio/.minikube/certs/ca-key.pem org=sergio.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0726 10:18:36.731527 10870 provision.go:165] copyRemoteCerts
I0726 10:18:36.731563 10870 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0726 10:18:36.731584 10870 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0726 10:18:36.749668 10870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/sergio/.minikube/machines/minikube/id_rsa Username:docker}
I0726 10:18:36.836079 10870 vm_assets.go:96] NewFileAsset: /home/sergio/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0726 10:18:36.836173 10870 ssh_runner.go:316] scp /home/sergio/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0726 10:18:36.873300 10870 vm_assets.go:96] NewFileAsset: /home/sergio/.minikube/machines/server.pem -> /etc/docker/server.pem
I0726 10:18:36.873345 10870 ssh_runner.go:316] scp /home/sergio/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0726 10:18:36.888449 10870 vm_assets.go:96] NewFileAsset: /home/sergio/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0726 10:18:36.888502 10870 ssh_runner.go:316] scp /home/sergio/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0726 10:18:36.918051 10870 provision.go:86] duration metric: configureAuth took 297.547552ms
I0726 10:18:36.918099 10870 ubuntu.go:193] setting minikube options for container-runtime
I0726 10:18:36.918539 10870 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0726 10:18:36.938793 10870 main.go:128] libmachine: Using SSH client type: native
I0726 10:18:36.938875 10870 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 49162 }
I0726 10:18:36.938882 10870 main.go:128] libmachine: About to run SSH command: df --output=fstype / | tail -n 1
I0726 10:18:37.065360 10870 main.go:128] libmachine: SSH cmd err, output: : overlay
I0726 10:18:37.065410 10870 ubuntu.go:71] root file system type: overlay
I0726 10:18:37.065754 10870 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
I0726 10:18:37.065854 10870 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0726 10:18:37.086515 10870 main.go:128] libmachine: Using SSH client type: native
I0726 10:18:37.086605 10870 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 49162 }
I0726 10:18:37.086647 10870 main.go:128] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0726 10:18:37.228308 10870 main.go:128] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0726 10:18:37.228548 10870 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0726 10:18:37.249857 10870 main.go:128] libmachine: Using SSH client type: native
I0726 10:18:37.249974 10870 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 49162 }
I0726 10:18:37.249991 10870 main.go:128] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0726 10:18:37.380434 10870 main.go:128] libmachine: SSH cmd err, output: :
I0726 10:18:37.380499 10870 machine.go:91] provisioned docker machine in 4.092041235s
I0726 10:18:37.380531 10870 start.go:267] post-start starting for "minikube" (driver="docker")
I0726 10:18:37.380571 10870 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0726 10:18:37.380704 10870 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0726 10:18:37.380822 10870 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0726 10:18:37.399958 10870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/sergio/.minikube/machines/minikube/id_rsa Username:docker}
I0726 10:18:37.484425 10870 ssh_runner.go:149] Run: cat /etc/os-release
I0726 10:18:37.490384 10870 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0726 10:18:37.490452 10870 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0726 10:18:37.490508 10870 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0726 10:18:37.490539 10870 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0726 10:18:37.490580 10870 filesync.go:118] Scanning /home/sergio/.minikube/addons for local assets ...
I0726 10:18:37.490677 10870 filesync.go:118] Scanning /home/sergio/.minikube/files for local assets ...
I0726 10:18:37.490733 10870 start.go:270] post-start completed in 110.166317ms
I0726 10:18:37.490802 10870 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0726 10:18:37.490878 10870 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0726 10:18:37.508758 10870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/sergio/.minikube/machines/minikube/id_rsa Username:docker}
I0726 10:18:37.589943 10870 fix.go:57] fixHost completed within 4.643532062s
I0726 10:18:37.589986 10870 start.go:80] releasing machines lock for "minikube", held for 4.64359011s
I0726 10:18:37.590131 10870 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0726 10:18:37.615000 10870 ssh_runner.go:149] Run: systemctl --version
I0726 10:18:37.615027 10870 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0726 10:18:37.615043 10870 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0726 10:18:37.615066 10870 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0726 10:18:37.634688 10870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/sergio/.minikube/machines/minikube/id_rsa Username:docker}
I0726 10:18:37.636329 10870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/sergio/.minikube/machines/minikube/id_rsa Username:docker}
I0726 10:18:37.712948 10870 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0726 10:18:37.880519 10870 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0726 10:18:37.902442 10870 cruntime.go:225] skipping containerd shutdown because we are bound to it
I0726 10:18:37.902544 10870 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0726 10:18:37.908678 10870 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0726 10:18:37.914801 10870 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
I0726 10:18:37.957799 10870 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
I0726 10:18:38.009158 10870 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0726 10:18:38.014025 10870 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0726 10:18:38.059817 10870 ssh_runner.go:149] Run: sudo systemctl start docker
I0726 10:18:38.064625 10870 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0726 10:18:38.087536 10870 out.go:197] 🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
I0726 10:18:38.087572 10870 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0726 10:18:38.105875 10870 ssh_runner.go:149] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0726 10:18:38.107360 10870 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0726 10:18:38.111530 10870 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0726 10:18:38.111546 10870 preload.go:106] Found local preload: /home/sergio/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0726 10:18:38.111569 10870 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0726 10:18:38.130035 10870 docker.go:528] Got preloaded images:
-- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
kubernetesui/dashboard:v2.1.0
jettech/kube-webhook-certgen:
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0726 10:18:38.130065 10870 docker.go:465] Images already preloaded, skipping extraction
I0726 10:18:38.130113 10870 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0726 10:18:38.148813 10870 docker.go:528] Got preloaded images:
-- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
kubernetesui/dashboard:v2.1.0
jettech/kube-webhook-certgen:
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0726 10:18:38.148830 10870 cache_images.go:74] Images are preloaded, skipping loading
I0726 10:18:38.148868 10870 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0726 10:18:38.191271 10870 cni.go:93] Creating CNI manager for ""
I0726 10:18:38.191283 10870 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0726 10:18:38.191290 10870 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0726 10:18:38.191299 10870 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: 10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0726 10:18:38.191367 10870 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion:
kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%" nodefs.inodesFree: "0%" imagefs.available: "0%" failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 I0726 10:18:38.191442 10870 kubeadm.go:901] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0726 10:18:38.191474 10870 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2 I0726 10:18:38.194575 10870 binaries.go:44] Found k8s binaries, skipping transfer I0726 10:18:38.194609 10870 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0726 10:18:38.197538 10870 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes) I0726 10:18:38.202969 10870 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0726 10:18:38.208605 10870 ssh_runner.go:316] scp memory --> 
/var/tmp/minikube/kubeadm.yaml.new (1840 bytes) I0726 10:18:38.214359 10870 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0726 10:18:38.215576 10870 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0726 10:18:38.219577 10870 certs.go:52] Setting up /home/sergio/.minikube/profiles/minikube for IP: 192.168.49.2 I0726 10:18:38.219598 10870 certs.go:171] skipping minikubeCA CA generation: /home/sergio/.minikube/ca.key I0726 10:18:38.219608 10870 certs.go:171] skipping proxyClientCA CA generation: /home/sergio/.minikube/proxy-client-ca.key I0726 10:18:38.219635 10870 certs.go:282] skipping minikube-user signed cert generation: /home/sergio/.minikube/profiles/minikube/client.key I0726 10:18:38.219645 10870 certs.go:282] skipping minikube signed cert generation: /home/sergio/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 I0726 10:18:38.219655 10870 certs.go:282] skipping aggregator signed cert generation: /home/sergio/.minikube/profiles/minikube/proxy-client.key I0726 10:18:38.219662 10870 vm_assets.go:96] NewFileAsset: /home/sergio/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt I0726 10:18:38.219670 10870 vm_assets.go:96] NewFileAsset: /home/sergio/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key I0726 10:18:38.219677 10870 vm_assets.go:96] NewFileAsset: /home/sergio/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt I0726 10:18:38.219689 10870 vm_assets.go:96] NewFileAsset: /home/sergio/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key I0726 10:18:38.219696 10870 vm_assets.go:96] NewFileAsset: /home/sergio/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt I0726 10:18:38.219703 10870 vm_assets.go:96] NewFileAsset: 
/home/sergio/.minikube/ca.key -> /var/lib/minikube/certs/ca.key I0726 10:18:38.219710 10870 vm_assets.go:96] NewFileAsset: /home/sergio/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt I0726 10:18:38.219717 10870 vm_assets.go:96] NewFileAsset: /home/sergio/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key I0726 10:18:38.219741 10870 certs.go:361] found cert: /home/sergio/.minikube/certs/home/sergio/.minikube/certs/ca-key.pem (1675 bytes) I0726 10:18:38.219763 10870 certs.go:361] found cert: /home/sergio/.minikube/certs/home/sergio/.minikube/certs/ca.pem (1078 bytes) I0726 10:18:38.219779 10870 certs.go:361] found cert: /home/sergio/.minikube/certs/home/sergio/.minikube/certs/cert.pem (1123 bytes) I0726 10:18:38.219792 10870 certs.go:361] found cert: /home/sergio/.minikube/certs/home/sergio/.minikube/certs/key.pem (1675 bytes) I0726 10:18:38.219806 10870 vm_assets.go:96] NewFileAsset: /home/sergio/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem I0726 10:18:38.220360 10870 ssh_runner.go:316] scp /home/sergio/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0726 10:18:38.228355 10870 ssh_runner.go:316] scp /home/sergio/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I0726 10:18:38.236178 10870 ssh_runner.go:316] scp /home/sergio/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0726 10:18:38.244021 10870 ssh_runner.go:316] scp /home/sergio/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0726 10:18:38.251917 10870 ssh_runner.go:316] scp /home/sergio/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0726 10:18:38.260061 10870 ssh_runner.go:316] scp /home/sergio/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes) I0726 10:18:38.267887 10870 ssh_runner.go:316] scp 
/home/sergio/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0726 10:18:38.275648 10870 ssh_runner.go:316] scp /home/sergio/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0726 10:18:38.283522 10870 ssh_runner.go:316] scp /home/sergio/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) - I0726 10:18:38.291552 10870 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (740 bytes) I0726 10:18:38.297259 10870 ssh_runner.go:149] Run: openssl version I0726 10:18:38.299462 10870 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0726 10:18:38.302898 10870 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0726 10:18:38.304243 10870 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 Jul 25 16:33 /usr/share/ca-certificates/minikubeCA.pem I0726 10:18:38.304262 10870 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0726 10:18:38.306404 10870 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0726 10:18:38.309414 10870 kubeadm.go:381] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage: ***@***.***:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR: 192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false 
DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0726 10:18:38.309479 10870 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0726 10:18:38.328126 10870 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0726 10:18:38.332771 10870 kubeadm.go:392] found existing configuration files, will attempt cluster restart I0726 10:18:38.332784 10870 kubeadm.go:591] restartCluster start I0726 10:18:38.332806 10870 ssh_runner.go:149] Run: sudo test -d /data/minikube I0726 10:18:38.335593 10870 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo 
test -d /data/minikube: Process exited with status 1 stdout: stderr: I0726 10:18:38.335912 10870 kubeconfig.go:117] verify returned: extract IP: "minikube" does not appear in /home/sergio/.kube/config I0726 10:18:38.335958 10870 kubeconfig.go:128] "minikube" context is missing from /home/sergio/.kube/config - will repair! I0726 10:18:38.336113 10870 lock.go:36] WriteFile acquiring /home/sergio/.kube/config: {Name:mk530b5cd91fb4f676ac80590f73329b83472169 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0726 10:18:38.336564 10870 kapi.go:59] client config for minikube: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/sergio/.minikube/profiles/minikube/client.crt", KeyFile:"/home/sergio/.minikube/profiles/minikube/client.key", CAFile:"/home/sergio/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a8820), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)} I0726 10:18:38.336807 10870 cert_rotation.go:137] Starting client certificate rotation controller I0726 10:18:38.337272 10870 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new I0726 10:18:38.340135 
10870 api_server.go:148] Checking apiserver status ... I0726 10:18:38.340154 10870 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0726 10:18:38.346377 10870 api_server.go:152] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0726 10:18:38.346389 10870 kubeadm.go:570] needs reconfigure: apiserver in state Stopped I0726 10:18:38.346394 10870 kubeadm.go:1024] stopping kube-system containers ... I0726 10:18:38.346416 10870 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0726 10:18:38.365769 10870 docker.go:366] Stopping containers: [8f475150b5d4 f1d8b9bad821 c67cc01f7414 3f06630a7757 c1a0f3491c55 ef2c1aade7c0 e2b3adbd3c3c ae618fa2686e dbf8ccaad662 b6c3dfadb1d2 395574fcb9b1 aeeac2715009 c47e0d9ff264 b7e453591d38 c9268d761e4e 08f7cd95ddd6 b5ffee458d56 b62ee673453f eaf532267b6d af5d338577c8 4b61d48fa6c7 4cc6189ddc1b bd09cf091fbf e97eb562abe7 e25d0fc3eda1] I0726 10:18:38.365823 10870 ssh_runner.go:149] Run: docker stop 8f475150b5d4 f1d8b9bad821 c67cc01f7414 3f06630a7757 c1a0f3491c55 ef2c1aade7c0 e2b3adbd3c3c ae618fa2686e dbf8ccaad662 b6c3dfadb1d2 395574fcb9b1 aeeac2715009 c47e0d9ff264 b7e453591d38 c9268d761e4e 08f7cd95ddd6 b5ffee458d56 b62ee673453f eaf532267b6d af5d338577c8 4b61d48fa6c7 4cc6189ddc1b bd09cf091fbf e97eb562abe7 e25d0fc3eda1 I0726 10:18:38.383945 10870 ssh_runner.go:149] Run: sudo systemctl stop kubelet I0726 10:18:38.388665 10870 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \ I0726 10:18:38.391686 10870 kubeadm.go:154] found existing configuration files: -rw------- 1 root root 5611 Jul 25 16:35 /etc/kubernetes/admin.conf -rw------- 1 root root 5632 Jul 26 08:16 /etc/kubernetes/controller-manager.conf -rw------- 1 root root 1971 Jul 25 16:35 /etc/kubernetes/kubelet.conf -rw------- 1 root root 5580 Jul 26 
08:16 /etc/kubernetes/scheduler.conf I0726 10:18:38.391709 10870 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf I0726 10:18:38.394592 10870 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf I0726 10:18:38.397473 10870 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf I0726 10:18:38.400277 10870 kubeadm.go:165] " https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1 stdout: stderr: I0726 10:18:38.400296 10870 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf I0726 10:18:38.403050 10870 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf I0726 10:18:38.405875 10870 kubeadm.go:165] " https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1 stdout: stderr: I0726 10:18:38.405892 10870 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf I0726 10:18:38.408772 10870 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0726 10:18:38.411749 10870 kubeadm.go:667] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml I0726 10:18:38.411757 10870 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml" I0726 10:18:38.479176 10870 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml" | I0726 
10:18:38.936566 10870 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml" / I0726 10:18:39.061698 10870 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml" - I0726 10:18:39.141462 10870 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml" \ I0726 10:18:39.243852 10870 api_server.go:50] waiting for apiserver process to appear ... I0726 10:18:39.243882 10870 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* | I0726 10:18:39.751129 10870 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* / I0726 10:18:40.250880 10870 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* - I0726 10:18:40.750396 10870 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* \ I0726 10:18:41.250536 10870 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* | I0726 10:18:41.750946 10870 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* / I0726 10:18:42.250821 10870 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* - I0726 10:18:42.750682 10870 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* \ I0726 10:18:43.250609 10870 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* | I0726 10:18:43.751187 10870 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* / I0726 10:18:44.250784 10870 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* - I0726 10:18:44.751403 10870 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* \ I0726 10:18:45.251151 10870 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* | I0726 10:18:45.751363 10870 ssh_runner.go:149] Run: sudo pgrep 
-xnf kube-apiserver.*minikube.* / I0726 10:18:46.250780 10870 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0726 10:18:46.308216 10870 api_server.go:70] duration metric: took 7.064355369s to wait for apiserver process to appear ... I0726 10:18:46.308274 10870 api_server.go:86] waiting for apiserver healthz status ... I0726 10:18:46.308338 10870 api_server.go:223] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... / I0726 10:18:49.065162 10870 api_server.go:249] https://192.168.49.2:8443/healthz returned 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} W0726 10:18:49.065182 10870 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} - I0726 10:18:49.565375 10870 api_server.go:223] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
I0726 10:18:49.568454 10870 api_server.go:249] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed W0726 10:18:49.568477 10870 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok 
[+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed \ I0726 10:18:50.066218 10870 api_server.go:223] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0726 10:18:50.069009 10870 api_server.go:249] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed W0726 10:18:50.069032 10870 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok 
[+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed | I0726 10:18:50.566331 10870 api_server.go:223] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0726 10:18:50.569690 10870 api_server.go:249] https://192.168.49.2:8443/healthz returned 200: ok I0726 10:18:50.573009 10870 api_server.go:139] control plane version: v1.20.2 I0726 10:18:50.573017 10870 api_server.go:129] duration metric: took 4.264698376s to wait for apiserver health ... I0726 10:18:50.573025 10870 cni.go:93] Creating CNI manager for "" I0726 10:18:50.573030 10870 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0726 10:18:50.573034 10870 system_pods.go:43] waiting for kube-system pods to appear ... 
I0726 10:18:50.578723 10870 system_pods.go:59] 7 kube-system pods found I0726 10:18:50.578747 10870 system_pods.go:61] "coredns-74ff55c5b-tgr6s" [1438d889-f595-4fc1-8c18-311e7f79cb18] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0726 10:18:50.578753 10870 system_pods.go:61] "etcd-minikube" [6f6815ef-e70e-4341-9795-47e451b5c175] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd]) I0726 10:18:50.578760 10870 system_pods.go:61] "kube-apiserver-minikube" [018d5012-5e71-43c9-9d80-d72582811132] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver]) I0726 10:18:50.578765 10870 system_pods.go:61] "kube-controller-manager-minikube" [9982a6dc-99dd-4362-9866-348b30d1d0d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager]) I0726 10:18:50.578770 10870 system_pods.go:61] "kube-proxy-zs8qb" [6e0b3c95-9747-4b56-89ef-583814493442] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy]) I0726 10:18:50.578776 10870 system_pods.go:61] "kube-scheduler-minikube" [98790ee4-7131-4c64-a6b2-c8d0fd36516a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler]) I0726 10:18:50.578780 10870 system_pods.go:61] "storage-provisioner" [a520f36c-e20c-4484-921e-e4340e86bde3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady 
(containers with unready status: [storage-provisioner]) I0726 10:18:50.578785 10870 system_pods.go:74] duration metric: took 5.746807ms to wait for pod list to return data ... I0726 10:18:50.578791 10870 node_conditions.go:102] verifying NodePressure condition ... I0726 10:18:50.581007 10870 node_conditions.go:122] node storage ephemeral capacity is 229184876Ki I0726 10:18:50.581028 10870 node_conditions.go:123] node cpu capacity is 12 I0726 10:18:50.581040 10870 node_conditions.go:105] duration metric: took 2.244366ms to run NodePressure ... I0726 10:18:50.581053 10870 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml" / I0726 10:18:50.705814 10870 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0726 10:18:50.713705 10870 ops.go:34] apiserver oom_adj: -16 I0726 10:18:50.713717 10870 kubeadm.go:595] restartCluster took 12.380927783s I0726 10:18:50.713722 10870 kubeadm.go:383] StartCluster complete in 12.404311413s I0726 10:18:50.713733 10870 settings.go:142] acquiring lock: {Name:mk92da59dfffc00867d2848486c4ed9963bfca83 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0726 10:18:50.713777 10870 settings.go:150] Updating kubeconfig: /home/sergio/.kube/config I0726 10:18:50.714108 10870 lock.go:36] WriteFile acquiring /home/sergio/.kube/config: {Name:mk530b5cd91fb4f676ac80590f73329b83472169 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0726 10:18:50.714560 10870 kapi.go:59] client config for minikube: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:, 
AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/sergio/.minikube/profiles/minikube/client.crt", KeyFile:"/home/sergio/.minikube/profiles/minikube/client.key", CAFile:"/home/sergio/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
medyagh commented 3 years ago

@Ultimo12 the log file is missing the important part

Do you mind trying the "latest" version of minikube? Then please attach (drag) the logs.txt file to this issue, which can be generated by running this command:

$ minikube logs --file=logs.txt
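Alongside the cluster-wide log above, pod-level details often narrow a CrashLoopBackOff down faster. A minimal sketch of the usual `kubectl` checks — the pod name `my-pod` is a placeholder; substitute whatever `kubectl get pods -A` actually reports:

```shell
# Diagnostic sketch for a pod stuck in CrashLoopBackOff.
# "my-pod" is a hypothetical name; use the real pod from the listing.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -A              # locate the failing pod and its restart count
  kubectl describe pod my-pod      # the Events section shows why it keeps restarting
  kubectl logs my-pod --previous   # output of the last crashed container instance
else
  echo "kubectl not installed; run these commands against your minikube cluster"
fi
```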

This will help us isolate the problem further. Thank you!

/triage needs-information

/kind support

spowelljr commented 3 years ago

Hi @Ultimo12, we haven't heard back from you. Do you still have this issue? There isn't enough information in this issue to make it actionable, and enough time has passed that it is likely difficult to replicate.

I will close this issue for now, but feel free to reopen it when you are able to provide more information.