kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Kubernetes v1.20.3+ makes minikube ~1 minute slower! "[kubelet-check] Initial timeout of 40s passed." #10545

Closed (medyagh closed this issue 3 years ago)

medyagh commented 3 years ago

On my Mac, minikube v1.17.0 starts in about 40 seconds, but HEAD takes around 2 minutes. Most of the wait is in booting up the control plane (kubeadm init), and it only happens with Kubernetes v1.20.3+.
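For reference, this is roughly how the relevant lines below were pulled out of a verbose start (a sketch, assuming a clean profile; the log file path and grep patterns are only illustrative):

$ minikube delete
$ time minikube start --alsologtostderr -v=8 2>&1 | tee /tmp/minikube-start.log
$ grep -E 'kubelet-check|Booting up control plane|control plane components are healthy' /tmp/minikube-start.log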

I0220 15:44:24.521975   82905 out.go:140]     ▪ Booting up control plane ...

    ▪ Booting up control plane ...I0220 15:44:24.522119   82905 command_runner.go:123] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I0220 15:44:24.522344   82905 command_runner.go:123] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0220 15:44:24.522442   82905 command_runner.go:123] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I0220 15:44:24.522659   82905 command_runner.go:123] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0220 15:44:24.522894   82905 command_runner.go:123] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
| I0220 15:45:04.513403   82905 command_runner.go:123] > [kubelet-check] Initial timeout of 40s passed.
/ I0220 15:45:28.017602   82905 command_runner.go:123] > [apiclient] All control plane components are healthy after 63.506128 seconds
I0220 15:45:28.017909   82905 command_runner.go:123] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0220 15:45:28.031480   82905 command_runner.go:123] > [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
- I0220 15:45:28.557112   82905 command_runner.go:123] > [upload-certs] Skipping phase. Please see --upload-certs
I0220 15:45:28.557337   82905 command_runner.go:123] > [mark-control-plane] Marking the node minikube as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
| I0220 15:45:29.070246   82905 command_runner.go:123] > [bootstrap-token] Using token: ckmqq4.dov5m97q5ko44fpg
I0220 15:45:29.159638   82905 out.go:140]     ▪ Configuring RBAC rules ..
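The "[kubelet-check] Initial timeout of 40s passed." line is printed by kubeadm while it waits for the kubelet and the static control-plane pods to become healthy. A quick way to poke at the same state by hand from the host (a sketch; assumes the kubelet's default healthz port 10248 inside the minikube node):

$ minikube ssh -- curl -sS http://localhost:10248/healthz
$ minikube ssh -- sudo journalctl -u kubelet --no-pager | tail -n 50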

Timing for minikube v1.17.0 with Kubernetes v1.20.2

 $ time minikube-v1.17.0 start
😄  minikube v1.17.0 on Darwin 11.2
🔥  Creating docker container (CPUs=2, Memory=4000MB) ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...

real    0m40.697s
user    0m6.318s
sys     0m3.166s

Timing for minikube HEAD with Kubernetes v1.20.4 (images already downloaded before the run). Most of the time is spent at "Booting up control plane".

$ time mk start
😄  minikube v1.17.1 on Darwin 11.2
🐳  Preparing Kubernetes v1.20.4 on Docker 20.10.3 ...
    ▪ Booting up control plane ...

real    1m37.604s
user    0m6.586s
sys     0m3.609

Timing for minikube HEAD with Kubernetes v1.20.3

medya@~/workspace/minikube (master) $ time mk start --kubernetes-version=v1.20.3
🐳  Preparing Kubernetes v1.20.3 on Docker 20.10.3 ...

real    1m28.191s
user    0m5.650s
sys 0m3.184s

Timing for minikube HEAD with Kubernetes v1.20.2 (start time is back to normal)

medya@~/workspace/minikube (master) $ time mk start --kubernetes-version=v1.20.2
😄  minikube v1.17.1 on Darwin 11.2
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...

real    0m35.340s
user    0m5.587s
sys 0m2.954s
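The version comparison above can be reproduced as one loop (a sketch, assuming the docker driver and a deleted profile between runs; the version list is just what was tested here):

for v in v1.20.2 v1.20.3 v1.20.4; do
  minikube delete >/dev/null 2>&1
  echo "== Kubernetes $v =="
  time minikube start --kubernetes-version="$v" --driver=docker >/dev/null
done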

Full log:

medya@~/workspace/minikube (master) $ mk start --alsologtostderr -v=8
I0220 15:44:03.745667   82905 out.go:229] Setting OutFile to fd 1 ...
I0220 15:44:03.746555   82905 out.go:281] isatty.IsTerminal(1) = true
I0220 15:44:03.746563   82905 out.go:242] Setting ErrFile to fd 2...
I0220 15:44:03.746568   82905 out.go:281] isatty.IsTerminal(2) = true
I0220 15:44:03.746654   82905 root.go:308] Updating PATH: /Users/medya/.minikube/bin
I0220 15:44:03.747359   82905 out.go:236] Setting JSON to false
I0220 15:44:03.873931   82905 start.go:108] hostinfo: {"hostname":"medya-macbookpro3.roam.corp.google.com","uptime":1481364,"bootTime":1612383279,"procs":616,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"c86236b2-4976-3542-80ca-74a6b8b4ba03"}
W0220 15:44:03.874032   82905 start.go:116] gopshost.Virtualization returned error: not implemented yet
I0220 15:44:03.958113   82905 out.go:119] 😄  minikube v1.17.1 on Darwin 11.2
😄  minikube v1.17.1 on Darwin 11.2
I0220 15:44:03.960479   82905 driver.go:315] Setting default libvirt URI to qemu:///system
I0220 15:44:03.960581   82905 global.go:102] Querying for installed drivers using PATH=/Users/medya/.minikube/bin:/Users/medya/Downloads/google-cloud-sdk/bin:/usr/local/git/current/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/go/bin:/Users/medya/go/bin
I0220 15:44:03.960783   82905 global.go:110] parallels priority: 7, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0220 15:44:03.961024   82905 global.go:110] podman priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
I0220 15:44:03.961060   82905 global.go:110] ssh priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0220 15:44:04.302870   82905 global.go:110] virtualbox priority: 6, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0220 15:44:04.303028   82905 global.go:110] vmware priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I0220 15:44:04.303064   82905 global.go:110] vmwarefusion priority: 1, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:the 'vmwarefusion' driver is no longer available Reason: Fix:Switch to the newer 'vmware' driver by using '--driver=vmware'. This may require first deleting your existing cluster Doc:https://minikube.sigs.k8s.io/docs/drivers/vmware/}
I0220 15:44:04.429720   82905 docker.go:117] docker version: linux-20.10.2
I0220 15:44:04.429872   82905 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0220 15:44:04.972098   82905 info.go:253] docker info: {ID:PQPD:5ANS:KVOA:Q44R:UXEE:NIO7:ZUQK:Q7JD:SCGZ:23Y3:7JIE:AOMT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:42 OomKillDisable:true NGoroutines:48 SystemTime:2021-02-20 23:44:04.579597095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6237872128 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:gateway.docker.internal:3128 HTTPSProxy:gateway.docker.internal:3129 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:}}
I0220 15:44:04.972438   82905 global.go:110] docker priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0220 15:44:04.982429   82905 global.go:110] hyperkit priority: 8, state: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc:}
I0220 15:44:04.982754   82905 driver.go:261] "docker" has a higher priority (9) than "" (0)
I0220 15:44:04.982763   82905 driver.go:257] not recommending "ssh" due to priority: 4
I0220 15:44:04.982781   82905 driver.go:286] Picked: docker
I0220 15:44:04.982799   82905 driver.go:287] Alternatives: [hyperkit parallels virtualbox ssh]
I0220 15:44:04.982803   82905 driver.go:288] Rejects: [vmware vmwarefusion podman]
I0220 15:44:05.057960   82905 out.go:119] ✨  Automatically selected the docker driver. Other choices: hyperkit, parallels, virtualbox, ssh
✨  Automatically selected the docker driver. Other choices: hyperkit, parallels, virtualbox, ssh
I0220 15:44:05.058057   82905 start.go:272] selected driver: docker
I0220 15:44:05.058490   82905 start.go:714] validating driver "docker" against 
I0220 15:44:05.058580   82905 start.go:725] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0220 15:44:05.059063   82905 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0220 15:44:05.292582   82905 info.go:253] docker info: {ID:PQPD:5ANS:KVOA:Q44R:UXEE:NIO7:ZUQK:Q7JD:SCGZ:23Y3:7JIE:AOMT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:42 OomKillDisable:true NGoroutines:48 SystemTime:2021-02-20 23:44:05.215989117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6237872128 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:gateway.docker.internal:3128 HTTPSProxy:gateway.docker.internal:3129 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:}}
I0220 15:44:05.293389   82905 start_flags.go:249] no existing cluster config was found, will generate one from the flags 
I0220 15:44:05.294951   82905 start_flags.go:267] Using suggested 4000MB memory alloc based on sys=16384MB, container=5948MB
I0220 15:44:05.295573   82905 start_flags.go:671] Wait components to verify : map[apiserver:true system_pods:true]
I0220 15:44:05.296446   82905 cni.go:74] Creating CNI manager for ""
I0220 15:44:05.296461   82905 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
I0220 15:44:05.296470   82905 start_flags.go:390] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.17-1613846643-10477@sha256:1c101a31d1b5dca98f49be85bc0a673ff902428b969fd2dc0535c34cb38533a4 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false}
I0220 15:44:05.372880   82905 out.go:119] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I0220 15:44:05.692577   82905 image.go:92] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.17-1613846643-10477@sha256:1c101a31d1b5dca98f49be85bc0a673ff902428b969fd2dc0535c34cb38533a4 in local docker daemon, skipping pull
I0220 15:44:05.693102   82905 cache.go:116] gcr.io/k8s-minikube/kicbase-builds:v0.0.17-1613846643-10477@sha256:1c101a31d1b5dca98f49be85bc0a673ff902428b969fd2dc0535c34cb38533a4 exists in daemon, skipping pull
I0220 15:44:05.693401   82905 preload.go:97] Checking if preload exists for k8s version v1.20.4 and runtime docker
I0220 15:44:05.693502   82905 preload.go:105] Found local preload: /Users/medya/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.4-docker-overlay2-amd64.tar.lz4
I0220 15:44:05.693517   82905 cache.go:54] Caching tarball of preloaded images
I0220 15:44:05.693785   82905 preload.go:131] Found /Users/medya/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0220 15:44:05.693795   82905 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.4 on docker
I0220 15:44:05.694951   82905 profile.go:148] Saving config to /Users/medya/.minikube/profiles/minikube/config.json ...
I0220 15:44:05.694990   82905 lock.go:36] WriteFile acquiring /Users/medya/.minikube/profiles/minikube/config.json: {Name:mkcfdcaaa21816d14cd9720660d7b2e91b28d741 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0220 15:44:05.695294   82905 cache.go:185] Successfully downloaded all kic artifacts
I0220 15:44:05.695586   82905 start.go:313] acquiring machines lock for minikube: {Name:mk056ef9e1e774511ad280f3f358ff4888f064af Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0220 15:44:05.695649   82905 start.go:317] acquired machines lock for "minikube" in 51.836µs
I0220 15:44:05.695667   82905 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.17-1613846643-10477@sha256:1c101a31d1b5dca98f49be85bc0a673ff902428b969fd2dc0535c34cb38533a4 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.4 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.4 ControlPlane:true Worker:true}
I0220 15:44:05.695735   82905 start.go:126] createHost starting for "" (driver="docker")
I0220 15:44:05.783978   82905 out.go:140] 🔥  Creating docker container (CPUs=2, Memory=4000MB) ...
🔥  Creating docker container (CPUs=2, Memory=4000MB) ...| I0220 15:44:05.786382   82905 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I0220 15:44:05.788345   82905 client.go:168] LocalClient.Create starting
I0220 15:44:05.789863   82905 main.go:119] libmachine: Reading certificate data from /Users/medya/.minikube/certs/ca.pem
I0220 15:44:05.791775   82905 main.go:119] libmachine: Decoding PEM data...
I0220 15:44:05.796124   82905 main.go:119] libmachine: Parsing certificate...
I0220 15:44:05.805078   82905 main.go:119] libmachine: Reading certificate data from /Users/medya/.minikube/certs/cert.pem
I0220 15:44:05.805776   82905 main.go:119] libmachine: Decoding PEM data...
I0220 15:44:05.805807   82905 main.go:119] libmachine: Parsing certificate...
I0220 15:44:05.825384   82905 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
- W0220 15:44:05.990960   82905 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0220 15:44:05.991563   82905 network_create.go:240] running [docker network inspect minikube] to gather additional debugging logs...
I0220 15:44:05.991596   82905 cli_runner.go:115] Run: docker network inspect minikube
\ W0220 15:44:06.120805   82905 cli_runner.go:162] docker network inspect minikube returned with exit code 1
I0220 15:44:06.120842   82905 network_create.go:243] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I0220 15:44:06.120860   82905 network_create.go:245] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: minikube

** /stderr **
I0220 15:44:06.120973   82905 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
| I0220 15:44:06.250024   82905 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0220 15:44:06.250081   82905 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: minikube and gateway 192.168.49.1 and MTU of 1500 ...
I0220 15:44:06.250431   82905 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
- I0220 15:44:06.456737   82905 kic.go:101] calculated static IP "192.168.49.2" for the "minikube" container
I0220 15:44:06.457588   82905 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
\ I0220 15:44:06.592000   82905 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
/ I0220 15:44:06.717143   82905 oci.go:102] Successfully created a docker volume minikube
I0220 15:44:06.717296   82905 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.17-1613846643-10477@sha256:1c101a31d1b5dca98f49be85bc0a673ff902428b969fd2dc0535c34cb38533a4 -d /var/lib
| I0220 15:44:07.439689   82905 oci.go:106] Successfully prepared a docker volume minikube
I0220 15:44:07.440011   82905 preload.go:97] Checking if preload exists for k8s version v1.20.4 and runtime docker
I0220 15:44:07.440071   82905 preload.go:105] Found local preload: /Users/medya/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.4-docker-overlay2-amd64.tar.lz4
I0220 15:44:07.440158   82905 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0220 15:44:07.440681   82905 kic.go:164] Starting extracting preloaded images to volume ...
I0220 15:44:07.441042   82905 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/medya/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.17-1613846643-10477@sha256:1c101a31d1b5dca98f49be85bc0a673ff902428b969fd2dc0535c34cb38533a4 -I lz4 -xf /preloaded.tar -C /extractDir
- I0220 15:44:07.658147   82905 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase-builds:v0.0.17-1613846643-10477@sha256:1c101a31d1b5dca98f49be85bc0a673ff902428b969fd2dc0535c34cb38533a4
| I0220 15:44:08.636429   82905 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}}
- I0220 15:44:08.849093   82905 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
\ I0220 15:44:09.009432   82905 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
- I0220 15:44:09.292029   82905 oci.go:278] the created container "minikube" has a running status.
I0220 15:44:09.292065   82905 kic.go:195] Creating ssh key for kic: /Users/medya/.minikube/machines/minikube/id_rsa...
/ I0220 15:44:09.582387   82905 vm_assets.go:96] NewFileAsset: /Users/medya/.minikube/machines/minikube/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0220 15:44:09.583074   82905 kic_runner.go:188] docker (temp): /Users/medya/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
| I0220 15:44:09.867039   82905 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
- I0220 15:44:10.091080   82905 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0220 15:44:10.091110   82905 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
| I0220 15:44:14.378090   82905 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/medya/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.17-1613846643-10477@sha256:1c101a31d1b5dca98f49be85bc0a673ff902428b969fd2dc0535c34cb38533a4 -I lz4 -xf /preloaded.tar -C /extractDir: (6.936869069s)
I0220 15:44:14.379439   82905 kic.go:173] duration metric: took 6.939252 seconds to extract preloaded images to volume
I0220 15:44:14.379726   82905 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
/ I0220 15:44:14.521564   82905 machine.go:88] provisioning docker machine ...
I0220 15:44:14.522328   82905 ubuntu.go:169] provisioning hostname "minikube"
I0220 15:44:14.522930   82905 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
\ I0220 15:44:14.654558   82905 main.go:119] libmachine: Using SSH client type: native
I0220 15:44:14.655747   82905 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x440de60] 0x440de20   [] 0s} 127.0.0.1 55039  }
I0220 15:44:14.655764   82905 main.go:119] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
| I0220 15:44:14.807844   82905 main.go:119] libmachine: SSH cmd err, output: : minikube

I0220 15:44:14.808309   82905 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
/ I0220 15:44:14.936632   82905 main.go:119] libmachine: Using SSH client type: native
I0220 15:44:14.936857   82905 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x440de60] 0x440de20   [] 0s} 127.0.0.1 55039  }
I0220 15:44:14.936874   82905 main.go:119] libmachine: About to run SSH command:

        if ! grep -xq '.*\sminikube' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
            else 
                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
            fi
        fi
\ I0220 15:44:15.071746   82905 main.go:119] libmachine: SSH cmd err, output: : 
I0220 15:44:15.071820   82905 ubuntu.go:175] set auth options {CertDir:/Users/medya/.minikube CaCertPath:/Users/medya/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/medya/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/medya/.minikube/machines/server.pem ServerKeyPath:/Users/medya/.minikube/machines/server-key.pem ClientKeyPath:/Users/medya/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/medya/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/medya/.minikube}
I0220 15:44:15.071890   82905 ubuntu.go:177] setting up certificates
I0220 15:44:15.071920   82905 provision.go:83] configureAuth start
I0220 15:44:15.072132   82905 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
/ I0220 15:44:15.265151   82905 provision.go:137] copyHostCerts
I0220 15:44:15.265262   82905 vm_assets.go:96] NewFileAsset: /Users/medya/.minikube/certs/ca.pem -> /Users/medya/.minikube/ca.pem
I0220 15:44:15.265981   82905 exec_runner.go:145] found /Users/medya/.minikube/ca.pem, removing ...
I0220 15:44:15.266000   82905 exec_runner.go:190] rm: /Users/medya/.minikube/ca.pem
I0220 15:44:15.266787   82905 exec_runner.go:152] cp: /Users/medya/.minikube/certs/ca.pem --> /Users/medya/.minikube/ca.pem (1074 bytes)
I0220 15:44:15.267756   82905 vm_assets.go:96] NewFileAsset: /Users/medya/.minikube/certs/cert.pem -> /Users/medya/.minikube/cert.pem
I0220 15:44:15.267834   82905 exec_runner.go:145] found /Users/medya/.minikube/cert.pem, removing ...
I0220 15:44:15.267849   82905 exec_runner.go:190] rm: /Users/medya/.minikube/cert.pem
I0220 15:44:15.268342   82905 exec_runner.go:152] cp: /Users/medya/.minikube/certs/cert.pem --> /Users/medya/.minikube/cert.pem (1119 bytes)
I0220 15:44:15.268857   82905 vm_assets.go:96] NewFileAsset: /Users/medya/.minikube/certs/key.pem -> /Users/medya/.minikube/key.pem
I0220 15:44:15.268963   82905 exec_runner.go:145] found /Users/medya/.minikube/key.pem, removing ...
I0220 15:44:15.268992   82905 exec_runner.go:190] rm: /Users/medya/.minikube/key.pem
I0220 15:44:15.269624   82905 exec_runner.go:152] cp: /Users/medya/.minikube/certs/key.pem --> /Users/medya/.minikube/key.pem (1679 bytes)
I0220 15:44:15.270170   82905 provision.go:111] generating server cert: /Users/medya/.minikube/machines/server.pem ca-key=/Users/medya/.minikube/certs/ca.pem private-key=/Users/medya/.minikube/certs/ca-key.pem org=medya.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
- I0220 15:44:15.442750   82905 provision.go:165] copyRemoteCerts
I0220 15:44:15.443788   82905 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0220 15:44:15.444176   82905 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
| I0220 15:44:15.615621   82905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55039 SSHKeyPath:/Users/medya/.minikube/machines/minikube/id_rsa Username:docker}
/ I0220 15:44:15.710097   82905 vm_assets.go:96] NewFileAsset: /Users/medya/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0220 15:44:15.710240   82905 ssh_runner.go:310] scp /Users/medya/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes)
I0220 15:44:15.737241   82905 vm_assets.go:96] NewFileAsset: /Users/medya/.minikube/machines/server.pem -> /etc/docker/server.pem
I0220 15:44:15.737392   82905 ssh_runner.go:310] scp /Users/medya/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
- I0220 15:44:15.757379   82905 vm_assets.go:96] NewFileAsset: /Users/medya/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0220 15:44:15.757496   82905 ssh_runner.go:310] scp /Users/medya/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0220 15:44:15.781820   82905 provision.go:86] duration metric: configureAuth took 709.557665ms
I0220 15:44:15.781860   82905 ubuntu.go:193] setting minikube options for container-runtime
I0220 15:44:15.783333   82905 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
\ I0220 15:44:15.936528   82905 main.go:119] libmachine: Using SSH client type: native
I0220 15:44:15.936804   82905 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x440de60] 0x440de20   [] 0s} 127.0.0.1 55039  }
I0220 15:44:15.936825   82905 main.go:119] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
/ I0220 15:44:16.078877   82905 main.go:119] libmachine: SSH cmd err, output: : overlay

I0220 15:44:16.079195   82905 ubuntu.go:71] root file system type: overlay
I0220 15:44:16.079897   82905 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
I0220 15:44:16.080078   82905 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
- I0220 15:44:16.231801   82905 main.go:119] libmachine: Using SSH client type: native
I0220 15:44:16.232068   82905 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x440de60] 0x440de20   [] 0s} 127.0.0.1 55039  }
I0220 15:44:16.232132   82905 main.go:119] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
| I0220 15:44:16.404219   82905 main.go:119] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0220 15:44:16.404933   82905 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
- I0220 15:44:16.582522   82905 main.go:119] libmachine: Using SSH client type: native
I0220 15:44:16.582787   82905 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x440de60] 0x440de20   [] 0s} 127.0.0.1 55039  }
I0220 15:44:16.582839   82905 main.go:119] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
- I0220 15:44:17.797361   82905 main.go:119] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-01-29 14:31:32.000000000 +0000
+++ /lib/systemd/system/docker.service.new  2021-02-20 23:44:16.399319960 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP $MAINPID

 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0

 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500

 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0220 15:44:17.797500   82905 machine.go:91] provisioned docker machine in 3.333027499s
I0220 15:44:17.797516   82905 client.go:171] LocalClient.Create took 12.066147846s
I0220 15:44:17.797541   82905 start.go:168] duration metric: libmachine.API.Create for "minikube" took 12.068168562s
I0220 15:44:17.798087   82905 start.go:267] post-start starting for "minikube" (driver="docker")
I0220 15:44:17.798105   82905 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0220 15:44:17.799191   82905 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0220 15:44:17.799299   82905 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
| I0220 15:44:17.960781   82905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55039 SSHKeyPath:/Users/medya/.minikube/machines/minikube/id_rsa Username:docker}
/ I0220 15:44:18.057232   82905 ssh_runner.go:149] Run: cat /etc/os-release
I0220 15:44:18.061767   82905 command_runner.go:123] > NAME="Ubuntu"
I0220 15:44:18.061789   82905 command_runner.go:123] > VERSION="20.04.1 LTS (Focal Fossa)"
I0220 15:44:18.061794   82905 command_runner.go:123] > ID=ubuntu
I0220 15:44:18.061807   82905 command_runner.go:123] > ID_LIKE=debian
I0220 15:44:18.061814   82905 command_runner.go:123] > PRETTY_NAME="Ubuntu 20.04.1 LTS"
I0220 15:44:18.061822   82905 command_runner.go:123] > VERSION_ID="20.04"
I0220 15:44:18.061830   82905 command_runner.go:123] > HOME_URL="https://www.ubuntu.com/"
I0220 15:44:18.061845   82905 command_runner.go:123] > SUPPORT_URL="https://help.ubuntu.com/"
I0220 15:44:18.061853   82905 command_runner.go:123] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
I0220 15:44:18.061874   82905 command_runner.go:123] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
I0220 15:44:18.061880   82905 command_runner.go:123] > VERSION_CODENAME=focal
I0220 15:44:18.061884   82905 command_runner.go:123] > UBUNTU_CODENAME=focal
I0220 15:44:18.061979   82905 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0220 15:44:18.061997   82905 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0220 15:44:18.062007   82905 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0220 15:44:18.062013   82905 info.go:137] Remote host: Ubuntu 20.04.1 LTS
I0220 15:44:18.062291   82905 filesync.go:118] Scanning /Users/medya/.minikube/addons for local assets ...
I0220 15:44:18.062467   82905 filesync.go:118] Scanning /Users/medya/.minikube/files for local assets ...
I0220 15:44:18.062544   82905 start.go:270] post-start completed in 264.436835ms
I0220 15:44:18.065687   82905 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
- I0220 15:44:18.244877   82905 profile.go:148] Saving config to /Users/medya/.minikube/profiles/minikube/config.json ...
I0220 15:44:18.246654   82905 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0220 15:44:18.246799   82905 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
/ I0220 15:44:18.480760   82905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55039 SSHKeyPath:/Users/medya/.minikube/machines/minikube/id_rsa Username:docker}
- I0220 15:44:18.591265   82905 command_runner.go:123] > 10%
I0220 15:44:18.591296   82905 start.go:129] duration metric: createHost completed in 12.952542129s
I0220 15:44:18.591333   82905 start.go:80] releasing machines lock for "minikube", held for 12.952666358s
I0220 15:44:18.591783   82905 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
/ I0220 15:44:18.872143   82905 ssh_runner.go:149] Run: systemctl --version
I0220 15:44:18.872288   82905 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0220 15:44:18.873548   82905 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0220 15:44:18.875155   82905 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
- I0220 15:44:19.048163   82905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55039 SSHKeyPath:/Users/medya/.minikube/machines/minikube/id_rsa Username:docker}
I0220 15:44:19.048221   82905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55039 SSHKeyPath:/Users/medya/.minikube/machines/minikube/id_rsa Username:docker}
\ I0220 15:44:19.154409   82905 command_runner.go:123] > systemd 245 (245.4-4ubuntu3.4)
I0220 15:44:19.154458   82905 command_runner.go:123] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
I0220 15:44:19.155232   82905 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
/ I0220 15:44:19.354704   82905 command_runner.go:123] > 
I0220 15:44:19.354735   82905 command_runner.go:123] > 302 Moved
I0220 15:44:19.354756   82905 command_runner.go:123] > 
I0220 15:44:19.354764   82905 command_runner.go:123] > The document has moved
I0220 15:44:19.354773   82905 command_runner.go:123] > here.
I0220 15:44:19.354780   82905 command_runner.go:123] > 
I0220 15:44:19.355204   82905 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0220 15:44:19.367116   82905 command_runner.go:123] > # /lib/systemd/system/docker.service
I0220 15:44:19.367299   82905 command_runner.go:123] > [Unit]
I0220 15:44:19.367317   82905 command_runner.go:123] > Description=Docker Application Container Engine
I0220 15:44:19.367328   82905 command_runner.go:123] > Documentation=https://docs.docker.com
I0220 15:44:19.367336   82905 command_runner.go:123] > BindsTo=containerd.service
I0220 15:44:19.367346   82905 command_runner.go:123] > After=network-online.target firewalld.service containerd.service
I0220 15:44:19.367354   82905 command_runner.go:123] > Wants=network-online.target
I0220 15:44:19.367365   82905 command_runner.go:123] > Requires=docker.socket
I0220 15:44:19.367373   82905 command_runner.go:123] > StartLimitBurst=3
I0220 15:44:19.367379   82905 command_runner.go:123] > StartLimitIntervalSec=60
I0220 15:44:19.367385   82905 command_runner.go:123] > [Service]
I0220 15:44:19.367391   82905 command_runner.go:123] > Type=notify
I0220 15:44:19.367398   82905 command_runner.go:123] > Restart=on-failure
I0220 15:44:19.367410   82905 command_runner.go:123] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0220 15:44:19.367453   82905 command_runner.go:123] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0220 15:44:19.367477   82905 command_runner.go:123] > # here is to clear out that command inherited from the base configuration. Without this,
I0220 15:44:19.367495   82905 command_runner.go:123] > # the command from the base configuration and the command specified here are treated as
I0220 15:44:19.367519   82905 command_runner.go:123] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0220 15:44:19.367536   82905 command_runner.go:123] > # will catch this invalid input and refuse to start the service with an error like:
I0220 15:44:19.367545   82905 command_runner.go:123] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0220 15:44:19.367588   82905 command_runner.go:123] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0220 15:44:19.367603   82905 command_runner.go:123] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0220 15:44:19.367609   82905 command_runner.go:123] > ExecStart=
I0220 15:44:19.367627   82905 command_runner.go:123] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
I0220 15:44:19.367748   82905 command_runner.go:123] > ExecReload=/bin/kill -s HUP $MAINPID
I0220 15:44:19.367766   82905 command_runner.go:123] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0220 15:44:19.367792   82905 command_runner.go:123] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0220 15:44:19.367802   82905 command_runner.go:123] > LimitNOFILE=infinity
I0220 15:44:19.367809   82905 command_runner.go:123] > LimitNPROC=infinity
I0220 15:44:19.367816   82905 command_runner.go:123] > LimitCORE=infinity
I0220 15:44:19.367825   82905 command_runner.go:123] > # Uncomment TasksMax if your systemd version supports it.
I0220 15:44:19.367835   82905 command_runner.go:123] > # Only systemd 226 and above support this version.
I0220 15:44:19.367843   82905 command_runner.go:123] > TasksMax=infinity
I0220 15:44:19.367850   82905 command_runner.go:123] > TimeoutStartSec=0
I0220 15:44:19.367861   82905 command_runner.go:123] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0220 15:44:19.367869   82905 command_runner.go:123] > Delegate=yes
I0220 15:44:19.367879   82905 command_runner.go:123] > # kill only the docker process, not all processes in the cgroup
I0220 15:44:19.367886   82905 command_runner.go:123] > KillMode=process
I0220 15:44:19.367894   82905 command_runner.go:123] > [Install]
I0220 15:44:19.367901   82905 command_runner.go:123] > WantedBy=multi-user.target
- I0220 15:44:19.368628   82905 cruntime.go:206] skipping containerd shutdown because we are bound to it
I0220 15:44:19.369044   82905 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0220 15:44:19.381648   82905 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml"
I0220 15:44:19.401040   82905 command_runner.go:123] > runtime-endpoint: unix:///var/run/dockershim.sock
I0220 15:44:19.401070   82905 command_runner.go:123] > image-endpoint: unix:///var/run/dockershim.sock
I0220 15:44:19.401212   82905 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0220 15:44:19.414317   82905 command_runner.go:123] > # /lib/systemd/system/docker.service
I0220 15:44:19.415257   82905 command_runner.go:123] > [Unit]
I0220 15:44:19.415273   82905 command_runner.go:123] > Description=Docker Application Container Engine
I0220 15:44:19.415283   82905 command_runner.go:123] > Documentation=https://docs.docker.com
I0220 15:44:19.415290   82905 command_runner.go:123] > BindsTo=containerd.service
I0220 15:44:19.415300   82905 command_runner.go:123] > After=network-online.target firewalld.service containerd.service
I0220 15:44:19.415307   82905 command_runner.go:123] > Wants=network-online.target
I0220 15:44:19.415319   82905 command_runner.go:123] > Requires=docker.socket
I0220 15:44:19.415325   82905 command_runner.go:123] > StartLimitBurst=3
I0220 15:44:19.415332   82905 command_runner.go:123] > StartLimitIntervalSec=60
I0220 15:44:19.415341   82905 command_runner.go:123] > [Service]
I0220 15:44:19.415346   82905 command_runner.go:123] > Type=notify
I0220 15:44:19.415353   82905 command_runner.go:123] > Restart=on-failure
I0220 15:44:19.415364   82905 command_runner.go:123] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0220 15:44:19.415377   82905 command_runner.go:123] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0220 15:44:19.415404   82905 command_runner.go:123] > # here is to clear out that command inherited from the base configuration. Without this,
I0220 15:44:19.415420   82905 command_runner.go:123] > # the command from the base configuration and the command specified here are treated as
I0220 15:44:19.415447   82905 command_runner.go:123] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0220 15:44:19.415465   82905 command_runner.go:123] > # will catch this invalid input and refuse to start the service with an error like:
I0220 15:44:19.415481   82905 command_runner.go:123] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0220 15:44:19.415503   82905 command_runner.go:123] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0220 15:44:19.415513   82905 command_runner.go:123] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0220 15:44:19.415518   82905 command_runner.go:123] > ExecStart=
I0220 15:44:19.415539   82905 command_runner.go:123] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
I0220 15:44:19.415555   82905 command_runner.go:123] > ExecReload=/bin/kill -s HUP $MAINPID
I0220 15:44:19.415575   82905 command_runner.go:123] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0220 15:44:19.415594   82905 command_runner.go:123] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0220 15:44:19.415604   82905 command_runner.go:123] > LimitNOFILE=infinity
I0220 15:44:19.415616   82905 command_runner.go:123] > LimitNPROC=infinity
I0220 15:44:19.415622   82905 command_runner.go:123] > LimitCORE=infinity
I0220 15:44:19.415628   82905 command_runner.go:123] > # Uncomment TasksMax if your systemd version supports it.
I0220 15:44:19.415637   82905 command_runner.go:123] > # Only systemd 226 and above support this version.
I0220 15:44:19.415644   82905 command_runner.go:123] > TasksMax=infinity
I0220 15:44:19.415654   82905 command_runner.go:123] > TimeoutStartSec=0
I0220 15:44:19.415671   82905 command_runner.go:123] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0220 15:44:19.415684   82905 command_runner.go:123] > Delegate=yes
I0220 15:44:19.415695   82905 command_runner.go:123] > # kill only the docker process, not all processes in the cgroup
I0220 15:44:19.415708   82905 command_runner.go:123] > KillMode=process
I0220 15:44:19.415718   82905 command_runner.go:123] > [Install]
I0220 15:44:19.415728   82905 command_runner.go:123] > WantedBy=multi-user.target
I0220 15:44:19.417294   82905 ssh_runner.go:149] Run: sudo systemctl daemon-reload
\ I0220 15:44:19.491380   82905 ssh_runner.go:149] Run: sudo systemctl start docker
I0220 15:44:19.503858   82905 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
| I0220 15:44:19.572168   82905 command_runner.go:123] > 20.10.3
I0220 15:44:19.652556   82905 out.go:140] 🐳  Preparing Kubernetes v1.20.4 on Docker 20.10.3 ...
🐳  Preparing Kubernetes v1.20.4 on Docker 20.10.3 ...I0220 15:44:19.653411   82905 cli_runner.go:115] Run: docker exec -t minikube dig +short host.docker.internal
\ I0220 15:44:19.920033   82905 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
I0220 15:44:19.920694   82905 ssh_runner.go:149] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0220 15:44:19.931048   82905 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0220 15:44:19.954047   82905 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
/ I0220 15:44:20.130641   82905 localpath.go:87] copying /Users/medya/.minikube/client.crt -> /Users/medya/.minikube/profiles/minikube/client.crt
I0220 15:44:20.131381   82905 localpath.go:112] copying /Users/medya/.minikube/client.key -> /Users/medya/.minikube/profiles/minikube/client.key
I0220 15:44:20.132667   82905 preload.go:97] Checking if preload exists for k8s version v1.20.4 and runtime docker
I0220 15:44:20.132724   82905 preload.go:105] Found local preload: /Users/medya/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.4-docker-overlay2-amd64.tar.lz4
I0220 15:44:20.132915   82905 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
- I0220 15:44:20.184843   82905 command_runner.go:123] > k8s.gcr.io/kube-proxy:v1.20.4
I0220 15:44:20.184868   82905 command_runner.go:123] > k8s.gcr.io/kube-apiserver:v1.20.4
I0220 15:44:20.184880   82905 command_runner.go:123] > k8s.gcr.io/kube-controller-manager:v1.20.4
I0220 15:44:20.184893   82905 command_runner.go:123] > k8s.gcr.io/kube-scheduler:v1.20.4
I0220 15:44:20.184917   82905 command_runner.go:123] > kubernetesui/dashboard:v2.1.0
I0220 15:44:20.184927   82905 command_runner.go:123] > gcr.io/k8s-minikube/storage-provisioner:v4
I0220 15:44:20.184933   82905 command_runner.go:123] > k8s.gcr.io/etcd:3.4.13-0
I0220 15:44:20.184939   82905 command_runner.go:123] > k8s.gcr.io/coredns:1.7.0
I0220 15:44:20.184947   82905 command_runner.go:123] > kubernetesui/metrics-scraper:v1.0.4
I0220 15:44:20.184954   82905 command_runner.go:123] > k8s.gcr.io/pause:3.2
I0220 15:44:20.185372   82905 docker.go:423] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.4
k8s.gcr.io/kube-apiserver:v1.20.4
k8s.gcr.io/kube-controller-manager:v1.20.4
k8s.gcr.io/kube-scheduler:v1.20.4
kubernetesui/dashboard:v2.1.0
gcr.io/k8s-minikube/storage-provisioner:v4
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0220 15:44:20.185644   82905 docker.go:360] Images already preloaded, skipping extraction
I0220 15:44:20.185976   82905 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0220 15:44:20.229367   82905 command_runner.go:123] > k8s.gcr.io/kube-proxy:v1.20.4
I0220 15:44:20.229387   82905 command_runner.go:123] > k8s.gcr.io/kube-apiserver:v1.20.4
I0220 15:44:20.229394   82905 command_runner.go:123] > k8s.gcr.io/kube-controller-manager:v1.20.4
I0220 15:44:20.229399   82905 command_runner.go:123] > k8s.gcr.io/kube-scheduler:v1.20.4
I0220 15:44:20.229405   82905 command_runner.go:123] > kubernetesui/dashboard:v2.1.0
I0220 15:44:20.229413   82905 command_runner.go:123] > gcr.io/k8s-minikube/storage-provisioner:v4
I0220 15:44:20.229428   82905 command_runner.go:123] > k8s.gcr.io/etcd:3.4.13-0
I0220 15:44:20.229448   82905 command_runner.go:123] > k8s.gcr.io/coredns:1.7.0
I0220 15:44:20.229459 82905 command_runner.go:123] > kubernetesui/metrics-scraper:v1.0.4 I0220 15:44:20.229470 82905 command_runner.go:123] > k8s.gcr.io/pause:3.2 I0220 15:44:20.233659 82905 docker.go:423] Got preloaded images: -- stdout -- k8s.gcr.io/kube-proxy:v1.20.4 k8s.gcr.io/kube-apiserver:v1.20.4 k8s.gcr.io/kube-controller-manager:v1.20.4 k8s.gcr.io/kube-scheduler:v1.20.4 kubernetesui/dashboard:v2.1.0 gcr.io/k8s-minikube/storage-provisioner:v4 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/pause:3.2 -- /stdout -- I0220 15:44:20.233731 82905 cache_images.go:73] Images are preloaded, skipping loading I0220 15:44:20.234243 82905 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}} \ I0220 15:44:20.369783 82905 command_runner.go:123] > cgroupfs I0220 15:44:20.374710 82905 cni.go:74] Creating CNI manager for "" I0220 15:44:20.374763 82905 cni.go:140] CNI unnecessary in this configuration, recommending no CNI I0220 15:44:20.375244 82905 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16 I0220 15:44:20.375284 82905 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0220 15:44:20.376020 82905 kubeadm.go:154] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta2 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "minikube" kubeletExtraArgs: node-ip: 192.168.49.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 dns: type: CoreDNS etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.20.4 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt 
cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%" nodefs.inodesFree: "0%" imagefs.available: "0%" failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 192.168.49.2:10249 I0220 15:44:20.377052 82905 kubeadm.go:905] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.20.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.20.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0220 15:44:20.377191 82905 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.4 | I0220 15:44:20.389400 82905 command_runner.go:123] > kubeadm I0220 15:44:20.389426 82905 command_runner.go:123] > kubectl I0220 15:44:20.389435 82905 command_runner.go:123] > kubelet I0220 15:44:20.391854 82905 binaries.go:44] Found k8s binaries, skipping transfer I0220 15:44:20.393330 82905 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0220 15:44:20.405969 82905 ssh_runner.go:310] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes) I0220 15:44:20.425027 82905 ssh_runner.go:310] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0220 15:44:20.442148 82905 ssh_runner.go:310] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1845 bytes) I0220 15:44:20.460616 82905 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0220 15:44:20.470184 82905 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts" / I0220 15:44:20.493546 82905 certs.go:52] Setting up /Users/medya/.minikube/profiles/minikube for IP: 192.168.49.2 I0220 15:44:20.494098 82905 certs.go:171] skipping minikubeCA CA generation: /Users/medya/.minikube/ca.key I0220 15:44:20.494226 82905 certs.go:171] skipping proxyClientCA CA generation: /Users/medya/.minikube/proxy-client-ca.key I0220 15:44:20.494712 82905 certs.go:275] skipping minikube-user signed cert generation: /Users/medya/.minikube/profiles/minikube/client.key I0220 15:44:20.494745 82905 certs.go:279] generating minikube signed cert: /Users/medya/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 I0220 15:44:20.495338 82905 crypto.go:69] Generating cert /Users/medya/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] | I0220 15:44:20.794941 82905 crypto.go:157] Writing cert to /Users/medya/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ... 
I0220 15:44:20.794959 82905 lock.go:36] WriteFile acquiring /Users/medya/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk1b420d73491b5f7ccd0bb5ceb42e91cf0ff2aa Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0220 15:44:20.795286 82905 crypto.go:165] Writing key to /Users/medya/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ... I0220 15:44:20.795296 82905 lock.go:36] WriteFile acquiring /Users/medya/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mkcdbaf1f7076ca1c8364219eeb3f099d0d3135e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0220 15:44:20.795480 82905 certs.go:290] copying /Users/medya/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /Users/medya/.minikube/profiles/minikube/apiserver.crt I0220 15:44:20.795674 82905 certs.go:294] copying /Users/medya/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /Users/medya/.minikube/profiles/minikube/apiserver.key I0220 15:44:20.796773 82905 certs.go:279] generating aggregator signed cert: /Users/medya/.minikube/profiles/minikube/proxy-client.key I0220 15:44:20.796793 82905 crypto.go:69] Generating cert /Users/medya/.minikube/profiles/minikube/proxy-client.crt with IP's: [] / I0220 15:44:20.952872 82905 crypto.go:157] Writing cert to /Users/medya/.minikube/profiles/minikube/proxy-client.crt ... I0220 15:44:20.952915 82905 lock.go:36] WriteFile acquiring /Users/medya/.minikube/profiles/minikube/proxy-client.crt: {Name:mkd78d7d6ff919c02bc728dfedf8734089fc1c87 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0220 15:44:20.953420 82905 crypto.go:165] Writing key to /Users/medya/.minikube/profiles/minikube/proxy-client.key ... I0220 15:44:20.953456 82905 lock.go:36] WriteFile acquiring /Users/medya/.minikube/profiles/minikube/proxy-client.key: {Name:mk8d833a893ef6367116b3c6f13fc90a3a29812e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0220 15:44:20.953760 82905 vm_assets.go:96] NewFileAsset: /Users/medya/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt I0220 15:44:20.953823 82905 vm_assets.go:96] NewFileAsset: /Users/medya/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key I0220 15:44:20.953854 82905 vm_assets.go:96] NewFileAsset: /Users/medya/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt I0220 15:44:20.953884 82905 vm_assets.go:96] NewFileAsset: /Users/medya/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key I0220 15:44:20.953912 82905 vm_assets.go:96] NewFileAsset: /Users/medya/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt I0220 15:44:20.953946 82905 vm_assets.go:96] NewFileAsset: /Users/medya/.minikube/ca.key -> /var/lib/minikube/certs/ca.key I0220 15:44:20.953977 82905 vm_assets.go:96] NewFileAsset: /Users/medya/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt I0220 15:44:20.954007 82905 vm_assets.go:96] NewFileAsset: /Users/medya/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key I0220 15:44:20.954175 82905 certs.go:354] found cert: /Users/medya/.minikube/certs/Users/medya/.minikube/certs/ca-key.pem (1675 bytes) I0220 15:44:20.954246 82905 certs.go:354] found cert: /Users/medya/.minikube/certs/Users/medya/.minikube/certs/ca.pem (1074 bytes) I0220 15:44:20.954301 82905 certs.go:354] found cert: /Users/medya/.minikube/certs/Users/medya/.minikube/certs/cert.pem (1119 bytes) I0220 15:44:20.954372 82905 certs.go:354] found cert: /Users/medya/.minikube/certs/Users/medya/.minikube/certs/key.pem (1679 bytes) I0220 15:44:20.954419 
82905 vm_assets.go:96] NewFileAsset: /Users/medya/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem I0220 15:44:20.956648 82905 ssh_runner.go:310] scp /Users/medya/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0220 15:44:20.979693 82905 ssh_runner.go:310] scp /Users/medya/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) - I0220 15:44:21.019014 82905 ssh_runner.go:310] scp /Users/medya/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0220 15:44:21.055478 82905 ssh_runner.go:310] scp /Users/medya/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0220 15:44:21.078275 82905 ssh_runner.go:310] scp /Users/medya/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) \ I0220 15:44:21.099280 82905 ssh_runner.go:310] scp /Users/medya/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes) I0220 15:44:21.118948 82905 ssh_runner.go:310] scp /Users/medya/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0220 15:44:21.137533 82905 ssh_runner.go:310] scp /Users/medya/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0220 15:44:21.158403 82905 ssh_runner.go:310] scp /Users/medya/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0220 15:44:21.179619 82905 ssh_runner.go:310] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) | I0220 15:44:21.194839 82905 ssh_runner.go:149] Run: openssl version I0220 15:44:21.201121 82905 command_runner.go:123] > OpenSSL 1.1.1f 31 Mar 2020 I0220 15:44:21.201255 82905 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0220 15:44:21.213822 82905 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0220 15:44:21.219024 82905 command_runner.go:123] > -rw-r--r-- 1 root root 1111 Oct 24 00:33 /usr/share/ca-certificates/minikubeCA.pem I0220 15:44:21.219066 82905 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Oct 24 00:33 /usr/share/ca-certificates/minikubeCA.pem I0220 15:44:21.219194 82905 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0220 15:44:21.226287 82905 command_runner.go:123] > b5213941 I0220 15:44:21.226693 82905 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0220 15:44:21.236127 82905 kubeadm.go:371] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.17-1613846643-10477@sha256:1c101a31d1b5dca98f49be85bc0a673ff902428b969fd2dc0535c34cb38533a4 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.4 
ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.4 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} I0220 15:44:21.236328 82905 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0220 15:44:21.275522 82905 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0220 15:44:21.284018 82905 command_runner.go:123] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory I0220 15:44:21.284048 82905 command_runner.go:123] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory I0220 15:44:21.284059 82905 command_runner.go:123] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory I0220 15:44:21.285042 82905 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml / I0220 15:44:21.294577 82905 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver I0220 15:44:21.294777 82905 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0220 15:44:21.302778 82905 command_runner.go:123] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory I0220 15:44:21.302800 82905 command_runner.go:123] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory I0220 15:44:21.302810 82905 command_runner.go:123] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory I0220 15:44:21.302825 82905 command_runner.go:123] ! 
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0220 15:44:21.303455 82905 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0220 15:44:21.303497 82905 ssh_runner.go:236] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" \ I0220 15:44:21.506467 82905 command_runner.go:123] > [init] Using Kubernetes version: v1.20.4 I0220 15:44:21.506585 82905 command_runner.go:123] > [preflight] Running pre-flight checks \ I0220 15:44:21.929878 82905 command_runner.go:123] > [preflight] Pulling images required for setting up a Kubernetes cluster I0220 15:44:21.930026 82905 command_runner.go:123] > [preflight] This might take a minute or two, depending on the speed of your internet connection I0220 15:44:21.930173 82905 command_runner.go:123] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' / I0220 15:44:22.174618 82905 command_runner.go:123] > [certs] Using certificateDir folder "/var/lib/minikube/certs" - I0220 15:44:22.247653 82905 out.go:140] ▪ Generating certificates and keys ... 
▪ Generating certificates and keys ...I0220 15:44:22.247779 82905 command_runner.go:123] > [certs] Using existing ca certificate authority I0220 15:44:22.247903 82905 command_runner.go:123] > [certs] Using existing apiserver certificate and key on disk \ I0220 15:44:22.367834 82905 command_runner.go:123] > [certs] Generating "apiserver-kubelet-client" certificate and key / I0220 15:44:22.586912 82905 command_runner.go:123] > [certs] Generating "front-proxy-ca" certificate and key - I0220 15:44:22.661982 82905 command_runner.go:123] > [certs] Generating "front-proxy-client" certificate and key | I0220 15:44:22.814927 82905 command_runner.go:123] > [certs] Generating "etcd/ca" certificate and key - I0220 15:44:23.114670 82905 command_runner.go:123] > [certs] Generating "etcd/server" certificate and key I0220 15:44:23.114866 82905 command_runner.go:123] > [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1] | I0220 15:44:23.276813 82905 command_runner.go:123] > [certs] Generating "etcd/peer" certificate and key I0220 15:44:23.277148 82905 command_runner.go:123] > [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1] - I0220 15:44:23.521750 82905 command_runner.go:123] > [certs] Generating "etcd/healthcheck-client" certificate and key \ I0220 15:44:23.615448 82905 command_runner.go:123] > [certs] Generating "apiserver-etcd-client" certificate and key / I0220 15:44:23.768308 82905 command_runner.go:123] > [certs] Generating "sa" key and public key I0220 15:44:23.768448 82905 command_runner.go:123] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes" \ I0220 15:44:23.960277 82905 command_runner.go:123] > [kubeconfig] Writing "admin.conf" kubeconfig file / I0220 15:44:24.145640 82905 command_runner.go:123] > [kubeconfig] Writing "kubelet.conf" kubeconfig file I0220 15:44:24.196261 82905 command_runner.go:123] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file \ I0220 15:44:24.343561 82905 command_runner.go:123] > [kubeconfig] Writing "scheduler.conf" kubeconfig file I0220 15:44:24.369486 82905 command_runner.go:123] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" I0220 15:44:24.374520 82905 command_runner.go:123] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" I0220 15:44:24.374621 82905 command_runner.go:123] > [kubelet-start] Starting the kubelet | I0220 15:44:24.496334 82905 command_runner.go:123] > [control-plane] Using manifest folder "/etc/kubernetes/manifests" I0220 15:44:24.521975 82905 out.go:140] ▪ Booting up control plane ... ▪ Booting up control plane ...I0220 15:44:24.522119 82905 command_runner.go:123] > [control-plane] Creating static Pod manifest for "kube-apiserver" I0220 15:44:24.522344 82905 command_runner.go:123] > [control-plane] Creating static Pod manifest for "kube-controller-manager" I0220 15:44:24.522442 82905 command_runner.go:123] > [control-plane] Creating static Pod manifest for "kube-scheduler" I0220 15:44:24.522659 82905 command_runner.go:123] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" I0220 15:44:24.522894 82905 command_runner.go:123] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s | I0220 15:45:04.513403 82905 command_runner.go:123] > [kubelet-check] Initial timeout of 40s passed. 
/ I0220 15:45:28.017602 82905 command_runner.go:123] > [apiclient] All control plane components are healthy after 63.506128 seconds I0220 15:45:28.017909 82905 command_runner.go:123] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace I0220 15:45:28.031480 82905 command_runner.go:123] > [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster - I0220 15:45:28.557112 82905 command_runner.go:123] > [upload-certs] Skipping phase. Please see --upload-certs I0220 15:45:28.557337 82905 command_runner.go:123] > [mark-control-plane] Marking the node minikube as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)" | I0220 15:45:29.070246 82905 command_runner.go:123] > [bootstrap-token] Using token: ckmqq4.dov5m97q5ko44fpg I0220 15:45:29.159638 82905 out.go:140] ▪ Configuring RBAC rules ... ▪ Configuring RBAC rules ...I0220 15:45:29.160259 82905 command_runner.go:123] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles I0220 15:45:29.160871 82905 command_runner.go:123] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes / I0220 15:45:29.174286 82905 command_runner.go:123] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials I0220 15:45:29.179384 82905 command_runner.go:123] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token I0220 15:45:29.185676 82905 command_runner.go:123] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster I0220 15:45:29.192452 82905 command_runner.go:123] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace I0220 15:45:29.215712 82905 command_runner.go:123] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key \ I0220 15:45:29.464063 82905 command_runner.go:123] > [addons] Applied essential addon: CoreDNS | I0220 15:45:29.543498 82905 command_runner.go:123] > [addons] Applied essential addon: kube-proxy I0220 15:45:29.545502 82905 command_runner.go:123] > Your Kubernetes control-plane has initialized successfully! I0220 15:45:29.545706 82905 command_runner.go:123] > To start using your cluster, you need to run the following as a regular user: I0220 15:45:29.545757 82905 command_runner.go:123] > mkdir -p $HOME/.kube I0220 15:45:29.545866 82905 command_runner.go:123] > sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config I0220 15:45:29.545983 82905 command_runner.go:123] > sudo chown $(id -u):$(id -g) $HOME/.kube/config I0220 15:45:29.546162 82905 command_runner.go:123] > Alternatively, if you are the root user, you can run: I0220 15:45:29.546254 82905 command_runner.go:123] > export KUBECONFIG=/etc/kubernetes/admin.conf I0220 15:45:29.546404 82905 command_runner.go:123] > You should now deploy a pod network to the cluster. 
I0220 15:45:29.546614 82905 command_runner.go:123] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: I0220 15:45:29.546770 82905 command_runner.go:123] > https://kubernetes.io/docs/concepts/cluster-administration/addons/ I0220 15:45:29.546941 82905 command_runner.go:123] > You can now join any number of control-plane nodes by copying certificate authorities I0220 15:45:29.547076 82905 command_runner.go:123] > and service account keys on each node and then running the following as root: I0220 15:45:29.547283 82905 command_runner.go:123] > kubeadm join control-plane.minikube.internal:8443 --token ckmqq4.dov5m97q5ko44fpg \ I0220 15:45:29.547475 82905 command_runner.go:123] > --discovery-token-ca-cert-hash sha256:3341be5aed211bfacb8ec8d6bd1fb552ba72844439f715fbfeb5b3fcd5ee208c \ I0220 15:45:29.547514 82905 command_runner.go:123] > --control-plane I0220 15:45:29.547659 82905 command_runner.go:123] > Then you can join any number of worker nodes by running the following on each as root: I0220 15:45:29.547800 82905 command_runner.go:123] > kubeadm join control-plane.minikube.internal:8443 --token ckmqq4.dov5m97q5ko44fpg \ I0220 15:45:29.547994 82905 command_runner.go:123] > --discovery-token-ca-cert-hash sha256:3341be5aed211bfacb8ec8d6bd1fb552ba72844439f715fbfeb5b3fcd5ee208c I0220 15:45:29.555759 82905 cni.go:74] Creating CNI manager for "" I0220 15:45:29.555836 82905 cni.go:140] CNI unnecessary in this configuration, recommending no CNI I0220 15:45:29.556369 82905 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0220 15:45:29.556510 82905 command_runner.go:123] ! [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ I0220 15:45:29.556850 82905 command_runner.go:123] ! [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist I0220 15:45:29.556862 82905 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0220 15:45:29.556877 82905 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.4/kubectl label nodes minikube.k8s.io/version=v1.17.1 minikube.k8s.io/commit=413cdf6e5b8f821ec451a45e4e153c7040802442 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_02_20T15_45_29_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0220 15:45:29.556987 82905 command_runner.go:123] ! [WARNING Swap]: running with swap on is not supported. Please disable swap I0220 15:45:29.557271 82905 command_runner.go:123] ! [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03 I0220 15:45:29.557543 82905 command_runner.go:123] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' / I0220 15:45:29.604392 82905 command_runner.go:123] > -16 I0220 15:45:29.604447 82905 ops.go:34] apiserver oom_adj: -16 | I0220 15:45:29.912873 82905 command_runner.go:123] > node/minikube labeled I0220 15:45:29.921441 82905 command_runner.go:123] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created I0220 15:45:29.921481 82905 kubeadm.go:981] duration metric: took 365.13681ms to wait for elevateKubeSystemPrivileges. 
I0220 15:45:29.921498 82905 kubeadm.go:373] StartCluster complete in 1m8.685295901s I0220 15:45:29.922177 82905 settings.go:142] acquiring lock: {Name:mkd0284ca69bdf79a9a1575487bea0e283dfb439 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0220 15:45:29.922414 82905 settings.go:150] Updating kubeconfig: /Users/medya/.kube/config I0220 15:45:29.924168 82905 lock.go:36] WriteFile acquiring /Users/medya/.kube/config: {Name:mk9fd218cbc52506c8b67871ae522c88260d21af Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0220 15:45:29.925789 82905 start.go:202] Will wait 6m0s for node up to / I0220 15:45:29.925993 82905 addons.go:372] enableAddons start: toEnable=map[], additional=[] I0220 15:45:30.021497 82905 out.go:119] 🔎 Verifying Kubernetes components... I0220 15:45:29.926164 82905 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.4/kubectl scale deployment --replicas=1 coredns -n=kube-system 🔎 Verifying Kubernetes components... I0220 15:45:30.021576 82905 addons.go:55] Setting storage-provisioner=true in profile "minikube" I0220 15:45:30.021662 82905 addons.go:55] Setting default-storageclass=true in profile "minikube" I0220 15:45:30.021695 82905 addons.go:275] enableOrDisableStorageClasses default-storageclass=true on "minikube" I0220 15:45:30.021907 82905 addons.go:131] Setting addon storage-provisioner=true in "minikube" W0220 15:45:30.021920 82905 addons.go:140] addon storage-provisioner should already be in state true I0220 15:45:30.021949 82905 host.go:66] Checking if "minikube" exists ... I0220 15:45:30.022522 82905 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet I0220 15:45:30.022795 82905 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I0220 15:45:30.025602 82905 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I0220 15:45:30.150570 82905 command_runner.go:123] > deployment.apps/coredns scaled I0220 15:45:30.151139 82905 start.go:601] successfully scaled coredns replicas to 1 I0220 15:45:30.151911 82905 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube I0220 15:45:30.530917 82905 kapi.go:59] client config for minikube: &rest.Config{Host:"https://127.0.0.1:55036", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/medya/.minikube/profiles/minikube/client.crt", KeyFile:"/Users/medya/.minikube/profiles/minikube/client.key", CAFile:"/Users/medya/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x563bb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)} I0220 15:45:30.530913 82905 kapi.go:59] client config for minikube: &rest.Config{Host:"https://127.0.0.1:55036", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", 
ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/medya/.minikube/profiles/minikube/client.crt", KeyFile:"/Users/medya/.minikube/profiles/minikube/client.key", CAFile:"/Users/medya/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x563bb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)} I0220 15:45:30.602457 82905 out.go:119] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v4 ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v4 I0220 15:45:30.602767 82905 addons.go:244] installing /etc/kubernetes/addons/storage-provisioner.yaml I0220 15:45:30.602776 82905 ssh_runner.go:310] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0220 15:45:30.602929 82905 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0220 15:45:30.618460 82905 api_server.go:48] waiting for apiserver process to appear ... I0220 15:45:30.618718 82905 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0220 15:45:30.677400 82905 addons.go:131] Setting addon default-storageclass=true in "minikube" W0220 15:45:30.677689 82905 addons.go:140] addon default-storageclass should already be in state true I0220 15:45:30.677913 82905 host.go:66] Checking if "minikube" exists ... I0220 15:45:30.681417 82905 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I0220 15:45:30.697748 82905 command_runner.go:123] > 2004 I0220 15:45:30.697806 82905 api_server.go:68] duration metric: took 771.984851ms to wait for apiserver process to appear ... I0220 15:45:30.697819 82905 api_server.go:84] waiting for apiserver healthz status ... I0220 15:45:30.697834 82905 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55036/healthz ... I0220 15:45:30.721051 82905 api_server.go:241] https://127.0.0.1:55036/healthz returned 200: ok I0220 15:45:30.725965 82905 api_server.go:137] control plane version: v1.20.4 I0220 15:45:30.725989 82905 api_server.go:127] duration metric: took 28.162837ms to wait for apiserver health ... I0220 15:45:30.726133 82905 system_pods.go:41] waiting for kube-system pods to appear ... I0220 15:45:30.750628 82905 system_pods.go:57] 3 kube-system pods found I0220 15:45:30.750713 82905 system_pods.go:59] "etcd-minikube" [2067dc18-d65e-4d47-9b91-2452b17b413c] Pending I0220 15:45:30.750720 82905 system_pods.go:59] "kube-apiserver-minikube" [7dd55802-2b64-4f6b-b8cc-18a43d61d5c3] Pending I0220 15:45:30.750728 82905 system_pods.go:59] "kube-controller-manager-minikube" [d0cb906d-e9ad-4d3e-a6ac-061880a29cd5] Pending I0220 15:45:30.750741 82905 system_pods.go:72] duration metric: took 24.590629ms to wait for pod list to return data ... I0220 15:45:30.750760 82905 kubeadm.go:527] duration metric: took 824.939678ms to wait for : map[apiserver:true system_pods:true] ... 
I0220 15:45:30.750796 82905 node_conditions.go:101] verifying NodePressure condition ... I0220 15:45:30.781116 82905 node_conditions.go:121] node storage ephemeral capacity is 61255492Ki I0220 15:45:30.781198 82905 node_conditions.go:122] node cpu capacity is 4 I0220 15:45:30.782590 82905 node_conditions.go:104] duration metric: took 31.382397ms to run NodePressure ... I0220 15:45:30.782644 82905 start.go:207] waiting for startup goroutines ... I0220 15:45:30.866401 82905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55039 SSHKeyPath:/Users/medya/.minikube/machines/minikube/id_rsa Username:docker} I0220 15:45:30.952268 82905 addons.go:244] installing /etc/kubernetes/addons/storageclass.yaml I0220 15:45:30.952300 82905 ssh_runner.go:310] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0220 15:45:30.952496 82905 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0220 15:45:31.074901 82905 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0220 15:45:31.279023 82905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55039 SSHKeyPath:/Users/medya/.minikube/machines/minikube/id_rsa Username:docker} I0220 15:45:31.489243 82905 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0220 15:45:34.822599 82905 command_runner.go:123] > serviceaccount/storage-provisioner created I0220 15:45:34.822637 82905 command_runner.go:123] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created I0220 15:45:34.822649 82905 command_runner.go:123] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created I0220 15:45:34.822663 82905 command_runner.go:123] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created I0220 15:45:34.822670 82905 command_runner.go:123] > endpoints/k8s.io-minikube-hostpath created I0220 15:45:34.822679 82905 command_runner.go:123] > pod/storage-provisioner created I0220 15:45:34.822699 82905 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.74775136s) I0220 15:45:34.822727 82905 command_runner.go:123] > storageclass.storage.k8s.io/standard created I0220 15:45:34.822783 82905 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.333503051s) I0220 15:45:34.905435 82905 out.go:119] 🌟 Enabled addons: storage-provisioner, default-storageclass 🌟 Enabled addons: storage-provisioner, default-storageclass I0220 15:45:34.905471 82905 addons.go:374] enableAddons completed in 4.979485304s I0220 15:45:35.944763 82905 start.go:456] kubectl: 1.20.4, cluster: 1.20.4 (minor skew: 0) I0220 15:45:35.994642 82905 out.go:119] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
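(For readers skimming the wall of log above: the slowdown shows up as the ~40 second gap between the [wait-control-plane] message at 15:44:24 and "[kubelet-check] Initial timeout of 40s passed" at 15:45:04, with the control plane only reported healthy at 15:45:28 after 63.5 seconds. A rough way to pull just those markers out of a saved --alsologtostderr dump, assuming it was redirected to a file such as start.log, is:

 $ grep -E 'wait-control-plane|kubelet-check|All control plane components' start.log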
medyagh commented 3 years ago

I tested --kubernetes-version=v1.20.5-rc.0 and that one faces the same issue.

 $ time mk start --kubernetes-version=v1.20.5-rc.0

real    1m28.787s
user    0m6.547s
sys 0m3.701s
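
(Side note for anyone reproducing this: a small loop like the one below, sketched with mk as the alias for the HEAD build used above, compares clean-start times across versions; running minikube delete between runs keeps the timings comparable.)

 $ for v in v1.20.2 v1.20.3 v1.20.4 v1.20.5-rc.0; do
     mk delete
     echo "== $v =="
     time mk start --kubernetes-version=$v
   done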
medyagh commented 3 years ago

Here is a screenshot of the performance metrics dashboard:

Screen Shot 2021-02-21 at 11 10 34 AM
gbraad commented 3 years ago

We see the same thing happening in our environments.

sharifelgamal commented 3 years ago

The next Kubernetes patch releases (1.21.1 and backports) should have a fix for this issue.

r10r commented 3 years ago

@sharifelgamal No, the issue is still present in v1.21.1 and v1.20.7.

Does anyone know which release will contain the fix?

sharifelgamal commented 3 years ago

The minikube 1.21 beta upgrades its default Kubernetes version to 1.20.7 and has no performance issues. The GA 1.21 release should be out soon.

r10r commented 3 years ago

> The minikube 1.21 beta upgrades its default Kubernetes version to 1.20.7 and has no performance issues. The GA 1.21 release should be out soon.

Hmm, I'm not using minikube but a custom setup - I'll double-check it then.

Edit: You're right - it was a misconfigured proxy environment that delayed startup here. kubeadm init takes roughly 21 seconds to finish.
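
(For anyone else who sees kubeadm init stall behind a corporate proxy: the usual fix is to exclude the cluster-internal addresses from proxying before starting the cluster; the ranges below are only illustrative and depend on the driver and service CIDR in use.)

 $ export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.49.0/24
 $ export no_proxy=$NO_PROXY
 $ minikube start   # or kubeadm init for a custom setup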

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

sharifelgamal commented 3 years ago

This is all taken care of: if someone uses one of the affected Kubernetes versions, we display a warning on the screen. Otherwise, our default k8s versions are all performant.