kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Ingress addon not working on Apple m1 (using docker driver) #10847

Closed: jordicea closed this issue 3 years ago

jordicea commented 3 years ago

Steps to reproduce the issue:

  1. minikube start --vm-driver=docker --memory 4096 --kubernetes-version "v1.20.0" --disk-size "70g" --cpus 3 --wait=false
  2. minikube addons enable ingress

Full output of failed command:

X Exiting due to MK_USAGE: Due to networking limitations of driver docker on darwin, ingress addon is not supported.
Alternatively to use this addon you can use a vm-based driver:

    'minikube start --vm=true'

To track the update on this work in progress feature please check:
https://github.com/kubernetes/minikube/issues/7332
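The workaround suggested by the error message can be sketched as follows. Note these are command sketches, not a verified fix for this issue: `--vm=true` lets minikube pick a VM-based driver, which at the time meant hyperkit on macOS, and hyperkit only runs on Intel Macs, not Apple Silicon.

```shell
# Remove the existing docker-driver cluster first.
minikube delete

# Suggested by the error message: use a VM-based driver instead of docker.
# On an Intel Mac this selects hyperkit; on an M1 Mac a usable VM driver
# may not be available, which is the limitation this issue is about.
minikube start --vm=true

# With a VM-based driver the ingress addon can be enabled as usual:
minikube addons enable ingress
```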
medyagh commented 3 years ago

@jordicea That's right: we currently haven't added full support for the ingress addon on Mac, due to the networking limitations of Docker on Mac (it won't give you an accessible IP for the docker container). There are other ways you can do this, such as using minikube tunnel.

As minikube suggested, to track this work-in-progress feature please check: https://github.com/kubernetes/minikube/issues/7332
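A minimal sketch of the `minikube tunnel` workaround mentioned above. The deployment name `hello` and the image are placeholders for illustration, not taken from this thread; the tunnel must stay running for the external IP to remain reachable.

```shell
# Terminal 1: keep a tunnel open. It creates a network route from the host
# to services of type LoadBalancer, giving them a reachable external IP
# even though the docker driver on macOS exposes no container IP.
minikube tunnel

# Terminal 2: deploy something and expose it as a LoadBalancer service.
# (Deployment name and image are placeholders.)
kubectl create deployment hello --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment hello --type=LoadBalancer --port=8080

# While the tunnel is running, EXTERNAL-IP should be populated:
kubectl get svc hello
```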

medyagh commented 3 years ago

Thank you for sharing your experience! If you don't mind, could you please provide:

This will help us isolate the problem further. Thank you!

/triage needs-information /kind support

jordicea commented 3 years ago

Output for the command minikube start --vm-driver=docker --memory 4096 --kubernetes-version "v1.20.0" --disk-size "70g" --cpus 3 --wait=false --alsologtostderr -v=8:

I0317 00:53:06.159633   61979 out.go:239] Setting OutFile to fd 1 ...
I0317 00:53:06.159764   61979 out.go:267] MINIKUBE_IN_STYLE="0"
I0317 00:53:06.159768   61979 out.go:252] Setting ErrFile to fd 2...
I0317 00:53:06.159770   61979 out.go:267] MINIKUBE_IN_STYLE="0"
I0317 00:53:06.159855   61979 root.go:308] Updating PATH: /Users/jordicea/.minikube/bin
I0317 00:53:06.160053   61979 out.go:246] Setting JSON to false
I0317 00:53:06.174903   61979 start.go:108] hostinfo: {"hostname":"Air-de-Jordi.lan","uptime":89261,"bootTime":1615849525,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"9fe8c0da-8ed0-381c-9cec-2a779f3e1503"}
W0317 00:53:06.174995   61979 start.go:116] gopshost.Virtualization returned error: not implemented yet
I0317 00:53:06.213489   61979 out.go:129] * minikube v1.18.1 on Darwin 11.2.3 (arm64)
* minikube v1.18.1 on Darwin 11.2.3 (arm64)
I0317 00:53:06.213590   61979 notify.go:126] Checking for updates...
I0317 00:53:06.230526   61979 out.go:129]   - MINIKUBE_IN_STYLE=0
  - MINIKUBE_IN_STYLE=0
I0317 00:53:06.230757   61979 driver.go:323] Setting default libvirt URI to qemu:///system
I0317 00:53:06.397867   61979 docker.go:118] docker version: linux-20.10.3
I0317 00:53:06.397991   61979 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0317 00:53:06.726688   61979 info.go:253] docker info: {ID:3NIC:A3SA:P63B:KBV2:XP5V:NVAY:ESW3:3QD3:JLCA:KYOK:BMBA:ENHU Containers:18 ContainersRunning:18 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:144 OomKillDisable:true NGoroutines:126 SystemTime:2021-03-16 23:53:06.609880505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:3063775232 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: 
Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:}}
I0317 00:53:06.745806   61979 out.go:129] * Using the docker driver based on user configuration
* Using the docker driver based on user configuration
I0317 00:53:06.745817   61979 start.go:276] selected driver: docker
I0317 00:53:06.745820   61979 start.go:718] validating driver "docker" against 
I0317 00:53:06.745829   61979 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0317 00:53:06.780958   61979 out.go:129] * - Ensure your docker daemon has access to enough CPU/memory resources.
* - Ensure your docker daemon has access to enough CPU/memory resources.
I0317 00:53:06.798877   61979 out.go:129] * - Docs https://docs.docker.com/docker-for-mac/#resources
* - Docs https://docs.docker.com/docker-for-mac/#resources
I0317 00:53:06.817001   61979 out.go:129]

W0317 00:53:06.817085   61979 out.go:191] X Exiting due to RSRC_INSUFFICIENT_CORES: Requested cpu count 3 is greater than the available cpus of 2
X Exiting due to RSRC_INSUFFICIENT_CORES: Requested cpu count 3 is greater than the available cpus of 2
I0317 00:53:06.833889   61979 out.go:129]

➜  ~ uscenv start >> ~/Desktop/minikube.log
I0317 00:53:57.743556   62605 out.go:239] Setting OutFile to fd 1 ...
I0317 00:53:57.744181   62605 out.go:267] MINIKUBE_IN_STYLE="0"
I0317 00:53:57.744195   62605 out.go:252] Setting ErrFile to fd 2...
I0317 00:53:57.744203   62605 out.go:267] MINIKUBE_IN_STYLE="0"
I0317 00:53:57.744364   62605 root.go:308] Updating PATH: /Users/jordicea/.minikube/bin
I0317 00:53:57.761250   62605 out.go:246] Setting JSON to false
I0317 00:53:57.778789   62605 start.go:108] hostinfo: {"hostname":"Air-de-Jordi.lan","uptime":89312,"bootTime":1615849525,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"9fe8c0da-8ed0-381c-9cec-2a779f3e1503"}
W0317 00:53:57.778912   62605 start.go:116] gopshost.Virtualization returned error: not implemented yet
I0317 00:53:57.798176   62605 out.go:129] * minikube v1.18.1 on Darwin 11.2.3 (arm64)
I0317 00:53:57.817251   62605 notify.go:126] Checking for updates...
I0317 00:53:57.834683   62605 out.go:129]   - MINIKUBE_IN_STYLE=0
I0317 00:53:57.836037   62605 driver.go:323] Setting default libvirt URI to qemu:///system
I0317 00:53:58.345718   62605 docker.go:118] docker version: linux-20.10.3
I0317 00:53:58.346434   62605 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0317 00:53:59.206899   62605 info.go:253] docker info: {ID:4JP6:W5PB:6SAO:KCEF:G5SB:KU5A:WKMQ:INC2:LFGR:KJZJ:EU32:F44O Containers:26 ContainersRunning:22 ContainersPaused:0 ContainersStopped:4 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:true NGoroutines:99 SystemTime:2021-03-16 23:53:58.808788216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6496616448 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: 
Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:}}
I0317 00:53:59.224691   62605 out.go:129] * Using the docker driver based on user configuration
I0317 00:53:59.225150   62605 start.go:276] selected driver: docker
I0317 00:53:59.225164   62605 start.go:718] validating driver "docker" against 
I0317 00:53:59.225179   62605 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
W0317 00:53:59.225584   62605 info.go:50] Unable to get CPU info: no such file or directory
W0317 00:53:59.227943   62605 start.go:876] could not get system cpu info while verifying memory limits, which might be okay: no such file or directory
I0317 00:53:59.228049   62605 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0317 00:53:59.619951   62605 info.go:253] docker info: {ID:4JP6:W5PB:6SAO:KCEF:G5SB:KU5A:WKMQ:INC2:LFGR:KJZJ:EU32:F44O Containers:26 ContainersRunning:26 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:84 SystemTime:2021-03-16 23:53:59.438665008 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6496616448 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: 
Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:}}
I0317 00:53:59.620108   62605 start_flags.go:251] no existing cluster config was found, will generate one from the flags
W0317 00:53:59.620118   62605 info.go:50] Unable to get CPU info: no such file or directory
W0317 00:53:59.620121   62605 start.go:876] could not get system cpu info while verifying memory limits, which might be okay: no such file or directory
I0317 00:53:59.620200   62605 start_flags.go:709] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
I0317 00:53:59.620450   62605 cni.go:74] Creating CNI manager for ""
I0317 00:53:59.620462   62605 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
I0317 00:53:59.620467   62605 start_flags.go:395] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:4096 CPUs:3 DiskSize:71680 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false}
I0317 00:53:59.671743   62605 out.go:129] * Starting control plane node minikube in cluster minikube
I0317 00:53:59.912174   62605 cache.go:120] Beginning downloading kic base image for docker with docker
I0317 00:53:59.930477   62605 out.go:129] * Pulling base image ...
I0317 00:53:59.931284   62605 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0317 00:53:59.931312   62605 cache.go:145] Downloading gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e to local daemon
I0317 00:53:59.931486   62605 image.go:140] Writing gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e to local daemon
I0317 00:53:59.931508   62605 image.go:145] Getting image gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
W0317 00:54:00.189105   62605 preload.go:118] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v9-v1.20.0-docker-overlay2-arm64.tar.lz4 status code: 404
I0317 00:54:00.189342   62605 cache.go:93] acquiring lock: {Name:mkeb66a4aea7d44c2758afa330ed8951cd4583cf Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0317 00:54:00.189350   62605 cache.go:93] acquiring lock: {Name:mk2f12b1d9b484ccc431f75813093d0a18498531 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0317 00:54:00.189406   62605 cache.go:93] acquiring lock: {Name:mkc1d26466fa4ea28848f8dcc9b01b0f093e7ee6 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0317 00:54:00.189429   62605 cache.go:93] acquiring lock: {Name:mkec6d7a284d815f0b6c98fe11d63ceae133c45f Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0317 00:54:00.189533   62605 cache.go:93] acquiring lock: {Name:mk6578faa7390bd73a6d2528e704e0906656fb3c Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0317 00:54:00.189537   62605 cache.go:93] acquiring lock: {Name:mk57e3e82e5b7a8a9765d14b92e5dfb68221bcbe Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0317 00:54:00.189554   62605 cache.go:101] /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 exists
I0317 00:54:00.189342   62605 cache.go:93] acquiring lock: {Name:mk94d0238d12dc73a89d1ed0f1373ab6da8a73a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0317 00:54:00.189698   62605 cache.go:82] cache image "k8s.gcr.io/kube-scheduler:v1.20.0" -> "/Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0" took 390.125µs
I0317 00:54:00.189734   62605 cache.go:66] save to tar file k8s.gcr.io/kube-scheduler:v1.20.0 -> /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 succeeded
I0317 00:54:00.189585   62605 cache.go:101] /Users/jordicea/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4 exists
I0317 00:54:00.189749   62605 cache.go:82] cache image "gcr.io/k8s-minikube/storage-provisioner:v4" -> "/Users/jordicea/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4" took 345.583µs
I0317 00:54:00.189759   62605 cache.go:66] save to tar file gcr.io/k8s-minikube/storage-provisioner:v4 -> /Users/jordicea/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4 succeeded
I0317 00:54:00.189598   62605 profile.go:148] Saving config to /Users/jordicea/.minikube/profiles/minikube/config.json ...
I0317 00:54:00.189783   62605 cache.go:101] /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 exists
I0317 00:54:00.189799   62605 lock.go:36] WriteFile acquiring /Users/jordicea/.minikube/profiles/minikube/config.json: {Name:mkd72f83435f7cacb0c601e1a68abf9e80985dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0317 00:54:00.189806   62605 cache.go:82] cache image "k8s.gcr.io/kube-proxy:v1.20.0" -> "/Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0" took 469.458µs
I0317 00:54:00.189892   62605 cache.go:66] save to tar file k8s.gcr.io/kube-proxy:v1.20.0 -> /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 succeeded
I0317 00:54:00.189598   62605 cache.go:93] acquiring lock: {Name:mkd64b39a647a7d6d152e28750b47a1ac0d93a62 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0317 00:54:00.189620   62605 cache.go:101] /Users/jordicea/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
I0317 00:54:00.190062   62605 cache.go:82] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/Users/jordicea/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 523.292µs
I0317 00:54:00.190090   62605 cache.go:66] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /Users/jordicea/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
I0317 00:54:00.189633   62605 cache.go:101] /Users/jordicea/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 exists
I0317 00:54:00.190112   62605 cache.go:82] cache image "k8s.gcr.io/etcd:3.4.13-0" -> "/Users/jordicea/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0" took 699.5µs
I0317 00:54:00.190129   62605 cache.go:66] save to tar file k8s.gcr.io/etcd:3.4.13-0 -> /Users/jordicea/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 succeeded
I0317 00:54:00.189638   62605 cache.go:93] acquiring lock: {Name:mk26d8493c0149eacead34f855d1b92b6aaa040f Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0317 00:54:00.189650   62605 cache.go:93] acquiring lock: {Name:mk26699fd1c813924b0995f42e8a02d2bf74634b Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0317 00:54:00.189670   62605 cache.go:101] /Users/jordicea/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
I0317 00:54:00.190384   62605 cache.go:82] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/Users/jordicea/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 1.040959ms
I0317 00:54:00.190405   62605 cache.go:66] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /Users/jordicea/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
I0317 00:54:00.189672   62605 cache.go:101] /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 exists
I0317 00:54:00.190441   62605 cache.go:82] cache image "k8s.gcr.io/kube-controller-manager:v1.20.0" -> "/Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0" took 895.708µs
I0317 00:54:00.190461   62605 cache.go:66] save to tar file k8s.gcr.io/kube-controller-manager:v1.20.0 -> /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 succeeded
I0317 00:54:00.190036   62605 cache.go:101] /Users/jordicea/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
I0317 00:54:00.190500   62605 cache.go:101] /Users/jordicea/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 exists
I0317 00:54:00.190515   62605 cache.go:82] cache image "k8s.gcr.io/pause:3.2" -> "/Users/jordicea/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 915.875µs
I0317 00:54:00.190534   62605 cache.go:66] save to tar file k8s.gcr.io/pause:3.2 -> /Users/jordicea/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
I0317 00:54:00.190271   62605 cache.go:101] /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 exists
I0317 00:54:00.190557   62605 cache.go:82] cache image "k8s.gcr.io/kube-apiserver:v1.20.0" -> "/Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0" took 918.417µs
I0317 00:54:00.190587   62605 cache.go:66] save to tar file k8s.gcr.io/kube-apiserver:v1.20.0 -> /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 succeeded
I0317 00:54:00.190527   62605 cache.go:82] cache image "k8s.gcr.io/coredns:1.7.0" -> "/Users/jordicea/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0" took 875.75µs
I0317 00:54:00.190603   62605 cache.go:66] save to tar file k8s.gcr.io/coredns:1.7.0 -> /Users/jordicea/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 succeeded
I0317 00:54:00.190618   62605 cache.go:73] Successfully saved all images to host disk.
I0317 00:54:00.927810   62605 image.go:158] Writing image gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
I0317 00:55:15.576909   62605 cache.go:148] successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
I0317 00:55:15.576943   62605 cache.go:185] Successfully downloaded all kic artifacts
I0317 00:55:15.582737   62605 start.go:313] acquiring machines lock for minikube: {Name:mk13ed4f2918eb5b85e4941c119d355db6fcc664 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0317 00:55:15.582930   62605 start.go:317] acquired machines lock for "minikube" in 155.417µs
I0317 00:55:15.582977   62605 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:4096 CPUs:3 DiskSize:71680 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}
I0317 00:55:15.583161   62605 start.go:126] createHost starting for "" (driver="docker")
I0317 00:55:15.601545   62605 out.go:150] * Creating docker container (CPUs=3, Memory=4096MB) ...
I0317 00:55:15.602206   62605 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I0317 00:55:15.602272   62605 client.go:168] LocalClient.Create starting
I0317 00:55:15.603093   62605 main.go:121] libmachine: Reading certificate data from /Users/jordicea/.minikube/certs/ca.pem
I0317 00:55:15.603329   62605 main.go:121] libmachine: Decoding PEM data...
I0317 00:55:15.603354   62605 main.go:121] libmachine: Parsing certificate...
I0317 00:55:15.603781   62605 main.go:121] libmachine: Reading certificate data from /Users/jordicea/.minikube/certs/cert.pem
I0317 00:55:15.603953   62605 main.go:121] libmachine: Decoding PEM data...
I0317 00:55:15.603974   62605 main.go:121] libmachine: Parsing certificate...
I0317 00:55:15.620977   62605 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0317 00:55:15.958821   62605 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0317 00:55:15.959165   62605 network_create.go:240] running [docker network inspect minikube] to gather additional debugging logs...
I0317 00:55:15.959190   62605 cli_runner.go:115] Run: docker network inspect minikube
W0317 00:55:16.183993   62605 cli_runner.go:162] docker network inspect minikube returned with exit code 1
I0317 00:55:16.184020   62605 network_create.go:243] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I0317 00:55:16.184028   62605 network_create.go:245] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: minikube

** /stderr **
I0317 00:55:16.184169   62605 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0317 00:55:16.393768   62605 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0317 00:55:16.393997   62605 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: minikube and gateway 192.168.49.1 and MTU of 1500 ...
I0317 00:55:16.394080   62605 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I0317 00:55:16.848359   62605 kic.go:101] calculated static IP "192.168.49.2" for the "minikube" container
I0317 00:55:16.848588   62605 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0317 00:55:17.134300   62605 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0317 00:55:17.432386   62605 oci.go:102] Successfully created a docker volume minikube
I0317 00:55:17.432540   62605 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
I0317 00:56:17.601019   62605 cli_runner.go:168] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (1m0.168497958s)
I0317 00:56:17.601072   62605 oci.go:106] Successfully prepared a docker volume minikube
I0317 00:56:17.601360   62605 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0317 00:56:17.601961   62605 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
W0317 00:56:17.759194   62605 preload.go:118] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v9-v1.20.0-docker-overlay2-arm64.tar.lz4 status code: 404
I0317 00:56:18.189786   62605 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=3 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
I0317 00:56:19.750120   62605 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=3 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (1.560221209s)
I0317 00:56:19.750331   62605 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}}
I0317 00:56:20.120278   62605 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0317 00:56:20.512254   62605 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0317 00:56:20.985844   62605 oci.go:278] the created container "minikube" has a running status.
I0317 00:56:20.986108   62605 kic.go:199] Creating ssh key for kic: /Users/jordicea/.minikube/machines/minikube/id_rsa...
I0317 00:56:21.078817   62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/machines/minikube/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0317 00:56:21.079192   62605 kic_runner.go:188] docker (temp): /Users/jordicea/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0317 00:56:21.730655   62605 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0317 00:56:22.041329   62605 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0317 00:56:22.041358   62605 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0317 00:56:22.309858   62605 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0317 00:56:22.528528   62605 machine.go:88] provisioning docker machine ...
I0317 00:56:22.528568   62605 ubuntu.go:169] provisioning hostname "minikube"
I0317 00:56:22.528866   62605 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0317 00:56:22.745048   62605 main.go:121] libmachine: Using SSH client type: native
I0317 00:56:22.745356   62605 main.go:121] libmachine: &{{{ 0 [] [] []} docker [0x1003a54a0] 0x1003a5470   [] 0s} 127.0.0.1 52379  }
I0317 00:56:22.745369   62605 main.go:121] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0317 00:56:22.884093   62605 main.go:121] libmachine: SSH cmd err, output: : minikube

I0317 00:56:22.884204   62605 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0317 00:56:23.090158   62605 main.go:121] libmachine: Using SSH client type: native
I0317 00:56:23.090344   62605 main.go:121] libmachine: &{{{ 0 [] [] []} docker [0x1003a54a0] 0x1003a5470   [] 0s} 127.0.0.1 52379  }
I0317 00:56:23.090360   62605 main.go:121] libmachine: About to run SSH command:

        if ! grep -xq '.*\sminikube' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
            else
                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
            fi
        fi
I0317 00:56:23.206162   62605 main.go:121] libmachine: SSH cmd err, output: :
I0317 00:56:23.206184   62605 ubuntu.go:175] set auth options {CertDir:/Users/jordicea/.minikube CaCertPath:/Users/jordicea/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jordicea/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jordicea/.minikube/machines/server.pem ServerKeyPath:/Users/jordicea/.minikube/machines/server-key.pem ClientKeyPath:/Users/jordicea/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jordicea/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jordicea/.minikube}
I0317 00:56:23.206207   62605 ubuntu.go:177] setting up certificates
I0317 00:56:23.206213   62605 provision.go:83] configureAuth start
I0317 00:56:23.206324   62605 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0317 00:56:23.421844   62605 provision.go:137] copyHostCerts
I0317 00:56:23.421890   62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/certs/ca.pem -> /Users/jordicea/.minikube/ca.pem
I0317 00:56:23.422132   62605 exec_runner.go:145] found /Users/jordicea/.minikube/ca.pem, removing ...
I0317 00:56:23.422141   62605 exec_runner.go:190] rm: /Users/jordicea/.minikube/ca.pem
I0317 00:56:23.422486   62605 exec_runner.go:152] cp: /Users/jordicea/.minikube/certs/ca.pem --> /Users/jordicea/.minikube/ca.pem (1082 bytes)
I0317 00:56:23.422838   62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/certs/cert.pem -> /Users/jordicea/.minikube/cert.pem
I0317 00:56:23.422875   62605 exec_runner.go:145] found /Users/jordicea/.minikube/cert.pem, removing ...
I0317 00:56:23.422878   62605 exec_runner.go:190] rm: /Users/jordicea/.minikube/cert.pem
I0317 00:56:23.422917   62605 exec_runner.go:152] cp: /Users/jordicea/.minikube/certs/cert.pem --> /Users/jordicea/.minikube/cert.pem (1127 bytes)
I0317 00:56:23.423071   62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/certs/key.pem -> /Users/jordicea/.minikube/key.pem
I0317 00:56:23.423112   62605 exec_runner.go:145] found /Users/jordicea/.minikube/key.pem, removing ...
I0317 00:56:23.423116   62605 exec_runner.go:190] rm: /Users/jordicea/.minikube/key.pem
I0317 00:56:23.423168   62605 exec_runner.go:152] cp: /Users/jordicea/.minikube/certs/key.pem --> /Users/jordicea/.minikube/key.pem (1679 bytes)
I0317 00:56:23.423385   62605 provision.go:111] generating server cert: /Users/jordicea/.minikube/machines/server.pem ca-key=/Users/jordicea/.minikube/certs/ca.pem private-key=/Users/jordicea/.minikube/certs/ca-key.pem org=jordicea.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0317 00:56:23.516577   62605 provision.go:165] copyRemoteCerts
I0317 00:56:23.516926   62605 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0317 00:56:23.516973   62605 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0317 00:56:23.733984   62605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52379 SSHKeyPath:/Users/jordicea/.minikube/machines/minikube/id_rsa Username:docker}
I0317 00:56:23.843653   62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0317 00:56:23.843733   62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0317 00:56:23.864599   62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/machines/server.pem -> /etc/docker/server.pem
I0317 00:56:23.864685   62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
I0317 00:56:23.875721   62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0317 00:56:23.875812   62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0317 00:56:23.886308   62605 provision.go:86] duration metric: configureAuth took 680.079791ms
I0317 00:56:23.886350   62605 ubuntu.go:193] setting minikube options for container-runtime
I0317 00:56:23.887897   62605 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0317 00:56:24.102313   62605 main.go:121] libmachine: Using SSH client type: native
I0317 00:56:24.102468   62605 main.go:121] libmachine: &{{{ 0 [] [] []} docker [0x1003a54a0] 0x1003a5470   [] 0s} 127.0.0.1 52379  }
I0317 00:56:24.102480   62605 main.go:121] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0317 00:56:24.236159   62605 main.go:121] libmachine: SSH cmd err, output: : overlay

I0317 00:56:24.236187   62605 ubuntu.go:71] root file system type: overlay
I0317 00:56:24.236379   62605 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
I0317 00:56:24.236513   62605 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0317 00:56:24.465693   62605 main.go:121] libmachine: Using SSH client type: native
I0317 00:56:24.465852   62605 main.go:121] libmachine: &{{{ 0 [] [] []} docker [0x1003a54a0] 0x1003a5470   [] 0s} 127.0.0.1 52379  }
I0317 00:56:24.465903   62605 main.go:121] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0317 00:56:24.620212   62605 main.go:121] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0317 00:56:24.620523   62605 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0317 00:56:24.833141   62605 main.go:121] libmachine: Using SSH client type: native
I0317 00:56:24.833284   62605 main.go:121] libmachine: &{{{ 0 [] [] []} docker [0x1003a54a0] 0x1003a5470   [] 0s} 127.0.0.1 52379  }
I0317 00:56:24.833305   62605 main.go:121] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0317 00:56:25.908658   62605 main.go:121] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service   2021-01-29 14:32:03.000000000 +0000
+++ /lib/systemd/system/docker.service.new  2021-03-16 23:56:24.617123006 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID

 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0

 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500

 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0317 00:56:25.908814   62605 machine.go:91] provisioned docker machine in 3.38027075s
I0317 00:56:25.908823   62605 client.go:171] LocalClient.Create took 1m10.306692042s
I0317 00:56:25.908836   62605 start.go:168] duration metric: libmachine.API.Create for "minikube" took 1m10.306779958s
I0317 00:56:25.908841   62605 start.go:267] post-start starting for "minikube" (driver="docker")
I0317 00:56:25.908844   62605 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0317 00:56:25.908932   62605 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0317 00:56:25.908985   62605 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0317 00:56:26.128019   62605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52379 SSHKeyPath:/Users/jordicea/.minikube/machines/minikube/id_rsa Username:docker}
I0317 00:56:26.238679   62605 ssh_runner.go:149] Run: cat /etc/os-release
I0317 00:56:26.243467   62605 command_runner.go:124] > NAME="Ubuntu"
I0317 00:56:26.243476   62605 command_runner.go:124] > VERSION="20.04.1 LTS (Focal Fossa)"
I0317 00:56:26.243480   62605 command_runner.go:124] > ID=ubuntu
I0317 00:56:26.243483   62605 command_runner.go:124] > ID_LIKE=debian
I0317 00:56:26.243487   62605 command_runner.go:124] > PRETTY_NAME="Ubuntu 20.04.1 LTS"
I0317 00:56:26.243490   62605 command_runner.go:124] > VERSION_ID="20.04"
I0317 00:56:26.243497   62605 command_runner.go:124] > HOME_URL="https://www.ubuntu.com/"
I0317 00:56:26.243501   62605 command_runner.go:124] > SUPPORT_URL="https://help.ubuntu.com/"
I0317 00:56:26.243505   62605 command_runner.go:124] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
I0317 00:56:26.243515   62605 command_runner.go:124] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
I0317 00:56:26.243519   62605 command_runner.go:124] > VERSION_CODENAME=focal
I0317 00:56:26.243522   62605 command_runner.go:124] > UBUNTU_CODENAME=focal
I0317 00:56:26.243572   62605 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0317 00:56:26.243585   62605 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0317 00:56:26.243592   62605 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0317 00:56:26.243595   62605 info.go:137] Remote host: Ubuntu 20.04.1 LTS
I0317 00:56:26.243814   62605 filesync.go:118] Scanning /Users/jordicea/.minikube/addons for local assets ...
I0317 00:56:26.243907   62605 filesync.go:118] Scanning /Users/jordicea/.minikube/files for local assets ...
I0317 00:56:26.243940   62605 start.go:270] post-start completed in 335.096125ms
I0317 00:56:26.244384   62605 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0317 00:56:26.450450   62605 profile.go:148] Saving config to /Users/jordicea/.minikube/profiles/minikube/config.json ...
I0317 00:56:26.450811   62605 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0317 00:56:26.450860   62605 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0317 00:56:26.659199   62605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52379 SSHKeyPath:/Users/jordicea/.minikube/machines/minikube/id_rsa Username:docker}
I0317 00:56:26.763221   62605 command_runner.go:124] > 12%
I0317 00:56:26.763238   62605 start.go:129] duration metric: createHost completed in 1m11.180217209s
I0317 00:56:26.763245   62605 start.go:80] releasing machines lock for "minikube", held for 1m11.180453875s
I0317 00:56:26.763351   62605 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0317 00:56:26.967846   62605 ssh_runner.go:149] Run: systemctl --version
I0317 00:56:26.967926   62605 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0317 00:56:26.968936   62605 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0317 00:56:26.969684   62605 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0317 00:56:27.184561   62605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52379 SSHKeyPath:/Users/jordicea/.minikube/machines/minikube/id_rsa Username:docker}
I0317 00:56:27.184830   62605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52379 SSHKeyPath:/Users/jordicea/.minikube/machines/minikube/id_rsa Username:docker}
I0317 00:56:27.463431   62605 command_runner.go:124] > 
I0317 00:56:27.463536   62605 command_runner.go:124] > 302 Moved
I0317 00:56:27.463557   62605 command_runner.go:124] > 
I0317 00:56:27.463572   62605 command_runner.go:124] > The document has moved
I0317 00:56:27.463595   62605 command_runner.go:124] > here.
I0317 00:56:27.463610   62605 command_runner.go:124] > 
I0317 00:56:27.470145   62605 command_runner.go:124] > systemd 245 (245.4-4ubuntu3.4)
I0317 00:56:27.470272   62605 command_runner.go:124] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
I0317 00:56:27.470570   62605 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0317 00:56:27.492066   62605 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0317 00:56:27.505987   62605 command_runner.go:124] > # /lib/systemd/system/docker.service
I0317 00:56:27.506018   62605 command_runner.go:124] > [Unit]
I0317 00:56:27.506035   62605 command_runner.go:124] > Description=Docker Application Container Engine
I0317 00:56:27.506047   62605 command_runner.go:124] > Documentation=https://docs.docker.com
I0317 00:56:27.506056   62605 command_runner.go:124] > BindsTo=containerd.service
I0317 00:56:27.506067   62605 command_runner.go:124] > After=network-online.target firewalld.service containerd.service
I0317 00:56:27.506075   62605 command_runner.go:124] > Wants=network-online.target
I0317 00:56:27.506084   62605 command_runner.go:124] > Requires=docker.socket
I0317 00:56:27.506091   62605 command_runner.go:124] > StartLimitBurst=3
I0317 00:56:27.506098   62605 command_runner.go:124] > StartLimitIntervalSec=60
I0317 00:56:27.506104   62605 command_runner.go:124] > [Service]
I0317 00:56:27.506110   62605 command_runner.go:124] > Type=notify
I0317 00:56:27.506116   62605 command_runner.go:124] > Restart=on-failure
I0317 00:56:27.506130   62605 command_runner.go:124] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0317 00:56:27.506145   62605 command_runner.go:124] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0317 00:56:27.506158   62605 command_runner.go:124] > # here is to clear out that command inherited from the base configuration. Without this,
I0317 00:56:27.506171   62605 command_runner.go:124] > # the command from the base configuration and the command specified here are treated as
I0317 00:56:27.506185   62605 command_runner.go:124] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0317 00:56:27.506198   62605 command_runner.go:124] > # will catch this invalid input and refuse to start the service with an error like:
I0317 00:56:27.506212   62605 command_runner.go:124] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0317 00:56:27.506236   62605 command_runner.go:124] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0317 00:56:27.506249   62605 command_runner.go:124] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0317 00:56:27.506255   62605 command_runner.go:124] > ExecStart=
I0317 00:56:27.562718   62605 command_runner.go:124] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
I0317 00:56:27.562753   62605 command_runner.go:124] > ExecReload=/bin/kill -s HUP $MAINPID
I0317 00:56:27.562760   62605 command_runner.go:124] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0317 00:56:27.562766   62605 command_runner.go:124] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0317 00:56:27.562769   62605 command_runner.go:124] > LimitNOFILE=infinity
I0317 00:56:27.562772   62605 command_runner.go:124] > LimitNPROC=infinity
I0317 00:56:27.562775   62605 command_runner.go:124] > LimitCORE=infinity
I0317 00:56:27.562779   62605 command_runner.go:124] > # Uncomment TasksMax if your systemd version supports it.
I0317 00:56:27.562783   62605 command_runner.go:124] > # Only systemd 226 and above support this version.
I0317 00:56:27.562786   62605 command_runner.go:124] > TasksMax=infinity
I0317 00:56:27.562789   62605 command_runner.go:124] > TimeoutStartSec=0
I0317 00:56:27.562794   62605 command_runner.go:124] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0317 00:56:27.562797   62605 command_runner.go:124] > Delegate=yes
I0317 00:56:27.562802   62605 command_runner.go:124] > # kill only the docker process, not all processes in the cgroup
I0317 00:56:27.603564   62605 command_runner.go:124] > KillMode=process
I0317 00:56:27.603575   62605 command_runner.go:124] > [Install]
I0317 00:56:27.603591   62605 command_runner.go:124] > WantedBy=multi-user.target
I0317 00:56:27.603618   62605 cruntime.go:206] skipping containerd shutdown because we are bound to it
I0317 00:56:27.603740   62605 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0317 00:56:27.622125   62605 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0317 00:56:27.634633   62605 command_runner.go:124] > runtime-endpoint: unix:///var/run/dockershim.sock
I0317 00:56:27.634647   62605 command_runner.go:124] > image-endpoint: unix:///var/run/dockershim.sock
I0317 00:56:27.634924   62605 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0317 00:56:27.640372   62605 command_runner.go:124] > # /lib/systemd/system/docker.service
I0317 00:56:27.640384   62605 command_runner.go:124] > [Unit]
I0317 00:56:27.640388   62605 command_runner.go:124] > Description=Docker Application Container Engine
I0317 00:56:27.640392   62605 command_runner.go:124] > Documentation=https://docs.docker.com
I0317 00:56:27.640395   62605 command_runner.go:124] > BindsTo=containerd.service
I0317 00:56:27.640398   62605 command_runner.go:124] > After=network-online.target firewalld.service containerd.service
I0317 00:56:27.640401   62605 command_runner.go:124] > Wants=network-online.target
I0317 00:56:27.640403   62605 command_runner.go:124] > Requires=docker.socket
I0317 00:56:27.640406   62605 command_runner.go:124] > StartLimitBurst=3
I0317 00:56:27.640408   62605 command_runner.go:124] > StartLimitIntervalSec=60
I0317 00:56:27.640410   62605 command_runner.go:124] > [Service]
I0317 00:56:27.640412   62605 command_runner.go:124] > Type=notify
I0317 00:56:27.640415   62605 command_runner.go:124] > Restart=on-failure
I0317 00:56:27.640420   62605 command_runner.go:124] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0317 00:56:27.640425   62605 command_runner.go:124] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0317 00:56:27.640429   62605 command_runner.go:124] > # here is to clear out that command inherited from the base configuration. Without this,
I0317 00:56:27.640434   62605 command_runner.go:124] > # the command from the base configuration and the command specified here are treated as
I0317 00:56:27.640438   62605 command_runner.go:124] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0317 00:56:27.640443   62605 command_runner.go:124] > # will catch this invalid input and refuse to start the service with an error like:
I0317 00:56:27.640458   62605 command_runner.go:124] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0317 00:56:27.640508   62605 command_runner.go:124] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0317 00:56:27.640513   62605 command_runner.go:124] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0317 00:56:27.640517   62605 command_runner.go:124] > ExecStart=
I0317 00:56:27.640529   62605 command_runner.go:124] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
I0317 00:56:27.640532   62605 command_runner.go:124] > ExecReload=/bin/kill -s HUP $MAINPID
I0317 00:56:27.640537   62605 command_runner.go:124] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0317 00:56:27.640541   62605 command_runner.go:124] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0317 00:56:27.640543   62605 command_runner.go:124] > LimitNOFILE=infinity
I0317 00:56:27.640546   62605 command_runner.go:124] > LimitNPROC=infinity
I0317 00:56:27.640548   62605 command_runner.go:124] > LimitCORE=infinity
I0317 00:56:27.640551   62605 command_runner.go:124] > # Uncomment TasksMax if your systemd version supports it.
I0317 00:56:27.671047   62605 command_runner.go:124] > # Only systemd 226 and above support this version.
I0317 00:56:27.718996   62605 command_runner.go:124] > TasksMax=infinity
I0317 00:56:27.719004   62605 command_runner.go:124] > TimeoutStartSec=0
I0317 00:56:27.719009   62605 command_runner.go:124] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0317 00:56:27.719011   62605 command_runner.go:124] > Delegate=yes
I0317 00:56:27.719015   62605 command_runner.go:124] > # kill only the docker process, not all processes in the cgroup
I0317 00:56:27.719018   62605 command_runner.go:124] > KillMode=process
I0317 00:56:27.719020   62605 command_runner.go:124] > [Install]
I0317 00:56:27.719023   62605 command_runner.go:124] > WantedBy=multi-user.target
I0317 00:56:27.719181   62605 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0317 00:56:27.775728   62605 ssh_runner.go:149] Run: sudo systemctl start docker
I0317 00:56:27.781663   62605 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0317 00:56:27.822559   62605 command_runner.go:124] > 20.10.3
I0317 00:56:27.842275   62605 out.go:150] * Preparing Kubernetes v1.20.0 on Docker 20.10.3 ...
I0317 00:56:27.842834   62605 cli_runner.go:115] Run: docker exec -t minikube dig +short host.docker.internal
I0317 00:56:28.095891   62605 network.go:68] got host ip for mount in container by digging dns: 192.168.64.1
I0317 00:56:28.096872   62605 ssh_runner.go:149] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0317 00:56:28.100218   62605 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.64.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0317 00:56:28.106635   62605 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0317 00:56:28.314199   62605 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime docker
W0317 00:56:28.563612   62605 preload.go:118] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v9-v1.20.0-docker-overlay2-arm64.tar.lz4 status code: 404
I0317 00:56:28.563751   62605 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0317 00:56:28.609998   62605 docker.go:423] Got preloaded images:
I0317 00:56:28.610015   62605 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.0 wasn't preloaded
I0317 00:56:28.610020   62605 cache_images.go:76] LoadImages start: [k8s.gcr.io/kube-proxy:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0 k8s.gcr.io/kube-apiserver:v1.20.0 k8s.gcr.io/coredns:1.7.0 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/pause:3.2 gcr.io/k8s-minikube/storage-provisioner:v4 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
I0317 00:56:28.617816   62605 image.go:168] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v4
I0317 00:56:28.617810   62605 image.go:168] retrieving image: k8s.gcr.io/kube-apiserver:v1.20.0
I0317 00:56:28.619209   62605 image.go:168] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
I0317 00:56:28.622246   62605 ssh_runner.go:149]
Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.13-0 I0317 00:56:28.622760 62605 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.7.0 I0317 00:56:28.623227 62605 image.go:168] retrieving image: k8s.gcr.io/kube-scheduler:v1.20.0 I0317 00:56:28.623512 62605 image.go:168] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0 I0317 00:56:28.624136 62605 image.go:168] retrieving image: k8s.gcr.io/kube-controller-manager:v1.20.0 I0317 00:56:28.624588 62605 image.go:168] retrieving image: k8s.gcr.io/kube-proxy:v1.20.0 I0317 00:56:28.626281 62605 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.2 I0317 00:56:28.634313 62605 image.go:176] daemon lookup for k8s.gcr.io/kube-apiserver:v1.20.0: Error response from daemon: reference does not exist I0317 00:56:28.635218 62605 image.go:176] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v4: Error response from daemon: reference does not exist I0317 00:56:28.637152 62605 image.go:176] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.4: Error response from daemon: reference does not exist I0317 00:56:28.638515 62605 image.go:176] daemon lookup for docker.io/kubernetesui/dashboard:v2.1.0: Error response from daemon: reference does not exist I0317 00:56:28.640483 62605 image.go:176] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.20.0: Error response from daemon: reference does not exist I0317 00:56:28.640484 62605 image.go:176] daemon lookup for k8s.gcr.io/kube-scheduler:v1.20.0: Error response from daemon: reference does not exist I0317 00:56:28.641831 62605 image.go:176] daemon lookup for k8s.gcr.io/kube-proxy:v1.20.0: Error response from daemon: reference does not exist I0317 00:56:28.677590 62605 command_runner.go:124] ! 
Error: No such image: k8s.gcr.io/etcd:3.4.13-0 I0317 00:56:28.677686 62605 cache_images.go:104] "k8s.gcr.io/etcd:3.4.13-0" needs transfer: "k8s.gcr.io/etcd:3.4.13-0" does not exist at hash "sha256:05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28" in container runtime I0317 00:56:28.677753 62605 command_runner.go:124] ! Error: No such image: k8s.gcr.io/coredns:1.7.0 I0317 00:56:28.677808 62605 cache_images.go:104] "k8s.gcr.io/coredns:1.7.0" needs transfer: "k8s.gcr.io/coredns:1.7.0" does not exist at hash "sha256:db91994f4ee8f894a1e8a6c1a76f615da8fc3c019300a3686291ce6fcbc57895" in container runtime I0317 00:56:28.677882 62605 cache_images.go:237] Loading image from cache: /Users/jordicea/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 I0317 00:56:28.677888 62605 cache_images.go:237] Loading image from cache: /Users/jordicea/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 I0317 00:56:28.677904 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 -> /var/lib/minikube/images/coredns_1.7.0 I0317 00:56:28.677904 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 -> /var/lib/minikube/images/etcd_3.4.13-0 I0317 00:56:28.678227 62605 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.13-0 I0317 00:56:28.678256 62605 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.7.0 I0317 00:56:28.699698 62605 command_runner.go:124] ! 
Error: No such image: k8s.gcr.io/pause:3.2 I0317 00:56:28.699735 62605 cache_images.go:104] "k8s.gcr.io/pause:3.2" needs transfer: "k8s.gcr.io/pause:3.2" does not exist at hash "sha256:2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime I0317 00:56:28.699743 62605 cache_images.go:237] Loading image from cache: /Users/jordicea/.minikube/cache/images/k8s.gcr.io/pause_3.2 I0317 00:56:28.699757 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/cache/images/k8s.gcr.io/pause_3.2 -> /var/lib/minikube/images/pause_3.2 I0317 00:56:28.699782 62605 command_runner.go:124] ! stat: cannot stat '/var/lib/minikube/images/coredns_1.7.0': No such file or directory I0317 00:56:28.699796 62605 ssh_runner.go:306] existence check for /var/lib/minikube/images/coredns_1.7.0: stat -c "%s %y" /var/lib/minikube/images/coredns_1.7.0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/coredns_1.7.0': No such file or directory I0317 00:56:28.699806 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 --> /var/lib/minikube/images/coredns_1.7.0 (14513152 bytes) I0317 00:56:28.699833 62605 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.2 I0317 00:56:28.699835 62605 command_runner.go:124] ! stat: cannot stat '/var/lib/minikube/images/etcd_3.4.13-0': No such file or directory I0317 00:56:28.699851 62605 ssh_runner.go:306] existence check for /var/lib/minikube/images/etcd_3.4.13-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.13-0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/etcd_3.4.13-0': No such file or directory I0317 00:56:28.699865 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 --> /var/lib/minikube/images/etcd_3.4.13-0 (145202176 bytes) I0317 00:56:28.744854 62605 command_runner.go:124] ! 
stat: cannot stat '/var/lib/minikube/images/pause_3.2': No such file or directory I0317 00:56:28.744974 62605 ssh_runner.go:306] existence check for /var/lib/minikube/images/pause_3.2: stat -c "%s %y" /var/lib/minikube/images/pause_3.2: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/pause_3.2': No such file or directory I0317 00:56:28.745023 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/cache/images/k8s.gcr.io/pause_3.2 --> /var/lib/minikube/images/pause_3.2 (268800 bytes) I0317 00:56:28.920646 62605 docker.go:167] Loading image: /var/lib/minikube/images/pause_3.2 I0317 00:56:28.920729 62605 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/pause_3.2 I0317 00:56:29.401896 62605 command_runner.go:124] > Loaded image: k8s.gcr.io/pause:3.2 I0317 00:56:29.403779 62605 cache_images.go:259] Transferred and loaded /Users/jordicea/.minikube/cache/images/k8s.gcr.io/pause_3.2 from cache I0317 00:56:29.403799 62605 docker.go:167] Loading image: /var/lib/minikube/images/coredns_1.7.0 I0317 00:56:29.404029 62605 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/coredns_1.7.0 I0317 00:56:29.476353 62605 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.20.0 I0317 00:56:29.476403 62605 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.20.0 I0317 00:56:29.482825 62605 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.20.0 I0317 00:56:29.514068 62605 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.20.0 I0317 00:56:30.187483 62605 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4 I0317 00:56:30.187752 62605 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0 I0317 00:56:30.431332 62605 ssh_runner.go:149] Run: docker image inspect 
--format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v4 I0317 00:56:30.753130 62605 command_runner.go:124] > Loaded image: k8s.gcr.io/coredns:1.7.0 I0317 00:56:30.753150 62605 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/coredns_1.7.0: (1.349112166s) I0317 00:56:30.753158 62605 cache_images.go:259] Transferred and loaded /Users/jordicea/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 from cache I0317 00:56:30.753190 62605 command_runner.go:124] ! Error: No such image: k8s.gcr.io/kube-scheduler:v1.20.0 I0317 00:56:30.753196 62605 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.20.0: (1.276790375s) I0317 00:56:30.753213 62605 cache_images.go:104] "k8s.gcr.io/kube-scheduler:v1.20.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.20.0" does not exist at hash "e7605f88f17d6a4c3f083ef9c6f5f19b39f87e4d4406a05a8612b54a6ea57051" in container runtime I0317 00:56:30.753222 62605 cache_images.go:237] Loading image from cache: /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 I0317 00:56:30.753242 62605 command_runner.go:124] ! 
Error: No such image: k8s.gcr.io/kube-controller-manager:v1.20.0 I0317 00:56:30.753250 62605 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.20.0: (1.276885125s) I0317 00:56:30.753261 62605 cache_images.go:104] "k8s.gcr.io/kube-controller-manager:v1.20.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.20.0" does not exist at hash "1df8a2b116bd16f7070fd383a6769c8d644b365575e8ffa3e492b84e4f05fc74" in container runtime I0317 00:56:30.753265 62605 cache_images.go:237] Loading image from cache: /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 I0317 00:56:30.753266 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 -> /var/lib/minikube/images/kube-scheduler_v1.20.0 I0317 00:56:30.753282 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 -> /var/lib/minikube/images/kube-controller-manager_v1.20.0 I0317 00:56:30.753304 62605 command_runner.go:124] ! Error: No such image: k8s.gcr.io/kube-proxy:v1.20.0 I0317 00:56:30.753311 62605 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.20.0: (1.270473125s) I0317 00:56:30.753332 62605 cache_images.go:104] "k8s.gcr.io/kube-proxy:v1.20.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.20.0" does not exist at hash "25a5233254979d0678a2db1d15b76b73dc380d81bc5eed93916ba5638b3cd894" in container runtime I0317 00:56:30.753337 62605 cache_images.go:237] Loading image from cache: /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 I0317 00:56:30.753349 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 -> /var/lib/minikube/images/kube-proxy_v1.20.0 I0317 00:56:30.753381 62605 command_runner.go:124] ! 
Error: No such image: k8s.gcr.io/kube-apiserver:v1.20.0 I0317 00:56:30.753387 62605 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.20.0: (1.23929925s) I0317 00:56:30.753395 62605 cache_images.go:104] "k8s.gcr.io/kube-apiserver:v1.20.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.20.0" does not exist at hash "2c08bbbc02d3aa5dfbf4e79f15c0a61424049288917aa10364464ca1f7de7157" in container runtime I0317 00:56:30.753399 62605 cache_images.go:237] Loading image from cache: /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 I0317 00:56:30.753408 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 -> /var/lib/minikube/images/kube-apiserver_v1.20.0 I0317 00:56:30.753422 62605 command_runner.go:124] ! Error: No such image: docker.io/kubernetesui/dashboard:v2.1.0 I0317 00:56:30.753426 62605 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.20.0 I0317 00:56:30.753432 62605 cache_images.go:104] "docker.io/kubernetesui/dashboard:v2.1.0" needs transfer: "docker.io/kubernetesui/dashboard:v2.1.0" does not exist at hash "85e6c0cff043f6950709049ae0f298022e34579b2c84ab6e6ed1d2a9be7f3586" in container runtime I0317 00:56:30.759939 62605 cache_images.go:237] Loading image from cache: /Users/jordicea/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 I0317 00:56:30.753373 62605 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.20.0 I0317 00:56:30.759988 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 -> /var/lib/minikube/images/dashboard_v2.1.0 I0317 00:56:30.753373 62605 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.20.0 I0317 00:56:30.753437 62605 command_runner.go:124] ! 
Error: No such image: docker.io/kubernetesui/metrics-scraper:v1.0.4 I0317 00:56:30.753468 62605 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.20.0 I0317 00:56:30.760033 62605 cache_images.go:104] "docker.io/kubernetesui/metrics-scraper:v1.0.4" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.4" does not exist at hash "a262dd7495d909107f4b9d58967971c6be46afac9af288e07fb5cb6dad7043bb" in container runtime I0317 00:56:30.760043 62605 cache_images.go:237] Loading image from cache: /Users/jordicea/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 I0317 00:56:30.760102 62605 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0 I0317 00:56:30.760047 62605 command_runner.go:124] ! Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v4 I0317 00:56:30.760174 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 -> /var/lib/minikube/images/metrics-scraper_v1.0.4 I0317 00:56:30.760200 62605 cache_images.go:104] "gcr.io/k8s-minikube/storage-provisioner:v4" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v4" does not exist at hash "84bee7cc4870e0c1a8df49772afa22978453c8df9d9dbf6500a230da57bcd233" in container runtime I0317 00:56:30.760213 62605 cache_images.go:237] Loading image from cache: /Users/jordicea/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4 I0317 00:56:30.760224 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4 -> /var/lib/minikube/images/storage-provisioner_v4 I0317 00:56:30.760300 62605 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v4 I0317 00:56:30.760406 62605 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4 I0317 00:56:30.830670 62605 command_runner.go:124] ! 
stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.4': No such file or directory I0317 00:56:30.830701 62605 ssh_runner.go:306] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.4: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.4': No such file or directory I0317 00:56:30.830710 62605 command_runner.go:124] ! stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.20.0': No such file or directory I0317 00:56:30.830721 62605 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-proxy_v1.20.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.20.0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.20.0': No such file or directory I0317 00:56:30.830744 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 --> /var/lib/minikube/images/kube-proxy_v1.20.0 (49547776 bytes) I0317 00:56:30.830757 62605 command_runner.go:124] ! stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.20.0': No such file or directory I0317 00:56:30.830744 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 --> /var/lib/minikube/images/metrics-scraper_v1.0.4 (14860288 bytes) I0317 00:56:30.830774 62605 command_runner.go:124] ! stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.20.0': No such file or directory I0317 00:56:30.830766 62605 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-controller-manager_v1.20.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.20.0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.20.0': No such file or directory I0317 00:56:30.830791 62605 command_runner.go:124] ! 
stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.20.0': No such file or directory I0317 00:56:30.830785 62605 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-scheduler_v1.20.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.20.0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.20.0': No such file or directory I0317 00:56:30.830800 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 --> /var/lib/minikube/images/kube-controller-manager_v1.20.0 (26694656 bytes) I0317 00:56:30.830844 62605 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-apiserver_v1.20.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.20.0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.20.0': No such file or directory I0317 00:56:30.830859 62605 command_runner.go:124] ! stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v4': No such file or directory I0317 00:56:30.830876 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 --> /var/lib/minikube/images/kube-apiserver_v1.20.0 (27689472 bytes) I0317 00:56:30.830874 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 --> /var/lib/minikube/images/kube-scheduler_v1.20.0 (12630016 bytes) I0317 00:56:30.830891 62605 command_runner.go:124] ! 
stat: cannot stat '/var/lib/minikube/images/dashboard_v2.1.0': No such file or directory I0317 00:56:30.830897 62605 ssh_runner.go:306] existence check for /var/lib/minikube/images/dashboard_v2.1.0: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/dashboard_v2.1.0': No such file or directory I0317 00:56:30.830881 62605 ssh_runner.go:306] existence check for /var/lib/minikube/images/storage-provisioner_v4: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v4: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v4': No such file or directory I0317 00:56:30.830907 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 --> /var/lib/minikube/images/dashboard_v2.1.0 (66566656 bytes) I0317 00:56:30.830916 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4 --> /var/lib/minikube/images/storage-provisioner_v4 (7916032 bytes) I0317 00:56:32.178165 62605 docker.go:167] Loading image: /var/lib/minikube/images/storage-provisioner_v4 I0317 00:56:32.178286 62605 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/storage-provisioner_v4 I0317 00:56:32.938709 62605 command_runner.go:124] > Loaded image: gcr.io/k8s-minikube/storage-provisioner:v4 I0317 00:56:32.945262 62605 cache_images.go:259] Transferred and loaded /Users/jordicea/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4 from cache I0317 00:56:32.945288 62605 docker.go:167] Loading image: /var/lib/minikube/images/kube-scheduler_v1.20.0 I0317 00:56:32.945376 62605 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/kube-scheduler_v1.20.0 I0317 00:56:34.581115 62605 command_runner.go:124] > Loaded image: k8s.gcr.io/kube-scheduler:v1.20.0 I0317 00:56:34.581142 62605 ssh_runner.go:189] Completed: docker load -i 
/var/lib/minikube/images/kube-scheduler_v1.20.0: (1.635762208s) I0317 00:56:34.581150 62605 cache_images.go:259] Transferred and loaded /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 from cache I0317 00:56:34.581154 62605 docker.go:167] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.4 I0317 00:56:34.581235 62605 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/metrics-scraper_v1.0.4 I0317 00:56:35.819061 62605 command_runner.go:124] > Loaded image: kubernetesui/metrics-scraper:v1.0.4 I0317 00:56:35.823333 62605 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/metrics-scraper_v1.0.4: (1.242078708s) I0317 00:56:35.823361 62605 cache_images.go:259] Transferred and loaded /Users/jordicea/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 from cache I0317 00:56:35.823375 62605 docker.go:167] Loading image: /var/lib/minikube/images/kube-apiserver_v1.20.0 I0317 00:56:35.823536 62605 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/kube-apiserver_v1.20.0 I0317 00:56:36.751443 62605 command_runner.go:124] > Loaded image: k8s.gcr.io/kube-apiserver:v1.20.0 I0317 00:56:36.767557 62605 cache_images.go:259] Transferred and loaded /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 from cache I0317 00:56:36.767612 62605 docker.go:167] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.20.0 I0317 00:56:36.767809 62605 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/kube-controller-manager_v1.20.0 I0317 00:56:38.058364 62605 command_runner.go:124] > Loaded image: k8s.gcr.io/kube-controller-manager:v1.20.0 I0317 00:56:38.067825 62605 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/kube-controller-manager_v1.20.0: (1.300002417s) I0317 00:56:38.067847 62605 cache_images.go:259] Transferred and loaded /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 from cache I0317 
00:56:38.067855 62605 docker.go:167] Loading image: /var/lib/minikube/images/kube-proxy_v1.20.0 I0317 00:56:38.067921 62605 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/kube-proxy_v1.20.0 I0317 00:56:40.494306 62605 command_runner.go:124] > Loaded image: k8s.gcr.io/kube-proxy:v1.20.0 I0317 00:56:40.508324 62605 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/kube-proxy_v1.20.0: (2.44039375s) I0317 00:56:40.508345 62605 cache_images.go:259] Transferred and loaded /Users/jordicea/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 from cache I0317 00:56:40.508353 62605 docker.go:167] Loading image: /var/lib/minikube/images/etcd_3.4.13-0 I0317 00:56:40.508779 62605 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/etcd_3.4.13-0 I0317 00:56:43.397627 62605 command_runner.go:124] > Loaded image: k8s.gcr.io/etcd:3.4.13-0 I0317 00:56:43.431716 62605 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/etcd_3.4.13-0: (2.922922375s) I0317 00:56:43.431745 62605 cache_images.go:259] Transferred and loaded /Users/jordicea/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 from cache I0317 00:56:43.431757 62605 docker.go:167] Loading image: /var/lib/minikube/images/dashboard_v2.1.0 I0317 00:56:43.431881 62605 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/dashboard_v2.1.0 I0317 00:56:45.048190 62605 command_runner.go:124] > Loaded image: kubernetesui/dashboard:v2.1.0 I0317 00:56:45.073355 62605 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/dashboard_v2.1.0: (1.64145375s) I0317 00:56:45.073390 62605 cache_images.go:259] Transferred and loaded /Users/jordicea/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 from cache I0317 00:56:45.073421 62605 cache_images.go:111] Successfully loaded all cached images I0317 00:56:45.073427 62605 cache_images.go:80] LoadImages completed in 16.463432292s I0317 00:56:45.073654 62605 ssh_runner.go:149] Run: docker info 
--format {{.CgroupDriver}} I0317 00:56:45.158308 62605 command_runner.go:124] > cgroupfs I0317 00:56:45.159985 62605 cni.go:74] Creating CNI manager for "" I0317 00:56:45.159993 62605 cni.go:140] CNI unnecessary in this configuration, recommending no CNI I0317 00:56:45.160006 62605 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16 I0317 00:56:45.160015 62605 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0317 00:56:45.160135 62605 kubeadm.go:154] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta2 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "minikube" kubeletExtraArgs: node-ip: 192.168.49.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", 
"192.168.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 dns: type: CoreDNS etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.20.0 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%" nodefs.inodesFree: "0%" imagefs.available: "0%" failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 I0317 00:56:45.184679 62605 kubeadm.go:919] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] 
ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0317 00:56:45.184780 62605 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.0 I0317 00:56:45.189562 62605 command_runner.go:124] ! ls: cannot access '/var/lib/minikube/binaries/v1.20.0': No such file or directory I0317 00:56:45.189577 62605 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.20.0: Process exited with status 2 stdout: stderr: ls: cannot access '/var/lib/minikube/binaries/v1.20.0': No such file or directory Initiating transfer... I0317 00:56:45.189647 62605 ssh_runner.go:149] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.20.0 I0317 00:56:45.194551 62605 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.20.0/bin/linux/arm64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.20.0/bin/linux/arm64/kubeadm.sha256 I0317 00:56:45.194560 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/cache/linux/v1.20.0/kubeadm -> /var/lib/minikube/binaries/v1.20.0/kubeadm I0317 00:56:45.194636 62605 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.0/kubeadm I0317 00:56:45.195290 62605 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.20.0/bin/linux/arm64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.20.0/bin/linux/arm64/kubelet.sha256 I0317 00:56:45.195435 62605 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.20.0/bin/linux/arm64/kubectl.sha256 I0317 00:56:45.233368 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/cache/linux/v1.20.0/kubectl -> /var/lib/minikube/binaries/v1.20.0/kubectl I0317 00:56:45.233417 62605 ssh_runner.go:149] Run: sudo systemctl is-active 
--quiet service kubelet I0317 00:56:45.233469 62605 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.0/kubectl I0317 00:56:45.236355 62605 command_runner.go:124] ! stat: cannot stat '/var/lib/minikube/binaries/v1.20.0/kubectl': No such file or directory I0317 00:56:45.236372 62605 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.20.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.0/kubectl: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/binaries/v1.20.0/kubectl': No such file or directory I0317 00:56:45.236400 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/cache/linux/v1.20.0/kubectl --> /var/lib/minikube/binaries/v1.20.0/kubectl (37158912 bytes) I0317 00:56:45.241424 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/cache/linux/v1.20.0/kubelet -> /var/lib/minikube/binaries/v1.20.0/kubelet I0317 00:56:45.241430 62605 command_runner.go:124] ! stat: cannot stat '/var/lib/minikube/binaries/v1.20.0/kubeadm': No such file or directory I0317 00:56:45.241493 62605 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.20.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.0/kubeadm: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/binaries/v1.20.0/kubeadm': No such file or directory I0317 00:56:45.241556 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/cache/linux/v1.20.0/kubeadm --> /var/lib/minikube/binaries/v1.20.0/kubeadm (36175872 bytes) I0317 00:56:45.241583 62605 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.0/kubelet I0317 00:56:45.314904 62605 command_runner.go:124] ! 
stat: cannot stat '/var/lib/minikube/binaries/v1.20.0/kubelet': No such file or directory I0317 00:56:45.314931 62605 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.20.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.0/kubelet: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/binaries/v1.20.0/kubelet': No such file or directory I0317 00:56:45.314977 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/cache/linux/v1.20.0/kubelet --> /var/lib/minikube/binaries/v1.20.0/kubelet (105415560 bytes) I0317 00:56:47.962897 62605 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0317 00:56:47.967403 62605 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes) I0317 00:56:47.974889 62605 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0317 00:56:47.982822 62605 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1840 bytes) I0317 00:56:47.990638 62605 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0317 00:56:47.992955 62605 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts" I0317 00:56:47.999243 62605 certs.go:52] Setting up /Users/jordicea/.minikube/profiles/minikube for IP: 192.168.49.2 I0317 00:56:47.999460 62605 certs.go:171] skipping minikubeCA CA generation: /Users/jordicea/.minikube/ca.key I0317 00:56:47.999493 62605 certs.go:171] skipping proxyClientCA CA generation: /Users/jordicea/.minikube/proxy-client-ca.key I0317 00:56:47.999530 62605 certs.go:279] generating minikube-user signed cert: /Users/jordicea/.minikube/profiles/minikube/client.key I0317 00:56:47.999671 62605 crypto.go:69] Generating cert /Users/jordicea/.minikube/profiles/minikube/client.crt with 
IP's: [] I0317 00:56:48.074763 62605 crypto.go:157] Writing cert to /Users/jordicea/.minikube/profiles/minikube/client.crt ... I0317 00:56:48.074803 62605 lock.go:36] WriteFile acquiring /Users/jordicea/.minikube/profiles/minikube/client.crt: {Name:mka706fb0294dabf35a2df85a5ce4f43e9b6bb97 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0317 00:56:48.076523 62605 crypto.go:165] Writing key to /Users/jordicea/.minikube/profiles/minikube/client.key ... I0317 00:56:48.076555 62605 lock.go:36] WriteFile acquiring /Users/jordicea/.minikube/profiles/minikube/client.key: {Name:mk61c7ff6d58bda9bbb0bb4394c6ecbf4ebc0eeb Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0317 00:56:48.077451 62605 certs.go:279] generating minikube signed cert: /Users/jordicea/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 I0317 00:56:48.077476 62605 crypto.go:69] Generating cert /Users/jordicea/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] I0317 00:56:48.240364 62605 crypto.go:157] Writing cert to /Users/jordicea/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ... I0317 00:56:48.240380 62605 lock.go:36] WriteFile acquiring /Users/jordicea/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk3f4fe8eee69125ddf9a9e721e6146404f6092e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0317 00:56:48.241456 62605 crypto.go:165] Writing key to /Users/jordicea/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ... 
I0317 00:56:48.241461 62605 lock.go:36] WriteFile acquiring /Users/jordicea/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk6873db83a12d51e0fec7a4382c0740ee2d0c39 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0317 00:56:48.241560 62605 certs.go:290] copying /Users/jordicea/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /Users/jordicea/.minikube/profiles/minikube/apiserver.crt I0317 00:56:48.241646 62605 certs.go:294] copying /Users/jordicea/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /Users/jordicea/.minikube/profiles/minikube/apiserver.key I0317 00:56:48.241733 62605 certs.go:279] generating aggregator signed cert: /Users/jordicea/.minikube/profiles/minikube/proxy-client.key I0317 00:56:48.241738 62605 crypto.go:69] Generating cert /Users/jordicea/.minikube/profiles/minikube/proxy-client.crt with IP's: [] I0317 00:56:48.388156 62605 crypto.go:157] Writing cert to /Users/jordicea/.minikube/profiles/minikube/proxy-client.crt ... I0317 00:56:48.388168 62605 lock.go:36] WriteFile acquiring /Users/jordicea/.minikube/profiles/minikube/proxy-client.crt: {Name:mk8a5563ce48b2d40a1b33b0ad73dc54ee42d871 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0317 00:56:48.388388 62605 crypto.go:165] Writing key to /Users/jordicea/.minikube/profiles/minikube/proxy-client.key ... 
I0317 00:56:48.388392 62605 lock.go:36] WriteFile acquiring /Users/jordicea/.minikube/profiles/minikube/proxy-client.key: {Name:mkfeaf598bdf34fde744b1dd090cf8599689939d Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0317 00:56:48.389843 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt I0317 00:56:48.389860 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key I0317 00:56:48.389873 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt I0317 00:56:48.389883 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key I0317 00:56:48.389896 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt I0317 00:56:48.389910 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/ca.key -> /var/lib/minikube/certs/ca.key I0317 00:56:48.389922 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt I0317 00:56:48.389935 62605 vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key I0317 00:56:48.390002 62605 certs.go:354] found cert: /Users/jordicea/.minikube/certs/Users/jordicea/.minikube/certs/ca-key.pem (1675 bytes) I0317 00:56:48.390198 62605 certs.go:354] found cert: /Users/jordicea/.minikube/certs/Users/jordicea/.minikube/certs/ca.pem (1082 bytes) I0317 00:56:48.390303 62605 certs.go:354] found cert: /Users/jordicea/.minikube/certs/Users/jordicea/.minikube/certs/cert.pem (1127 bytes) I0317 00:56:48.390379 62605 certs.go:354] found cert: /Users/jordicea/.minikube/certs/Users/jordicea/.minikube/certs/key.pem (1679 bytes) I0317 00:56:48.390528 62605 
vm_assets.go:96] NewFileAsset: /Users/jordicea/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem I0317 00:56:48.392730 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0317 00:56:48.417701 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0317 00:56:48.427103 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0317 00:56:48.438251 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0317 00:56:48.448041 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0317 00:56:48.457250 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes) I0317 00:56:48.465816 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0317 00:56:48.474494 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes) I0317 00:56:48.484014 62605 ssh_runner.go:316] scp /Users/jordicea/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0317 00:56:48.493149 62605 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0317 00:56:48.502609 62605 ssh_runner.go:149] Run: openssl version I0317 00:56:48.507895 62605 command_runner.go:124] > OpenSSL 1.1.1f 31 Mar 2020 I0317 00:56:48.508367 62605 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0317 00:56:48.515363 62605 
ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0317 00:56:48.518112 62605 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Mar 16 10:23 /usr/share/ca-certificates/minikubeCA.pem I0317 00:56:48.518149 62605 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Mar 16 10:23 /usr/share/ca-certificates/minikubeCA.pem I0317 00:56:48.518272 62605 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0317 00:56:48.521661 62605 command_runner.go:124] > b5213941 I0317 00:56:48.521852 62605 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0317 00:56:48.526131 62605 kubeadm.go:385] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:4096 CPUs:3 DiskSize:71680 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 
Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} I0317 00:56:48.526237 62605 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0317 00:56:48.551582 62605 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0317 00:56:48.556507 62605 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory I0317 00:56:48.556519 62605 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory I0317 00:56:48.556524 62605 command_runner.go:124] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory I0317 00:56:48.556592 62605 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0317 00:56:48.560534 62605 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver I0317 00:56:48.560590 62605 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0317 00:56:48.564340 62605 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory I0317 00:56:48.564350 62605 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory I0317 00:56:48.564355 62605 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory I0317 00:56:48.564360 62605 command_runner.go:124] ! 
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0317 00:56:48.564371 62605 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0317 00:56:48.596241 62605 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0317 00:56:48.686204 62605 command_runner.go:124] > [init] Using Kubernetes version: v1.20.0 I0317 00:56:48.686237 62605 command_runner.go:124] > [preflight] Running pre-flight checks I0317 00:56:48.895265 62605 command_runner.go:124] > [preflight] Pulling images required for setting up a Kubernetes cluster I0317 00:56:48.895329 62605 command_runner.go:124] > [preflight] This might take a minute or two, depending on the speed of your internet connection I0317 00:56:48.895413 62605 command_runner.go:124] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' I0317 00:56:49.020521 62605 command_runner.go:124] > [certs] Using 
certificateDir folder "/var/lib/minikube/certs" I0317 00:56:49.038756 62605 out.go:150] - Generating certificates and keys ... I0317 00:56:49.039895 62605 command_runner.go:124] > [certs] Using existing ca certificate authority I0317 00:56:49.039944 62605 command_runner.go:124] > [certs] Using existing apiserver certificate and key on disk I0317 00:56:49.076792 62605 command_runner.go:124] > [certs] Generating "apiserver-kubelet-client" certificate and key I0317 00:56:49.278592 62605 command_runner.go:124] > [certs] Generating "front-proxy-ca" certificate and key I0317 00:56:49.400307 62605 command_runner.go:124] > [certs] Generating "front-proxy-client" certificate and key I0317 00:56:49.454714 62605 command_runner.go:124] > [certs] Generating "etcd/ca" certificate and key I0317 00:56:49.934681 62605 command_runner.go:124] > [certs] Generating "etcd/server" certificate and key I0317 00:56:49.934901 62605 command_runner.go:124] > [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1] I0317 00:56:50.019796 62605 command_runner.go:124] > [certs] Generating "etcd/peer" certificate and key I0317 00:56:50.019878 62605 command_runner.go:124] > [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1] I0317 00:56:50.166576 62605 command_runner.go:124] > [certs] Generating "etcd/healthcheck-client" certificate and key I0317 00:56:50.216641 62605 command_runner.go:124] > [certs] Generating "apiserver-etcd-client" certificate and key I0317 00:56:50.316633 62605 command_runner.go:124] > [certs] Generating "sa" key and public key I0317 00:56:50.316671 62605 command_runner.go:124] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes" I0317 00:56:50.441479 62605 command_runner.go:124] > [kubeconfig] Writing "admin.conf" kubeconfig file I0317 00:56:50.693448 62605 command_runner.go:124] > [kubeconfig] Writing "kubelet.conf" kubeconfig file I0317 00:56:50.813092 
62605 command_runner.go:124] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file I0317 00:56:50.856035 62605 command_runner.go:124] > [kubeconfig] Writing "scheduler.conf" kubeconfig file I0317 00:56:50.863036 62605 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" I0317 00:56:50.863557 62605 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" I0317 00:56:50.863584 62605 command_runner.go:124] > [kubelet-start] Starting the kubelet I0317 00:56:50.911627 62605 command_runner.go:124] > [control-plane] Using manifest folder "/etc/kubernetes/manifests" I0317 00:56:50.930854 62605 out.go:150] - Booting up control plane ... I0317 00:56:50.931294 62605 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-apiserver" I0317 00:56:50.931375 62605 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-controller-manager" I0317 00:56:50.931411 62605 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-scheduler" I0317 00:56:50.931486 62605 command_runner.go:124] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" I0317 00:56:50.931653 62605 command_runner.go:124] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". 
This can take up to 4m0s I0317 00:57:10.922611 62605 command_runner.go:124] > [apiclient] All control plane components are healthy after 20.002377 seconds I0317 00:57:10.922810 62605 command_runner.go:124] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace I0317 00:57:10.998809 62605 command_runner.go:124] > [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster I0317 00:57:11.724174 62605 command_runner.go:124] > [upload-certs] Skipping phase. Please see --upload-certs I0317 00:57:11.724476 62605 command_runner.go:124] > [mark-control-plane] Marking the node minikube as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)" I0317 00:57:12.341759 62605 command_runner.go:124] > [bootstrap-token] Using token: 43rnkk.nmgdqel3a952dvho I0317 00:57:12.390568 62605 out.go:150] - Configuring RBAC rules ... 
I0317 00:57:12.391017 62605 command_runner.go:124] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles I0317 00:57:12.524558 62605 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes I0317 00:57:12.547078 62605 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials I0317 00:57:12.566278 62605 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token I0317 00:57:12.585736 62605 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster I0317 00:57:12.698702 62605 command_runner.go:124] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace I0317 00:57:12.740023 62605 command_runner.go:124] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key I0317 00:57:13.212917 62605 command_runner.go:124] > [addons] Applied essential addon: CoreDNS I0317 00:57:13.497609 62605 command_runner.go:124] > [addons] Applied essential addon: kube-proxy I0317 00:57:13.498388 62605 command_runner.go:124] > Your Kubernetes control-plane has initialized successfully! 
I0317 00:57:13.498532 62605 command_runner.go:124] > To start using your cluster, you need to run the following as a regular user: I0317 00:57:13.498587 62605 command_runner.go:124] > mkdir -p $HOME/.kube I0317 00:57:13.498707 62605 command_runner.go:124] > sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config I0317 00:57:13.498796 62605 command_runner.go:124] > sudo chown $(id -u):$(id -g) $HOME/.kube/config I0317 00:57:13.498882 62605 command_runner.go:124] > Alternatively, if you are the root user, you can run: I0317 00:57:13.498976 62605 command_runner.go:124] > export KUBECONFIG=/etc/kubernetes/admin.conf I0317 00:57:13.499079 62605 command_runner.go:124] > You should now deploy a pod network to the cluster. I0317 00:57:13.499192 62605 command_runner.go:124] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: I0317 00:57:13.499304 62605 command_runner.go:124] > https://kubernetes.io/docs/concepts/cluster-administration/addons/ I0317 00:57:13.499456 62605 command_runner.go:124] > You can now join any number of control-plane nodes by copying certificate authorities I0317 00:57:13.499587 62605 command_runner.go:124] > and service account keys on each node and then running the following as root: I0317 00:57:13.499774 62605 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token 43rnkk.nmgdqel3a952dvho \ I0317 00:57:13.499958 62605 command_runner.go:124] > --discovery-token-ca-cert-hash sha256:a584ff9453976ff9f11560ea2b10d9b09f16310a57c08a12ee472a1a688484e5 \ I0317 00:57:13.500001 62605 command_runner.go:124] > --control-plane I0317 00:57:13.500136 62605 command_runner.go:124] > Then you can join any number of worker nodes by running the following on each as root: I0317 00:57:13.500269 62605 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token 43rnkk.nmgdqel3a952dvho \ I0317 00:57:13.500500 62605 command_runner.go:124] > --discovery-token-ca-cert-hash 
sha256:a584ff9453976ff9f11560ea2b10d9b09f16310a57c08a12ee472a1a688484e5 I0317 00:57:13.500891 62605 command_runner.go:124] ! [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ I0317 00:57:13.501160 62605 command_runner.go:124] ! [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist I0317 00:57:13.501304 62605 command_runner.go:124] ! [WARNING Swap]: running with swap on is not supported. Please disable swap I0317 00:57:13.501520 62605 command_runner.go:124] ! [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03 I0317 00:57:13.501693 62605 command_runner.go:124] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' I0317 00:57:13.501724 62605 cni.go:74] Creating CNI manager for "" I0317 00:57:13.501733 62605 cni.go:140] CNI unnecessary in this configuration, recommending no CNI I0317 00:57:13.501794 62605 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0317 00:57:13.501985 62605 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0317 00:57:13.501987 62605 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=09ee84d530de4a92f00f1c5dbc34cead092b95bc minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_03_17T00_57_13_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0317 00:57:13.509067 62605 command_runner.go:124] > -16 I0317 00:57:13.509100 62605 ops.go:34] apiserver oom_adj: -16 I0317 00:57:17.412102 62605 
command_runner.go:124] > node/minikube labeled I0317 00:57:17.416478 62605 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created I0317 00:57:17.416535 62605 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (3.914536166s) I0317 00:57:17.416555 62605 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.0/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=09ee84d530de4a92f00f1c5dbc34cead092b95bc minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_03_17T00_57_13_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (3.914530292s) I0317 00:57:17.416555 62605 kubeadm.go:995] duration metric: took 3.914804125s to wait for elevateKubeSystemPrivileges. I0317 00:57:17.416936 62605 kubeadm.go:387] StartCluster complete in 28.890865292s I0317 00:57:17.416963 62605 settings.go:142] acquiring lock: {Name:mk333c2404842577ee3bb70f8587d2d8020575cd Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0317 00:57:17.417350 62605 settings.go:150] Updating kubeconfig: /Users/jordicea/.kube/config I0317 00:57:17.432371 62605 lock.go:36] WriteFile acquiring /Users/jordicea/.kube/config: {Name:mk391032430b2eb9f3297ce2df38e5ace766f6c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0317 00:57:17.447408 62605 start.go:202] Will wait 6m0s for node up to I0317 00:57:17.447679 62605 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl scale deployment --replicas=1 coredns -n=kube-system I0317 00:57:17.447861 62605 addons.go:381] enableAddons start: toEnable=map[], additional=[] I0317 00:57:17.482487 62605 out.go:129] * Verifying Kubernetes components... 
I0317 00:57:17.482570   62605 addons.go:58] Setting storage-provisioner=true in profile "minikube"
I0317 00:57:17.482584   62605 addons.go:134] Setting addon storage-provisioner=true in "minikube"
W0317 00:57:17.482589   62605 addons.go:143] addon storage-provisioner should already be in state true
I0317 00:57:17.482604   62605 host.go:66] Checking if "minikube" exists ...
I0317 00:57:17.482604   62605 addons.go:58] Setting default-storageclass=true in profile "minikube"
I0317 00:57:17.482634   62605 addons.go:284] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0317 00:57:17.483093   62605 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0317 00:57:17.483416   62605 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0317 00:57:17.500917   62605 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0317 00:57:17.546915   62605 command_runner.go:124] > deployment.apps/coredns scaled
I0317 00:57:17.548788   62605 start.go:601] successfully scaled coredns replicas to 1
I0317 00:57:17.548884   62605 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0317 00:57:17.840985   62605 kapi.go:59] client config for minikube: &rest.Config{Host:"https://127.0.0.1:52385", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jordicea/.minikube/profiles/minikube/client.crt", KeyFile:"/Users/jordicea/.minikube/profiles/minikube/client.key", CAFile:"/Users/jordicea/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1012060b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
I0317 00:57:17.840983   62605 kapi.go:59] client config for minikube: &rest.Config{Host:"https://127.0.0.1:52385", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jordicea/.minikube/profiles/minikube/client.crt", KeyFile:"/Users/jordicea/.minikube/profiles/minikube/client.key", CAFile:"/Users/jordicea/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1012060b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
I0317 00:57:17.858009   62605 out.go:129]     - Using image gcr.io/k8s-minikube/storage-provisioner:v4
I0317 00:57:17.858176   62605 addons.go:253] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0317 00:57:17.858183   62605 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0317 00:57:17.858261   62605 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0317 00:57:17.861972   62605 kubeadm.go:479] skip waiting for components based on config.
I0317 00:57:17.861994   62605 node_conditions.go:101] verifying NodePressure condition ...
I0317 00:57:17.870964   62605 addons.go:134] Setting addon default-storageclass=true in "minikube"
W0317 00:57:17.870979   62605 addons.go:143] addon default-storageclass should already be in state true
I0317 00:57:17.870987   62605 host.go:66] Checking if "minikube" exists ...
I0317 00:57:17.871298   62605 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0317 00:57:17.871970   62605 node_conditions.go:121] node storage ephemeral capacity is 49278612Ki
I0317 00:57:17.872039   62605 node_conditions.go:122] node cpu capacity is 4
I0317 00:57:17.872061   62605 node_conditions.go:104] duration metric: took 10.063583ms to run NodePressure ...
I0317 00:57:17.872070   62605 start.go:207] waiting for startup goroutines ...
I0317 00:57:18.082292   62605 addons.go:253] installing /etc/kubernetes/addons/storageclass.yaml
I0317 00:57:18.082309   62605 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0317 00:57:18.082321   62605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52379 SSHKeyPath:/Users/jordicea/.minikube/machines/minikube/id_rsa Username:docker}
I0317 00:57:18.082396   62605 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0317 00:57:18.193681   62605 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0317 00:57:18.335430   62605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52379 SSHKeyPath:/Users/jordicea/.minikube/machines/minikube/id_rsa Username:docker}
I0317 00:57:18.406718   62605 command_runner.go:124] > serviceaccount/storage-provisioner created
I0317 00:57:18.437515   62605 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0317 00:57:18.547985   62605 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
I0317 00:57:18.551580   62605 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
I0317 00:57:18.586217   62605 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
I0317 00:57:18.625683   62605 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath created
I0317 00:57:18.716757   62605 command_runner.go:124] > pod/storage-provisioner created
I0317 00:57:18.740491   62605 command_runner.go:124] > storageclass.storage.k8s.io/standard created
I0317 00:57:18.809054   62605 out.go:129] * Enabled addons: storage-provisioner, default-storageclass
I0317 00:57:18.809414   62605 addons.go:383] enableAddons completed in 1.361750708s
I0317 00:57:18.945516   62605 start.go:460] kubectl: 1.20.4-dirty, cluster: 1.20.0 (minor skew: 0)
I0317 00:57:18.963814   62605 out.go:129] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Minikube logs

spowelljr commented 3 years ago

Hi @jordicea, since this issue seems to be the same as #7332 and that issue already has a lot of discussion, would it be ok if I close this issue and you can track #7332 for developments?

sharifelgamal commented 3 years ago

So yeah, unfortunately we do not currently support the ingress addon for the docker driver on darwin, regardless of architecture. Your best bet is going to be using a VM-based driver or tunneling, as @medyagh mentioned.
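For anyone landing here: the two workarounds mentioned in this thread can be sketched roughly as follows. This assumes a VM-capable driver is available on your machine (the `minikube start --vm=true` flag is taken from the error message above; whether a VM driver exists for Apple M1 depends on your minikube version).

```shell
# Option 1: recreate the cluster with a VM-based driver, where the
# ingress addon is supported (reusing the flags from the original report)
minikube delete
minikube start --vm=true --memory 4096 --kubernetes-version "v1.20.0" --cpus 3
minikube addons enable ingress

# Option 2: keep the docker driver and expose LoadBalancer/ingress
# services via a tunnel instead. This runs in the foreground and asks
# for sudo, so keep it open in a separate terminal:
minikube tunnel
```

With `minikube tunnel` running, services of type `LoadBalancer` get an external IP reachable from the host, which can substitute for the ingress addon in many local setups.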

Closing as a dupe of #7332.