kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

docker: no free private network subnets found with given parameters (start: "192.168.9.0") #12950

Closed: jakoberpf closed this issue 2 years ago

jakoberpf commented 2 years ago

Steps to reproduce the issue: on an M1 Pro MacBook Pro (2021) with Docker Desktop installed, run minikube start.
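For context while reproducing: the network.go lines further down show that en0 on this machine sits at 192.168.0.68 with netmask 255.255.0.0, i.e. the LAN claims the whole 192.168.0.0/16 range. A quick diagnostic sketch (standard macOS and Docker commands, not taken from this thread) to check whether another host is in the same situation before starting minikube:

# Check the host interface mask; netmask 0xffff0000 means the LAN occupies all of 192.168.0.0/16
ifconfig en0 | grep 'inet '
# List existing Docker networks and print the bridge subnet minikube has to avoid
docker network ls
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'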

I did some googling but came up empty on this issue.

Full output of the failed command (minikube start), followed by minikube logs:

๐Ÿ˜„  minikube v1.24.0 on Darwin 12.0.1 (arm64)
โœจ  Automatically selected the docker driver. Other choices: virtualbox, ssh
๐Ÿ‘  Starting control plane node minikube in cluster minikube
๐Ÿšœ  Pulling base image ...
๐Ÿ”ฅ  Creating docker container (CPUs=2, Memory=4000MB) ...\ E1115 12:37:56.567480   43168 network_create.go:85] failed to find free subnet for docker network minikube after 20 attempts: no free private network subnets found with given parameters (start: "192.168.9.0", step: 9, tries: 20)

โ—  Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: no free private network subnets found with given parameters (start: "192.168.9.0", step: 9, tries: 20)
๐Ÿณ  Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
    โ–ช Generating certificates and keys ...
    โ–ช Booting up control plane ...
    โ–ช Configuring RBAC rules ...
๐Ÿ”Ž  Verifying Kubernetes components...
    โ–ช Using image gcr.io/k8s-minikube/storage-provisioner:v5
๐ŸŒŸ  Enabled addons: storage-provisioner, default-storageclass
๐Ÿ„  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> Audit <==
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------|---------|------|---------|------------|----------|
| update-check | | minikube | jakoberpf | v1.24.0 | Sat, 13 Nov 2021 18:23:03 CET | Sat, 13 Nov 2021 18:23:04 CET |
| update-check | | minikube | jakoberpf | v1.24.0 | Sat, 13 Nov 2021 18:47:10 CET | Sat, 13 Nov 2021 18:47:10 CET |
| update-check | | minikube | jakoberpf | v1.24.0 | Sat, 13 Nov 2021 19:22:28 CET | Sat, 13 Nov 2021 19:22:28 CET |
| update-check | | minikube | jakoberpf | v1.24.0 | Sat, 13 Nov 2021 22:10:46 CET | Sat, 13 Nov 2021 22:10:46 CET |
| update-check | | minikube | jakoberpf | v1.24.0 | Sat, 13 Nov 2021 23:02:57 CET | Sat, 13 Nov 2021 23:02:57 CET |
| update-check | | minikube | jakoberpf | v1.24.0 | Sun, 14 Nov 2021 00:00:39 CET | Sun, 14 Nov 2021 00:00:40 CET |
| update-check | | minikube | jakoberpf | v1.24.0 | Sun, 14 Nov 2021 00:02:15 CET | Sun, 14 Nov 2021 00:02:15 CET |
| update-check | | minikube | jakoberpf | v1.24.0 | Sun, 14 Nov 2021 00:10:14 CET | Sun, 14 Nov 2021 00:10:15 CET |
| update-check | | minikube | jakoberpf | v1.24.0 | Sun, 14 Nov 2021 01:03:40 CET | Sun, 14 Nov 2021 01:03:40 CET |
| update-check | | minikube | jakoberpf | v1.24.0 | Sun, 14 Nov 2021 01:22:11 CET | Sun, 14 Nov 2021 01:22:11 CET |
| update-check | | minikube | jakoberpf | v1.24.0 | Sun, 14 Nov 2021 11:55:25 CET | Sun, 14 Nov 2021 11:55:26 CET |
| update-check | | minikube | jakoberpf | v1.24.0 | Sun, 14 Nov 2021 12:06:47 CET | Sun, 14 Nov 2021 12:06:47 CET |
| update-check | | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 09:26:30 CET | Mon, 15 Nov 2021 09:26:30 CET |
| update-check | | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 10:26:54 CET | Mon, 15 Nov 2021 10:26:54 CET |
| start | | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 10:38:03 CET | Mon, 15 Nov 2021 10:41:37 CET |
| delete | | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 10:42:43 CET | Mon, 15 Nov 2021 10:42:46 CET |
| start | --cpus=4 --disk-size=10GB --memory=8GB | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 10:57:13 CET | Mon, 15 Nov 2021 10:58:00 CET |
| update-context | | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 10:58:00 CET | Mon, 15 Nov 2021 10:58:00 CET |
| addons | disable ingress | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 10:58:00 CET | Mon, 15 Nov 2021 10:58:00 CET |
| addons | enable metrics-server | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 10:58:00 CET | Mon, 15 Nov 2021 10:58:01 CET |
| delete | | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 11:59:52 CET | Mon, 15 Nov 2021 11:59:57 CET |
| start | | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 12:00:47 CET | Mon, 15 Nov 2021 12:01:41 CET |
| start | --driver=docker | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 12:26:52 CET | Mon, 15 Nov 2021 12:26:59 CET |
| delete | | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 12:27:10 CET | Mon, 15 Nov 2021 12:27:15 CET |
| start | --cpus=4 --disk-size=10GB --memory=8GB --driver=docker | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 12:27:32 CET | Mon, 15 Nov 2021 12:28:18 CET |
| update-context | | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 12:28:18 CET | Mon, 15 Nov 2021 12:28:19 CET |
| addons | disable ingress | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 12:28:19 CET | Mon, 15 Nov 2021 12:28:19 CET |
| addons | enable metrics-server | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 12:28:19 CET | Mon, 15 Nov 2021 12:28:19 CET |
| delete | | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 12:31:05 CET | Mon, 15 Nov 2021 12:31:09 CET |
| start | | minikube | jakoberpf | v1.24.0 | Mon, 15 Nov 2021 12:37:53 CET | Mon, 15 Nov 2021 12:38:46 CET |

==> Last Start <==
Log file created at: 2021/11/15 12:37:53
Running on machine: Jakobs-MacBook-Pro
Binary: Built with gc go1.17.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1115 12:37:53.367398 43168 out.go:297] Setting OutFile to fd 1 ...
I1115 12:37:53.367531 43168 out.go:349] isatty.IsTerminal(1) = true
I1115 12:37:53.367532 43168 out.go:310] Setting ErrFile to fd 2...
I1115 12:37:53.367535 43168 out.go:349] isatty.IsTerminal(2) = true
I1115 12:37:53.367613 43168 root.go:313] Updating PATH: /Users/jakoberpf/.minikube/bin
I1115 12:37:53.368108 43168 out.go:304] Setting JSON to false
I1115 12:37:53.399577 43168 start.go:112] hostinfo: {"hostname":"Jakobs-MacBook-Pro.local","uptime":171085,"bootTime":1636805188,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.0.1","kernelVersion":"21.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"7aba47ce-7dd0-5260-843f-d8fcc63aa896"}
W1115 12:37:53.399698 43168 start.go:120] gopshost.Virtualization returned error: not implemented yet
I1115 12:37:53.418933 43168 out.go:176] 😄 minikube v1.24.0 on Darwin 12.0.1 (arm64)
I1115 12:37:53.419490 43168 notify.go:174] Checking for updates...
W1115 12:37:53.419511 43168 preload.go:294] Failed to list preload files: open /Users/jakoberpf/.minikube/cache/preloaded-tarball: no such file or directory
I1115 12:37:53.420057 43168 driver.go:343] Setting default libvirt URI to qemu:///system
I1115 12:37:53.420103 43168 global.go:111] Querying for installed drivers using PATH=/Users/jakoberpf/.minikube/bin:/Users/jakoberpf/.jenv/shims:/Users/jakoberpf/.gvm/bin:/opt/homebrew/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/Users/jakoberpf/.jenv/shims:/Users/jakoberpf/.gvm/bin:/opt/homebrew/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/Users/jakoberpf/.rvm/bin:/Users/jakoberpf/.bashhub/bin:/Users/jakoberpf/.rvm/bin
I1115 12:37:53.420114 43168 global.go:119] vmwarefusion default: false priority: 1, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:the 'vmwarefusion' driver is no longer available Reason: Fix:Switch to the newer 'vmware' driver by using '--driver=vmware'.
This may require first deleting your existing cluster Doc:https://minikube.sigs.k8s.io/docs/drivers/vmware/} I1115 12:37:53.605386 43168 docker.go:132] docker version: linux-20.10.10 I1115 12:37:53.605779 43168 cli_runner.go:115] Run: docker system info --format "{{json .}}" I1115 12:37:53.936445 43168 info.go:263] docker info: {ID:RRNE:M2EE:ZUHM:S5C2:XQJQ:SRUX:G2JG:TOOT:7ERU:VQXR:RMBI:AWCB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2021-11-15 11:37:53.697620875 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.47-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:10434535424 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.10 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b46e404f6b9f661a205e28d59c982d3634148f8 Expected:5b46e404f6b9f661a205e28d59c982d3634148f8} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.3] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.1.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:0.9.0]] Warnings:}} I1115 12:37:53.936539 43168 global.go:119] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I1115 12:37:53.936886 43168 global.go:119] hyperkit default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "hyperkit": executable file not found in $PATH Reason: Fix:Run 'brew install hyperkit' Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/} I1115 12:37:53.936942 43168 global.go:119] parallels default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "prlctl": executable file not found in $PATH Reason: Fix:Install Parallels Desktop for Mac Doc:https://minikube.sigs.k8s.io/docs/drivers/parallels/} I1115 12:37:53.936986 43168 global.go:119] podman default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/} I1115 12:37:53.936991 43168 global.go:119] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I1115 12:37:55.808815 43168 global.go:119] virtualbox default: true priority: 6, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I1115 12:37:55.808965 43168 global.go:119] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/} I1115 12:37:55.808989 43168 driver.go:278] not recommending "ssh" due to default: false I1115 12:37:55.809020 43168 driver.go:313] Picked: docker I1115 12:37:55.809034 43168 driver.go:314] Alternatives: [virtualbox ssh] I1115 12:37:55.809036 43168 driver.go:315] Rejects: [hyperkit parallels podman vmwarefusion vmware] I1115 12:37:55.831714 43168 out.go:176] โœจ Automatically selected the docker driver. 
Other choices: virtualbox, ssh I1115 12:37:55.831761 43168 start.go:280] selected driver: docker I1115 12:37:55.831765 43168 start.go:762] validating driver "docker" against I1115 12:37:55.831777 43168 start.go:773] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I1115 12:37:55.831953 43168 cli_runner.go:115] Run: docker system info --format "{{json .}}" I1115 12:37:56.010021 43168 info.go:263] docker info: {ID:RRNE:M2EE:ZUHM:S5C2:XQJQ:SRUX:G2JG:TOOT:7ERU:VQXR:RMBI:AWCB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2021-11-15 11:37:55.930692334 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.47-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:10434535424 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.10 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b46e404f6b9f661a205e28d59c982d3634148f8 Expected:5b46e404f6b9f661a205e28d59c982d3634148f8} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.3] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.1.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:0.9.0]] Warnings:}} I1115 12:37:56.010122 43168 start_flags.go:268] no existing cluster config was found, will generate one from the flags W1115 12:37:56.010497 43168 info.go:50] Unable to get CPU info: no such file or directory W1115 12:37:56.010532 43168 start.go:925] could not get system cpu info while verifying memory limits, which might be okay: no such file or directory W1115 12:37:56.010539 43168 info.go:50] Unable to get CPU info: no such file or directory W1115 12:37:56.010540 43168 start.go:925] could not get system cpu info while verifying memory limits, which might be okay: no such file or directory I1115 12:37:56.010544 43168 start_flags.go:349] Using suggested 4000MB memory alloc based on sys=16384MB, container=9951MB I1115 12:37:56.010614 43168 start_flags.go:736] Wait components to verify : map[apiserver:true system_pods:true] I1115 12:37:56.010621 43168 cni.go:93] Creating CNI manager for "" I1115 12:37:56.010624 43168 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I1115 12:37:56.010626 43168 start_flags.go:282] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} I1115 12:37:56.029698 43168 out.go:176] ๐Ÿ‘ Starting control plane node minikube in cluster minikube I1115 12:37:56.029746 43168 cache.go:118] Beginning downloading kic base image for docker with docker I1115 12:37:56.068399 43168 out.go:176] ๐Ÿšœ Pulling base image ... 
I1115 12:37:56.068444 43168 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker I1115 12:37:56.068479 43168 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon I1115 12:37:56.179045 43168 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull I1115 12:37:56.179070 43168 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load W1115 12:37:56.199491 43168 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v13-v1.22.3-docker-overlay2-arm64.tar.lz4 status code: 404 I1115 12:37:56.199760 43168 profile.go:147] Saving config to /Users/jakoberpf/.minikube/profiles/minikube/config.json ... I1115 12:37:56.199806 43168 lock.go:35] WriteFile acquiring /Users/jakoberpf/.minikube/profiles/minikube/config.json: {Name:mkd468f76a7bd57ba9cf2ea784172acb87b931a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1115 12:37:56.200016 43168 cache.go:206] Successfully downloaded all kic artifacts I1115 12:37:56.200493 43168 start.go:313] acquiring machines lock for minikube: {Name:mk13d5c40f39dba8c4564027c2ab81f5fde3c46d Clock:{} Delay:500ms Timeout:10m0s Cancel:} I1115 12:37:56.200542 43168 start.go:317] acquired machines lock for "minikube" in 42.542ยตs I1115 12:37:56.200663 43168 cache.go:107] acquiring lock: {Name:mk66f867c75bce4164fb7e6500fe40a2d80bbe3b Clock:{} Delay:500ms Timeout:10m0s Cancel:} I1115 12:37:56.200694 43168 cache.go:107] acquiring lock: {Name:mk51fdaae538376fef619dbc35f2df37ec3439ba Clock:{} Delay:500ms Timeout:10m0s Cancel:} I1115 12:37:56.200723 43168 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 
KubernetesVersion:v1.22.3 ControlPlane:true Worker:true} I1115 12:37:56.200794 43168 start.go:126] createHost starting for "" (driver="docker") I1115 12:37:56.200810 43168 cache.go:115] /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3 exists I1115 12:37:56.200816 43168 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.22.3" -> "/Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3" took 186.042ยตs I1115 12:37:56.200664 43168 cache.go:107] acquiring lock: {Name:mk91b0e7ed3b38625d1c333b593fbaa9f04ee2f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I1115 12:37:56.200821 43168 cache.go:115] /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/pause_3.5 exists I1115 12:37:56.200825 43168 cache.go:96] cache image "k8s.gcr.io/pause:3.5" -> "/Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/pause_3.5" took 165.709ยตs I1115 12:37:56.200829 43168 cache.go:80] save to tar file k8s.gcr.io/pause:3.5 -> /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/pause_3.5 succeeded I1115 12:37:56.200835 43168 cache.go:107] acquiring lock: {Name:mkae50040d6fabbb62c4adccccf463a1a34c7bd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I1115 12:37:56.200679 43168 cache.go:107] acquiring lock: {Name:mk1860412a02f4e46c3d023885603cae0a4bf393 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I1115 12:37:56.200820 43168 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.22.3 -> /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3 succeeded I1115 12:37:56.200860 43168 cache.go:107] acquiring lock: {Name:mk28f430394c5ad31556a13b220cd01d497f5d95 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I1115 12:37:56.200870 43168 cache.go:107] acquiring lock: {Name:mkf3b45b5c685d250996f34edb9774d083a8ab5b Clock:{} Delay:500ms Timeout:10m0s Cancel:} I1115 12:37:56.200887 43168 cache.go:115] /Users/jakoberpf/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 exists I1115 12:37:56.200893 43168 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/jakoberpf/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 59.333ยตs I1115 12:37:56.200888 43168 cache.go:107] acquiring lock: {Name:mkea0207a3ce7f994d7db77f345a6b00ff0f22ce Clock:{} Delay:500ms Timeout:10m0s Cancel:} I1115 12:37:56.200896 43168 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /Users/jakoberpf/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 succeeded I1115 12:37:56.200897 43168 cache.go:115] /Users/jakoberpf/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists I1115 12:37:56.200904 43168 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/jakoberpf/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 268.708ยตs I1115 12:37:56.200909 43168 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /Users/jakoberpf/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded I1115 12:37:56.200938 43168 cache.go:115] /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3 exists I1115 12:37:56.237766 43168 out.go:203] ๐Ÿ”ฅ Creating docker container (CPUs=2, Memory=4000MB) ... 
I1115 12:37:56.200932 43168 cache.go:107] acquiring lock: {Name:mkec283771cf6fc8924c3b2211433801f894482d Clock:{} Delay:500ms Timeout:10m0s Cancel:} I1115 12:37:56.237772 43168 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.22.3" -> "/Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3" took 37.11725ms I1115 12:37:56.237779 43168 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.22.3 -> /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3 succeeded I1115 12:37:56.200950 43168 cache.go:115] /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3 exists I1115 12:37:56.237793 43168 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.22.3" -> "/Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3" took 37.101167ms I1115 12:37:56.237796 43168 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.22.3 -> /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3 succeeded I1115 12:37:56.200980 43168 cache.go:115] /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4 exists I1115 12:37:56.237800 43168 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.4" -> "/Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4" took 36.96ms I1115 12:37:56.237802 43168 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.4 -> /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4 succeeded I1115 12:37:56.200988 43168 cache.go:115] /Users/jakoberpf/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists I1115 12:37:56.237806 43168 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jakoberpf/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 36.971834ms I1115 12:37:56.237808 43168 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jakoberpf/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded I1115 12:37:56.200983 43168 cache.go:107] acquiring lock: {Name:mkcf9ea776985ce8e66e2583a539723ab7227ab5 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I1115 12:37:56.237848 43168 cache.go:115] /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3 exists I1115 12:37:56.237849 43168 cache.go:115] /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0 exists I1115 12:37:56.237852 43168 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.0-0" -> "/Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0" took 36.89425ms I1115 12:37:56.237854 43168 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.0-0 -> /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0 succeeded I1115 12:37:56.237853 43168 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.22.3" -> "/Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3" took 36.962333ms I1115 12:37:56.237866 43168 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.3 -> /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3 succeeded I1115 12:37:56.237869 43168 cache.go:87] Successfully saved all images to host disk. 
I1115 12:37:56.237937 43168 start.go:160] libmachine.API.Create for "minikube" (driver="docker") I1115 12:37:56.237949 43168 client.go:168] LocalClient.Create starting I1115 12:37:56.238020 43168 main.go:130] libmachine: Reading certificate data from /Users/jakoberpf/.minikube/certs/ca.pem I1115 12:37:56.238348 43168 main.go:130] libmachine: Decoding PEM data... I1115 12:37:56.238358 43168 main.go:130] libmachine: Parsing certificate... I1115 12:37:56.238423 43168 main.go:130] libmachine: Reading certificate data from /Users/jakoberpf/.minikube/certs/cert.pem I1115 12:37:56.238532 43168 main.go:130] libmachine: Decoding PEM data... I1115 12:37:56.238540 43168 main.go:130] libmachine: Parsing certificate... I1115 12:37:56.238987 43168 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W1115 12:37:56.352590 43168 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I1115 12:37:56.352699 43168 network_create.go:254] running [docker network inspect minikube] to gather additional debugging logs... 
I1115 12:37:56.352712 43168 cli_runner.go:115] Run: docker network inspect minikube W1115 12:37:56.459250 43168 cli_runner.go:162] docker network inspect minikube returned with exit code 1 I1115 12:37:56.459274 43168 network_create.go:257] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1 stdout: [] stderr: Error: No such network: minikube I1115 12:37:56.459280 43168 network_create.go:259] output of [docker network inspect minikube]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: minikube ** /stderr ** I1115 12:37:56.459370 43168 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I1115 12:37:56.564485 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.564640 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.564776 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.564882 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.565001 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.565107 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.565215 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.565331 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 
Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.565439 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.565544 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.565649 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.565758 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.566352 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.566515 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.566735 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.567020 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.567138 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.567248 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 
ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.567357 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} I1115 12:37:56.567461 43168 network.go:240] skipping subnet 192.168.0.0/16 that is taken: &{IP:192.168.0.0 Netmask:255.255.0.0 Prefix:16 CIDR:192.168.0.0/16 Gateway:192.168.0.68 ClientMin:192.168.0.69 ClientMax:192.168.255.254 Broadcast:192.168.255.255 Interface:{IfaceName:en0 IfaceIPv4:192.168.0.68 IfaceMTU:1500 IfaceMAC:f4:d4:88:70:db:6d}} E1115 12:37:56.567480 43168 network_create.go:85] failed to find free subnet for docker network minikube after 20 attempts: no free private network subnets found with given parameters (start: "192.168.9.0", step: 9, tries: 20) W1115 12:37:56.567577 43168 out.go:241] โ— Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: no free private network subnets found with given parameters (start: "192.168.9.0", step: 9, tries: 20) I1115 12:37:56.567690 43168 cli_runner.go:115] Run: docker ps -a --format {{.Names}} I1115 12:37:56.677497 43168 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true I1115 12:37:56.785693 43168 oci.go:102] Successfully created a docker volume minikube I1115 12:37:56.785830 43168 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib I1115 12:37:57.519312 43168 oci.go:106] Successfully prepared a docker volume minikube I1115 12:37:57.519534 43168 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker I1115 12:37:57.519643 43168 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'" I1115 12:37:57.695112 43168 cli_runner.go:115] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c I1115 12:37:58.112140 43168 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}} I1115 12:37:58.228328 43168 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I1115 12:37:58.376864 43168 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables I1115 12:37:58.607545 43168 oci.go:281] the created container "minikube" has a running status. 
I1115 12:37:58.607573 43168 kic.go:210] Creating ssh key for kic: /Users/jakoberpf/.minikube/machines/minikube/id_rsa... I1115 12:37:58.672304 43168 kic_runner.go:187] docker (temp): /Users/jakoberpf/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I1115 12:37:58.828888 43168 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I1115 12:37:58.933990 43168 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I1115 12:37:58.934001 43168 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys] I1115 12:37:59.115231 43168 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I1115 12:37:59.220679 43168 machine.go:88] provisioning docker machine ... I1115 12:37:59.220723 43168 ubuntu.go:169] provisioning hostname "minikube" I1115 12:37:59.220844 43168 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1115 12:37:59.326057 43168 main.go:130] libmachine: Using SSH client type: native I1115 12:37:59.326292 43168 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x102d97940] 0x102d9a760 [] 0s} 127.0.0.1 51429 } I1115 12:37:59.326301 43168 main.go:130] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I1115 12:37:59.446208 43168 main.go:130] libmachine: SSH cmd err, output: : minikube I1115 12:37:59.446304 43168 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1115 12:37:59.552594 43168 main.go:130] libmachine: Using SSH client type: native I1115 12:37:59.552743 43168 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x102d97940] 0x102d9a760 [] 0s} 127.0.0.1 51429 } I1115 12:37:59.552752 43168 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi I1115 12:37:59.664342 43168 main.go:130] libmachine: SSH cmd err, output: : I1115 12:37:59.664353 43168 ubuntu.go:175] set auth options {CertDir:/Users/jakoberpf/.minikube CaCertPath:/Users/jakoberpf/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jakoberpf/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jakoberpf/.minikube/machines/server.pem ServerKeyPath:/Users/jakoberpf/.minikube/machines/server-key.pem ClientKeyPath:/Users/jakoberpf/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jakoberpf/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jakoberpf/.minikube} I1115 12:37:59.664377 43168 ubuntu.go:177] setting up certificates I1115 12:37:59.664382 43168 provision.go:83] configureAuth start I1115 12:37:59.664474 43168 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I1115 12:37:59.771433 43168 provision.go:138] copyHostCerts I1115 12:37:59.771543 43168 exec_runner.go:144] found /Users/jakoberpf/.minikube/ca.pem, removing ... 
I1115 12:37:59.771547 43168 exec_runner.go:207] rm: /Users/jakoberpf/.minikube/ca.pem I1115 12:37:59.772484 43168 exec_runner.go:151] cp: /Users/jakoberpf/.minikube/certs/ca.pem --> /Users/jakoberpf/.minikube/ca.pem (1086 bytes) I1115 12:37:59.772638 43168 exec_runner.go:144] found /Users/jakoberpf/.minikube/cert.pem, removing ... I1115 12:37:59.772641 43168 exec_runner.go:207] rm: /Users/jakoberpf/.minikube/cert.pem I1115 12:37:59.772693 43168 exec_runner.go:151] cp: /Users/jakoberpf/.minikube/certs/cert.pem --> /Users/jakoberpf/.minikube/cert.pem (1127 bytes) I1115 12:37:59.772782 43168 exec_runner.go:144] found /Users/jakoberpf/.minikube/key.pem, removing ... I1115 12:37:59.772784 43168 exec_runner.go:207] rm: /Users/jakoberpf/.minikube/key.pem I1115 12:37:59.772825 43168 exec_runner.go:151] cp: /Users/jakoberpf/.minikube/certs/key.pem --> /Users/jakoberpf/.minikube/key.pem (1679 bytes) I1115 12:37:59.773092 43168 provision.go:112] generating server cert: /Users/jakoberpf/.minikube/machines/server.pem ca-key=/Users/jakoberpf/.minikube/certs/ca.pem private-key=/Users/jakoberpf/.minikube/certs/ca-key.pem org=jakoberpf.minikube san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube minikube] I1115 12:37:59.875979 43168 provision.go:172] copyRemoteCerts I1115 12:37:59.876670 43168 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I1115 12:37:59.876722 43168 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1115 12:37:59.980473 43168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51429 SSHKeyPath:/Users/jakoberpf/.minikube/machines/minikube/id_rsa Username:docker} I1115 12:38:00.062360 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1086 bytes) I1115 12:38:00.077127 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes) I1115 12:38:00.090456 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I1115 12:38:00.104983 43168 provision.go:86] duration metric: configureAuth took 440.127083ms I1115 12:38:00.104991 43168 ubuntu.go:193] setting minikube options for container-runtime I1115 12:38:00.105116 43168 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3 I1115 12:38:00.105187 43168 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1115 12:38:00.212992 43168 main.go:130] libmachine: Using SSH client type: native I1115 12:38:00.213121 43168 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x102d97940] 0x102d9a760 [] 0s} 127.0.0.1 51429 } I1115 12:38:00.213127 43168 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I1115 12:38:00.327005 43168 main.go:130] libmachine: SSH cmd err, output: : overlay I1115 12:38:00.327012 43168 ubuntu.go:71] root file system type: overlay I1115 12:38:00.327138 43168 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... 
I1115 12:38:00.327255 43168 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1115 12:38:00.433929 43168 main.go:130] libmachine: Using SSH client type: native I1115 12:38:00.434100 43168 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x102d97940] 0x102d9a760 [] 0s} 127.0.0.1 51429 } I1115 12:38:00.434148 43168 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I1115 12:38:00.560353 43168 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. 
Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I1115 12:38:00.560486 43168 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1115 12:38:00.680499 43168 main.go:130] libmachine: Using SSH client type: native I1115 12:38:00.680661 43168 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x102d97940] 0x102d9a760 [] 0s} 127.0.0.1 51429 } I1115 12:38:00.680670 43168 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I1115 12:38:01.245208 43168 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-07-30 19:53:13.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2021-11-15 11:38:00.557736003 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 +Restart=on-failure -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. 
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. +ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. 
Executing: /lib/systemd/systemd-sysv-install enable docker I1115 12:38:01.245227 43168 machine.go:91] provisioned docker machine in 2.02451525s I1115 12:38:01.245234 43168 client.go:171] LocalClient.Create took 5.007256708s I1115 12:38:01.245241 43168 start.go:168] duration metric: libmachine.API.Create for "minikube" took 5.007278333s I1115 12:38:01.245244 43168 start.go:267] post-start starting for "minikube" (driver="docker") I1115 12:38:01.245248 43168 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I1115 12:38:01.245396 43168 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I1115 12:38:01.245447 43168 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1115 12:38:01.362523 43168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51429 SSHKeyPath:/Users/jakoberpf/.minikube/machines/minikube/id_rsa Username:docker} I1115 12:38:01.447229 43168 ssh_runner.go:152] Run: cat /etc/os-release I1115 12:38:01.452371 43168 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I1115 12:38:01.452382 43168 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I1115 12:38:01.452387 43168 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I1115 12:38:01.452390 43168 info.go:137] Remote host: Ubuntu 20.04.2 LTS I1115 12:38:01.452395 43168 filesync.go:126] Scanning /Users/jakoberpf/.minikube/addons for local assets ... I1115 12:38:01.452794 43168 filesync.go:126] Scanning /Users/jakoberpf/.minikube/files for local assets ... I1115 12:38:01.452885 43168 start.go:270] post-start completed in 207.634583ms I1115 12:38:01.453373 43168 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I1115 12:38:01.567991 43168 profile.go:147] Saving config to /Users/jakoberpf/.minikube/profiles/minikube/config.json ... 
I1115 12:38:01.568372 43168 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1115 12:38:01.568482 43168 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1115 12:38:01.678660 43168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51429 SSHKeyPath:/Users/jakoberpf/.minikube/machines/minikube/id_rsa Username:docker}
I1115 12:38:01.762333 43168 start.go:129] duration metric: createHost completed in 5.561489458s
I1115 12:38:01.762392 43168 start.go:80] releasing machines lock for "minikube", held for 5.561803s
I1115 12:38:01.762515 43168 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1115 12:38:01.876427 43168 ssh_runner.go:152] Run: systemctl --version
I1115 12:38:01.876490 43168 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1115 12:38:01.876599 43168 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
I1115 12:38:01.877086 43168 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1115 12:38:01.993912 43168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51429 SSHKeyPath:/Users/jakoberpf/.minikube/machines/minikube/id_rsa Username:docker}
I1115 12:38:01.993943 43168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51429 SSHKeyPath:/Users/jakoberpf/.minikube/machines/minikube/id_rsa Username:docker}
I1115 12:38:02.163478 43168 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
I1115 12:38:02.173862 43168 ssh_runner.go:152] Run: sudo systemctl cat docker.service
I1115 12:38:02.185737 43168 cruntime.go:255] skipping containerd shutdown because we are bound to it
I1115 12:38:02.186739 43168 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
I1115 12:38:02.198764 43168 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I1115 12:38:02.212716 43168 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
I1115 12:38:02.271753 43168 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
I1115 12:38:02.331251 43168 ssh_runner.go:152] Run: sudo systemctl cat docker.service
I1115 12:38:02.343564 43168 ssh_runner.go:152] Run: sudo systemctl daemon-reload
I1115 12:38:02.394178 43168 ssh_runner.go:152] Run: sudo systemctl start docker
I1115 12:38:02.404419 43168 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I1115 12:38:02.457715 43168 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I1115 12:38:02.521628 43168 out.go:203] 🐳  Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
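The `unmask → enable docker.socket → daemon-reload → start` sequence above is what actually applies the drop-in unit shown earlier. A minimal self-contained sketch of the same idea in Go — not minikube's code; it assumes a local root shell and a hypothetical drop-in path `/etc/systemd/system/docker.service.d/10-machine.conf`, whereas minikube runs these commands over SSH inside the kic container, with a much longer dockerd command line:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// dropIn clears the inherited ExecStart (the empty assignment) and then
// installs the replacement command -- exactly the pattern explained in
// the unit-file comments above. Flags are trimmed for brevity.
const dropIn = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576
ExecReload=/bin/kill -s HUP $MAINPID
`

func run(args ...string) error {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const dir = "/etc/systemd/system/docker.service.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(dir+"/10-machine.conf", []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
	// Same order as the log: unmask, enable the socket, reload unit
	// files, start the daemon, then sanity-check the server version.
	for _, c := range [][]string{
		{"systemctl", "unmask", "docker.service"},
		{"systemctl", "enable", "docker.socket"},
		{"systemctl", "daemon-reload"},
		{"systemctl", "start", "docker"},
		{"docker", "version", "--format", "{{.Server.Version}}"},
	} {
		if err := run(c...); err != nil {
			fmt.Println(c, "failed:", err)
			return
		}
	}
}
```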
I1115 12:38:02.521766 43168 cli_runner.go:115] Run: docker exec -t minikube dig +short host.docker.internal
I1115 12:38:02.721190 43168 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I1115 12:38:02.721331 43168 ssh_runner.go:152] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I1115 12:38:02.726233 43168 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1115 12:38:02.735414 43168 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I1115 12:38:02.846499 43168 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1115 12:38:02.846572 43168 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
I1115 12:38:02.877937 43168 docker.go:558] Got preloaded images:
I1115 12:38:02.877944 43168 docker.go:564] k8s.gcr.io/kube-apiserver:v1.22.3 wasn't preloaded
I1115 12:38:02.877947 43168 cache_images.go:83] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.22.3 k8s.gcr.io/kube-controller-manager:v1.22.3 k8s.gcr.io/kube-scheduler:v1.22.3 k8s.gcr.io/kube-proxy:v1.22.3 k8s.gcr.io/pause:3.5 k8s.gcr.io/etcd:3.5.0-0 k8s.gcr.io/coredns/coredns:v1.8.4 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.3.1 docker.io/kubernetesui/metrics-scraper:v1.0.7]
I1115 12:38:02.891133 43168 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.0-0
I1115 12:38:02.892681 43168 image.go:134] retrieving image: k8s.gcr.io/pause:3.5
I1115 12:38:02.893318 43168 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.22.3
I1115 12:38:02.893957 43168 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.3
I1115 12:38:02.894340 43168 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
I1115 12:38:02.894943 43168 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I1115 12:38:02.895641 43168 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.3
I1115 12:38:02.897060 43168 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
I1115 12:38:02.897519 43168 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.22.3
I1115 12:38:02.898116 43168 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.4
I1115 12:38:02.908965 43168 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.22.3: Error response from daemon: reference does not exist
I1115 12:38:02.908965 43168 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.5.0-0: Error response from daemon: reference does not exist
I1115 12:38:02.910547 43168 image.go:180] daemon lookup for k8s.gcr.io/pause:3.5: Error response from daemon: reference does not exist
I1115 12:38:02.911905 43168 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.22.3: Error response from daemon: reference does not exist
I1115 12:38:02.912312 43168 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: reference does not exist
I1115 12:38:02.913625 43168 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: reference does not exist
I1115 12:38:02.915038 43168 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.22.3: Error response from daemon: reference does not exist
I1115 12:38:02.915885 43168 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: reference does not exist
I1115 12:38:02.917401 43168 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.22.3: Error response from daemon: reference does not exist
I1115 12:38:02.918346 43168 image.go:180] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.4: Error response from daemon: reference does not exist
W1115 12:38:03.413063 43168 image.go:267] image k8s.gcr.io/etcd:3.5.0-0 arch mismatch: want arm64 got amd64. fixing
I1115 12:38:03.413261 43168 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.5.0-0
I1115 12:38:03.417470 43168 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.22.3
I1115 12:38:03.466521 43168 cache_images.go:111] "k8s.gcr.io/etcd:3.5.0-0" needs transfer: "k8s.gcr.io/etcd:3.5.0-0" does not exist at hash "a2ee49d2d4320959e0894768b7ca97d69e03bc360d90b591538359abf2a91609" in container runtime
I1115 12:38:03.466541 43168 docker.go:239] Removing image: k8s.gcr.io/etcd:3.5.0-0
I1115 12:38:03.466641 43168 ssh_runner.go:152] Run: docker rmi k8s.gcr.io/etcd:3.5.0-0
I1115 12:38:03.470328 43168 cache_images.go:111] "k8s.gcr.io/kube-apiserver:v1.22.3" needs transfer: "k8s.gcr.io/kube-apiserver:v1.22.3" does not exist at hash "32513be2649f452b9ed3e4aeaf8b9968224077a5838bc4446afcd8ad74e51acf" in container runtime
I1115 12:38:03.470359 43168 docker.go:239] Removing image: k8s.gcr.io/kube-apiserver:v1.22.3
I1115 12:38:03.470424 43168 ssh_runner.go:152] Run: docker rmi k8s.gcr.io/kube-apiserver:v1.22.3
I1115 12:38:03.502764 43168 cache_images.go:281] Loading image from: /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0
I1115 12:38:03.503172 43168 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.0-0
I1115 12:38:03.505368 43168 cache_images.go:281] Loading image from: /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3
I1115 12:38:03.505482 43168 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.22.3
I1115 12:38:03.510343 43168 ssh_runner.go:309] existence check for /var/lib/minikube/images/etcd_3.5.0-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.0-0: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/etcd_3.5.0-0': No such file or directory
I1115 12:38:03.510390 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0 --> /var/lib/minikube/images/etcd_3.5.0-0 (157800448 bytes)
I1115 12:38:03.511069 43168 ssh_runner.go:309] existence check for /var/lib/minikube/images/kube-apiserver_v1.22.3: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.22.3: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.22.3': No such file or directory
I1115 12:38:03.511102 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3 --> /var/lib/minikube/images/kube-apiserver_v1.22.3 (28383744 bytes)
I1115 12:38:03.521504 43168 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.22.3
I1115 12:38:03.573880 43168 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.22.3
I1115 12:38:03.579467 43168 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.22.3
I1115 12:38:03.599222 43168 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.5
I1115 12:38:03.668314 43168 cache_images.go:111] "k8s.gcr.io/kube-scheduler:v1.22.3" needs transfer: "k8s.gcr.io/kube-scheduler:v1.22.3" does not exist at hash "3893bb7d239347e1eec68a8f39501b676fc0a92b2c0101e415654bcd14a01eac" in container runtime
I1115 12:38:03.668331 43168 docker.go:239] Removing image: k8s.gcr.io/kube-scheduler:v1.22.3
I1115 12:38:03.668429 43168 ssh_runner.go:152] Run: docker rmi k8s.gcr.io/kube-scheduler:v1.22.3
W1115 12:38:03.684394 43168 image.go:267] image k8s.gcr.io/coredns/coredns:v1.8.4 arch mismatch: want arm64 got amd64. fixing
I1115 12:38:03.684555 43168 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns/coredns:v1.8.4
I1115 12:38:03.735217 43168 cache_images.go:111] "k8s.gcr.io/kube-proxy:v1.22.3" needs transfer: "k8s.gcr.io/kube-proxy:v1.22.3" does not exist at hash "3a8d1d04758e2eada31b9acaeebe6e9a9dc60f5ac267183611639fc8e0e0e0aa" in container runtime
I1115 12:38:03.735235 43168 docker.go:239] Removing image: k8s.gcr.io/kube-proxy:v1.22.3
I1115 12:38:03.735252 43168 cache_images.go:111] "k8s.gcr.io/kube-controller-manager:v1.22.3" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.22.3" does not exist at hash "42e51ba6db03efeaff32c77e1fc61a1e8a596f98343ca1882a8e1700dc263efc" in container runtime
I1115 12:38:03.735267 43168 docker.go:239] Removing image: k8s.gcr.io/kube-controller-manager:v1.22.3
I1115 12:38:03.735354 43168 ssh_runner.go:152] Run: docker rmi k8s.gcr.io/kube-proxy:v1.22.3
I1115 12:38:03.735355 43168 ssh_runner.go:152] Run: docker rmi k8s.gcr.io/kube-controller-manager:v1.22.3
I1115 12:38:03.783974 43168 cache_images.go:111] "k8s.gcr.io/pause:3.5" needs transfer: "k8s.gcr.io/pause:3.5" does not exist at hash "f7ff3c40426311c68450b0a2fce030935a625cef0e606ff2e6756870f552e760" in container runtime
I1115 12:38:03.783991 43168 docker.go:239] Removing image: k8s.gcr.io/pause:3.5
I1115 12:38:03.784081 43168 ssh_runner.go:152] Run: docker rmi k8s.gcr.io/pause:3.5
I1115 12:38:03.885257 43168 cache_images.go:281] Loading image from: /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3
I1115 12:38:03.885436 43168 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.22.3
I1115 12:38:03.906649 43168 cache_images.go:111] "k8s.gcr.io/coredns/coredns:v1.8.4" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.4" does not exist at hash "008e44c427c6ff7a26f5a1a0ddebebfd3ea33231bd96f546e1381d1dc39d34a0" in container runtime
I1115 12:38:03.906667 43168 docker.go:239] Removing image: k8s.gcr.io/coredns/coredns:v1.8.4
I1115 12:38:03.906760 43168 ssh_runner.go:152] Run: docker rmi k8s.gcr.io/coredns/coredns:v1.8.4
W1115 12:38:03.925127 43168 image.go:267] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
I1115 12:38:03.925283 43168 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I1115 12:38:03.960468 43168 cache_images.go:281] Loading image from: /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3
I1115 12:38:03.960501 43168 cache_images.go:281] Loading image from: /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3
I1115 12:38:03.960650 43168 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.22.3
I1115 12:38:03.960663 43168 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.22.3
I1115 12:38:04.025284 43168 cache_images.go:281] Loading image from: /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/pause_3.5
I1115 12:38:04.025419 43168 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.5
I1115 12:38:04.068378 43168 ssh_runner.go:309] existence check for /var/lib/minikube/images/kube-scheduler_v1.22.3: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.22.3: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.22.3': No such file or directory
I1115 12:38:04.068435 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3 --> /var/lib/minikube/images/kube-scheduler_v1.22.3 (13499904 bytes)
I1115 12:38:04.096288 43168 cache_images.go:281] Loading image from: /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4
I1115 12:38:04.096431 43168 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.4
I1115 12:38:04.123943 43168 cache_images.go:111] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
I1115 12:38:04.123960 43168 docker.go:239] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I1115 12:38:04.124067 43168 ssh_runner.go:152] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1115 12:38:04.137217 43168 ssh_runner.go:309] existence check for /var/lib/minikube/images/kube-controller-manager_v1.22.3: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.22.3: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.22.3': No such file or directory
I1115 12:38:04.137268 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3 --> /var/lib/minikube/images/kube-controller-manager_v1.22.3 (27019776 bytes)
I1115 12:38:04.140880 43168 ssh_runner.go:309] existence check for /var/lib/minikube/images/kube-proxy_v1.22.3: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.22.3: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.22.3': No such file or directory
I1115 12:38:04.140909 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3 --> /var/lib/minikube/images/kube-proxy_v1.22.3 (34377728 bytes)
I1115 12:38:04.196738 43168 ssh_runner.go:309] existence check for /var/lib/minikube/images/pause_3.5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.5: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/pause_3.5': No such file or directory
I1115 12:38:04.196785 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/pause_3.5 --> /var/lib/minikube/images/pause_3.5 (252416 bytes)
I1115 12:38:04.258692 43168 ssh_runner.go:309] existence check for /var/lib/minikube/images/coredns_v1.8.4: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.4: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/coredns_v1.8.4': No such file or directory
I1115 12:38:04.258755 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4 --> /var/lib/minikube/images/coredns_v1.8.4 (12264448 bytes)
I1115 12:38:04.342025 43168 cache_images.go:281] Loading image from: /Users/jakoberpf/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
I1115 12:38:04.342235 43168 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
W1115 12:38:04.380503 43168 image.go:267] image docker.io/kubernetesui/dashboard:v2.3.1 arch mismatch: want arm64 got amd64. fixing
I1115 12:38:04.380680 43168 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.3.1
W1115 12:38:04.390340 43168 image.go:267] image docker.io/kubernetesui/metrics-scraper:v1.0.7 arch mismatch: want arm64 got amd64. fixing
I1115 12:38:04.390524 43168 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.7
I1115 12:38:04.525498 43168 ssh_runner.go:309] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
I1115 12:38:04.525557 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
I1115 12:38:04.553717 43168 docker.go:206] Loading image: /var/lib/minikube/images/pause_3.5
I1115 12:38:04.553730 43168 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.5 | docker load"
I1115 12:38:04.669464 43168 cache_images.go:111] "docker.io/kubernetesui/dashboard:v2.3.1" needs transfer: "docker.io/kubernetesui/dashboard:v2.3.1" does not exist at hash "9fe3914f585c5ba68c0cbad7c16febea5a09caec8dbc1b0e22f2b17e613ed88a" in container runtime
I1115 12:38:04.669484 43168 docker.go:239] Removing image: docker.io/kubernetesui/dashboard:v2.3.1
I1115 12:38:04.669497 43168 cache_images.go:111] "docker.io/kubernetesui/metrics-scraper:v1.0.7" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.7" does not exist at hash "ea493a196fbd2426a92d57ad4e606d1efc11049d7e7bedf90b160d74d75308c2" in container runtime
I1115 12:38:04.669518 43168 docker.go:239] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.7
I1115 12:38:04.669588 43168 ssh_runner.go:152] Run: docker rmi docker.io/kubernetesui/dashboard:v2.3.1
I1115 12:38:04.669599 43168 ssh_runner.go:152] Run: docker rmi docker.io/kubernetesui/metrics-scraper:v1.0.7
I1115 12:38:04.968410 43168 cache_images.go:281] Loading image from: /Users/jakoberpf/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1
I1115 12:38:04.968674 43168 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.3.1
I1115 12:38:04.968731 43168 cache_images.go:281] Loading image from: /Users/jakoberpf/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7
I1115 12:38:04.968949 43168 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.7
I1115 12:38:04.983369 43168 cache_images.go:310] Transferred and loaded /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/pause_3.5 from cache
I1115 12:38:05.161681 43168 ssh_runner.go:309] existence check for /var/lib/minikube/images/dashboard_v2.3.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.3.1: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/dashboard_v2.3.1': No such file or directory
I1115 12:38:05.161785 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 --> /var/lib/minikube/images/dashboard_v2.3.1 (65396736 bytes)
I1115 12:38:05.161927 43168 ssh_runner.go:309] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.7: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.7': No such file or directory
I1115 12:38:05.161948 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 --> /var/lib/minikube/images/metrics-scraper_v1.0.7 (13969408 bytes)
I1115 12:38:07.115118 43168 docker.go:206] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I1115 12:38:07.115156 43168 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
I1115 12:38:08.015314 43168 cache_images.go:310] Transferred and loaded /Users/jakoberpf/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I1115 12:38:08.015357 43168 docker.go:206] Loading image: /var/lib/minikube/images/kube-scheduler_v1.22.3
I1115 12:38:08.015376 43168 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.22.3 | docker load"
I1115 12:38:09.725059 43168 ssh_runner.go:192] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.22.3 | docker load": (1.709653667s)
I1115 12:38:09.725077 43168 cache_images.go:310] Transferred and loaded /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3 from cache
I1115 12:38:09.725109 43168 docker.go:206] Loading image: /var/lib/minikube/images/coredns_v1.8.4
I1115 12:38:09.725118 43168 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.4 | docker load"
I1115 12:38:11.063753 43168 ssh_runner.go:192] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.4 | docker load": (1.3385845s)
I1115 12:38:11.063770 43168 cache_images.go:310] Transferred and loaded /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4 from cache
I1115 12:38:11.063802 43168 docker.go:206] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.7
I1115 12:38:11.063811 43168 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/metrics-scraper_v1.0.7 | docker load"
I1115 12:38:12.267899 43168 ssh_runner.go:192] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/metrics-scraper_v1.0.7 | docker load": (1.204062083s)
I1115 12:38:12.267916 43168 cache_images.go:310] Transferred and loaded /Users/jakoberpf/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 from cache
I1115 12:38:12.267947 43168 docker.go:206] Loading image: /var/lib/minikube/images/kube-apiserver_v1.22.3
I1115 12:38:12.267956 43168 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.22.3 | docker load"
I1115 12:38:14.929402 43168 ssh_runner.go:192] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.22.3 | docker load": (2.661410208s)
I1115 12:38:14.929419 43168 cache_images.go:310] Transferred and loaded /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3 from cache
I1115 12:38:14.929446 43168 docker.go:206] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.22.3
I1115 12:38:14.929470 43168 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.22.3 | docker load"
I1115 12:38:17.211301 43168 ssh_runner.go:192] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.22.3 | docker load": (2.281796375s)
I1115 12:38:17.211425 43168 cache_images.go:310] Transferred and loaded /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3 from cache
I1115 12:38:17.211465 43168 docker.go:206] Loading image: /var/lib/minikube/images/kube-proxy_v1.22.3
I1115 12:38:17.211476 43168 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.22.3 | docker load"
I1115 12:38:18.801075 43168 ssh_runner.go:192] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.22.3 | docker load": (1.589572083s)
I1115 12:38:18.801107 43168 cache_images.go:310] Transferred and loaded /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3 from cache
I1115 12:38:18.801132 43168 docker.go:206] Loading image: /var/lib/minikube/images/dashboard_v2.3.1
I1115 12:38:18.801144 43168 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/dashboard_v2.3.1 | docker load"
I1115 12:38:22.579241 43168 ssh_runner.go:192] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/dashboard_v2.3.1 | docker load": (3.778038833s)
I1115 12:38:22.579304 43168 cache_images.go:310] Transferred and loaded /Users/jakoberpf/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 from cache
I1115 12:38:22.579368 43168 docker.go:206] Loading image: /var/lib/minikube/images/etcd_3.5.0-0
I1115 12:38:22.579381 43168 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.0-0 | docker load"
I1115 12:38:27.762900 43168 ssh_runner.go:192] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.0-0 | docker load": (5.183465333s)
I1115 12:38:27.762928 43168 cache_images.go:310] Transferred and loaded /Users/jakoberpf/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0 from cache
I1115 12:38:27.762981 43168 cache_images.go:118] Successfully loaded all cached images
I1115 12:38:27.762990 43168 cache_images.go:87] LoadImages completed in 24.8849s
I1115 12:38:27.763849 43168 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
I1115 12:38:27.863924 43168 cni.go:93] Creating CNI manager for ""
I1115 12:38:27.863940 43168 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I1115 12:38:27.864478 43168 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1115 12:38:27.864515 43168 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.22.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I1115 12:38:27.864917 43168 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.22.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s

I1115 12:38:27.865094 43168 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.22.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2

[Install]
 config:
{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1115 12:38:27.865277 43168 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.3
I1115 12:38:27.873308 43168 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.22.3: Process exited with status 2
stdout:

stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.22.3': No such file or directory

Initiating transfer...
I1115 12:38:27.873480 43168 ssh_runner.go:152] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.22.3
I1115 12:38:27.880713 43168 binary.go:67] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.22.3/bin/linux/arm64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.22.3/bin/linux/arm64/kubeadm.sha256
I1115 12:38:27.880713 43168 binary.go:67] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.22.3/bin/linux/arm64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.22.3/bin/linux/arm64/kubelet.sha256
I1115 12:38:27.880782 43168 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
I1115 12:38:27.880893 43168 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.22.3/kubeadm
I1115 12:38:27.880886 43168 binary.go:67] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.22.3/bin/linux/arm64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.22.3/bin/linux/arm64/kubectl.sha256
I1115 12:38:27.881016 43168 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.22.3/kubectl
I1115 12:38:27.891609 43168 ssh_runner.go:309] existence check for /var/lib/minikube/binaries/v1.22.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.22.3/kubectl: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.22.3/kubectl': No such file or directory
I1115 12:38:27.891638 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/cache/linux/v1.22.3/kubectl --> /var/lib/minikube/binaries/v1.22.3/kubectl (43450368 bytes)
I1115 12:38:27.891699 43168 ssh_runner.go:309] existence check for /var/lib/minikube/binaries/v1.22.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.22.3/kubeadm: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.22.3/kubeadm': No such file or directory
I1115 12:38:27.891727 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/cache/linux/v1.22.3/kubeadm --> /var/lib/minikube/binaries/v1.22.3/kubeadm (42467328 bytes)
I1115 12:38:27.891780 43168 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.22.3/kubelet
I1115 12:38:27.897286 43168 ssh_runner.go:309] existence check for /var/lib/minikube/binaries/v1.22.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.22.3/kubelet: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.22.3/kubelet': No such file or directory
I1115 12:38:27.897343 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/cache/linux/v1.22.3/kubelet --> /var/lib/minikube/binaries/v1.22.3/kubelet (112474152 bytes)
I1115 12:38:34.457143 43168 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1115 12:38:34.465182 43168 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
I1115 12:38:34.476196 43168 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1115 12:38:34.487553 43168 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2045 bytes)
I1115 12:38:34.499542 43168 ssh_runner.go:152] Run: grep 172.17.0.2 control-plane.minikube.internal$ /etc/hosts
I1115 12:38:34.504825 43168 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.0.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1115 12:38:34.514592 43168 certs.go:54] Setting up /Users/jakoberpf/.minikube/profiles/minikube for IP: 172.17.0.2
I1115 12:38:34.515443 43168 certs.go:182] skipping minikubeCA CA generation: /Users/jakoberpf/.minikube/ca.key
I1115 12:38:34.515650 43168 certs.go:182] skipping proxyClientCA CA generation: /Users/jakoberpf/.minikube/proxy-client-ca.key
I1115 12:38:34.515718 43168 certs.go:302] generating minikube-user signed cert: /Users/jakoberpf/.minikube/profiles/minikube/client.key
I1115 12:38:34.515743 43168 crypto.go:68] Generating cert /Users/jakoberpf/.minikube/profiles/minikube/client.crt with IP's: []
I1115 12:38:34.630331 43168 crypto.go:156] Writing cert to /Users/jakoberpf/.minikube/profiles/minikube/client.crt ...
I1115 12:38:34.630340 43168 lock.go:35] WriteFile acquiring /Users/jakoberpf/.minikube/profiles/minikube/client.crt: {Name:mk948edb4b726e0fe66f626a6150f827cc635574 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1115 12:38:34.630549 43168 crypto.go:164] Writing key to /Users/jakoberpf/.minikube/profiles/minikube/client.key ...
I1115 12:38:34.630551 43168 lock.go:35] WriteFile acquiring /Users/jakoberpf/.minikube/profiles/minikube/client.key: {Name:mk05fe8812c01b5175d1dcf2b68e74e0ca9cd07f Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1115 12:38:34.630658 43168 certs.go:302] generating minikube signed cert: /Users/jakoberpf/.minikube/profiles/minikube/apiserver.key.7b749c5f
I1115 12:38:34.630668 43168 crypto.go:68] Generating cert /Users/jakoberpf/.minikube/profiles/minikube/apiserver.crt.7b749c5f with IP's: [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1]
I1115 12:38:34.668767 43168 crypto.go:156] Writing cert to /Users/jakoberpf/.minikube/profiles/minikube/apiserver.crt.7b749c5f ...
I1115 12:38:34.668771 43168 lock.go:35] WriteFile acquiring /Users/jakoberpf/.minikube/profiles/minikube/apiserver.crt.7b749c5f: {Name:mkad8e7a52ca9858f9fe912b890a5975dfcfda7a Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1115 12:38:34.668906 43168 crypto.go:164] Writing key to /Users/jakoberpf/.minikube/profiles/minikube/apiserver.key.7b749c5f ...
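Side note on the certificate steps here (crypto.go:68): the apiserver cert is generated with IP SANs covering the node IP, the service VIP (10.96.0.1), and loopback, which is exactly the `with IP's: [...]` list above. A minimal, self-contained sketch of generating such a cert with Go's standard library — self-signed here for brevity, whereas minikube signs with the minikubeCA key:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Same IP SAN set as the log: node IP, service VIP, loopback(s).
	ips := []net.IP{
		net.ParseIP("172.17.0.2"),
		net.ParseIP("10.96.0.1"),
		net.ParseIP("127.0.0.1"),
		net.ParseIP("10.0.0.1"),
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // this field is what "with IP's: [...]" refers to
	}
	// Self-signed (template == parent) to keep the sketch short.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```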
I1115 12:38:34.668908 43168 lock.go:35] WriteFile acquiring /Users/jakoberpf/.minikube/profiles/minikube/apiserver.key.7b749c5f: {Name:mk2b30f94c1b0f69d6deae3fed9bd60698831d21 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1115 12:38:34.668993 43168 certs.go:320] copying /Users/jakoberpf/.minikube/profiles/minikube/apiserver.crt.7b749c5f -> /Users/jakoberpf/.minikube/profiles/minikube/apiserver.crt
I1115 12:38:34.669082 43168 certs.go:324] copying /Users/jakoberpf/.minikube/profiles/minikube/apiserver.key.7b749c5f -> /Users/jakoberpf/.minikube/profiles/minikube/apiserver.key
I1115 12:38:34.669169 43168 certs.go:302] generating aggregator signed cert: /Users/jakoberpf/.minikube/profiles/minikube/proxy-client.key
I1115 12:38:34.669178 43168 crypto.go:68] Generating cert /Users/jakoberpf/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I1115 12:38:34.763419 43168 crypto.go:156] Writing cert to /Users/jakoberpf/.minikube/profiles/minikube/proxy-client.crt ...
I1115 12:38:34.763429 43168 lock.go:35] WriteFile acquiring /Users/jakoberpf/.minikube/profiles/minikube/proxy-client.crt: {Name:mkf33aa385212272df55714501167ea332c8ddab Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1115 12:38:34.763608 43168 crypto.go:164] Writing key to /Users/jakoberpf/.minikube/profiles/minikube/proxy-client.key ...
I1115 12:38:34.763610 43168 lock.go:35] WriteFile acquiring /Users/jakoberpf/.minikube/profiles/minikube/proxy-client.key: {Name:mk5762877d48c0cec0a3c6f65b426710a7da5c45 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1115 12:38:34.763898 43168 certs.go:388] found cert: /Users/jakoberpf/.minikube/certs/Users/jakoberpf/.minikube/certs/ca-key.pem (1679 bytes)
I1115 12:38:34.764092 43168 certs.go:388] found cert: /Users/jakoberpf/.minikube/certs/Users/jakoberpf/.minikube/certs/ca.pem (1086 bytes)
I1115 12:38:34.764202 43168 certs.go:388] found cert: /Users/jakoberpf/.minikube/certs/Users/jakoberpf/.minikube/certs/cert.pem (1127 bytes)
I1115 12:38:34.764288 43168 certs.go:388] found cert: /Users/jakoberpf/.minikube/certs/Users/jakoberpf/.minikube/certs/key.pem (1679 bytes)
I1115 12:38:34.766877 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1115 12:38:34.781694 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1115 12:38:34.797145 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1115 12:38:34.812986 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1115 12:38:34.827689 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1115 12:38:34.842557 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1115 12:38:34.856544 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1115 12:38:34.870948 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1115 12:38:34.885439 43168 ssh_runner.go:319] scp /Users/jakoberpf/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1115 12:38:34.901498 43168 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1115 12:38:34.913882 43168 ssh_runner.go:152] Run: openssl version
I1115 12:38:34.919370 43168 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1115 12:38:34.926121 43168 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1115 12:38:34.930674 43168 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:41 /usr/share/ca-certificates/minikubeCA.pem
I1115 12:38:34.930725 43168 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1115 12:38:34.935677 43168 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1115 12:38:34.942690 43168 kubeadm.go:390] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
I1115 12:38:34.942787 43168 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1115 12:38:34.971680 43168 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1115 12:38:34.979007 43168 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1115 12:38:34.985616 43168 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I1115 12:38:34.985764 43168 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1115 12:38:34.993125 43168 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1115 12:38:34.993154 43168 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1115 12:38:35.492998 43168 out.go:203]     ▪ Generating certificates and keys ...
I1115 12:38:36.987322 43168 out.go:203]     ▪ Booting up control plane ...
I1115 12:38:44.552652 43168 out.go:203]     ▪ Configuring RBAC rules ...
I1115 12:38:44.931952 43168 cni.go:93] Creating CNI manager for ""
I1115 12:38:44.931967 43168 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I1115 12:38:44.931995 43168 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1115 12:38:44.932202 43168 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_11_15T12_38_44_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I1115 12:38:44.932242 43168 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1115 12:38:44.968307 43168 ops.go:34] apiserver oom_adj: -16
I1115 12:38:45.307079 43168 kubeadm.go:985] duration metric: took 375.072459ms to wait for elevateKubeSystemPrivileges.
I1115 12:38:45.307104 43168 kubeadm.go:392] StartCluster complete in 10.364362333s
I1115 12:38:45.307119 43168 settings.go:142] acquiring lock: {Name:mk06beb820383ea100af03fea702960cc87d129d Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1115 12:38:45.307292 43168 settings.go:150] Updating kubeconfig: /Users/jakoberpf/.kube/config
I1115 12:38:45.308902 43168 lock.go:35] WriteFile acquiring /Users/jakoberpf/.kube/config: {Name:mka7d441a91a01a36f51ef1e8c85d2500638ad71 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1115 12:38:45.877766 43168 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I1115 12:38:45.877822 43168 start.go:229] Will wait 6m0s for node &{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
I1115 12:38:45.878372 43168 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1115 12:38:45.912988 43168 out.go:176] 🔎  Verifying Kubernetes components...
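Stepping back to the image-cache phase earlier in the log: for every image, minikube compares the image ID reported by the runtime against an expected hash, and on a miss re-transfers the cached tarball and pipes it through `docker load` (the `sudo cat <tar> | docker load` lines). A sketch of both steps under stated assumptions — the hash value and tarball path are taken from this log, and the helper names are mine, not minikube's:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime lacks the image at the
// expected ID, mirroring the "needs transfer: ... does not exist at
// hash ..." decision in the cache_images.go lines above.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present at all
	}
	got := strings.TrimSpace(string(out)) // "sha256:<hex>"
	return !strings.HasSuffix(got, wantID)
}

// loadFromTar streams a cached image tarball into the daemon: the Go
// equivalent of the `sudo cat <tar> | docker load` pipeline.
func loadFromTar(tarPath string) error {
	f, err := os.Open(tarPath)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	return cmd.Run()
}

func main() {
	img := "k8s.gcr.io/pause:3.5"
	want := "f7ff3c40426311c68450b0a2fce030935a625cef0e606ff2e6756870f552e760"
	if needsTransfer(img, want) {
		if err := loadFromTar("/var/lib/minikube/images/pause_3.5"); err != nil {
			fmt.Println("load failed:", err)
		}
	}
}
```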
I1115 12:38:45.913275 43168 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
I1115 12:38:45.878581 43168 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I1115 12:38:45.878793 43168 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1115 12:38:45.913843 43168 addons.go:65] Setting storage-provisioner=true in profile "minikube"
I1115 12:38:45.913871 43168 addons.go:153] Setting addon storage-provisioner=true in "minikube"
W1115 12:38:45.913882 43168 addons.go:165] addon storage-provisioner should already be in state true
I1115 12:38:45.913944 43168 host.go:66] Checking if "minikube" exists ...
I1115 12:38:45.914177 43168 addons.go:65] Setting default-storageclass=true in profile "minikube"
I1115 12:38:45.914369 43168 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I1115 12:38:45.916286 43168 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1115 12:38:45.923395 43168 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1115 12:38:45.964524 43168 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I1115 12:38:45.964525 43168 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.65.2 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.22.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1115 12:38:46.126169 43168 start.go:739] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
I1115 12:38:46.169837 43168 api_server.go:51] waiting for apiserver process to appear ...
I1115 12:38:46.185896 43168 out.go:176]     ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1115 12:38:46.172600 43168 addons.go:153] Setting addon default-storageclass=true in "minikube"
W1115 12:38:46.185935 43168 addons.go:165] addon default-storageclass should already be in state true
I1115 12:38:46.185966 43168 host.go:66] Checking if "minikube" exists ...
I1115 12:38:46.185980 43168 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1115 12:38:46.185983 43168 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1115 12:38:46.185988 43168 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1115 12:38:46.186053 43168 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1115 12:38:46.187008 43168 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1115 12:38:46.199764 43168 api_server.go:71] duration metric: took 321.913459ms to wait for apiserver process to appear ...
I1115 12:38:46.199818 43168 api_server.go:87] waiting for apiserver healthz status ...
I1115 12:38:46.199826 43168 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51433/healthz ...
I1115 12:38:46.207868 43168 api_server.go:266] https://127.0.0.1:51433/healthz returned 200:
ok
I1115 12:38:46.209672 43168 api_server.go:140] control plane version: v1.22.3
I1115 12:38:46.209679 43168 api_server.go:130] duration metric: took 9.857709ms to wait for apiserver health ...
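The healthz wait above is a straightforward poll of `https://127.0.0.1:<mapped 8443 port>/healthz` until it answers 200 "ok". A minimal sketch of the same loop — TLS verification is skipped here purely to keep the example short, since the probe hits loopback with a cluster-issued cert; the URL is the one from this log:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver's /healthz endpoint until it returns
// 200 or the deadline passes, like the api_server.go wait above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Verification skipped for brevity in this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// Port 51433 is the docker-published 8443/tcp port from this log.
	if err := waitHealthz("https://127.0.0.1:51433/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```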
I1115 12:38:46.209684 43168 system_pods.go:43] waiting for kube-system pods to appear ...
I1115 12:38:46.219217 43168 system_pods.go:59] 4 kube-system pods found
I1115 12:38:46.219230 43168 system_pods.go:61] "etcd-minikube" [81df6d7a-1e5e-49dc-a7e2-a9a5a70bf846] Pending
I1115 12:38:46.219236 43168 system_pods.go:61] "kube-apiserver-minikube" [73d40e5d-b3c9-4ddf-927a-cacdfaace9ac] Pending
I1115 12:38:46.219240 43168 system_pods.go:61] "kube-controller-manager-minikube" [dd2768d2-e890-4ffb-a584-37cbe5a52fe7] Pending
I1115 12:38:46.219242 43168 system_pods.go:61] "kube-scheduler-minikube" [79430d4f-35f5-4251-a390-6e256284de90] Pending
I1115 12:38:46.219243 43168 system_pods.go:74] duration metric: took 9.558ms to wait for pod list to return data ...
I1115 12:38:46.219248 43168 kubeadm.go:547] duration metric: took 341.403375ms to wait for : map[apiserver:true system_pods:true] ...
I1115 12:38:46.219256 43168 node_conditions.go:102] verifying NodePressure condition ...
I1115 12:38:46.224177 43168 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
I1115 12:38:46.224192 43168 node_conditions.go:123] node cpu capacity is 6
I1115 12:38:46.224200 43168 node_conditions.go:105] duration metric: took 4.941625ms to run NodePressure ...
I1115 12:38:46.224207 43168 start.go:234] waiting for startup goroutines ...
I1115 12:38:46.307564 43168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51429 SSHKeyPath:/Users/jakoberpf/.minikube/machines/minikube/id_rsa Username:docker}
I1115 12:38:46.307610 43168 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I1115 12:38:46.307615 43168 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1115 12:38:46.307684 43168 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1115 12:38:46.395557 43168 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1115 12:38:46.427033 43168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51429 SSHKeyPath:/Users/jakoberpf/.minikube/machines/minikube/id_rsa Username:docker}
I1115 12:38:46.520920 43168 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1115 12:38:46.697980 43168 out.go:176] 🌟  Enabled addons: storage-provisioner, default-storageclass
I1115 12:38:46.698029 43168 addons.go:417] enableAddons completed in 819.448625ms
I1115 12:38:46.838105 43168 start.go:473] kubectl: 1.21.5, cluster: 1.22.3 (minor skew: 1)
I1115 12:38:46.872941 43168 out.go:176] 🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

*
* ==> Docker <==
*
-- Logs begin at Mon 2021-11-15 11:37:58 UTC, end at Mon 2021-11-15 11:39:57 UTC. --
Nov 15 11:37:58 minikube systemd[1]: Starting Docker Application Container Engine...
Nov 15 11:37:58 minikube dockerd[211]: time="2021-11-15T11:37:58.489520627Z" level=info msg="Starting up"
Nov 15 11:37:58 minikube dockerd[211]: time="2021-11-15T11:37:58.492826835Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 15 11:37:58 minikube dockerd[211]: time="2021-11-15T11:37:58.493200460Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 15 11:37:58 minikube dockerd[211]: time="2021-11-15T11:37:58.493305835Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 15 11:37:58 minikube dockerd[211]: time="2021-11-15T11:37:58.493319252Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 15 11:37:58 minikube dockerd[211]: time="2021-11-15T11:37:58.494970169Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 15 11:37:58 minikube dockerd[211]: time="2021-11-15T11:37:58.495031127Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 15 11:37:58 minikube dockerd[211]: time="2021-11-15T11:37:58.495047877Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 15 11:37:58 minikube dockerd[211]: time="2021-11-15T11:37:58.495059419Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 15 11:37:58 minikube dockerd[211]: time="2021-11-15T11:37:58.539069585Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Nov 15 11:37:58 minikube dockerd[211]: time="2021-11-15T11:37:58.539370544Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Nov 15 11:37:58 minikube dockerd[211]: time="2021-11-15T11:37:58.539653210Z" level=info msg="Loading containers: start."
Nov 15 11:37:58 minikube dockerd[211]: time="2021-11-15T11:37:58.627518710Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 15 11:37:58 minikube dockerd[211]: time="2021-11-15T11:37:58.654273960Z" level=info msg="Loading containers: done."
Nov 15 11:37:58 minikube dockerd[211]: time="2021-11-15T11:37:58.676811044Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8
Nov 15 11:37:58 minikube dockerd[211]: time="2021-11-15T11:37:58.676914002Z" level=info msg="Daemon has completed initialization"
Nov 15 11:37:58 minikube systemd[1]: Started Docker Application Container Engine.
Nov 15 11:37:58 minikube dockerd[211]: time="2021-11-15T11:37:58.710972960Z" level=info msg="API listen on /run/docker.sock"
Nov 15 11:38:00 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Nov 15 11:38:01 minikube systemd[1]: Stopping Docker Application Container Engine...
Nov 15 11:38:01 minikube dockerd[211]: time="2021-11-15T11:38:01.038899545Z" level=info msg="Processing signal 'terminated'"
Nov 15 11:38:01 minikube dockerd[211]: time="2021-11-15T11:38:01.040989128Z" level=info msg="Daemon shutdown complete"
Nov 15 11:38:01 minikube systemd[1]: docker.service: Succeeded.
Nov 15 11:38:01 minikube systemd[1]: Stopped Docker Application Container Engine.
Nov 15 11:38:01 minikube systemd[1]: Starting Docker Application Container Engine...
Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.088057545Z" level=info msg="Starting up" Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.092157336Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.092289211Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.092309545Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.092321836Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.093912628Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.094005253Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.094023336Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.094030170Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.100534586Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.102934711Z" level=warning msg="Your kernel does not support cgroup blkio weight" Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.102952586Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.103052170Z" level=info msg="Loading containers: start." Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.173794837Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address" Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.205239587Z" level=info msg="Loading containers: done." Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.223693295Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8 Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.223746878Z" level=info msg="Daemon has completed initialization" Nov 15 11:38:01 minikube systemd[1]: Started Docker Application Container Engine. 
Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.247334212Z" level=info msg="API listen on [::]:2376" Nov 15 11:38:01 minikube dockerd[468]: time="2021-11-15T11:38:01.251873128Z" level=info msg="API listen on /var/run/docker.sock" Nov 15 11:39:28 minikube dockerd[468]: time="2021-11-15T11:39:28.434491919Z" level=info msg="ignoring event" container=a13b6f8a6a5199ca81312ac12493a2c1b068f136f75c41291986e59b55b80f6a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID ea3fd217a504e 66749159455b3 29 seconds ago Running storage-provisioner 1 e7456940c5103 65a490ea82ac0 008e44c427c6f 59 seconds ago Running coredns 0 b013a8e354faa a13b6f8a6a519 66749159455b3 59 seconds ago Exited storage-provisioner 0 e7456940c5103 f3b2787a58e9c 3a8d1d04758e2 About a minute ago Running kube-proxy 0 4e214f99d357b 701539f5a7e09 42e51ba6db03e About a minute ago Running kube-controller-manager 0 b145421a0718f 1d34e41e7655c a2ee49d2d4320 About a minute ago Running etcd 0 759357d12f579 cb8a71a59a7a0 32513be2649f4 About a minute ago Running kube-apiserver 0 2c2bcd2ddba28 e1a88102d5d26 3893bb7d23934 About a minute ago Running kube-scheduler 0 e343a8c2d0f40 * * ==> coredns [65a490ea82ac] <== * .:53 [INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94 CoreDNS-1.8.4 linux/arm64, go1.16.4, 053c4d5 * * ==> describe nodes <== * Name: minikube Roles: control-plane,master Labels: beta.kubernetes.io/arch=arm64 beta.kubernetes.io/os=linux kubernetes.io/arch=arm64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_11_15T12_38_44_0700 minikube.k8s.io/version=v1.24.0 node-role.kubernetes.io/control-plane= node-role.kubernetes.io/master= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Mon, 15 Nov 2021 11:38:41 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Mon, 15 Nov 2021 11:39:56 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Mon, 15 Nov 2021 11:38:55 +0000 Mon, 15 Nov 2021 11:38:40 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Mon, 15 Nov 2021 11:38:55 +0000 Mon, 15 Nov 2021 11:38:40 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Mon, 15 Nov 2021 11:38:55 +0000 Mon, 15 Nov 2021 11:38:40 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Mon, 15 Nov 2021 11:38:55 +0000 Mon, 15 Nov 2021 11:38:55 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 172.17.0.2 Hostname: minikube Capacity: cpu: 6 ephemeral-storage: 61255492Ki hugepages-1Gi: 0 hugepages-2Mi: 0 hugepages-32Mi: 0 hugepages-64Ki: 0 memory: 10189976Ki pods: 110 Allocatable: cpu: 6 ephemeral-storage: 61255492Ki hugepages-1Gi: 0 hugepages-2Mi: 0 hugepages-32Mi: 0 hugepages-64Ki: 0 memory: 10189976Ki pods: 110 System Info: Machine ID: b4bce83d7afb4708a4b1b0614c6c2ef8 System UUID: b4bce83d7afb4708a4b1b0614c6c2ef8 Boot ID: d4d99d08-6f28-46f8-81f0-718eaea0221e Kernel Version: 5.10.47-linuxkit OS Image: Ubuntu 20.04.2 LTS 
Operating System: linux Architecture: arm64 Container Runtime Version: docker://20.10.8 Kubelet Version: v1.22.3 Kube-Proxy Version: v1.22.3 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (7 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- kube-system coredns-78fcd69978-r6b4r 100m (1%!)(MISSING) 0 (0%!)(MISSING) 70Mi (0%!)(MISSING) 170Mi (1%!)(MISSING) 60s kube-system etcd-minikube 100m (1%!)(MISSING) 0 (0%!)(MISSING) 100Mi (1%!)(MISSING) 0 (0%!)(MISSING) 72s kube-system kube-apiserver-minikube 250m (4%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 72s kube-system kube-controller-manager-minikube 200m (3%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 72s kube-system kube-proxy-r5jcl 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 61s kube-system kube-scheduler-minikube 100m (1%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 75s kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 71s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 750m (12%!)(MISSING) 0 (0%!)(MISSING) memory 170Mi (1%!)(MISSING) 170Mi (1%!)(MISSING) ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-32Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-64Ki 0 (0%!)(MISSING) 0 (0%!)(MISSING) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 59s kube-proxy Normal Starting 73s kubelet Starting kubelet. Normal NodeHasSufficientMemory 72s kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 72s kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 72s kubelet Node minikube status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 72s kubelet Updated Node Allocatable limit across pods Normal NodeReady 62s kubelet Node minikube status is now: NodeReady * * ==> dmesg <== * [Nov15 09:44] cacheinfo: Unable to detect cache hierarchy for CPU 0 [ +6.439447] grpcfuse: loading out-of-tree module taints kernel. 
[Nov15 11:39] hrtimer: interrupt took 3905208 ns * * ==> etcd [1d34e41e7655] <== * running etcd on unsupported architecture "arm64" since ETCD_UNSUPPORTED_ARCH is set [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2021-11-15 11:38:39.095595 I | etcdmain: etcd Version: 3.4.13 2021-11-15 11:38:39.095622 I | etcdmain: Git SHA: ae9734ed2 2021-11-15 11:38:39.095624 I | etcdmain: Go Version: go1.13.5 2021-11-15 11:38:39.095626 I | etcdmain: Go OS/Arch: linux/arm64 2021-11-15 11:38:39.095628 I | etcdmain: setting maximum number of CPUs to 6, total number of available CPUs is 6 [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2021-11-15 11:38:39.095677 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2021-11-15 11:38:39.096302 I | embed: name = minikube 2021-11-15 11:38:39.096317 I | embed: data dir = /var/lib/minikube/etcd 2021-11-15 11:38:39.096319 I | embed: member dir = /var/lib/minikube/etcd/member 2021-11-15 11:38:39.096322 I | embed: heartbeat = 100ms 2021-11-15 11:38:39.096323 I | embed: election = 1000ms 2021-11-15 11:38:39.096325 I | embed: snapshot count = 10000 2021-11-15 11:38:39.096333 I | embed: advertise client URLs = https://172.17.0.2:2379 2021-11-15 11:38:39.167106 I | etcdserver: starting member b8e14bda2255bc24 in cluster 38b0e74a458e7a1f raft2021/11/15 11:38:39 INFO: b8e14bda2255bc24 switched to configuration voters=() raft2021/11/15 11:38:39 INFO: b8e14bda2255bc24 became follower at term 0 raft2021/11/15 11:38:39 INFO: newRaft b8e14bda2255bc24 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] raft2021/11/15 11:38:39 INFO: b8e14bda2255bc24 became follower at term 1 raft2021/11/15 11:38:39 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620) 2021-11-15 11:38:39.171552 W | auth: simple token is not cryptographically signed 2021-11-15 11:38:39.179004 I | etcdserver: starting server... 
[version: 3.4.13, cluster version: to_be_decided] 2021-11-15 11:38:39.180208 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2021-11-15 11:38:39.180299 I | embed: listening for metrics on http://127.0.0.1:2381 2021-11-15 11:38:39.180393 I | etcdserver: b8e14bda2255bc24 as single-node; fast-forwarding 9 ticks (election ticks 10) 2021-11-15 11:38:39.180668 I | embed: listening for peers on 172.17.0.2:2380 raft2021/11/15 11:38:39 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620) 2021-11-15 11:38:39.181104 I | etcdserver/membership: added member b8e14bda2255bc24 [https://172.17.0.2:2380] to cluster 38b0e74a458e7a1f raft2021/11/15 11:38:39 INFO: b8e14bda2255bc24 is starting a new election at term 1 raft2021/11/15 11:38:39 INFO: b8e14bda2255bc24 became candidate at term 2 raft2021/11/15 11:38:39 INFO: b8e14bda2255bc24 received MsgVoteResp from b8e14bda2255bc24 at term 2 raft2021/11/15 11:38:39 INFO: b8e14bda2255bc24 became leader at term 2 raft2021/11/15 11:38:39 INFO: raft.node: b8e14bda2255bc24 elected leader b8e14bda2255bc24 at term 2 2021-11-15 11:38:39.968390 I | etcdserver: setting up the initial cluster version to 3.4 2021-11-15 11:38:39.969329 N | etcdserver/membership: set the initial cluster version to 3.4 2021-11-15 11:38:39.969346 I | etcdserver/api: enabled capabilities for version 3.4 2021-11-15 11:38:39.969402 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.2:2379]} to cluster 38b0e74a458e7a1f 2021-11-15 11:38:39.969662 I | embed: ready to serve client requests 2021-11-15 11:38:39.969720 I | embed: ready to serve client requests 2021-11-15 11:38:39.970605 I | embed: serving client requests on 127.0.0.1:2379 2021-11-15 11:38:39.972543 I | embed: serving client requests on 172.17.0.2:2379 2021-11-15 11:38:50.174450 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-11-15 11:38:50.828497 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-11-15 11:39:00.833602 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-11-15 11:39:10.828862 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-11-15 11:39:20.829955 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-11-15 11:39:30.830789 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-11-15 11:39:40.830137 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-11-15 11:39:50.830554 I | etcdserver/api/etcdhttp: /health OK (status code 200) * * ==> kernel <== * 11:39:57 up 1:55, 0 users, load average: 0.76, 0.58, 0.48 Linux minikube 5.10.47-linuxkit #1 SMP PREEMPT Sat Jul 3 21:50:16 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [cb8a71a59a7a] <== * W1115 11:38:40.613015 1 genericapiserver.go:455] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources. W1115 11:38:40.614850 1 genericapiserver.go:455] Skipping API apps/v1beta2 because it has no resources. W1115 11:38:40.614863 1 genericapiserver.go:455] Skipping API apps/v1beta1 because it has no resources. W1115 11:38:40.615715 1 genericapiserver.go:455] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources. 
I1115 11:38:40.617713 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I1115 11:38:40.617728 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. W1115 11:38:40.630851 1 genericapiserver.go:455] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources. I1115 11:38:41.711424 1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I1115 11:38:41.711694 1 dynamic_serving_content.go:129] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key" I1115 11:38:41.711425 1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I1115 11:38:41.712479 1 secure_serving.go:266] Serving securely on [::]:8443 I1115 11:38:41.712513 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" I1115 11:38:41.713088 1 customresource_discovery_controller.go:209] Starting DiscoveryController I1115 11:38:41.713171 1 apiservice_controller.go:97] Starting APIServiceRegistrationController I1115 11:38:41.713186 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I1115 11:38:41.713196 1 available_controller.go:491] Starting AvailableConditionController I1115 11:38:41.713199 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I1115 11:38:41.713206 1 controller.go:83] Starting OpenAPI AggregationController I1115 11:38:41.713274 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I1115 11:38:41.713288 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller I1115 11:38:41.713303 1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I1115 11:38:41.713322 1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I1115 11:38:41.713436 1 apf_controller.go:312] Starting API Priority and Fairness config controller I1115 11:38:41.713483 1 controller.go:85] Starting OpenAPI controller I1115 11:38:41.713501 1 naming_controller.go:291] Starting NamingConditionController I1115 11:38:41.713519 1 establishing_controller.go:76] Starting EstablishingController I1115 11:38:41.713526 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I1115 11:38:41.713532 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I1115 11:38:41.713541 1 crd_finalizer.go:266] Starting CRDFinalizer I1115 11:38:41.713591 1 autoregister_controller.go:141] Starting autoregister controller I1115 11:38:41.713596 1 cache.go:32] Waiting for caches to sync for autoregister controller I1115 11:38:41.713619 1 dynamic_serving_content.go:129] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key" I1115 
11:38:41.720874 1 crdregistration_controller.go:111] Starting crd-autoregister controller I1115 11:38:41.720885 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister E1115 11:38:41.724251 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg: I1115 11:38:41.777131 1 shared_informer.go:247] Caches are synced for node_authorizer I1115 11:38:41.782477 1 controller.go:611] quota admission added evaluator for: namespaces I1115 11:38:41.855868 1 shared_informer.go:247] Caches are synced for crd-autoregister I1115 11:38:41.855932 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I1115 11:38:41.855944 1 cache.go:39] Caches are synced for AvailableConditionController controller I1115 11:38:41.855960 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I1115 11:38:41.856046 1 apf_controller.go:317] Running API Priority and Fairness config worker I1115 11:38:41.856144 1 cache.go:39] Caches are synced for autoregister controller I1115 11:38:42.711974 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I1115 11:38:42.711999 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I1115 11:38:42.718327 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000 I1115 11:38:42.721508 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000 I1115 11:38:42.721533 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist. 
I1115 11:38:43.005158 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io I1115 11:38:43.022011 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io W1115 11:38:43.093197 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [172.17.0.2] I1115 11:38:43.093859 1 controller.go:611] quota admission added evaluator for: endpoints I1115 11:38:43.095983 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io I1115 11:38:43.759519 1 controller.go:611] quota admission added evaluator for: serviceaccounts I1115 11:38:44.732276 1 controller.go:611] quota admission added evaluator for: deployments.apps I1115 11:38:44.768165 1 controller.go:611] quota admission added evaluator for: daemonsets.apps I1115 11:38:44.989712 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io I1115 11:38:56.789042 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps I1115 11:38:57.411558 1 controller.go:611] quota admission added evaluator for: replicasets.apps I1115 11:38:58.197659 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io * * ==> kube-controller-manager [701539f5a7e0] <== * I1115 11:38:56.721197 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving I1115 11:38:56.723377 1 controllermanager.go:577] Started "csrcleaner" I1115 11:38:56.723443 1 cleaner.go:82] Starting CSR cleaner controller I1115 11:38:56.757907 1 controllermanager.go:577] Started "pvc-protection" I1115 11:38:56.758022 1 shared_informer.go:240] Waiting for caches to sync for resource quota I1115 11:38:56.758154 1 pvc_protection_controller.go:110] "Starting PVC protection controller" I1115 11:38:56.758158 1 shared_informer.go:240] Waiting for caches to sync for PVC protection W1115 11:38:56.773221 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I1115 11:38:56.776128 1 shared_informer.go:247] Caches are synced for namespace I1115 11:38:56.778676 1 shared_informer.go:247] Caches are synced for daemon sets I1115 11:38:56.780681 1 shared_informer.go:247] Caches are synced for node I1115 11:38:56.780945 1 range_allocator.go:172] Starting range CIDR allocator I1115 11:38:56.780986 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator I1115 11:38:56.781007 1 shared_informer.go:247] Caches are synced for cidrallocator I1115 11:38:56.782769 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I1115 11:38:56.785356 1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24] I1115 11:38:56.791179 1 shared_informer.go:247] Caches are synced for stateful set I1115 11:38:56.794148 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-r5jcl" I1115 11:38:56.796559 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I1115 11:38:56.796685 1 shared_informer.go:247] Caches are synced for PV protection I1115 11:38:56.796694 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I1115 11:38:56.796719 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I1115 11:38:56.796728 1 shared_informer.go:247] Caches are 
synced for certificate-csrsigning-kubelet-serving I1115 11:38:56.857133 1 shared_informer.go:247] Caches are synced for job I1115 11:38:56.857383 1 shared_informer.go:247] Caches are synced for attach detach I1115 11:38:56.858393 1 shared_informer.go:247] Caches are synced for persistent volume I1115 11:38:56.858451 1 shared_informer.go:247] Caches are synced for ReplicationController I1115 11:38:56.858463 1 shared_informer.go:247] Caches are synced for endpoint I1115 11:38:56.858557 1 shared_informer.go:247] Caches are synced for expand I1115 11:38:56.857392 1 shared_informer.go:247] Caches are synced for TTL I1115 11:38:56.857552 1 shared_informer.go:247] Caches are synced for certificate-csrapproving I1115 11:38:56.859430 1 shared_informer.go:247] Caches are synced for PVC protection I1115 11:38:56.859443 1 shared_informer.go:247] Caches are synced for service account I1115 11:38:56.859499 1 shared_informer.go:247] Caches are synced for HPA I1115 11:38:56.859534 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator I1115 11:38:56.859537 1 shared_informer.go:247] Caches are synced for cronjob I1115 11:38:56.859556 1 shared_informer.go:247] Caches are synced for ephemeral I1115 11:38:56.863218 1 shared_informer.go:247] Caches are synced for GC I1115 11:38:56.864975 1 shared_informer.go:247] Caches are synced for TTL after finished I1115 11:38:56.869981 1 shared_informer.go:247] Caches are synced for endpoint_slice I1115 11:38:56.874409 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I1115 11:38:56.901465 1 shared_informer.go:247] Caches are synced for taint I1115 11:38:56.901528 1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: W1115 11:38:56.901580 1 node_lifecycle_controller.go:1013] Missing timestamp for Node minikube. Assuming now as a timestamp. I1115 11:38:56.901608 1 node_lifecycle_controller.go:1214] Controller detected that zone is now in state Normal. I1115 11:38:56.901731 1 taint_manager.go:187] "Starting NoExecuteTaintManager" I1115 11:38:56.901750 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller" I1115 11:38:56.982822 1 shared_informer.go:247] Caches are synced for deployment I1115 11:38:56.987007 1 shared_informer.go:247] Caches are synced for ReplicaSet I1115 11:38:57.004435 1 shared_informer.go:247] Caches are synced for crt configmap I1115 11:38:57.005666 1 shared_informer.go:247] Caches are synced for bootstrap_signer I1115 11:38:57.014085 1 shared_informer.go:247] Caches are synced for resource quota I1115 11:38:57.058928 1 shared_informer.go:247] Caches are synced for resource quota I1115 11:38:57.106956 1 shared_informer.go:247] Caches are synced for disruption I1115 11:38:57.106983 1 disruption.go:371] Sending events to api server. I1115 11:38:57.412885 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 1" I1115 11:38:57.483724 1 shared_informer.go:247] Caches are synced for garbage collector I1115 11:38:57.520790 1 shared_informer.go:247] Caches are synced for garbage collector I1115 11:38:57.520824 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage I1115 11:38:57.613737 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-r6b4r" * * ==> kube-proxy [f3b2787a58e9] <== * I1115 11:38:58.105688 1 node.go:172] Successfully retrieved node IP: 172.17.0.2 I1115 11:38:58.105974 1 server_others.go:140] Detected node IP 172.17.0.2 W1115 11:38:58.106028 1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy I1115 11:38:58.190033 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary I1115 11:38:58.190084 1 server_others.go:212] Using iptables Proxier. I1115 11:38:58.190091 1 server_others.go:219] creating dualStackProxier for iptables. W1115 11:38:58.190120 1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6 I1115 11:38:58.190941 1 server.go:649] Version: v1.22.3 I1115 11:38:58.194695 1 config.go:315] Starting service config controller I1115 11:38:58.194741 1 config.go:224] Starting endpoint slice config controller I1115 11:38:58.194904 1 shared_informer.go:240] Waiting for caches to sync for service config I1115 11:38:58.194925 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I1115 11:38:58.295518 1 shared_informer.go:247] Caches are synced for service config I1115 11:38:58.295830 1 shared_informer.go:247] Caches are synced for endpoint slice config * * ==> kube-scheduler [e1a88102d5d2] <== * I1115 11:38:39.688732 1 serving.go:347] Generated self-signed cert in-memory W1115 11:38:41.737189 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W1115 11:38:41.737230 1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W1115 11:38:41.737237 1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous. 
W1115 11:38:41.737240 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I1115 11:38:41.771805 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259 I1115 11:38:41.772003 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" I1115 11:38:41.772021 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1115 11:38:41.772040 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" E1115 11:38:41.775148 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E1115 11:38:41.775470 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E1115 11:38:41.775606 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E1115 11:38:41.775496 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E1115 11:38:41.775566 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1115 11:38:41.775501 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E1115 11:38:41.775636 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E1115 11:38:41.775700 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E1115 11:38:41.775722 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E1115 11:38:41.775723 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource 
"persistentvolumes" in API group "" at the cluster scope E1115 11:38:41.775750 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E1115 11:38:41.777072 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E1115 11:38:41.777174 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E1115 11:38:41.777200 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E1115 11:38:41.777223 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E1115 11:38:42.680936 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E1115 11:38:42.704269 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E1115 11:38:42.859759 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E1115 11:38:42.905907 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope I1115 11:38:43.072992 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Mon 2021-11-15 11:37:58 UTC, end at Mon 2021-11-15 11:39:58 UTC. 
-- Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.375589 3044 cpu_manager.go:210] "Reconciling" reconcilePeriod="10s" Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.375604 3044 state_mem.go:36] "Initialized new in-memory state store" Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.375699 3044 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.375707 3044 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.375711 3044 policy_none.go:49] "None policy: Start" Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.378081 3044 memory_manager.go:168] "Starting memorymanager" policy="None" Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.378105 3044 state_mem.go:35] "Initializing new in-memory state store" Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.378184 3044 state_mem.go:75] "Updated machine memory state" Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.378978 3044 manager.go:607] "Failed to retrieve checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.379143 3044 plugin_manager.go:114] "Starting Kubelet Plugin Manager" Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.394246 3044 topology_manager.go:200] "Topology Admit Handler" Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.394359 3044 topology_manager.go:200] "Topology Admit Handler" Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.394389 3044 topology_manager.go:200] "Topology Admit Handler" Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.394411 3044 topology_manager.go:200] "Topology Admit Handler" Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.484013 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f388112705940e3dc780daadca94caa-etc-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"4f388112705940e3dc780daadca94caa\") " Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.484046 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b8f48de5a060759b091c9bd8713f19c-ca-certs\") pod \"kube-controller-manager-minikube\" (UID: \"8b8f48de5a060759b091c9bd8713f19c\") " Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.484062 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8b8f48de5a060759b091c9bd8713f19c-flexvolume-dir\") pod \"kube-controller-manager-minikube\" (UID: \"8b8f48de5a060759b091c9bd8713f19c\") " Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.484086 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eee9e2da42102bf0a05e1e7b00e318bf-kubeconfig\") pod \"kube-scheduler-minikube\" (UID: \"eee9e2da42102bf0a05e1e7b00e318bf\") " Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.484098 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/50353c457aa3a0a518d24c81c5262aa7-etcd-certs\") pod \"etcd-minikube\" (UID: \"50353c457aa3a0a518d24c81c5262aa7\") " Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.484109 3044 reconciler.go:224] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f388112705940e3dc780daadca94caa-usr-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"4f388112705940e3dc780daadca94caa\") " Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.484120 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b8f48de5a060759b091c9bd8713f19c-etc-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"8b8f48de5a060759b091c9bd8713f19c\") " Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.484162 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f388112705940e3dc780daadca94caa-usr-local-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"4f388112705940e3dc780daadca94caa\") " Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.484187 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b8f48de5a060759b091c9bd8713f19c-kubeconfig\") pod \"kube-controller-manager-minikube\" (UID: \"8b8f48de5a060759b091c9bd8713f19c\") " Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.484204 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b8f48de5a060759b091c9bd8713f19c-usr-local-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"8b8f48de5a060759b091c9bd8713f19c\") " Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.484220 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b8f48de5a060759b091c9bd8713f19c-usr-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"8b8f48de5a060759b091c9bd8713f19c\") " Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.484242 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/50353c457aa3a0a518d24c81c5262aa7-etcd-data\") pod \"etcd-minikube\" (UID: \"50353c457aa3a0a518d24c81c5262aa7\") " Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.484260 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f388112705940e3dc780daadca94caa-ca-certs\") pod \"kube-apiserver-minikube\" (UID: \"4f388112705940e3dc780daadca94caa\") " Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.484275 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f388112705940e3dc780daadca94caa-k8s-certs\") pod \"kube-apiserver-minikube\" (UID: \"4f388112705940e3dc780daadca94caa\") " Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.484291 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b8f48de5a060759b091c9bd8713f19c-k8s-certs\") pod \"kube-controller-manager-minikube\" (UID: \"8b8f48de5a060759b091c9bd8713f19c\") " Nov 15 11:38:45 minikube kubelet[3044]: E1115 11:38:45.595160 3044 kubelet.go:1701] "Failed creating a mirror pod 
for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube" Nov 15 11:38:45 minikube kubelet[3044]: I1115 11:38:45.957395 3044 apiserver.go:52] "Watching apiserver" Nov 15 11:38:46 minikube kubelet[3044]: I1115 11:38:46.195969 3044 reconciler.go:157] "Reconciler: start to sync state" Nov 15 11:38:46 minikube kubelet[3044]: E1115 11:38:46.565457 3044 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-minikube\" already exists" pod="kube-system/kube-controller-manager-minikube" Nov 15 11:38:46 minikube kubelet[3044]: E1115 11:38:46.761546 3044 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube" Nov 15 11:38:46 minikube kubelet[3044]: E1115 11:38:46.962227 3044 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"etcd-minikube\" already exists" pod="kube-system/etcd-minikube" Nov 15 11:38:47 minikube kubelet[3044]: E1115 11:38:47.165900 3044 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube" Nov 15 11:38:56 minikube kubelet[3044]: I1115 11:38:56.789745 3044 kuberuntime_manager.go:1078] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24" Nov 15 11:38:56 minikube kubelet[3044]: I1115 11:38:56.790228 3044 docker_service.go:359] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}" Nov 15 11:38:56 minikube kubelet[3044]: I1115 11:38:56.790302 3044 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24" Nov 15 11:38:56 minikube kubelet[3044]: I1115 11:38:56.798354 3044 topology_manager.go:200] "Topology Admit Handler" Nov 15 11:38:56 minikube kubelet[3044]: I1115 11:38:56.910684 3044 topology_manager.go:200] "Topology Admit Handler" Nov 15 11:38:56 minikube kubelet[3044]: I1115 11:38:56.990936 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3cf985ae-0f11-4239-9935-327dbed99c8f-kube-proxy\") pod \"kube-proxy-r5jcl\" (UID: \"3cf985ae-0f11-4239-9935-327dbed99c8f\") " Nov 15 11:38:56 minikube kubelet[3044]: I1115 11:38:56.991002 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cf985ae-0f11-4239-9935-327dbed99c8f-xtables-lock\") pod \"kube-proxy-r5jcl\" (UID: \"3cf985ae-0f11-4239-9935-327dbed99c8f\") " Nov 15 11:38:56 minikube kubelet[3044]: I1115 11:38:56.991020 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cf985ae-0f11-4239-9935-327dbed99c8f-lib-modules\") pod \"kube-proxy-r5jcl\" (UID: \"3cf985ae-0f11-4239-9935-327dbed99c8f\") " Nov 15 11:38:56 minikube kubelet[3044]: I1115 11:38:56.991035 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxdtg\" (UniqueName: \"kubernetes.io/projected/3cf985ae-0f11-4239-9935-327dbed99c8f-kube-api-access-lxdtg\") pod \"kube-proxy-r5jcl\" (UID: \"3cf985ae-0f11-4239-9935-327dbed99c8f\") " Nov 15 11:38:57 minikube kubelet[3044]: I1115 11:38:57.093472 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpdwb\" (UniqueName: 
\"kubernetes.io/projected/5ce82f43-152d-4668-9a22-330937b45225-kube-api-access-tpdwb\") pod \"storage-provisioner\" (UID: \"5ce82f43-152d-4668-9a22-330937b45225\") " Nov 15 11:38:57 minikube kubelet[3044]: I1115 11:38:57.093528 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5ce82f43-152d-4668-9a22-330937b45225-tmp\") pod \"storage-provisioner\" (UID: \"5ce82f43-152d-4668-9a22-330937b45225\") " Nov 15 11:38:57 minikube kubelet[3044]: E1115 11:38:57.099054 3044 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 15 11:38:57 minikube kubelet[3044]: E1115 11:38:57.099094 3044 projected.go:199] Error preparing data for projected volume kube-api-access-lxdtg for pod kube-system/kube-proxy-r5jcl: configmap "kube-root-ca.crt" not found Nov 15 11:38:57 minikube kubelet[3044]: E1115 11:38:57.099143 3044 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/3cf985ae-0f11-4239-9935-327dbed99c8f-kube-api-access-lxdtg podName:3cf985ae-0f11-4239-9935-327dbed99c8f nodeName:}" failed. No retries permitted until 2021-11-15 11:38:57.599127085 +0000 UTC m=+12.902837590 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lxdtg" (UniqueName: "kubernetes.io/projected/3cf985ae-0f11-4239-9935-327dbed99c8f-kube-api-access-lxdtg") pod "kube-proxy-r5jcl" (UID: "3cf985ae-0f11-4239-9935-327dbed99c8f") : configmap "kube-root-ca.crt" not found Nov 15 11:38:57 minikube kubelet[3044]: E1115 11:38:57.198181 3044 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 15 11:38:57 minikube kubelet[3044]: E1115 11:38:57.198216 3044 projected.go:199] Error preparing data for projected volume kube-api-access-tpdwb for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found Nov 15 11:38:57 minikube kubelet[3044]: E1115 11:38:57.198261 3044 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/5ce82f43-152d-4668-9a22-330937b45225-kube-api-access-tpdwb podName:5ce82f43-152d-4668-9a22-330937b45225 nodeName:}" failed. No retries permitted until 2021-11-15 11:38:57.698248251 +0000 UTC m=+13.001958756 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tpdwb" (UniqueName: "kubernetes.io/projected/5ce82f43-152d-4668-9a22-330937b45225-kube-api-access-tpdwb") pod "storage-provisioner" (UID: "5ce82f43-152d-4668-9a22-330937b45225") : configmap "kube-root-ca.crt" not found Nov 15 11:38:57 minikube kubelet[3044]: I1115 11:38:57.617524 3044 topology_manager.go:200] "Topology Admit Handler" Nov 15 11:38:57 minikube kubelet[3044]: I1115 11:38:57.803741 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmfjr\" (UniqueName: \"kubernetes.io/projected/5076ec7b-21a0-4ad9-b8b1-8627cd351582-kube-api-access-lmfjr\") pod \"coredns-78fcd69978-r6b4r\" (UID: \"5076ec7b-21a0-4ad9-b8b1-8627cd351582\") " Nov 15 11:38:57 minikube kubelet[3044]: I1115 11:38:57.804082 3044 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5076ec7b-21a0-4ad9-b8b1-8627cd351582-config-volume\") pod \"coredns-78fcd69978-r6b4r\" (UID: \"5076ec7b-21a0-4ad9-b8b1-8627cd351582\") " Nov 15 11:38:58 minikube kubelet[3044]: I1115 11:38:58.511653 3044 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-r6b4r through plugin: invalid network status for" Nov 15 11:38:58 minikube kubelet[3044]: I1115 11:38:58.511997 3044 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="b013a8e354faa5d22af7abe73e92da7f015172d50296d32f9514ada190761450" Nov 15 11:38:59 minikube kubelet[3044]: I1115 11:38:59.521647 3044 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-r6b4r through plugin: invalid network status for" Nov 15 11:39:28 minikube kubelet[3044]: I1115 11:39:28.857418 3044 scope.go:110] "RemoveContainer" containerID="a13b6f8a6a5199ca81312ac12493a2c1b068f136f75c41291986e59b55b80f6a" * * ==> storage-provisioner [a13b6f8a6a51] <== * I1115 11:38:58.387035 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... F1115 11:39:28.392757 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout * * ==> storage-provisioner [ea3fd217a504] <== * I1115 11:39:28.988052 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... I1115 11:39:28.999314 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service! I1115 11:39:28.999455 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I1115 11:39:29.013073 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath I1115 11:39:29.013182 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_94368b2d-d2e1-4e2e-a032-d625e8ddfdb7! I1115 11:39:29.013288 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6657dfdb-4026-46be-90fc-31d40b4a2f2d", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_94368b2d-d2e1-4e2e-a032-d625e8ddfdb7 became leader I1115 11:39:29.114031 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_94368b2d-d2e1-4e2e-a032-d625e8ddfdb7!
RA489 commented 2 years ago

/kind support

jakoberpf commented 2 years ago

Does anyone have an idea what could be the issue?

tspearconquest commented 2 years ago

I started seeing this recently on my Mac (not an M1), but only when using the flag --kubernetes-version=v1.18.19; it works fine if I use the flag --kubernetes-version=v1.19.13.

It was working fine; then I took a week off for Christmas, came back, and it's not working now. I did some googling and found that Kubernetes v1.19 was the first release to support cgroups v2. That seems related to the error, but I'm not sure what changed in minikube that would cause this. Interestingly, your logs show Kubernetes v1.22.3, the latest release, and you are still hitting the issue, yet when I switch from the older release to the newer (but not latest) one, it works fine for me. That tells me my root cause might be different from yours.
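For reference, the version pin that avoids the error for me is just the following (a sketch; you may need to delete the existing cluster first so the version change actually takes effect):

```shell
# recreate the cluster on the newer Kubernetes release
minikube delete
minikube start --kubernetes-version=v1.19.13
```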

jakoberpf commented 2 years ago

@tspearconquest Interesting that you saw the same issue and were able to fix it with an older Kubernetes version. I tried the same, but am still seeing the same error.

tspearconquest commented 2 years ago

Which macOS version are you running? Is there anything in your networking configuration on the Mac that you configured manually?

Does the cluster still work? For example, do the pods start up, and can you still access them over the network in any way?

taeuuoo commented 2 years ago

If you just want to get it running anyway, create the Docker network manually (I used subnet 192.168.9.0/24). Then minikube start might work.
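Spelled out, the workaround looks like this (a sketch; the network is named minikube so minikube will adopt it, and 192.168.9.0/24 is just the subnet that happened to be free for me):

```shell
# pre-create the bridge network that minikube would otherwise try (and fail) to create
docker network create --subnet 192.168.9.0/24 --driver bridge minikube
minikube start
```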

jakoberpf commented 2 years ago

@tspearconquest I am on 12.0.1, and my network configuration is quite basic: just DHCP on private Wi-Fi. I can access the cluster resources via port-forwarding, but not via ingresses.

@taeuuoo Your previous comment actually did the trick. Running docker network create --subnet 192.168.9.0/24 --driver bridge minikube before minikube start fixed my error.

Is this in general a problem with minikube (not being able to create this network by itself), or could it be related to the M1 architecture or my (default) network configuration?

klaases commented 2 years ago

From what I can tell, the network was already in use by Docker, which would be related to your network configuration. For minikube, subnet creation iterates through candidate subnets up to a limit, approximately 20 tries.
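As a rough illustration of that search, and assuming from the error message (start: "192.168.9.0", step: 9, tries: 20) that the step is applied to the third octet, these would be the candidate subnets:

```shell
# illustrative only: enumerate the 20 candidates implied by the error message
for i in $(seq 0 19); do
  echo "192.168.$((9 + i * 9)).0/24"
done
```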

For debugging purposes, please provide some additional details from Docker.

Please share the output of docker network ls

And also share the output of docker inspect [name of the network]

That way we can see which networks Docker is using and troubleshoot further.
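Concretely, something like this (bridge and minikube are just example network names; inspect whatever the first command lists):

```shell
docker network ls
# then inspect each network of interest to see its subnet
docker network inspect bridge
docker network inspect minikube
```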

klaases commented 2 years ago

Hi @jakoberpf – is this issue still occurring? Are additional details available? If so, please feel free to re-open the issue by commenting with /reopen. This issue will be closed, as additional information was unavailable and some time has passed.

Additional information that may be helpful:

Thank you for sharing your experience!