kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

addons: ambassador crd validation failure #12442

Open · phumpal opened this issue 3 years ago

phumpal commented 3 years ago

Steps to reproduce the issue:

  1. minikube delete
  2. brew upgrade minikube
  3. minikube config set driver hyperkit
  4. minikube start
  5. minikube addons enable registry-creds
  6. minikube addons enable ambassador
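
For context on why step 6 fails: the four fields kubectl rejects (`version`, `validation`, `subresources`, `additionalPrinterColumns`) are top-level `spec` fields only in `apiextensions.k8s.io/v1beta1`, an API that was removed in Kubernetes 1.22. The bundled CRD manifest appears to declare `apiVersion: apiextensions.k8s.io/v1` while keeping the old v1beta1 layout. A minimal sketch of the layout difference (field names taken from the error message; everything else is illustrative, not the actual addon manifest):

```yaml
# v1beta1 layout (API removed in Kubernetes 1.22):
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
spec:
  group: getambassador.io
  version: v2                    # top-level in v1beta1 only
  validation: {}                 # top-level in v1beta1 only
  subresources: {}               # top-level in v1beta1 only
  additionalPrinterColumns: []   # top-level in v1beta1 only
---
# v1 layout: the same settings move under each entry of spec.versions:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
spec:
  group: getambassador.io
  versions:
    - name: v2
      served: true
      storage: true
      schema:
        openAPIV3Schema: {}      # replaces spec.validation (must be a structural schema in v1)
      subresources: {}
      additionalPrinterColumns: []
```

The second error (`no matches for kind "AmbassadorInstallation" in version "getambassador.io/v2"`) follows from the first: the CRD never applies, so the `getambassador.io/v2` API is never registered.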

minikube addons enable ambassador

**out**

```shell
minikube addons enable ambassador
    ▪ Using image quay.io/datawire/ambassador-operator:v1.2.3

❌  Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/ambassador-operator-crds.yaml -f /etc/kubernetes/addons/ambassador-operator.yaml -f /etc/kubernetes/addons/ambassadorinstallation.yaml: Process exited with status 1
stdout:
namespace/ambassador unchanged
serviceaccount/ambassador-operator unchanged
role.rbac.authorization.k8s.io/ambassador-operator unchanged
clusterrole.rbac.authorization.k8s.io/ambassador-operator-cluster unchanged
rolebinding.rbac.authorization.k8s.io/ambassador-operator unchanged
clusterrolebinding.rbac.authorization.k8s.io/ambassador-operator-cluster unchanged
configmap/static-helm-values unchanged
deployment.apps/ambassador-operator unchanged

stderr:
error validating "/etc/kubernetes/addons/ambassador-operator-crds.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "additionalPrinterColumns" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): unknown field "subresources" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): unknown field "validation" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
unable to recognize "/etc/kubernetes/addons/ambassadorinstallation.yaml": no matches for kind "AmbassadorInstallation" in version "getambassador.io/v2"
]
```

**logs.txt**

```shell
*
* ==> Audit <==
*
|--------------|-----------------------|----------|----------------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|--------------|-----------------------|----------|----------------|---------|-------------------------------|-------------------------------|
| tunnel | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:29:56 CDT | Thu, 02 Sep 2021 22:30:05 CDT |
| tunnel | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:31:47 CDT | Thu, 02 Sep 2021 22:32:55 CDT |
| tunnel | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:33:01 CDT | Thu, 02 Sep 2021 22:33:27 CDT |
| tunnel | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:34:47 CDT | Thu, 02 Sep 2021 22:34:58 CDT |
| tunnel | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:39:00 CDT | Thu, 02 Sep 2021 22:39:30 CDT |
| tunnel | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:40:13 CDT | Thu, 02 Sep 2021 22:41:30 CDT |
| tunnel | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:42:01 CDT | Thu, 02 Sep 2021 22:42:32 CDT |
| tunnel | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:43:47 CDT | Thu, 02 Sep 2021 22:43:54 CDT |
| tunnel | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:44:27 CDT | Thu, 02 Sep 2021 22:46:21 CDT |
| tunnel | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:46:25 CDT | Thu, 02 Sep 2021 22:47:09 CDT |
| tunnel | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:47:11 CDT | Thu, 02 Sep 2021 22:47:19 CDT |
| tunnel | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:48:23 CDT | Thu, 02 Sep 2021 22:49:29 CDT |
| tunnel | --logtostderr | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:49:59 CDT | Thu, 02 Sep 2021 22:50:14 CDT |
| tunnel | --logtostderr | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:50:26 CDT | Thu, 02 Sep 2021 22:50:27 CDT |
| tunnel | --logtostderr | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:50:35 CDT | Thu, 02 Sep 2021 22:50:38 CDT |
| ip | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:55:06 CDT | Thu, 02 Sep 2021 22:55:06 CDT |
| stop | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:55:26 CDT | Thu, 02 Sep 2021 22:55:29 CDT |
| tunnel | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:53:17 CDT | Thu, 02 Sep 2021 22:55:32 CDT |
| start | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:56:40 CDT | Thu, 02 Sep 2021 22:57:19 CDT |
| ip | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:58:55 CDT | Thu, 02 Sep 2021 22:58:55 CDT |
| tunnel | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:58:42 CDT | Thu, 02 Sep 2021 22:59:17 CDT |
| stop | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 22:59:14 CDT | Thu, 02 Sep 2021 22:59:17 CDT |
| update-check | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 23:00:25 CDT | Thu, 02 Sep 2021 23:00:25 CDT |
| delete | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 23:00:28 CDT | Thu, 02 Sep 2021 23:00:28 CDT |
| start | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 23:01:32 CDT | Thu, 02 Sep 2021 23:02:17 CDT |
| ip | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 23:02:23 CDT | Thu, 02 Sep 2021 23:02:23 CDT |
| logs | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 23:05:02 CDT | Thu, 02 Sep 2021 23:05:04 CDT |
| delete | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 23:08:38 CDT | Thu, 02 Sep 2021 23:08:41 CDT |
| start | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 23:10:05 CDT | Thu, 02 Sep 2021 23:10:50 CDT |
| addons | enable ambassador | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 23:11:11 CDT | Thu, 02 Sep 2021 23:11:15 CDT |
| addons | enable registry-creds | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 23:12:00 CDT | Thu, 02 Sep 2021 23:12:01 CDT |
| ip | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 23:15:10 CDT | Thu, 02 Sep 2021 23:15:10 CDT |
| ip | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 23:18:18 CDT | Thu, 02 Sep 2021 23:18:18 CDT |
| ip | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 23:19:28 CDT | Thu, 02 Sep 2021 23:19:28 CDT |
| tunnel | | minikube | patrick.humpal | v1.22.0 | Thu, 02 Sep 2021 23:14:07 CDT | Thu, 02 Sep 2021 23:22:30 CDT |
| delete | | minikube | patrick.humpal | v1.22.0 | Thu, 09 Sep 2021 14:19:54 CDT | Thu, 09 Sep 2021 14:19:55 CDT |
| config | set driver hyperkit | minikube | patrick.humpal | v1.22.0 | Thu, 09 Sep 2021 14:22:13 CDT | Thu, 09 Sep 2021 14:22:13 CDT |
| start | | minikube | patrick.humpal | v1.22.0 | Thu, 09 Sep 2021 14:22:14 CDT | Thu, 09 Sep 2021 14:23:02 CDT |
| addons | enable registry-creds | minikube | patrick.humpal | v1.22.0 | Thu, 09 Sep 2021 14:23:02 CDT | Thu, 09 Sep 2021 14:23:02 CDT |
| addons | enable ambassador | minikube | patrick.humpal | v1.22.0 | Thu, 09 Sep 2021 14:23:02 CDT | Thu, 09 Sep 2021 14:23:05 CDT |
| delete | | minikube | patrick.humpal | v1.22.0 | Thu, 09 Sep 2021 14:23:18 CDT | Thu, 09 Sep 2021 14:23:22 CDT |
| config | set driver hyperkit | minikube | patrick.humpal | v1.22.0 | Thu, 09 Sep 2021 14:23:31 CDT | Thu, 09 Sep 2021 14:23:31 CDT |
| delete | | minikube | patrick.humpal | v1.22.0 | Thu, 09 Sep 2021 14:23:35 CDT | Thu, 09 Sep 2021 14:23:35 CDT |
| delete | | minikube | patrick.humpal | v1.23.0 | Thu, 09 Sep 2021 14:24:12 CDT | Thu, 09 Sep 2021 14:24:12 CDT |
| config | set driver hyperkit | minikube | patrick.humpal | v1.23.0 | Thu, 09 Sep 2021 14:24:17 CDT | Thu, 09 Sep 2021 14:24:17 CDT |
| start | | minikube | patrick.humpal | v1.23.0 | Thu, 09 Sep 2021 14:24:17 CDT | Thu, 09 Sep 2021 14:25:49 CDT |
| addons | enable registry-creds | minikube | patrick.humpal | v1.23.0 | Thu, 09 Sep 2021 14:25:49 CDT | Thu, 09 Sep 2021 14:25:49 CDT |
| logs | --file=logs.txt | minikube | patrick.humpal | v1.23.0 | Thu, 09 Sep 2021 14:30:41 CDT | Thu, 09 Sep 2021 14:30:43 CDT |
| addons | | minikube | patrick.humpal | v1.23.0 | Thu, 09 Sep 2021 14:33:55 CDT | Thu, 09 Sep 2021 14:33:55 CDT |
| addons | list | minikube | patrick.humpal | v1.23.0 | Thu, 09 Sep 2021 14:34:01 CDT | Thu, 09 Sep 2021 14:34:01 CDT |
| delete | | minikube | patrick.humpal | v1.23.0 | Thu, 09 Sep 2021 14:34:06 CDT | Thu, 09 Sep 2021 14:34:11 CDT |
| config | set driver hyperkit | minikube | patrick.humpal | v1.23.0 | Thu, 09 Sep 2021 14:34:32 CDT | Thu, 09 Sep 2021 14:34:32 CDT |
| start | | minikube | patrick.humpal | v1.23.0 | Thu, 09 Sep 2021 14:34:33 CDT | Thu, 09 Sep 2021 14:35:14 CDT |
| addons | enable registry-creds | minikube | patrick.humpal | v1.23.0 | Thu, 09 Sep 2021 14:35:14 CDT | Thu, 09 Sep 2021 14:35:14 CDT |
| delete | | minikube | patrick.humpal | v1.23.0 | Thu, 09 Sep 2021 14:41:06 CDT | Thu, 09 Sep 2021 14:41:11 CDT |
| cache | delete | minikube | patrick.humpal | v1.23.0 | Thu, 09 Sep 2021 14:41:16 CDT | Thu, 09 Sep 2021 14:41:16 CDT |
| cache | | minikube | patrick.humpal | v1.23.0 | Thu, 09 Sep 2021 14:41:29 CDT | Thu, 09 Sep 2021 14:41:29 CDT |
| config | set driver hyperkit | minikube | patrick.humpal | v1.23.0 | Thu, 09 Sep 2021 14:41:36 CDT | Thu, 09 Sep 2021 14:41:36 CDT |
| start | | minikube | patrick.humpal | v1.23.0 | Thu, 09 Sep 2021 14:41:36 CDT | Thu, 09 Sep 2021 14:42:14 CDT |
| addons | enable registry-creds | minikube | patrick.humpal | v1.23.0 | Thu, 09 Sep 2021 14:42:14 CDT | Thu, 09 Sep 2021 14:42:14 CDT |
|--------------|-----------------------|----------|----------------|---------|-------------------------------|-------------------------------|

*
* ==> Last Start <==
*
Log file created at: 2021/09/09 14:41:36
Running on machine: RMT-PHUMPAL
Binary: Built with gc go1.17 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0909 14:41:36.478137 12165 out.go:298] Setting OutFile to fd 1 ...
I0909 14:41:36.478370 12165 out.go:350] isatty.IsTerminal(1) = true I0909 14:41:36.478372 12165 out.go:311] Setting ErrFile to fd 2... I0909 14:41:36.478376 12165 out.go:350] isatty.IsTerminal(2) = true I0909 14:41:36.478445 12165 root.go:313] Updating PATH: /Users/patrick.humpal/.minikube/bin I0909 14:41:36.478805 12165 out.go:305] Setting JSON to false I0909 14:41:36.508252 12165 start.go:111] hostinfo: {"hostname":"RMT-PHUMPAL.local","uptime":17441,"bootTime":1631199055,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.5.2","kernelVersion":"20.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"fecbf22b-fbbe-36de-9664-f12a7dd41d3d"} W0909 14:41:36.508340 12165 start.go:119] gopshost.Virtualization returned error: not implemented yet I0909 14:41:36.527655 12165 out.go:177] 😄 minikube v1.23.0 on Darwin 11.5.2 I0909 14:41:36.528017 12165 notify.go:169] Checking for updates... I0909 14:41:36.528731 12165 driver.go:343] Setting default libvirt URI to qemu:///system I0909 14:41:36.585674 12165 out.go:177] ✨ Using the hyperkit driver based on user configuration I0909 14:41:36.585763 12165 start.go:278] selected driver: hyperkit I0909 14:41:36.585779 12165 start.go:751] validating driver "hyperkit" against I0909 14:41:36.585822 12165 start.go:762] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc:} I0909 14:41:36.587447 12165 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0909 14:41:36.587848 12165 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/patrick.humpal/.minikube/bin:/usr/local/opt/curl/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/MacGPG2/bin:/Users/patrick.humpal/go/bin I0909 14:41:36.602384 12165 install.go:137] /Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit version is 1.21.0 
I0909 14:41:36.608940 12165 install.go:79] stdout: /Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit I0909 14:41:36.608958 12165 install.go:81] /Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit looks good I0909 14:41:36.609004 12165 start_flags.go:264] no existing cluster config was found, will generate one from the flags I0909 14:41:36.609344 12165 start_flags.go:345] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB I0909 14:41:36.609433 12165 start_flags.go:719] Wait components to verify : map[apiserver:true system_pods:true] I0909 14:41:36.609452 12165 cni.go:93] Creating CNI manager for "" I0909 14:41:36.609459 12165 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0909 14:41:36.609471 12165 start_flags.go:278] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.26@sha256:d4aa14fbdc3a28a60632c24af937329ec787b02c89983c6f5498d346860a848c Memory:6000 CPUs:2 DiskSize:12288 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: 
NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} I0909 14:41:36.609590 12165 iso.go:123] acquiring lock: {Name:mk04009062883a908a0cca386f782d1eae900114 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0909 14:41:36.648047 12165 out.go:177] 👍 Starting control plane node minikube in cluster minikube I0909 14:41:36.648118 12165 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker I0909 14:41:36.648253 12165 preload.go:147] Found local preload: /Users/patrick.humpal/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 I0909 14:41:36.648308 12165 cache.go:56] Caching tarball of preloaded images I0909 14:41:36.648711 12165 preload.go:173] Found /Users/patrick.humpal/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0909 14:41:36.648766 12165 cache.go:59] Finished verifying existence of preloaded tar for v1.22.1 on docker I0909 14:41:36.649332 12165 profile.go:148] Saving config to /Users/patrick.humpal/.minikube/profiles/minikube/config.json ... 
I0909 14:41:36.649380 12165 lock.go:36] WriteFile acquiring /Users/patrick.humpal/.minikube/profiles/minikube/config.json: {Name:mk436fd4a6136f66498083eb7bb19afb39a3f3d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0909 14:41:36.650172 12165 cache.go:205] Successfully downloaded all kic artifacts I0909 14:41:36.650220 12165 start.go:313] acquiring machines lock for minikube: {Name:mk4f972bfc3d40c90edb55a4a44a437ea7f3c692 Clock:{} Delay:500ms Timeout:13m0s Cancel:} I0909 14:41:36.650363 12165 start.go:317] acquired machines lock for "minikube" in 128.345µs I0909 14:41:36.650400 12165 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.23.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.26@sha256:d4aa14fbdc3a28a60632c24af937329ec787b02c89983c6f5498d346860a848c Memory:6000 CPUs:2 DiskSize:12288 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true 
Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true} I0909 14:41:36.650467 12165 start.go:126] createHost starting for "" (driver="hyperkit") I0909 14:41:36.670050 12165 out.go:204] 🔥 Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=12288MB) ... I0909 14:41:36.670590 12165 main.go:130] libmachine: Found binary path at /Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit I0909 14:41:36.670692 12165 main.go:130] libmachine: Launching plugin server for driver hyperkit I0909 14:41:36.683911 12165 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:52261 I0909 14:41:36.684496 12165 main.go:130] libmachine: () Calling .GetVersion I0909 14:41:36.685060 12165 main.go:130] libmachine: Using API Version 1 I0909 14:41:36.685076 12165 main.go:130] libmachine: () Calling .SetConfigRaw I0909 14:41:36.685387 12165 main.go:130] libmachine: () Calling .GetMachineName I0909 14:41:36.685527 12165 main.go:130] libmachine: (minikube) Calling .GetMachineName I0909 14:41:36.685633 12165 main.go:130] libmachine: (minikube) Calling .DriverName I0909 14:41:36.685755 12165 start.go:160] libmachine.API.Create for "minikube" (driver="hyperkit") I0909 14:41:36.685787 12165 client.go:168] LocalClient.Create starting I0909 14:41:36.685859 12165 main.go:130] libmachine: Reading certificate data from /Users/patrick.humpal/.minikube/certs/ca.pem I0909 14:41:36.685952 12165 main.go:130] libmachine: Decoding PEM data... I0909 14:41:36.685968 12165 main.go:130] libmachine: Parsing certificate... I0909 14:41:36.686073 12165 main.go:130] libmachine: Reading certificate data from /Users/patrick.humpal/.minikube/certs/cert.pem I0909 14:41:36.686133 12165 main.go:130] libmachine: Decoding PEM data... 
I0909 14:41:36.686149 12165 main.go:130] libmachine: Parsing certificate... I0909 14:41:36.686163 12165 main.go:130] libmachine: Running pre-create checks... I0909 14:41:36.686169 12165 main.go:130] libmachine: (minikube) Calling .PreCreateCheck I0909 14:41:36.686308 12165 main.go:130] libmachine: (minikube) DBG | exe=/Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0909 14:41:36.686599 12165 main.go:130] libmachine: (minikube) Calling .GetConfigRaw I0909 14:41:36.689463 12165 main.go:130] libmachine: Creating machine... I0909 14:41:36.689470 12165 main.go:130] libmachine: (minikube) Calling .Create I0909 14:41:36.689643 12165 main.go:130] libmachine: (minikube) DBG | exe=/Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0909 14:41:36.689800 12165 main.go:130] libmachine: (minikube) DBG | I0909 14:41:36.689590 12172 common.go:101] Making disk image using store path: /Users/patrick.humpal/.minikube I0909 14:41:36.689899 12165 main.go:130] libmachine: (minikube) Downloading /Users/patrick.humpal/.minikube/cache/boot2docker.iso from file:///Users/patrick.humpal/.minikube/cache/iso/minikube-v1.23.0.iso... I0909 14:41:36.841740 12165 main.go:130] libmachine: (minikube) DBG | I0909 14:41:36.841685 12172 common.go:108] Creating ssh key: /Users/patrick.humpal/.minikube/machines/minikube/id_rsa... I0909 14:41:36.911553 12165 main.go:130] libmachine: (minikube) DBG | I0909 14:41:36.911467 12172 common.go:114] Creating raw disk image: /Users/patrick.humpal/.minikube/machines/minikube/minikube.rawdisk... I0909 14:41:36.911561 12165 main.go:130] libmachine: (minikube) DBG | Writing magic tar header I0909 14:41:36.911572 12165 main.go:130] libmachine: (minikube) DBG | Writing SSH key tar header I0909 14:41:36.911837 12165 main.go:130] libmachine: (minikube) DBG | I0909 14:41:36.911770 12172 common.go:128] Fixing permissions on /Users/patrick.humpal/.minikube/machines/minikube ... 
I0909 14:41:37.058041 12165 main.go:130] libmachine: (minikube) DBG | exe=/Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0909 14:41:37.058053 12165 main.go:130] libmachine: (minikube) DBG | clean start, hyperkit pid file doesn't exist: /Users/patrick.humpal/.minikube/machines/minikube/hyperkit.pid I0909 14:41:37.058066 12165 main.go:130] libmachine: (minikube) DBG | Using UUID f1dcd89a-11a5-11ec-aa85-acde48001122 I0909 14:41:37.222762 12165 main.go:130] libmachine: (minikube) DBG | Generated MAC f6:b4:b0:a:5c:35 I0909 14:41:37.222788 12165 main.go:130] libmachine: (minikube) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube I0909 14:41:37.222822 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:37 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/patrick.humpal/.minikube/machines/minikube", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f1dcd89a-11a5-11ec-aa85-acde48001122", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000110240)}, ISOImages:[]string{"/Users/patrick.humpal/.minikube/machines/minikube/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/patrick.humpal/.minikube/machines/minikube/bzimage", Initrd:"/Users/patrick.humpal/.minikube/machines/minikube/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)} I0909 14:41:37.222860 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:37 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/patrick.humpal/.minikube/machines/minikube", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", 
UUID:"f1dcd89a-11a5-11ec-aa85-acde48001122", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000110240)}, ISOImages:[]string{"/Users/patrick.humpal/.minikube/machines/minikube/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/patrick.humpal/.minikube/machines/minikube/bzimage", Initrd:"/Users/patrick.humpal/.minikube/machines/minikube/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)} I0909 14:41:37.222972 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:37 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/patrick.humpal/.minikube/machines/minikube/hyperkit.pid", "-c", "2", "-m", "6000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f1dcd89a-11a5-11ec-aa85-acde48001122", "-s", "2:0,virtio-blk,/Users/patrick.humpal/.minikube/machines/minikube/minikube.rawdisk", "-s", "3,ahci-cd,/Users/patrick.humpal/.minikube/machines/minikube/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/patrick.humpal/.minikube/machines/minikube/tty,log=/Users/patrick.humpal/.minikube/machines/minikube/console-ring", "-f", "kexec,/Users/patrick.humpal/.minikube/machines/minikube/bzimage,/Users/patrick.humpal/.minikube/machines/minikube/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube"} I0909 14:41:37.222997 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:37 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/patrick.humpal/.minikube/machines/minikube/hyperkit.pid -c 2 -m 6000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f1dcd89a-11a5-11ec-aa85-acde48001122 -s 
2:0,virtio-blk,/Users/patrick.humpal/.minikube/machines/minikube/minikube.rawdisk -s 3,ahci-cd,/Users/patrick.humpal/.minikube/machines/minikube/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/patrick.humpal/.minikube/machines/minikube/tty,log=/Users/patrick.humpal/.minikube/machines/minikube/console-ring -f kexec,/Users/patrick.humpal/.minikube/machines/minikube/bzimage,/Users/patrick.humpal/.minikube/machines/minikube/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube" I0909 14:41:37.223031 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:37 DEBUG: hyperkit: Redirecting stdout/stderr to logger REDACTED I0909 14:41:37.228302 12165 main.go:130] libmachine: (minikube) DBG | Searching for f6:b4:b0:a:5c:35 in /var/db/dhcpd_leases ... I0909 14:41:37.228397 12165 main.go:130] libmachine: (minikube) DBG | Found 4 entries in /var/db/dhcpd_leases! 
I0909 14:41:37.228413 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:52:4a:f9:d2:77:72 ID:1,52:4a:f9:d2:77:72 Lease:0x613bb352} I0909 14:41:37.228429 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:fe:4e:18:43:2a:27 ID:1,fe:4e:18:43:2a:27 Lease:0x613bb120} I0909 14:41:37.228443 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:ca:18:8d:65:c6:33 ID:1,ca:18:8d:65:c6:33 Lease:0x613a5f29} I0909 14:41:37.228453 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:2e:2a:2:ce:10:88 ID:1,2e:2a:2:ce:10:88 Lease:0x6132f1a7} I0909 14:41:37.234778 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:37 INFO : hyperkit: stderr: Using fd 5 for I/O notifications I0909 14:41:37.301639 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:37 INFO : hyperkit: stderr: /Users/patrick.humpal/.minikube/machines/minikube/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD I0909 14:41:37.302423 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 20 unspecified don't care: bit is 0 I0909 14:41:37.302457 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0 I0909 14:41:37.302465 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0 I0909 14:41:37.302471 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0 I0909 14:41:37.953639 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 
14:41:37 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0 I0909 14:41:38.059047 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 20 unspecified don't care: bit is 0 I0909 14:41:38.059079 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0 I0909 14:41:38.059084 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0 I0909 14:41:38.059095 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0 I0909 14:41:38.059952 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1 I0909 14:41:39.228775 12165 main.go:130] libmachine: (minikube) DBG | Attempt 1 I0909 14:41:39.228789 12165 main.go:130] libmachine: (minikube) DBG | exe=/Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0909 14:41:39.228999 12165 main.go:130] libmachine: (minikube) DBG | hyperkit pid from json: 12173 I0909 14:41:39.230191 12165 main.go:130] libmachine: (minikube) DBG | Searching for f6:b4:b0:a:5c:35 in /var/db/dhcpd_leases ... I0909 14:41:39.230356 12165 main.go:130] libmachine: (minikube) DBG | Found 4 entries in /var/db/dhcpd_leases! 
I0909 14:41:39.230382 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:52:4a:f9:d2:77:72 ID:1,52:4a:f9:d2:77:72 Lease:0x613bb352} I0909 14:41:39.230412 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:fe:4e:18:43:2a:27 ID:1,fe:4e:18:43:2a:27 Lease:0x613bb120} I0909 14:41:39.230418 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:ca:18:8d:65:c6:33 ID:1,ca:18:8d:65:c6:33 Lease:0x613a5f29} I0909 14:41:39.230425 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:2e:2a:2:ce:10:88 ID:1,2e:2a:2:ce:10:88 Lease:0x6132f1a7} I0909 14:41:41.230816 12165 main.go:130] libmachine: (minikube) DBG | Attempt 2 I0909 14:41:41.230827 12165 main.go:130] libmachine: (minikube) DBG | exe=/Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0909 14:41:41.231041 12165 main.go:130] libmachine: (minikube) DBG | hyperkit pid from json: 12173 I0909 14:41:41.232069 12165 main.go:130] libmachine: (minikube) DBG | Searching for f6:b4:b0:a:5c:35 in /var/db/dhcpd_leases ... I0909 14:41:41.232161 12165 main.go:130] libmachine: (minikube) DBG | Found 4 entries in /var/db/dhcpd_leases! 
I0909 14:41:41.232195 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:52:4a:f9:d2:77:72 ID:1,52:4a:f9:d2:77:72 Lease:0x613bb352} I0909 14:41:41.232247 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:fe:4e:18:43:2a:27 ID:1,fe:4e:18:43:2a:27 Lease:0x613bb120} I0909 14:41:41.232279 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:ca:18:8d:65:c6:33 ID:1,ca:18:8d:65:c6:33 Lease:0x613a5f29} I0909 14:41:41.232284 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:2e:2a:2:ce:10:88 ID:1,2e:2a:2:ce:10:88 Lease:0x6132f1a7} I0909 14:41:42.231343 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:42 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0 I0909 14:41:42.231355 12165 main.go:130] libmachine: (minikube) DBG | 2021/09/09 14:41:42 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0 I0909 14:41:43.236933 12165 main.go:130] libmachine: (minikube) DBG | Attempt 3 I0909 14:41:43.236947 12165 main.go:130] libmachine: (minikube) DBG | exe=/Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0909 14:41:43.237057 12165 main.go:130] libmachine: (minikube) DBG | hyperkit pid from json: 12173 I0909 14:41:43.238053 12165 main.go:130] libmachine: (minikube) DBG | Searching for f6:b4:b0:a:5c:35 in /var/db/dhcpd_leases ... I0909 14:41:43.238123 12165 main.go:130] libmachine: (minikube) DBG | Found 4 entries in /var/db/dhcpd_leases! 
I0909 14:41:43.238131 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:52:4a:f9:d2:77:72 ID:1,52:4a:f9:d2:77:72 Lease:0x613bb352}
I0909 14:41:43.238139 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:fe:4e:18:43:2a:27 ID:1,fe:4e:18:43:2a:27 Lease:0x613bb120}
I0909 14:41:43.238146 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:ca:18:8d:65:c6:33 ID:1,ca:18:8d:65:c6:33 Lease:0x613a5f29}
I0909 14:41:43.238153 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:2e:2a:2:ce:10:88 ID:1,2e:2a:2:ce:10:88 Lease:0x6132f1a7}
I0909 14:41:45.240347 12165 main.go:130] libmachine: (minikube) DBG | Attempt 4
I0909 14:41:45.240359 12165 main.go:130] libmachine: (minikube) DBG | exe=/Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0909 14:41:45.240457 12165 main.go:130] libmachine: (minikube) DBG | hyperkit pid from json: 12173
I0909 14:41:45.241258 12165 main.go:130] libmachine: (minikube) DBG | Searching for f6:b4:b0:a:5c:35 in /var/db/dhcpd_leases ...
I0909 14:41:45.241415 12165 main.go:130] libmachine: (minikube) DBG | Found 4 entries in /var/db/dhcpd_leases!
I0909 14:41:45.241450 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:52:4a:f9:d2:77:72 ID:1,52:4a:f9:d2:77:72 Lease:0x613bb352}
I0909 14:41:45.241456 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:fe:4e:18:43:2a:27 ID:1,fe:4e:18:43:2a:27 Lease:0x613bb120}
I0909 14:41:45.241461 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:ca:18:8d:65:c6:33 ID:1,ca:18:8d:65:c6:33 Lease:0x613a5f29}
I0909 14:41:45.241483 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:2e:2a:2:ce:10:88 ID:1,2e:2a:2:ce:10:88 Lease:0x6132f1a7}
I0909 14:41:47.246776 12165 main.go:130] libmachine: (minikube) DBG | Attempt 5
I0909 14:41:47.246801 12165 main.go:130] libmachine: (minikube) DBG | exe=/Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0909 14:41:47.247118 12165 main.go:130] libmachine: (minikube) DBG | hyperkit pid from json: 12173
I0909 14:41:47.248923 12165 main.go:130] libmachine: (minikube) DBG | Searching for f6:b4:b0:a:5c:35 in /var/db/dhcpd_leases ...
I0909 14:41:47.249012 12165 main.go:130] libmachine: (minikube) DBG | Found 5 entries in /var/db/dhcpd_leases!
I0909 14:41:47.249025 12165 main.go:130] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:f6:b4:b0:a:5c:35 ID:1,f6:b4:b0:a:5c:35 Lease:0x613bb4f9}
I0909 14:41:47.249038 12165 main.go:130] libmachine: (minikube) DBG | Found match: f6:b4:b0:a:5c:35
I0909 14:41:47.249048 12165 main.go:130] libmachine: (minikube) DBG | IP: 192.168.64.6
I0909 14:41:47.249158 12165 main.go:130] libmachine: (minikube) Calling .GetConfigRaw
I0909 14:41:47.250793 12165 main.go:130] libmachine: (minikube) Calling .DriverName
I0909 14:41:47.250999 12165 main.go:130] libmachine: (minikube) Calling .DriverName
I0909 14:41:47.251171 12165 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
I0909 14:41:47.251188 12165 main.go:130] libmachine: (minikube) Calling .GetState
I0909 14:41:47.251328 12165 main.go:130] libmachine: (minikube) DBG | exe=/Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0909 14:41:47.251520 12165 main.go:130] libmachine: (minikube) DBG | hyperkit pid from json: 12173
I0909 14:41:47.252593 12165 main.go:130] libmachine: Detecting operating system of created instance...
I0909 14:41:47.252606 12165 main.go:130] libmachine: Waiting for SSH to be available...
I0909 14:41:47.252612 12165 main.go:130] libmachine: Getting to WaitForSSH function...
I0909 14:41:47.252617 12165 main.go:130] libmachine: (minikube) Calling .GetSSHHostname
I0909 14:41:47.252778 12165 main.go:130] libmachine: (minikube) Calling .GetSSHPort
I0909 14:41:47.252914 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:47.253066 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:47.253179 12165 main.go:130] libmachine: (minikube) Calling .GetSSHUsername
I0909 14:41:47.253408 12165 main.go:130] libmachine: Using SSH client type: native
I0909 14:41:47.253674 12165 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x43a0020] 0x43a3100 [] 0s} 192.168.64.6 22 }
I0909 14:41:47.253681 12165 main.go:130] libmachine: About to run SSH command: exit 0
I0909 14:41:47.326609 12165 main.go:130] libmachine: SSH cmd err, output: :
I0909 14:41:47.326619 12165 main.go:130] libmachine: Detecting the provisioner...
I0909 14:41:47.326624 12165 main.go:130] libmachine: (minikube) Calling .GetSSHHostname
I0909 14:41:47.326780 12165 main.go:130] libmachine: (minikube) Calling .GetSSHPort
I0909 14:41:47.326874 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:47.326965 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:47.327043 12165 main.go:130] libmachine: (minikube) Calling .GetSSHUsername
I0909 14:41:47.327223 12165 main.go:130] libmachine: Using SSH client type: native
I0909 14:41:47.327395 12165 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x43a0020] 0x43a3100 [] 0s} 192.168.64.6 22 }
I0909 14:41:47.327400 12165 main.go:130] libmachine: About to run SSH command: cat /etc/os-release
I0909 14:41:47.395938 12165 main.go:130] libmachine: SSH cmd err, output: : NAME=Buildroot VERSION=2021.02.4 ID=buildroot VERSION_ID=2021.02.4 PRETTY_NAME="Buildroot 2021.02.4"
I0909 14:41:47.396007 12165 main.go:130] libmachine: found compatible host: buildroot
I0909 14:41:47.396011 12165 main.go:130] libmachine: Provisioning with buildroot...
I0909 14:41:47.396016 12165 main.go:130] libmachine: (minikube) Calling .GetMachineName
I0909 14:41:47.396164 12165 buildroot.go:166] provisioning hostname "minikube"
I0909 14:41:47.396173 12165 main.go:130] libmachine: (minikube) Calling .GetMachineName
I0909 14:41:47.396263 12165 main.go:130] libmachine: (minikube) Calling .GetSSHHostname
I0909 14:41:47.396359 12165 main.go:130] libmachine: (minikube) Calling .GetSSHPort
I0909 14:41:47.396442 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:47.396522 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:47.396618 12165 main.go:130] libmachine: (minikube) Calling .GetSSHUsername
I0909 14:41:47.396775 12165 main.go:130] libmachine: Using SSH client type: native
I0909 14:41:47.396936 12165 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x43a0020] 0x43a3100 [] 0s} 192.168.64.6 22 }
I0909 14:41:47.396942 12165 main.go:130] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0909 14:41:47.476785 12165 main.go:130] libmachine: SSH cmd err, output: : minikube
I0909 14:41:47.476808 12165 main.go:130] libmachine: (minikube) Calling .GetSSHHostname
I0909 14:41:47.476944 12165 main.go:130] libmachine: (minikube) Calling .GetSSHPort
I0909 14:41:47.477044 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:47.477121 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:47.477230 12165 main.go:130] libmachine: (minikube) Calling .GetSSHUsername
I0909 14:41:47.477409 12165 main.go:130] libmachine: Using SSH client type: native
I0909 14:41:47.477545 12165 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x43a0020] 0x43a3100 [] 0s} 192.168.64.6 22 }
I0909 14:41:47.477559 12165 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi
I0909 14:41:47.545921 12165 main.go:130] libmachine: SSH cmd err, output: :
I0909 14:41:47.545935 12165 buildroot.go:172] set auth options {CertDir:/Users/patrick.humpal/.minikube CaCertPath:/Users/patrick.humpal/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/patrick.humpal/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/patrick.humpal/.minikube/machines/server.pem ServerKeyPath:/Users/patrick.humpal/.minikube/machines/server-key.pem ClientKeyPath:/Users/patrick.humpal/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/patrick.humpal/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/patrick.humpal/.minikube}
I0909 14:41:47.545947 12165 buildroot.go:174] setting up certificates
I0909 14:41:47.545957 12165 provision.go:83] configureAuth start
I0909 14:41:47.545962 12165 main.go:130] libmachine: (minikube) Calling .GetMachineName
I0909 14:41:47.546115 12165 main.go:130] libmachine: (minikube) Calling .GetIP
I0909 14:41:47.546209 12165 main.go:130] libmachine: (minikube) Calling .GetSSHHostname
I0909 14:41:47.546294 12165 provision.go:138] copyHostCerts
I0909 14:41:47.546418 12165 exec_runner.go:145] found /Users/patrick.humpal/.minikube/key.pem, removing ...
I0909 14:41:47.546424 12165 exec_runner.go:208] rm: /Users/patrick.humpal/.minikube/key.pem
I0909 14:41:47.546771 12165 exec_runner.go:152] cp: /Users/patrick.humpal/.minikube/certs/key.pem --> /Users/patrick.humpal/.minikube/key.pem (1675 bytes)
I0909 14:41:47.547049 12165 exec_runner.go:145] found /Users/patrick.humpal/.minikube/ca.pem, removing ...
I0909 14:41:47.547052 12165 exec_runner.go:208] rm: /Users/patrick.humpal/.minikube/ca.pem
I0909 14:41:47.547278 12165 exec_runner.go:152] cp: /Users/patrick.humpal/.minikube/certs/ca.pem --> /Users/patrick.humpal/.minikube/ca.pem (1099 bytes)
I0909 14:41:47.547483 12165 exec_runner.go:145] found /Users/patrick.humpal/.minikube/cert.pem, removing ...
I0909 14:41:47.547486 12165 exec_runner.go:208] rm: /Users/patrick.humpal/.minikube/cert.pem
I0909 14:41:47.547722 12165 exec_runner.go:152] cp: /Users/patrick.humpal/.minikube/certs/cert.pem --> /Users/patrick.humpal/.minikube/cert.pem (1143 bytes)
I0909 14:41:47.547943 12165 provision.go:112] generating server cert: /Users/patrick.humpal/.minikube/machines/server.pem ca-key=/Users/patrick.humpal/.minikube/certs/ca.pem private-key=/Users/patrick.humpal/.minikube/certs/ca-key.pem org=patrick.humpal.minikube san=[192.168.64.6 192.168.64.6 localhost 127.0.0.1 minikube minikube]
I0909 14:41:47.615271 12165 provision.go:172] copyRemoteCerts
I0909 14:41:47.615353 12165 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0909 14:41:47.615369 12165 main.go:130] libmachine: (minikube) Calling .GetSSHHostname
I0909 14:41:47.615530 12165 main.go:130] libmachine: (minikube) Calling .GetSSHPort
I0909 14:41:47.615611 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:47.615689 12165 main.go:130] libmachine: (minikube) Calling .GetSSHUsername
I0909 14:41:47.615773 12165 sshutil.go:53] new ssh client: &{IP:192.168.64.6 Port:22 SSHKeyPath:/Users/patrick.humpal/.minikube/machines/minikube/id_rsa Username:docker}
I0909 14:41:47.655572 12165 ssh_runner.go:319] scp /Users/patrick.humpal/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0909 14:41:47.672782 12165 ssh_runner.go:319] scp /Users/patrick.humpal/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0909 14:41:47.688944 12165 ssh_runner.go:319] scp /Users/patrick.humpal/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1099 bytes)
I0909 14:41:47.706319 12165 provision.go:86] duration metric: configureAuth took 160.34774ms
I0909 14:41:47.706329 12165 buildroot.go:189] setting minikube options for container-runtime
I0909 14:41:47.706488 12165 config.go:177] Loaded profile config "minikube": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.22.1
I0909 14:41:47.706499 12165 main.go:130] libmachine: (minikube) Calling .DriverName
I0909 14:41:47.706631 12165 main.go:130] libmachine: (minikube) Calling .GetSSHHostname
I0909 14:41:47.706722 12165 main.go:130] libmachine: (minikube) Calling .GetSSHPort
I0909 14:41:47.706801 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:47.706861 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:47.706936 12165 main.go:130] libmachine: (minikube) Calling .GetSSHUsername
I0909 14:41:47.707060 12165 main.go:130] libmachine: Using SSH client type: native
I0909 14:41:47.707226 12165 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x43a0020] 0x43a3100 [] 0s} 192.168.64.6 22 }
I0909 14:41:47.707231 12165 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1
I0909 14:41:47.775381 12165 main.go:130] libmachine: SSH cmd err, output: : tmpfs
I0909 14:41:47.775387 12165 buildroot.go:70] root file system type: tmpfs
I0909 14:41:47.775550 12165 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0909 14:41:47.775563 12165 main.go:130] libmachine: (minikube) Calling .GetSSHHostname
I0909 14:41:47.775710 12165 main.go:130] libmachine: (minikube) Calling .GetSSHPort
I0909 14:41:47.775807 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:47.775910 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:47.775995 12165 main.go:130] libmachine: (minikube) Calling .GetSSHUsername
I0909 14:41:47.776136 12165 main.go:130] libmachine: Using SSH client type: native
I0909 14:41:47.776285 12165 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x43a0020] 0x43a3100 [] 0s} 192.168.64.6 22 }
I0909 14:41:47.776333 12165 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com After=network.target minikube-automount.service docker.socket Requires= minikube-automount.service docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new
I0909 14:41:47.853841 12165 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com After=network.target minikube-automount.service docker.socket Requires= minikube-automount.service docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target
I0909 14:41:47.853858 12165 main.go:130] libmachine: (minikube) Calling .GetSSHHostname
I0909 14:41:47.854005 12165 main.go:130] libmachine: (minikube) Calling .GetSSHPort
I0909 14:41:47.854089 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:47.854175 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:47.854257 12165 main.go:130] libmachine: (minikube) Calling .GetSSHUsername
I0909 14:41:47.854399 12165 main.go:130] libmachine: Using SSH client type: native
I0909 14:41:47.854553 12165 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x43a0020] 0x43a3100 [] 0s} 192.168.64.6 22 }
I0909 14:41:47.854562 12165 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0909 14:41:48.439228 12165 main.go:130] libmachine: SSH cmd err, output: : diff: can't stat '/lib/systemd/system/docker.service': No such file or directory Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0909 14:41:48.439255 12165 main.go:130] libmachine: Checking connection to Docker...
I0909 14:41:48.439282 12165 main.go:130] libmachine: (minikube) Calling .GetURL
I0909 14:41:48.439529 12165 main.go:130] libmachine: Docker is up and running!
I0909 14:41:48.439534 12165 main.go:130] libmachine: Reticulating splines...
I0909 14:41:48.439538 12165 client.go:171] LocalClient.Create took 11.753667468s
I0909 14:41:48.439544 12165 start.go:168] duration metric: libmachine.API.Create for "minikube" took 11.753712502s
I0909 14:41:48.439552 12165 start.go:267] post-start starting for "minikube" (driver="hyperkit")
I0909 14:41:48.439554 12165 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0909 14:41:48.439561 12165 main.go:130] libmachine: (minikube) Calling .DriverName
I0909 14:41:48.439813 12165 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0909 14:41:48.439822 12165 main.go:130] libmachine: (minikube) Calling .GetSSHHostname
I0909 14:41:48.439929 12165 main.go:130] libmachine: (minikube) Calling .GetSSHPort
I0909 14:41:48.440032 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:48.440144 12165 main.go:130] libmachine: (minikube) Calling .GetSSHUsername
I0909 14:41:48.440242 12165 sshutil.go:53] new ssh client: &{IP:192.168.64.6 Port:22 SSHKeyPath:/Users/patrick.humpal/.minikube/machines/minikube/id_rsa Username:docker}
I0909 14:41:48.484272 12165 ssh_runner.go:152] Run: cat /etc/os-release
I0909 14:41:48.487937 12165 info.go:137] Remote host: Buildroot 2021.02.4
I0909 14:41:48.487948 12165 filesync.go:126] Scanning /Users/patrick.humpal/.minikube/addons for local assets ...
I0909 14:41:48.488069 12165 filesync.go:126] Scanning /Users/patrick.humpal/.minikube/files for local assets ...
I0909 14:41:48.488133 12165 start.go:270] post-start completed in 48.57727ms
I0909 14:41:48.488153 12165 main.go:130] libmachine: (minikube) Calling .GetConfigRaw
I0909 14:41:48.488992 12165 main.go:130] libmachine: (minikube) Calling .GetIP
I0909 14:41:48.489134 12165 profile.go:148] Saving config to /Users/patrick.humpal/.minikube/profiles/minikube/config.json ...
I0909 14:41:48.489656 12165 start.go:129] duration metric: createHost completed in 11.839104332s
I0909 14:41:48.489661 12165 start.go:80] releasing machines lock for "minikube", held for 11.83921216s
I0909 14:41:48.489684 12165 main.go:130] libmachine: (minikube) Calling .DriverName
I0909 14:41:48.489778 12165 main.go:130] libmachine: (minikube) Calling .GetIP
I0909 14:41:48.489854 12165 main.go:130] libmachine: (minikube) Calling .DriverName
I0909 14:41:48.489917 12165 main.go:130] libmachine: (minikube) Calling .DriverName
I0909 14:41:48.490632 12165 main.go:130] libmachine: (minikube) Calling .DriverName
I0909 14:41:48.490813 12165 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
I0909 14:41:48.490821 12165 ssh_runner.go:152] Run: systemctl --version
I0909 14:41:48.490829 12165 main.go:130] libmachine: (minikube) Calling .GetSSHHostname
I0909 14:41:48.490832 12165 main.go:130] libmachine: (minikube) Calling .GetSSHHostname
I0909 14:41:48.490925 12165 main.go:130] libmachine: (minikube) Calling .GetSSHPort
I0909 14:41:48.490939 12165 main.go:130] libmachine: (minikube) Calling .GetSSHPort
I0909 14:41:48.491008 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:48.491029 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:41:48.491078 12165 main.go:130] libmachine: (minikube) Calling .GetSSHUsername
I0909 14:41:48.491099 12165 main.go:130] libmachine: (minikube) Calling .GetSSHUsername
I0909 14:41:48.491161 12165 sshutil.go:53] new ssh client: &{IP:192.168.64.6 Port:22 SSHKeyPath:/Users/patrick.humpal/.minikube/machines/minikube/id_rsa Username:docker}
I0909 14:41:48.491174 12165 sshutil.go:53] new ssh client: &{IP:192.168.64.6 Port:22 SSHKeyPath:/Users/patrick.humpal/.minikube/machines/minikube/id_rsa Username:docker}
I0909 14:41:48.528541 12165 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
I0909 14:41:48.528643 12165 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
I0909 14:41:48.770469 12165 docker.go:558] Got preloaded images:
I0909 14:41:48.770478 12165 docker.go:564] k8s.gcr.io/kube-apiserver:v1.22.1 wasn't preloaded
I0909 14:41:48.770588 12165 ssh_runner.go:152] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0909 14:41:48.779294 12165 ssh_runner.go:152] Run: which lz4
I0909 14:41:48.782958 12165 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0909 14:41:48.786558 12165 ssh_runner.go:309] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1 stdout: stderr: stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0909 14:41:48.786586 12165 ssh_runner.go:319] scp /Users/patrick.humpal/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (540060231 bytes)
I0909 14:41:50.533209 12165 docker.go:523] Took 1.750302 seconds to copy over tarball
I0909 14:41:50.533417 12165 ssh_runner.go:152] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0909 14:41:57.152466 12165 ssh_runner.go:192] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (6.618991198s)
I0909 14:41:57.152476 12165 ssh_runner.go:103] rm: /preloaded.tar.lz4
I0909 14:41:57.182158 12165 ssh_runner.go:152] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0909 14:41:57.190047 12165 ssh_runner.go:319] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3149 bytes)
I0909 14:41:57.202022 12165 ssh_runner.go:152] Run: sudo systemctl daemon-reload
I0909 14:41:57.297123 12165 ssh_runner.go:152] Run: sudo systemctl restart docker
I0909 14:41:59.545525 12165 ssh_runner.go:192] Completed: sudo systemctl restart docker: (2.248375989s)
I0909 14:41:59.545775 12165 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
I0909 14:41:59.555528 12165 ssh_runner.go:152] Run: sudo systemctl cat docker.service
I0909 14:41:59.567333 12165 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
I0909 14:41:59.576365 12165 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
I0909 14:41:59.585157 12165 ssh_runner.go:152] Run: sudo systemctl stop -f crio
I0909 14:41:59.609554 12165 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
I0909 14:41:59.622488 12165 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml"
I0909 14:41:59.635454 12165 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
I0909 14:41:59.738083 12165 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
I0909 14:41:59.839608 12165 ssh_runner.go:152] Run: sudo systemctl daemon-reload
I0909 14:41:59.937940 12165 ssh_runner.go:152] Run: sudo systemctl start docker
I0909 14:41:59.948790 12165 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I0909 14:41:59.979281 12165 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I0909 14:42:00.051180 12165 out.go:204] 🐳 Preparing Kubernetes v1.22.1 on Docker 20.10.8 ...
I0909 14:42:00.051518 12165 ssh_runner.go:152] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0909 14:42:00.057363 12165 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0909 14:42:00.069020 12165 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
I0909 14:42:00.069096 12165 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
I0909 14:42:00.097222 12165 docker.go:558] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.22.1 k8s.gcr.io/kube-controller-manager:v1.22.1 k8s.gcr.io/kube-proxy:v1.22.1 k8s.gcr.io/kube-scheduler:v1.22.1 k8s.gcr.io/etcd:3.5.0-0 k8s.gcr.io/coredns/coredns:v1.8.4 gcr.io/k8s-minikube/storage-provisioner:v5 k8s.gcr.io/pause:3.5 kubernetesui/dashboard:v2.1.0 kubernetesui/metrics-scraper:v1.0.4 -- /stdout --
I0909 14:42:00.097231 12165 docker.go:489] Images already preloaded, skipping extraction
I0909 14:42:00.097320 12165 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
I0909 14:42:00.123369 12165 docker.go:558] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.22.1 k8s.gcr.io/kube-controller-manager:v1.22.1 k8s.gcr.io/kube-scheduler:v1.22.1 k8s.gcr.io/kube-proxy:v1.22.1 k8s.gcr.io/etcd:3.5.0-0 k8s.gcr.io/coredns/coredns:v1.8.4 gcr.io/k8s-minikube/storage-provisioner:v5 k8s.gcr.io/pause:3.5 kubernetesui/dashboard:v2.1.0 kubernetesui/metrics-scraper:v1.0.4 -- /stdout --
I0909 14:42:00.123382 12165 cache_images.go:78] Images are preloaded, skipping loading
I0909 14:42:00.123472 12165 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
I0909 14:42:00.159364 12165 cni.go:93] Creating CNI manager for ""
I0909 14:42:00.159371 12165 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0909 14:42:00.159385 12165 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0909 14:42:00.159396 12165 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.6 APIServerPort:8443 KubernetesVersion:v1.22.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.64.6 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0909 14:42:00.159478 12165 kubeadm.go:157] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta2 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.64.6 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "minikube" kubeletExtraArgs: node-ip: 192.168.64.6 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.64.6"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 dns: type: CoreDNS etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.22.1 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: systemd clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s
I0909 14:42:00.159544 12165 kubeadm.go:909] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.22.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.6 [Install] config: {KubernetesVersion:v1.22.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0909 14:42:00.159635 12165 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.1
I0909 14:42:00.167124 12165 binaries.go:44] Found k8s binaries, skipping transfer
I0909 14:42:00.167199 12165 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0909 14:42:00.172906 12165 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0909 14:42:00.184498 12165 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0909 14:42:00.195189 12165 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
I0909 14:42:00.206419 12165 ssh_runner.go:152] Run: grep 192.168.64.6 control-plane.minikube.internal$ /etc/hosts
I0909 14:42:00.208692 12165 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.6 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0909 14:42:00.217047 12165 certs.go:52] Setting up /Users/patrick.humpal/.minikube/profiles/minikube for IP: 192.168.64.6
I0909 14:42:00.217162 12165 certs.go:179] skipping minikubeCA CA generation: /Users/patrick.humpal/.minikube/ca.key
I0909 14:42:00.217230 12165 certs.go:179] skipping proxyClientCA CA generation: /Users/patrick.humpal/.minikube/proxy-client-ca.key
I0909 14:42:00.217281 12165 certs.go:297] generating minikube-user signed cert: /Users/patrick.humpal/.minikube/profiles/minikube/client.key
I0909 14:42:00.217286 12165 crypto.go:69] Generating cert /Users/patrick.humpal/.minikube/profiles/minikube/client.crt with IP's: []
I0909 14:42:00.266332 12165 crypto.go:157] Writing cert to /Users/patrick.humpal/.minikube/profiles/minikube/client.crt ...
I0909 14:42:00.266340 12165 lock.go:36] WriteFile acquiring /Users/patrick.humpal/.minikube/profiles/minikube/client.crt: {Name:mk86d999a7a4057f8387bc7dd53dfcde4fc39489 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0909 14:42:00.266612 12165 crypto.go:165] Writing key to /Users/patrick.humpal/.minikube/profiles/minikube/client.key ...
I0909 14:42:00.266617 12165 lock.go:36] WriteFile acquiring /Users/patrick.humpal/.minikube/profiles/minikube/client.key: {Name:mkbdc2a625ac254066a5f8503b020111a5e011c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0909 14:42:00.266791 12165 certs.go:297] generating minikube signed cert: /Users/patrick.humpal/.minikube/profiles/minikube/apiserver.key.62db8c78
I0909 14:42:00.266794 12165 crypto.go:69] Generating cert /Users/patrick.humpal/.minikube/profiles/minikube/apiserver.crt.62db8c78 with IP's: [192.168.64.6 10.96.0.1 127.0.0.1 10.0.0.1]
I0909 14:42:00.363028 12165 crypto.go:157] Writing cert to /Users/patrick.humpal/.minikube/profiles/minikube/apiserver.crt.62db8c78 ...
I0909 14:42:00.363037 12165 lock.go:36] WriteFile acquiring /Users/patrick.humpal/.minikube/profiles/minikube/apiserver.crt.62db8c78: {Name:mkd88e529d76cbff2da119a841b86ba0823635fe Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0909 14:42:00.363353 12165 crypto.go:165] Writing key to /Users/patrick.humpal/.minikube/profiles/minikube/apiserver.key.62db8c78 ...
I0909 14:42:00.363358 12165 lock.go:36] WriteFile acquiring /Users/patrick.humpal/.minikube/profiles/minikube/apiserver.key.62db8c78: {Name:mk135a07f4a91fe5871e5ca56a9d49151a2ba4a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0909 14:42:00.363569 12165 certs.go:308] copying /Users/patrick.humpal/.minikube/profiles/minikube/apiserver.crt.62db8c78 -> /Users/patrick.humpal/.minikube/profiles/minikube/apiserver.crt I0909 14:42:00.363767 12165 certs.go:312] copying /Users/patrick.humpal/.minikube/profiles/minikube/apiserver.key.62db8c78 -> /Users/patrick.humpal/.minikube/profiles/minikube/apiserver.key I0909 14:42:00.363961 12165 certs.go:297] generating aggregator signed cert: /Users/patrick.humpal/.minikube/profiles/minikube/proxy-client.key I0909 14:42:00.363964 12165 crypto.go:69] Generating cert /Users/patrick.humpal/.minikube/profiles/minikube/proxy-client.crt with IP's: [] I0909 14:42:00.502119 12165 crypto.go:157] Writing cert to /Users/patrick.humpal/.minikube/profiles/minikube/proxy-client.crt ... I0909 14:42:00.502128 12165 lock.go:36] WriteFile acquiring /Users/patrick.humpal/.minikube/profiles/minikube/proxy-client.crt: {Name:mkcf1a747ad35e884c3336c84d464dd7e411f25b Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0909 14:42:00.502425 12165 crypto.go:165] Writing key to /Users/patrick.humpal/.minikube/profiles/minikube/proxy-client.key ... 
I0909 14:42:00.502429 12165 lock.go:36] WriteFile acquiring /Users/patrick.humpal/.minikube/profiles/minikube/proxy-client.key: {Name:mke29b289b4bcd60eeed0e1e4ea2a6bf4966c3b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0909 14:42:00.502878 12165 certs.go:376] found cert: /Users/patrick.humpal/.minikube/certs/Users/patrick.humpal/.minikube/certs/ca-key.pem (1679 bytes)
I0909 14:42:00.502930 12165 certs.go:376] found cert: /Users/patrick.humpal/.minikube/certs/Users/patrick.humpal/.minikube/certs/ca.pem (1099 bytes)
I0909 14:42:00.502974 12165 certs.go:376] found cert: /Users/patrick.humpal/.minikube/certs/Users/patrick.humpal/.minikube/certs/cert.pem (1143 bytes)
I0909 14:42:00.503024 12165 certs.go:376] found cert: /Users/patrick.humpal/.minikube/certs/Users/patrick.humpal/.minikube/certs/key.pem (1675 bytes)
I0909 14:42:00.504229 12165 ssh_runner.go:319] scp /Users/patrick.humpal/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0909 14:42:00.523480 12165 ssh_runner.go:319] scp /Users/patrick.humpal/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0909 14:42:00.543900 12165 ssh_runner.go:319] scp /Users/patrick.humpal/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0909 14:42:00.560997 12165 ssh_runner.go:319] scp /Users/patrick.humpal/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0909 14:42:00.579218 12165 ssh_runner.go:319] scp /Users/patrick.humpal/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0909 14:42:00.596653 12165 ssh_runner.go:319] scp /Users/patrick.humpal/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0909 14:42:00.611822 12165 ssh_runner.go:319] scp /Users/patrick.humpal/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0909 14:42:00.627980 12165 ssh_runner.go:319] scp /Users/patrick.humpal/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0909 14:42:00.644723 12165 ssh_runner.go:319] scp /Users/patrick.humpal/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0909 14:42:00.662274 12165 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0909 14:42:00.673872 12165 ssh_runner.go:152] Run: openssl version
I0909 14:42:00.677487 12165 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0909 14:42:00.685151 12165 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0909 14:42:00.688279 12165 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Jun 30 16:16 /usr/share/ca-certificates/minikubeCA.pem
I0909 14:42:00.688335 12165 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0909 14:42:00.692108 12165 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0909 14:42:00.699460 12165 kubeadm.go:390] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.23.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.26@sha256:d4aa14fbdc3a28a60632c24af937329ec787b02c89983c6f5498d346860a848c Memory:6000 CPUs:2 DiskSize:12288 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.6 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0909 14:42:00.699550 12165 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0909 14:42:00.719578 12165 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0909 14:42:00.726408 12165 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0909 14:42:00.732738 12165 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0909 14:42:00.739618 12165 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0909 14:42:00.739634 12165 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
I0909 14:42:01.110500 12165 out.go:204] ▪ Generating certificates and keys ...
I0909 14:42:03.308338 12165 out.go:204] ▪ Booting up control plane ...
I0909 14:42:12.385452 12165 out.go:204] ▪ Configuring RBAC rules ...
I0909 14:42:12.767744 12165 cni.go:93] Creating CNI manager for ""
I0909 14:42:12.767751 12165 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0909 14:42:12.767789 12165 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0909 14:42:12.767936 12165 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0909 14:42:12.767936 12165 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl label nodes minikube.k8s.io/version=v1.23.0 minikube.k8s.io/commit=5931455374810b1bbeb222a9713ae2c756daee10 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_09_09T14_42_12_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0909 14:42:12.967893 12165 kubeadm.go:985] duration metric: took 200.096452ms to wait for elevateKubeSystemPrivileges.
I0909 14:42:12.967934 12165 ops.go:34] apiserver oom_adj: -16
I0909 14:42:12.968003 12165 kubeadm.go:392] StartCluster complete in 12.268463918s
I0909 14:42:12.968016 12165 settings.go:142] acquiring lock: {Name:mke96ca65cfef8995ffa047b94c252754beee7d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0909 14:42:12.968136 12165 settings.go:150] Updating kubeconfig: /Users/patrick.humpal/.kube/config
I0909 14:42:12.968888 12165 lock.go:36] WriteFile acquiring /Users/patrick.humpal/.kube/config: {Name:mkef888f351d08f888a49446bfcaf9e31fe65490 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0909 14:42:13.493261 12165 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0909 14:42:13.493291 12165 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.64.6 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}
I0909 14:42:13.493303 12165 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0909 14:42:13.493326 12165 addons.go:404] enableAddons start: toEnable=map[], additional=[]
I0909 14:42:13.513419 12165 out.go:177] 🔎 Verifying Kubernetes components...
I0909 14:42:13.493517 12165 config.go:177] Loaded profile config "minikube": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.22.1
I0909 14:42:13.513498 12165 addons.go:65] Setting default-storageclass=true in profile "minikube"
I0909 14:42:13.513498 12165 addons.go:65] Setting storage-provisioner=true in profile "minikube"
I0909 14:42:13.513513 12165 addons.go:153] Setting addon storage-provisioner=true in "minikube"
I0909 14:42:13.513513 12165 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
W0909 14:42:13.513517 12165 addons.go:165] addon storage-provisioner should already be in state true
I0909 14:42:13.513529 12165 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
I0909 14:42:13.513542 12165 host.go:66] Checking if "minikube" exists ...
I0909 14:42:13.514098 12165 main.go:130] libmachine: Found binary path at /Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit
I0909 14:42:13.514117 12165 main.go:130] libmachine: Launching plugin server for driver hyperkit
I0909 14:42:13.514178 12165 main.go:130] libmachine: Found binary path at /Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit
I0909 14:42:13.514232 12165 main.go:130] libmachine: Launching plugin server for driver hyperkit
I0909 14:42:13.527574 12165 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:52287
I0909 14:42:13.528051 12165 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:52291
I0909 14:42:13.528295 12165 main.go:130] libmachine: () Calling .GetVersion
I0909 14:42:13.528634 12165 main.go:130] libmachine: () Calling .GetVersion
I0909 14:42:13.528825 12165 main.go:130] libmachine: Using API Version 1
I0909 14:42:13.528835 12165 main.go:130] libmachine: () Calling .SetConfigRaw
I0909 14:42:13.529094 12165 main.go:130] libmachine: Using API Version 1
I0909 14:42:13.529116 12165 main.go:130] libmachine: () Calling .SetConfigRaw
I0909 14:42:13.529255 12165 main.go:130] libmachine: () Calling .GetMachineName
I0909 14:42:13.529342 12165 main.go:130] libmachine: (minikube) Calling .GetState
I0909 14:42:13.529381 12165 main.go:130] libmachine: () Calling .GetMachineName
I0909 14:42:13.529436 12165 main.go:130] libmachine: (minikube) DBG | exe=/Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0909 14:42:13.529620 12165 main.go:130] libmachine: (minikube) DBG | hyperkit pid from json: 12173
I0909 14:42:13.529952 12165 main.go:130] libmachine: Found binary path at /Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit
I0909 14:42:13.529974 12165 main.go:130] libmachine: Launching plugin server for driver hyperkit
I0909 14:42:13.543428 12165 addons.go:153] Setting addon default-storageclass=true in "minikube"
W0909 14:42:13.543464 12165 addons.go:165] addon default-storageclass should already be in state true
I0909 14:42:13.543492 12165 host.go:66] Checking if "minikube" exists ...
I0909 14:42:13.543529 12165 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:52295
I0909 14:42:13.544132 12165 main.go:130] libmachine: Found binary path at /Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit
I0909 14:42:13.544135 12165 main.go:130] libmachine: () Calling .GetVersion
I0909 14:42:13.544152 12165 main.go:130] libmachine: Launching plugin server for driver hyperkit
I0909 14:42:13.545276 12165 main.go:130] libmachine: Using API Version 1
I0909 14:42:13.545328 12165 main.go:130] libmachine: () Calling .SetConfigRaw
I0909 14:42:13.545717 12165 main.go:130] libmachine: () Calling .GetMachineName
I0909 14:42:13.546346 12165 main.go:130] libmachine: (minikube) Calling .GetState
I0909 14:42:13.546667 12165 main.go:130] libmachine: (minikube) DBG | exe=/Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0909 14:42:13.546674 12165 main.go:130] libmachine: (minikube) DBG | hyperkit pid from json: 12173
I0909 14:42:13.547984 12165 main.go:130] libmachine: (minikube) Calling .DriverName
I0909 14:42:13.554828 12165 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:52299
I0909 14:42:13.568470 12165 out.go:177] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0909 14:42:13.568652 12165 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0909 14:42:13.568657 12165 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0909 14:42:13.568669 12165 main.go:130] libmachine: (minikube) Calling .GetSSHHostname
I0909 14:42:13.568809 12165 main.go:130] libmachine: (minikube) Calling .GetSSHPort
I0909 14:42:13.568916 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:42:13.568960 12165 main.go:130] libmachine: () Calling .GetVersion
I0909 14:42:13.569010 12165 main.go:130] libmachine: (minikube) Calling .GetSSHUsername
I0909 14:42:13.569087 12165 sshutil.go:53] new ssh client: &{IP:192.168.64.6 Port:22 SSHKeyPath:/Users/patrick.humpal/.minikube/machines/minikube/id_rsa Username:docker}
I0909 14:42:13.569302 12165 main.go:130] libmachine: Using API Version 1
I0909 14:42:13.569320 12165 main.go:130] libmachine: () Calling .SetConfigRaw
I0909 14:42:13.569604 12165 main.go:130] libmachine: () Calling .GetMachineName
I0909 14:42:13.570056 12165 main.go:130] libmachine: Found binary path at /Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit
I0909 14:42:13.570077 12165 main.go:130] libmachine: Launching plugin server for driver hyperkit
I0909 14:42:13.578734 12165 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.64.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0909 14:42:13.580053 12165 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:52304
I0909 14:42:13.580553 12165 main.go:130] libmachine: () Calling .GetVersion
I0909 14:42:13.581112 12165 main.go:130] libmachine: Using API Version 1
I0909 14:42:13.581124 12165 main.go:130] libmachine: () Calling .SetConfigRaw
I0909 14:42:13.581477 12165 main.go:130] libmachine: () Calling .GetMachineName
I0909 14:42:13.581597 12165 main.go:130] libmachine: (minikube) Calling .GetState
I0909 14:42:13.581781 12165 main.go:130] libmachine: (minikube) DBG | exe=/Users/patrick.humpal/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0909 14:42:13.581944 12165 main.go:130] libmachine: (minikube) DBG | hyperkit pid from json: 12173
I0909 14:42:13.583621 12165 main.go:130] libmachine: (minikube) Calling .DriverName
I0909 14:42:13.583859 12165 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
I0909 14:42:13.583866 12165 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0909 14:42:13.583877 12165 main.go:130] libmachine: (minikube) Calling .GetSSHHostname
I0909 14:42:13.583988 12165 main.go:130] libmachine: (minikube) Calling .GetSSHPort
I0909 14:42:13.584124 12165 main.go:130] libmachine: (minikube) Calling .GetSSHKeyPath
I0909 14:42:13.584247 12165 main.go:130] libmachine: (minikube) Calling .GetSSHUsername
I0909 14:42:13.584352 12165 sshutil.go:53] new ssh client: &{IP:192.168.64.6 Port:22 SSHKeyPath:/Users/patrick.humpal/.minikube/machines/minikube/id_rsa Username:docker}
I0909 14:42:13.586267 12165 api_server.go:50] waiting for apiserver process to appear ...
I0909 14:42:13.586345 12165 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0909 14:42:13.674463 12165 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0909 14:42:13.694107 12165 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0909 14:42:13.845649 12165 start.go:729] {"host.minikube.internal": 192.168.64.1} host record injected into CoreDNS
I0909 14:42:13.845680 12165 api_server.go:70] duration metric: took 352.372914ms to wait for apiserver process to appear ...
I0909 14:42:13.845686 12165 api_server.go:86] waiting for apiserver healthz status ...
I0909 14:42:13.845712 12165 api_server.go:239] Checking apiserver healthz at https://192.168.64.6:8443/healthz ...
I0909 14:42:13.852128 12165 api_server.go:265] https://192.168.64.6:8443/healthz returned 200: ok
I0909 14:42:13.852978 12165 api_server.go:139] control plane version: v1.22.1
I0909 14:42:13.852985 12165 api_server.go:129] duration metric: took 7.297182ms to wait for apiserver health ...
I0909 14:42:13.852992 12165 system_pods.go:43] waiting for kube-system pods to appear ...
I0909 14:42:13.861032 12165 system_pods.go:59] 4 kube-system pods found
I0909 14:42:13.861047 12165 system_pods.go:61] "etcd-minikube" [3a673316-8d3b-476a-8be9-e63196f2e309] Pending
I0909 14:42:13.861050 12165 system_pods.go:61] "kube-apiserver-minikube" [ddef7f53-1a2b-41f8-85cd-4d5b1c90d166] Pending
I0909 14:42:13.861052 12165 system_pods.go:61] "kube-controller-manager-minikube" [a6aa47dc-6d34-40ab-b726-65a1768c861a] Pending
I0909 14:42:13.861054 12165 system_pods.go:61] "kube-scheduler-minikube" [70224908-d5f6-443c-9647-4e3ae5b77024] Pending
I0909 14:42:13.861056 12165 system_pods.go:74] duration metric: took 8.062119ms to wait for pod list to return data ...
I0909 14:42:13.861062 12165 kubeadm.go:547] duration metric: took 367.754838ms to wait for : map[apiserver:true system_pods:true] ...
I0909 14:42:13.861069 12165 node_conditions.go:102] verifying NodePressure condition ...
I0909 14:42:13.864171 12165 node_conditions.go:122] node storage ephemeral capacity is 10941756Ki
I0909 14:42:13.864181 12165 node_conditions.go:123] node cpu capacity is 2
I0909 14:42:13.864188 12165 node_conditions.go:105] duration metric: took 3.117246ms to run NodePressure ...
I0909 14:42:13.864193 12165 start.go:231] waiting for startup goroutines ...
I0909 14:42:13.977113 12165 main.go:130] libmachine: Making call to close driver server
I0909 14:42:13.977184 12165 main.go:130] libmachine: (minikube) Calling .Close
I0909 14:42:13.977451 12165 main.go:130] libmachine: (minikube) DBG | Closing plugin on server side
I0909 14:42:13.977470 12165 main.go:130] libmachine: Successfully made call to close driver server
I0909 14:42:13.977489 12165 main.go:130] libmachine: Making call to close connection to plugin binary
I0909 14:42:13.977499 12165 main.go:130] libmachine: Making call to close driver server
I0909 14:42:13.977507 12165 main.go:130] libmachine: (minikube) Calling .Close
I0909 14:42:13.977721 12165 main.go:130] libmachine: (minikube) DBG | Closing plugin on server side
I0909 14:42:13.977726 12165 main.go:130] libmachine: Successfully made call to close driver server
I0909 14:42:13.977742 12165 main.go:130] libmachine: Making call to close connection to plugin binary
I0909 14:42:13.982846 12165 main.go:130] libmachine: Making call to close driver server
I0909 14:42:13.982853 12165 main.go:130] libmachine: (minikube) Calling .Close
I0909 14:42:13.983011 12165 main.go:130] libmachine: Successfully made call to close driver server
I0909 14:42:13.983015 12165 main.go:130] libmachine: Making call to close connection to plugin binary
I0909 14:42:13.983018 12165 main.go:130] libmachine: Making call to close driver server
I0909 14:42:13.983021 12165 main.go:130] libmachine: (minikube) Calling .Close
I0909 14:42:13.983046 12165 main.go:130] libmachine: (minikube) DBG | Closing plugin on server side
I0909 14:42:13.983166 12165 main.go:130] libmachine: (minikube) DBG | Closing plugin on server side
I0909 14:42:13.983167 12165 main.go:130] libmachine: Successfully made call to close driver server
I0909 14:42:13.983179 12165 main.go:130] libmachine: Making call to close connection to plugin binary
I0909 14:42:13.983191 12165 main.go:130] libmachine: Making call to close driver server
I0909 14:42:13.983200 12165 main.go:130] libmachine: (minikube) Calling .Close
I0909 14:42:13.983357 12165 main.go:130] libmachine: Successfully made call to close driver server
I0909 14:42:13.983356 12165 main.go:130] libmachine: (minikube) DBG | Closing plugin on server side
I0909 14:42:13.983369 12165 main.go:130] libmachine: Making call to close connection to plugin binary
I0909 14:42:14.019520 12165 out.go:177] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0909 14:42:14.019571 12165 addons.go:406] enableAddons completed in 526.251378ms
I0909 14:42:14.087534 12165 start.go:462] kubectl: 1.22.1, cluster: 1.22.1 (minor skew: 0)
I0909 14:42:14.106355 12165 out.go:177] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

*
* ==> Docker <==
*
-- Journal begins at Thu 2021-09-09 19:41:44 UTC, ends at Thu 2021-09-09 19:48:39 UTC.
--
Sep 09 19:41:57 minikube dockerd[2196]: time="2021-09-09T19:41:57.472970681Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
Sep 09 19:41:57 minikube dockerd[2196]: time="2021-09-09T19:41:57.473057279Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 09 19:41:58 minikube dockerd[2196]: time="2021-09-09T19:41:58.462216968Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Sep 09 19:41:58 minikube dockerd[2196]: time="2021-09-09T19:41:58.462780438Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Sep 09 19:41:58 minikube dockerd[2196]: time="2021-09-09T19:41:58.462832885Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Sep 09 19:41:58 minikube dockerd[2196]: time="2021-09-09T19:41:58.462876377Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Sep 09 19:41:58 minikube dockerd[2196]: time="2021-09-09T19:41:58.462920175Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Sep 09 19:41:58 minikube dockerd[2196]: time="2021-09-09T19:41:58.462962314Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Sep 09 19:41:58 minikube dockerd[2196]: time="2021-09-09T19:41:58.463191789Z" level=info msg="Loading containers: start."
Sep 09 19:41:58 minikube dockerd[2196]: time="2021-09-09T19:41:58.534975564Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 09 19:41:58 minikube dockerd[2196]: time="2021-09-09T19:41:58.569874627Z" level=info msg="Loading containers: done."
Sep 09 19:41:58 minikube dockerd[2196]: time="2021-09-09T19:41:58.585239836Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8
Sep 09 19:41:58 minikube dockerd[2196]: time="2021-09-09T19:41:58.585361234Z" level=info msg="Daemon has completed initialization"
Sep 09 19:41:58 minikube systemd[1]: Started Docker Application Container Engine.
Sep 09 19:41:58 minikube dockerd[2196]: time="2021-09-09T19:41:58.602284328Z" level=info msg="API listen on [::]:2376"
Sep 09 19:41:58 minikube dockerd[2196]: time="2021-09-09T19:41:58.617193556Z" level=info msg="API listen on /var/run/docker.sock"
Sep 09 19:42:03 minikube dockerd[2203]: time="2021-09-09T19:42:03.834497393Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/900601fd464b4167d02a4f17619cf0450ca8862774b289444a5013f32a241ade pid=3098
Sep 09 19:42:03 minikube dockerd[2203]: time="2021-09-09T19:42:03.848948843Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/824c90de24b633c618d2c1352bb8c70e22bc35451cfdbb3e7b4fbdc7bb15600a pid=3108
Sep 09 19:42:03 minikube dockerd[2203]: time="2021-09-09T19:42:03.850014633Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e925e7c6e1b3cb93e34062c98edfa616c2c5756c157f92c88664437cc8e7dffb pid=3103
Sep 09 19:42:03 minikube dockerd[2203]: time="2021-09-09T19:42:03.875525536Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/204ab6700f5c9e706ea2cbbd91d322afd0309421ee0ec064d8394fe601a65aaa pid=3145
Sep 09 19:42:04 minikube dockerd[2203]: time="2021-09-09T19:42:04.282664391Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b31c6bda0563db65cdc04ca5930e00987cb8af478a3e41fbce357b76bd1dc6bc pid=3272
Sep 09 19:42:04 minikube dockerd[2203]: time="2021-09-09T19:42:04.487303828Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/75c0941fe1b19c8de5a34ea5fb3010fb14d876aaa8094fede3fcdf9b070ffb9f pid=3304
Sep 09 19:42:04 minikube dockerd[2203]: time="2021-09-09T19:42:04.676248659Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ad90c5de850715a84e7d0fa37228095cf006e393211bab4bee76f5c9346fb5fc pid=3356
Sep 09 19:42:04 minikube dockerd[2203]: time="2021-09-09T19:42:04.784014624Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/cced40a2180e9e60d56bfce260a5eacc17a320cc5127e1c654b4e886f5db518a pid=3390
Sep 09 19:42:27 minikube dockerd[2203]: time="2021-09-09T19:42:27.766829230Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5396928311ac2ff18802a2d6d601345d86cda3d71114b68b3c908a64d70e0280 pid=4199
Sep 09 19:42:27 minikube dockerd[2203]: time="2021-09-09T19:42:27.881071875Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/25f775889acbd92c81264c7cd7935b22c28f034c1b748612a4d908418846c230 pid=4251
Sep 09 19:42:28 minikube dockerd[2203]: time="2021-09-09T19:42:28.306714577Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8b4deaf8d7625f9a5d66643c2b7f4f7edd1756d1160a0ec652977684c78a4a41 pid=4392
Sep 09 19:42:28 minikube dockerd[2203]: time="2021-09-09T19:42:28.376026242Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0f2692d430a779da5fd8a0c701b7cfaa7339a804f2927f15bbbd530e80a2f732 pid=4434
Sep 09 19:42:28 minikube dockerd[2203]: time="2021-09-09T19:42:28.692259495Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8d446ec9065a2b1c4746b6408db5cfa4caac6561aea978e598e47d7893fe400c pid=4536
Sep 09 19:42:28 minikube dockerd[2203]: time="2021-09-09T19:42:28.975483139Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/34e468806e04ebf14f0d75486a2696341d1b7e18cf4ecd91894146ef8d7314fe pid=4575
Sep 09 19:42:29 minikube dockerd[2196]: time="2021-09-09T19:42:29.225859406Z" level=warning msg="reference for unknown type: " digest="sha256:492f33e0828a371aa23331d75c11c251b21499e31287f026269e3f6ec6da34ed" remote="quay.io/datawire/ambassador-operator@sha256:492f33e0828a371aa23331d75c11c251b21499e31287f026269e3f6ec6da34ed"
Sep 09 19:42:29 minikube dockerd[2203]: time="2021-09-09T19:42:29.316754832Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/14742825193c4409f3d9f5d5e9befc4118e25ad4f740d5d9fdf1834ec6729c78 pid=4628
Sep 09 19:42:38 minikube dockerd[2203]: time="2021-09-09T19:42:38.890924103Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/40685498826cd445cb8802dd4baf2be28b71ed17656746b782d366093c7c63d9 pid=4805
Sep 09 19:42:40 minikube dockerd[2196]: time="2021-09-09T19:42:40.627286804Z" level=info msg="ignoring event" container=40685498826cd445cb8802dd4baf2be28b71ed17656746b782d366093c7c63d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 09 19:42:40 minikube dockerd[2203]: time="2021-09-09T19:42:40.628139142Z" level=info msg="shim disconnected" id=40685498826cd445cb8802dd4baf2be28b71ed17656746b782d366093c7c63d9
Sep 09 19:42:40 minikube dockerd[2203]: time="2021-09-09T19:42:40.628280719Z" level=error msg="copy shim log" error="read /proc/self/fd/71: file already closed"
Sep
09 19:42:41 minikube dockerd[2203]: time="2021-09-09T19:42:41.146206307Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/973a16947fe64b61d1a84cd733d7f4246ec5fdc399ff36a53e7017a47eb4746b pid=4912 Sep 09 19:42:42 minikube dockerd[2203]: time="2021-09-09T19:42:42.909058336Z" level=info msg="shim disconnected" id=973a16947fe64b61d1a84cd733d7f4246ec5fdc399ff36a53e7017a47eb4746b Sep 09 19:42:42 minikube dockerd[2203]: time="2021-09-09T19:42:42.909737137Z" level=error msg="copy shim log" error="read /proc/self/fd/71: file already closed" Sep 09 19:42:42 minikube dockerd[2196]: time="2021-09-09T19:42:42.914459266Z" level=info msg="ignoring event" container=973a16947fe64b61d1a84cd733d7f4246ec5fdc399ff36a53e7017a47eb4746b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 09 19:42:58 minikube dockerd[2203]: time="2021-09-09T19:42:58.267854988Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6869fed0a2950c0a48135f7a37d43770096fdb4a47178b8ed27a74f564ec214f pid=5108 Sep 09 19:42:59 minikube dockerd[2196]: time="2021-09-09T19:42:59.971072808Z" level=info msg="ignoring event" container=6869fed0a2950c0a48135f7a37d43770096fdb4a47178b8ed27a74f564ec214f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 09 19:42:59 minikube dockerd[2203]: time="2021-09-09T19:42:59.971211285Z" level=info msg="shim disconnected" id=6869fed0a2950c0a48135f7a37d43770096fdb4a47178b8ed27a74f564ec214f Sep 09 19:42:59 minikube dockerd[2203]: time="2021-09-09T19:42:59.971253753Z" level=error msg="copy shim log" error="read /proc/self/fd/71: file already closed" Sep 09 19:43:26 minikube dockerd[2203]: time="2021-09-09T19:43:26.263953525Z" level=info msg="starting signal loop" namespace=moby 
path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ef4d63e087e5a0503a1900c26bb98fcd21fa1085e24fc2b55d3f5fa20627f615 pid=5333
Sep 09 19:43:28 minikube dockerd[2203]: time="2021-09-09T19:43:28.026568856Z" level=info msg="shim disconnected" id=ef4d63e087e5a0503a1900c26bb98fcd21fa1085e24fc2b55d3f5fa20627f615
Sep 09 19:43:28 minikube dockerd[2203]: time="2021-09-09T19:43:28.026645250Z" level=error msg="copy shim log" error="read /proc/self/fd/71: file already closed"
Sep 09 19:43:28 minikube dockerd[2196]: time="2021-09-09T19:43:28.026754607Z" level=info msg="ignoring event" container=ef4d63e087e5a0503a1900c26bb98fcd21fa1085e24fc2b55d3f5fa20627f615 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 09 19:44:13 minikube dockerd[2203]: time="2021-09-09T19:44:13.298391753Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d8fdc3b62a29e1487a2e42bbc3bb1641668606995a6c40ba830d2cc93620d941 pid=5602
Sep 09 19:44:15 minikube dockerd[2203]: time="2021-09-09T19:44:15.030565192Z" level=info msg="shim disconnected" id=d8fdc3b62a29e1487a2e42bbc3bb1641668606995a6c40ba830d2cc93620d941
Sep 09 19:44:15 minikube dockerd[2196]: time="2021-09-09T19:44:15.031302107Z" level=info msg="ignoring event" container=d8fdc3b62a29e1487a2e42bbc3bb1641668606995a6c40ba830d2cc93620d941 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 09 19:44:15 minikube dockerd[2203]: time="2021-09-09T19:44:15.031853788Z" level=error msg="copy shim log" error="read /proc/self/fd/71: file already closed"
Sep 09 19:45:37 minikube dockerd[2203]: time="2021-09-09T19:45:37.279519373Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e51bc6725615298f31805a9753687bcc86dba71252a158de965150e7cdaa142e pid=6040
Sep 09 19:45:38 minikube dockerd[2196]: time="2021-09-09T19:45:38.994827362Z" level=info msg="ignoring event" container=e51bc6725615298f31805a9753687bcc86dba71252a158de965150e7cdaa142e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 09 19:45:39 minikube dockerd[2203]: time="2021-09-09T19:45:38.995631827Z" level=info msg="shim disconnected" id=e51bc6725615298f31805a9753687bcc86dba71252a158de965150e7cdaa142e
Sep 09 19:45:39 minikube dockerd[2203]: time="2021-09-09T19:45:38.995671208Z" level=error msg="copy shim log" error="read /proc/self/fd/71: file already closed"
Sep 09 19:48:21 minikube dockerd[2203]: time="2021-09-09T19:48:21.278247397Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/316fbe24f824d67cc36f0d248b1d92e9fa44f369262486c65014a497ba14ae5a pid=6840
Sep 09 19:48:23 minikube dockerd[2196]: time="2021-09-09T19:48:23.007375395Z" level=info msg="ignoring event" container=316fbe24f824d67cc36f0d248b1d92e9fa44f369262486c65014a497ba14ae5a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 09 19:48:23 minikube dockerd[2203]: time="2021-09-09T19:48:23.008290676Z" level=info msg="shim disconnected" id=316fbe24f824d67cc36f0d248b1d92e9fa44f369262486c65014a497ba14ae5a
Sep 09 19:48:23 minikube dockerd[2203]: time="2021-09-09T19:48:23.008358049Z" level=error msg="copy shim log" error="read /proc/self/fd/71: file already closed"
*
* ==> container status <==
*
CONTAINER       IMAGE           CREATED          STATE    NAME                      ATTEMPT  POD ID
316fbe24f824d   a4fefcde6b458   18 seconds ago   Exited   ambassador-operator       6        8b4deaf8d7625
14742825193c4   6e38f40d628db   6 minutes ago    Running  storage-provisioner       0        8d446ec9065a2
34e468806e04e   8d147537fb7d1   6 minutes ago    Running  coredns                   0        0f2692d430a77
25f775889acbd   36c4ebbc9d979   6 minutes ago    Running  kube-proxy                0        5396928311ac2
cced40a2180e9   aca5ededae9c8   6 minutes ago    Running  kube-scheduler            0        204ab6700f5c9
ad90c5de85071   0048118155842   6 minutes ago    Running  etcd                      0        e925e7c6e1b3c
75c0941fe1b19   f30469a2491a5
6 minutes ago    Running  kube-apiserver            0        824c90de24b63
b31c6bda0563d   6e002eb89a881   6 minutes ago    Running  kube-controller-manager   0        900601fd464b4
*
* ==> coredns [34e468806e04] <==
*
.:53
[INFO] plugin/reload: Running configuration MD5 = 08e2b174e0f0a30a2e82df9c995f4a34
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5
*
* ==> describe nodes <==
*
Name:               minikube
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=5931455374810b1bbeb222a9713ae2c756daee10
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2021_09_09T14_42_12_0700
                    minikube.k8s.io/version=v1.23.0
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 09 Sep 2021 19:42:08 +0000
Taints:
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:
  RenewTime:       Thu, 09 Sep 2021 19:48:30 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 09 Sep 2021 19:47:44 +0000   Thu, 09 Sep 2021 19:42:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 09 Sep 2021 19:47:44 +0000   Thu, 09 Sep 2021 19:42:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 09 Sep 2021 19:47:44 +0000   Thu, 09 Sep 2021 19:42:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 09 Sep 2021 19:47:44 +0000   Thu, 09 Sep 2021 19:42:23 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.64.6
  Hostname:    minikube
Capacity:
  cpu:                2
  ephemeral-storage:  10941756Ki
  hugepages-2Mi:      0
  memory:             5952468Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  10941756Ki
  hugepages-2Mi:      0
  memory:             5952468Ki
  pods:               110
System Info:
  Machine ID:                 eef25aafb7d54257b6839341c51e92bf
  System UUID:                f1dc11ec-0000-0000-aa85-acde48001122
  Boot ID:                    ec6b224f-cd95-4c8e-ade4-cf1729385ee7
  Kernel Version:             4.19.202
  OS Image:                   Buildroot 2021.02.4
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.8
  Kubelet Version:            v1.22.1
  Kube-Proxy Version:         v1.22.1
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (9 in total)
  Namespace    Name                                  CPU Requests          CPU Limits        Memory Requests      Memory Limits         Age
  ---------    ----                                  ------------          ----------        ---------------      -------------         ---
  ambassador   ambassador-operator-7589b768fc-v8d6n  0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)     0 (0%!)(MISSING)      6m14s
  kube-system  coredns-78fcd69978-ch559              100m (5%!)(MISSING)   0 (0%!)(MISSING)  70Mi (1%!)(MISSING)  170Mi (2%!)(MISSING)  6m14s
  kube-system  etcd-minikube                         100m (5%!)(MISSING)   0 (0%!)(MISSING)  100Mi (1%!)(MISSING) 0 (0%!)(MISSING)      6m27s
  kube-system  kube-apiserver-minikube               250m (12%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)     0 (0%!)(MISSING)      6m27s
  kube-system  kube-controller-manager-minikube      200m (10%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)     0 (0%!)(MISSING)      6m27s
  kube-system  kube-proxy-8f9hm                      0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)     0 (0%!)(MISSING)      6m15s
  kube-system  kube-scheduler-minikube               100m (5%!)(MISSING)   0 (0%!)(MISSING)  0 (0%!)(MISSING)     0 (0%!)(MISSING)      6m27s
  kube-system  registry-creds-85b974c7d7-qzp5d       0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)     0 (0%!)(MISSING)      6m14s
  kube-system  storage-provisioner                   0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)     0 (0%!)(MISSING)      6m26s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests              Limits
  --------           --------              ------
  cpu                750m (37%!)(MISSING)  0 (0%!)(MISSING)
  memory             170Mi (2%!)(MISSING)  170Mi (2%!)(MISSING)
  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
Events:
  Type    Reason                   Age                    From     Message
  ----    ------                   ----                   ----     -------
  Normal  NodeHasSufficientMemory  6m36s (x4 over 6m36s)  kubelet  Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    6m36s (x4 over 6m36s)  kubelet  Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m36s (x4 over 6m36s)  kubelet  Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 6m27s                  kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  6m27s                  kubelet  Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    6m27s                  kubelet  Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m27s                  kubelet  Node minikube status is now: NodeHasSufficientPID
  Normal  NodeNotReady             6m27s                  kubelet  Node minikube status is now: NodeNotReady
  Normal  NodeAllocatableEnforced  6m27s                  kubelet  Updated Node Allocatable limit across pods
  Normal  NodeReady                6m16s                  kubelet  Node minikube status is now: NodeReady
*
* ==> dmesg <==
*
[Sep 9 19:41] ERROR: earlyprintk= earlyser already used
[ +0.000000] You have booted with nomodeset.
This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.126231] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20180810/tbprint-173)
[ +4.350722] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-182)
[ +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-618)
[ +0.007534] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +2.738330] systemd-fstab-generator[1116]: Ignoring "noauto" for root device
[ +0.025102] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +0.676819] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1640 comm=systemd-network
[ +0.340418] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +0.358371] vboxguest: loading out-of-tree module taints kernel.
[ +0.003050] vboxguest: PCI device not found, probably running on physical hardware.
[ +1.513591] systemd-fstab-generator[1999]: Ignoring "noauto" for root device
[ +0.116335] systemd-fstab-generator[2010]: Ignoring "noauto" for root device
[ +9.150133] systemd-fstab-generator[2186]: Ignoring "noauto" for root device
[ +2.192288] kauditd_printk_skb: 68 callbacks suppressed
[ +0.242002] systemd-fstab-generator[2350]: Ignoring "noauto" for root device
[ +0.102768] systemd-fstab-generator[2361]: Ignoring "noauto" for root device
[ +0.109127] systemd-fstab-generator[2372]: Ignoring "noauto" for root device
[Sep 9 19:42] systemd-fstab-generator[2597]: Ignoring "noauto" for root device
[ +9.268428] systemd-fstab-generator[3761]: Ignoring "noauto" for root device
[ +15.449771] kauditd_printk_skb: 155 callbacks suppressed
[ +11.145316] kauditd_printk_skb: 62 callbacks suppressed
[ +20.133170] kauditd_printk_skb: 20 callbacks suppressed
[Sep 9 19:43] kauditd_printk_skb: 2 callbacks suppressed
[ +21.013875] NFSD: Unable to end grace period: -110
[Sep 9 19:44] kauditd_printk_skb: 2 callbacks suppressed
[Sep 9 19:45] kauditd_printk_skb: 2 callbacks suppressed
[Sep 9 19:48] kauditd_printk_skb: 2 callbacks suppressed
*
* ==> etcd [ad90c5de8507] <==
*
{"level":"info","ts":"2021-09-09T19:42:05.527Z","caller":"etcdmain/etcd.go:72","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.64.6:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--initial-advertise-peer-urls=https://192.168.64.6:2380","--initial-cluster=minikube=https://192.168.64.6:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.64.6:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.64.6:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"info","ts":"2021-09-09T19:42:05.528Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.64.6:2380"]}
{"level":"info","ts":"2021-09-09T19:42:05.528Z","caller":"embed/etcd.go:478","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-09-09T19:42:05.528Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.6:2379"]}
{"level":"info","ts":"2021-09-09T19:42:05.533Z","caller":"embed/etcd.go:307","msg":"starting an etcd server","etcd-version":"3.5.0","git-sha":"946a5a6f2","go-version":"go1.16.3","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.64.6:2380"],"listen-peer-urls":["https://192.168.64.6:2380"],"advertise-client-urls":["https://192.168.64.6:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.6:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.64.6:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2021-09-09T19:42:05.536Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"2.998824ms"}
{"level":"info","ts":"2021-09-09T19:42:05.541Z","caller":"etcdserver/raft.go:448","msg":"starting local member","local-member-id":"6056477fb4721a4b","cluster-id":"7b9570fefd946251"}
{"level":"info","ts":"2021-09-09T19:42:05.541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6056477fb4721a4b switched to configuration voters=()"}
{"level":"info","ts":"2021-09-09T19:42:05.541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6056477fb4721a4b became follower at term 0"}
{"level":"info","ts":"2021-09-09T19:42:05.541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6056477fb4721a4b [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2021-09-09T19:42:05.542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6056477fb4721a4b became follower at term 1"}
{"level":"info","ts":"2021-09-09T19:42:05.542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6056477fb4721a4b switched to configuration voters=(6941814489451993675)"}
{"level":"warn","ts":"2021-09-09T19:42:05.550Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2021-09-09T19:42:05.558Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2021-09-09T19:42:05.565Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2021-09-09T19:42:05.571Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"6056477fb4721a4b","local-server-version":"3.5.0","cluster-version":"to_be_decided"}
{"level":"info","ts":"2021-09-09T19:42:05.574Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"6056477fb4721a4b","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2021-09-09T19:42:05.574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6056477fb4721a4b switched to configuration voters=(6941814489451993675)"}
{"level":"info","ts":"2021-09-09T19:42:05.575Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"7b9570fefd946251","local-member-id":"6056477fb4721a4b","added-peer-id":"6056477fb4721a4b","added-peer-peer-urls":["https://192.168.64.6:2380"]}
{"level":"info","ts":"2021-09-09T19:42:05.586Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-09-09T19:42:05.586Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"6056477fb4721a4b","initial-advertise-peer-urls":["https://192.168.64.6:2380"],"listen-peer-urls":["https://192.168.64.6:2380"],"advertise-client-urls":["https://192.168.64.6:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.6:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2021-09-09T19:42:05.586Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2021-09-09T19:42:05.586Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.64.6:2380"}
{"level":"info","ts":"2021-09-09T19:42:05.586Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.64.6:2380"}
{"level":"info","ts":"2021-09-09T19:42:06.144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6056477fb4721a4b is starting a new election at term 1"}
{"level":"info","ts":"2021-09-09T19:42:06.144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6056477fb4721a4b became pre-candidate at term 1"}
{"level":"info","ts":"2021-09-09T19:42:06.144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6056477fb4721a4b received MsgPreVoteResp from 6056477fb4721a4b at term 1"}
{"level":"info","ts":"2021-09-09T19:42:06.144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6056477fb4721a4b became candidate at term 2"}
{"level":"info","ts":"2021-09-09T19:42:06.144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6056477fb4721a4b received MsgVoteResp from 6056477fb4721a4b at term 2"}
{"level":"info","ts":"2021-09-09T19:42:06.144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6056477fb4721a4b became leader at term 2"}
{"level":"info","ts":"2021-09-09T19:42:06.144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6056477fb4721a4b elected leader 6056477fb4721a4b at term 2"}
{"level":"info","ts":"2021-09-09T19:42:06.144Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"6056477fb4721a4b","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.64.6:2379]}","request-path":"/0/members/6056477fb4721a4b/attributes","cluster-id":"7b9570fefd946251","publish-timeout":"7s"}
{"level":"info","ts":"2021-09-09T19:42:06.145Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2021-09-09T19:42:06.147Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2021-09-09T19:42:06.151Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.6:2379"}
{"level":"info","ts":"2021-09-09T19:42:06.151Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2021-09-09T19:42:06.151Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2021-09-09T19:42:06.151Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2021-09-09T19:42:06.151Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2021-09-09T19:42:06.161Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"7b9570fefd946251","local-member-id":"6056477fb4721a4b","cluster-version":"3.5"}
{"level":"info","ts":"2021-09-09T19:42:06.161Z","caller":"api/capability.go:75","msg":"enabled capabilities for
version","cluster-version":"3.5"}
{"level":"info","ts":"2021-09-09T19:42:06.162Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
*
* ==> kernel <==
*
19:48:39 up 7 min, 0 users, load average: 0.14, 0.27, 0.15
Linux minikube 4.19.202 #1 SMP Thu Sep 2 18:19:24 UTC 2021 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.4"
*
* ==> kube-apiserver [75c0941fe1b1] <==
*
W0909 19:42:07.138675 1 genericapiserver.go:455] Skipping API apps/v1beta2 because it has no resources.
W0909 19:42:07.138710 1 genericapiserver.go:455] Skipping API apps/v1beta1 because it has no resources.
W0909 19:42:07.140645 1 genericapiserver.go:455] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
I0909 19:42:07.144455 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0909 19:42:07.144490 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0909 19:42:07.188817 1 genericapiserver.go:455] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I0909 19:42:08.661290 1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0909 19:42:08.661350 1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0909 19:42:08.661535 1 dynamic_serving_content.go:129] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
I0909 19:42:08.661898 1 secure_serving.go:266] Serving securely on [::]:8443
I0909 19:42:08.661955 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0909 19:42:08.665970 1 controller.go:83] Starting OpenAPI AggregationController
I0909 19:42:08.666910 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0909 19:42:08.666917 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0909 19:42:08.666935 1 dynamic_serving_content.go:129] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I0909 19:42:08.669610 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0909 19:42:08.669746 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0909 19:42:08.670137 1 available_controller.go:491] Starting AvailableConditionController
I0909 19:42:08.670214 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0909 19:42:08.670377 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0909 19:42:08.670519 1 apf_controller.go:299] Starting API Priority and Fairness config controller
I0909 19:42:08.670838 1 autoregister_controller.go:141] Starting autoregister controller
I0909 19:42:08.670972 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0909 19:42:08.679999 1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0909 19:42:08.680046 1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
E0909 19:42:08.690444 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.64.6, ResourceVersion: 0, AdditionalErrorMsg:
I0909 19:42:08.723667 1 controller.go:85] Starting OpenAPI controller
I0909 19:42:08.723898 1 naming_controller.go:291] Starting NamingConditionController
I0909 19:42:08.723982 1 establishing_controller.go:76] Starting EstablishingController
I0909 19:42:08.724152 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0909 19:42:08.724269 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0909 19:42:08.724320 1 crd_finalizer.go:266] Starting CRDFinalizer
I0909 19:42:08.725776 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0909 19:42:08.725834 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0909 19:42:08.768317 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0909 19:42:08.771232 1 cache.go:39] Caches are synced for autoregister controller
I0909 19:42:08.771261 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0909 19:42:08.771394 1 apf_controller.go:304] Running API Priority and Fairness config worker
I0909 19:42:08.771267 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0909 19:42:08.790295 1 shared_informer.go:247] Caches are synced for node_authorizer
I0909 19:42:08.824975 1 controller.go:611] quota admission added evaluator for: namespaces
I0909 19:42:08.838205 1 shared_informer.go:247] Caches are synced for crd-autoregister
I0909 19:42:09.661606 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0909 19:42:09.661639 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0909 19:42:09.676137 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0909 19:42:09.681793 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0909 19:42:09.681836 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0909 19:42:10.079406 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0909 19:42:10.113800 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0909 19:42:10.209523 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.64.6]
I0909 19:42:10.210747 1 controller.go:611] quota admission added evaluator for: endpoints
I0909 19:42:10.214287 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0909 19:42:10.780982 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0909 19:42:11.715825 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0909 19:42:11.759814 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0909 19:42:12.124479 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
E0909 19:42:13.501948 1 fieldmanager.go:197] "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (apps/v1, Kind=Deployment) to smd typed: .spec.template.spec.containers[name=\"registry-creds\"].env: duplicate entries for key [name=\"awsregion\"]" VersionKind="/, Kind="
I0909 19:42:24.932667 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
E0909 19:42:25.432391 1 fieldmanager.go:197] "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (apps/v1, Kind=ReplicaSet) to smd typed: .spec.template.spec.containers[name=\"registry-creds\"].env: duplicate entries for key [name=\"awsregion\"]" VersionKind="/, Kind="
I0909 19:42:25.432665 1 controller.go:611] quota admission added evaluator for: replicasets.apps
*
* ==> kube-controller-manager [b31c6bda0563] <==
*
I0909 19:42:24.579352 1 pv_protection_controller.go:83] Starting PV protection controller
I0909 19:42:24.579562 1 shared_informer.go:240] Waiting for caches to sync for PV protection
I0909 19:42:24.586417 1 shared_informer.go:240] Waiting for caches to sync for resource quota
W0909 19:42:24.592817 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0909 19:42:24.625215 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0909 19:42:24.625488 1 shared_informer.go:247] Caches are synced for endpoint
I0909 19:42:24.628497 1 shared_informer.go:247] Caches are synced for GC
I0909 19:42:24.628561 1 shared_informer.go:247] Caches are synced for ephemeral
I0909 19:42:24.628605 1 shared_informer.go:247] Caches are synced for persistent volume
I0909 19:42:24.629336 1 shared_informer.go:247] Caches are synced for daemon sets
I0909 19:42:24.629859 1 shared_informer.go:247] Caches are synced for PVC protection
I0909 19:42:24.630924 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0909 19:42:24.633080 1 shared_informer.go:247] Caches are synced for attach detach
I0909 19:42:24.629869 1 shared_informer.go:247] Caches are synced for TTL
I0909 19:42:24.636165 1 shared_informer.go:247] Caches are synced for job
I0909 19:42:24.643625 1 shared_informer.go:247] Caches are synced for ReplicaSet
I0909 19:42:24.649032 1 shared_informer.go:247] Caches are synced for taint
I0909 19:42:24.649296 1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone:
W0909 19:42:24.649465 1 node_lifecycle_controller.go:1013] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0909 19:42:24.649657 1 node_lifecycle_controller.go:1214] Controller detected that zone is now in state Normal.
I0909 19:42:24.650020 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0909 19:42:24.650197 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0909 19:42:24.652590 1 shared_informer.go:247] Caches are synced for namespace
I0909 19:42:24.658177 1 shared_informer.go:247] Caches are synced for service account
I0909 19:42:24.678279 1 shared_informer.go:247] Caches are synced for TTL after finished
I0909 19:42:24.678603 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0909 19:42:24.678621 1 shared_informer.go:247] Caches are synced for deployment
I0909 19:42:24.680907 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0909 19:42:24.681054 1 shared_informer.go:247] Caches are synced for crt configmap
I0909 19:42:24.681278 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0909 19:42:24.681674 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0909 19:42:24.682204 1 shared_informer.go:247] Caches are synced for HPA
I0909 19:42:24.678628 1 shared_informer.go:247] Caches are synced for expand
I0909 19:42:24.683005 1 shared_informer.go:247] Caches are synced for PV protection
I0909 19:42:24.683932 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0909 19:42:24.684121 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0909 19:42:24.687541 1 shared_informer.go:247] Caches are synced for disruption
I0909 19:42:24.687765 1
disruption.go:371] Sending events to api server. I0909 19:42:24.687959 1 shared_informer.go:247] Caches are synced for cronjob I0909 19:42:24.688808 1 shared_informer.go:247] Caches are synced for node I0909 19:42:24.688825 1 range_allocator.go:172] Starting range CIDR allocator I0909 19:42:24.688829 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator I0909 19:42:24.688834 1 shared_informer.go:247] Caches are synced for cidrallocator I0909 19:42:24.693227 1 shared_informer.go:247] Caches are synced for stateful set I0909 19:42:24.695444 1 shared_informer.go:247] Caches are synced for bootstrap_signer I0909 19:42:24.701365 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I0909 19:42:24.701850 1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24] I0909 19:42:24.728792 1 shared_informer.go:247] Caches are synced for ReplicationController I0909 19:42:24.875808 1 shared_informer.go:247] Caches are synced for resource quota I0909 19:42:24.888739 1 shared_informer.go:247] Caches are synced for resource quota I0909 19:42:24.940724 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8f9hm" I0909 19:42:25.326109 1 shared_informer.go:247] Caches are synced for garbage collector I0909 19:42:25.377973 1 shared_informer.go:247] Caches are synced for garbage collector I0909 19:42:25.378143 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage I0909 19:42:25.438082 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 1" I0909 19:42:25.438130 1 event.go:291] "Event occurred" object="kube-system/registry-creds" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set registry-creds-85b974c7d7 to 1" I0909 19:42:25.444095 1 event.go:291] "Event occurred" object="ambassador/ambassador-operator" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set ambassador-operator-7589b768fc to 1" I0909 19:42:25.689667 1 event.go:291] "Event occurred" object="kube-system/registry-creds-85b974c7d7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: registry-creds-85b974c7d7-qzp5d" I0909 19:42:25.699211 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-ch559" I0909 19:42:25.699256 1 event.go:291] "Event occurred" object="ambassador/ambassador-operator-7589b768fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ambassador-operator-7589b768fc-v8d6n" * * ==> kube-proxy [25f775889acb] <== * I0909 19:42:27.997147 1 node.go:172] Successfully retrieved node IP: 192.168.64.6 I0909 19:42:27.997202 1 server_others.go:140] Detected node IP 192.168.64.6 W0909 19:42:27.997233 1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy W0909 19:42:28.018587 1 server_others.go:197] No iptables support for IPv6: exit status 3 I0909 19:42:28.018623 1 server_others.go:208] kube-proxy running in single-stack IPv4 mode I0909 19:42:28.018634 1 server_others.go:212] Using iptables Proxier. 
I0909 19:42:28.018954 1 server.go:649] Version: v1.22.1
I0909 19:42:28.019554 1 config.go:315] Starting service config controller
I0909 19:42:28.019642 1 shared_informer.go:240] Waiting for caches to sync for service config
I0909 19:42:28.019656 1 config.go:224] Starting endpoint slice config controller
I0909 19:42:28.019659 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
E0909 19:42:28.027046 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.16a33eced06f6a18", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc046b6c90127fce2, ext:70215936, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-minikube", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "minikube.16a33eced06f6a18" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
I0909 19:42:28.120506 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0909 19:42:28.120733 1 shared_informer.go:247] Caches are synced for service config

*
* ==> kube-scheduler [cced40a2180e] <==
*
I0909 19:42:06.529164 1 serving.go:347] Generated self-signed cert in-memory
W0909 19:42:08.738646 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0909 19:42:08.738721 1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0909 19:42:08.738736 1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
W0909 19:42:08.738867 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0909 19:42:08.761264 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I0909 19:42:08.761495 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0909 19:42:08.761696 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0909 19:42:08.761895 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E0909 19:42:08.763407 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0909 19:42:08.766072 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0909 19:42:08.766271 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0909 19:42:08.766496 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0909 19:42:08.766743 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0909 19:42:08.766923 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0909 19:42:08.766932 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0909 19:42:08.767234 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0909 19:42:08.767290 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0909 19:42:08.767349 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0909 19:42:08.767399 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0909 19:42:08.767738 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0909 19:42:08.767444 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0909 19:42:08.769013 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0909 19:42:08.769637 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0909 19:42:09.587179 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0909 19:42:09.686091 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0909 19:42:09.720821 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0909 19:42:09.721115 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0909 19:42:09.757125 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0909 19:42:09.806511 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0909 19:42:09.836599 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0909 19:42:09.842644 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0909 19:42:09.890488 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0909 19:42:09.924207 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I0909 19:42:11.662413 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0909 19:42:12.478624 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0909 19:42:12.483181 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0909 19:42:12.484189 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"

*
* ==> kubelet <==
*
-- Journal begins at Thu 2021-09-09 19:41:44 UTC, ends at Thu 2021-09-09 19:48:40 UTC. --
Sep 09 19:44:16 minikube kubelet[3768]: I0909 19:44:16.814979 3768 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ambassador/ambassador-operator-7589b768fc-v8d6n through plugin: invalid network status for"
Sep 09 19:44:28 minikube kubelet[3768]: E0909 19:44:28.721445 3768 kubelet.go:1720] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[gcr-creds], unattached volumes=[gcr-creds kube-api-access-mxnfx]: timed out waiting for the condition" pod="kube-system/registry-creds-85b974c7d7-qzp5d"
Sep 09 19:44:28 minikube kubelet[3768]: E0909 19:44:28.722181 3768 pod_workers.go:747] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[gcr-creds kube-api-access-mxnfx]: timed out waiting for the condition" pod="kube-system/registry-creds-85b974c7d7-qzp5d" podUID=13a5e19d-e807-444c-9313-0155c9a886f0
Sep 09 19:44:31 minikube kubelet[3768]: I0909 19:44:31.213639 3768 scope.go:110] "RemoveContainer" containerID="d8fdc3b62a29e1487a2e42bbc3bb1641668606995a6c40ba830d2cc93620d941"
Sep 09 19:44:31 minikube kubelet[3768]: E0909 19:44:31.214175 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:44:33 minikube kubelet[3768]: E0909 19:44:33.990564 3768 secret.go:195] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
Sep 09 19:44:33 minikube kubelet[3768]: E0909 19:44:33.990668 3768 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/13a5e19d-e807-444c-9313-0155c9a886f0-gcr-creds podName:13a5e19d-e807-444c-9313-0155c9a886f0 nodeName:}" failed. No retries permitted until 2021-09-09 19:46:35.990653039 +0000 UTC m=+263.234550242 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/13a5e19d-e807-444c-9313-0155c9a886f0-gcr-creds") pod "registry-creds-85b974c7d7-qzp5d" (UID: "13a5e19d-e807-444c-9313-0155c9a886f0") : secret "registry-creds-gcr" not found
Sep 09 19:44:46 minikube kubelet[3768]: I0909 19:44:46.213086 3768 scope.go:110] "RemoveContainer" containerID="d8fdc3b62a29e1487a2e42bbc3bb1641668606995a6c40ba830d2cc93620d941"
Sep 09 19:44:46 minikube kubelet[3768]: E0909 19:44:46.213951 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:44:57 minikube kubelet[3768]: I0909 19:44:57.213388 3768 scope.go:110] "RemoveContainer" containerID="d8fdc3b62a29e1487a2e42bbc3bb1641668606995a6c40ba830d2cc93620d941"
Sep 09 19:44:57 minikube kubelet[3768]: E0909 19:44:57.213931 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:45:12 minikube kubelet[3768]: I0909 19:45:12.212209 3768 scope.go:110] "RemoveContainer" containerID="d8fdc3b62a29e1487a2e42bbc3bb1641668606995a6c40ba830d2cc93620d941"
Sep 09 19:45:12 minikube kubelet[3768]: E0909 19:45:12.212615 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:45:26 minikube kubelet[3768]: I0909 19:45:26.212772 3768 scope.go:110] "RemoveContainer" containerID="d8fdc3b62a29e1487a2e42bbc3bb1641668606995a6c40ba830d2cc93620d941"
Sep 09 19:45:26 minikube kubelet[3768]: E0909 19:45:26.213440 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:45:37 minikube kubelet[3768]: I0909 19:45:37.213862 3768 scope.go:110] "RemoveContainer" containerID="d8fdc3b62a29e1487a2e42bbc3bb1641668606995a6c40ba830d2cc93620d941"
Sep 09 19:45:37 minikube kubelet[3768]: I0909 19:45:37.351836 3768 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ambassador/ambassador-operator-7589b768fc-v8d6n through plugin: invalid network status for"
Sep 09 19:45:38 minikube kubelet[3768]: I0909 19:45:38.398609 3768 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ambassador/ambassador-operator-7589b768fc-v8d6n through plugin: invalid network status for"
Sep 09 19:45:39 minikube kubelet[3768]: I0909 19:45:39.414782 3768 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ambassador/ambassador-operator-7589b768fc-v8d6n through plugin: invalid network status for"
Sep 09 19:45:39 minikube kubelet[3768]: I0909 19:45:39.421173 3768 scope.go:110] "RemoveContainer" containerID="d8fdc3b62a29e1487a2e42bbc3bb1641668606995a6c40ba830d2cc93620d941"
Sep 09 19:45:39 minikube kubelet[3768]: I0909 19:45:39.421522 3768 scope.go:110] "RemoveContainer" containerID="e51bc6725615298f31805a9753687bcc86dba71252a158de965150e7cdaa142e"
Sep 09 19:45:39 minikube kubelet[3768]: E0909 19:45:39.425826 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:45:40 minikube kubelet[3768]: I0909 19:45:40.429498 3768 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ambassador/ambassador-operator-7589b768fc-v8d6n through plugin: invalid network status for"
Sep 09 19:45:54 minikube kubelet[3768]: I0909 19:45:54.213768 3768 scope.go:110] "RemoveContainer" containerID="e51bc6725615298f31805a9753687bcc86dba71252a158de965150e7cdaa142e"
Sep 09 19:45:54 minikube kubelet[3768]: E0909 19:45:54.214473 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:46:06 minikube kubelet[3768]: I0909 19:46:06.213005 3768 scope.go:110] "RemoveContainer" containerID="e51bc6725615298f31805a9753687bcc86dba71252a158de965150e7cdaa142e"
Sep 09 19:46:06 minikube kubelet[3768]: E0909 19:46:06.213998 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:46:21 minikube kubelet[3768]: I0909 19:46:21.214167 3768 scope.go:110] "RemoveContainer" containerID="e51bc6725615298f31805a9753687bcc86dba71252a158de965150e7cdaa142e"
Sep 09 19:46:21 minikube kubelet[3768]: E0909 19:46:21.215218 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:46:33 minikube kubelet[3768]: I0909 19:46:33.214111 3768 scope.go:110] "RemoveContainer" containerID="e51bc6725615298f31805a9753687bcc86dba71252a158de965150e7cdaa142e"
Sep 09 19:46:33 minikube kubelet[3768]: E0909 19:46:33.215538 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:46:36 minikube kubelet[3768]: E0909 19:46:36.036986 3768 secret.go:195] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
Sep 09 19:46:36 minikube kubelet[3768]: E0909 19:46:36.038445 3768 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/13a5e19d-e807-444c-9313-0155c9a886f0-gcr-creds podName:13a5e19d-e807-444c-9313-0155c9a886f0 nodeName:}" failed. No retries permitted until 2021-09-09 19:48:38.038359653 +0000 UTC m=+385.282256877 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/13a5e19d-e807-444c-9313-0155c9a886f0-gcr-creds") pod "registry-creds-85b974c7d7-qzp5d" (UID: "13a5e19d-e807-444c-9313-0155c9a886f0") : secret "registry-creds-gcr" not found
Sep 09 19:46:44 minikube kubelet[3768]: E0909 19:46:44.213324 3768 kubelet.go:1720] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[gcr-creds], unattached volumes=[gcr-creds kube-api-access-mxnfx]: timed out waiting for the condition" pod="kube-system/registry-creds-85b974c7d7-qzp5d"
Sep 09 19:46:44 minikube kubelet[3768]: E0909 19:46:44.213352 3768 pod_workers.go:747] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[gcr-creds kube-api-access-mxnfx]: timed out waiting for the condition" pod="kube-system/registry-creds-85b974c7d7-qzp5d" podUID=13a5e19d-e807-444c-9313-0155c9a886f0
Sep 09 19:46:46 minikube kubelet[3768]: I0909 19:46:46.213178 3768 scope.go:110] "RemoveContainer" containerID="e51bc6725615298f31805a9753687bcc86dba71252a158de965150e7cdaa142e"
Sep 09 19:46:46 minikube kubelet[3768]: E0909 19:46:46.214230 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:46:58 minikube kubelet[3768]: I0909 19:46:58.213285 3768 scope.go:110] "RemoveContainer" containerID="e51bc6725615298f31805a9753687bcc86dba71252a158de965150e7cdaa142e"
Sep 09 19:46:58 minikube kubelet[3768]: E0909 19:46:58.213876 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:47:12 minikube kubelet[3768]: I0909 19:47:12.213310 3768 scope.go:110] "RemoveContainer" containerID="e51bc6725615298f31805a9753687bcc86dba71252a158de965150e7cdaa142e"
Sep 09 19:47:12 minikube kubelet[3768]: E0909 19:47:12.213786 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:47:23 minikube kubelet[3768]: I0909 19:47:23.213466 3768 scope.go:110] "RemoveContainer" containerID="e51bc6725615298f31805a9753687bcc86dba71252a158de965150e7cdaa142e"
Sep 09 19:47:23 minikube kubelet[3768]: E0909 19:47:23.213896 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:47:38 minikube kubelet[3768]: I0909 19:47:38.213185 3768 scope.go:110] "RemoveContainer" containerID="e51bc6725615298f31805a9753687bcc86dba71252a158de965150e7cdaa142e"
Sep 09 19:47:38 minikube kubelet[3768]: E0909 19:47:38.214748 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:47:53 minikube kubelet[3768]: I0909 19:47:53.213333 3768 scope.go:110] "RemoveContainer" containerID="e51bc6725615298f31805a9753687bcc86dba71252a158de965150e7cdaa142e"
Sep 09 19:47:53 minikube kubelet[3768]: E0909 19:47:53.213678 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:48:07 minikube kubelet[3768]: I0909 19:48:07.212438 3768 scope.go:110] "RemoveContainer" containerID="e51bc6725615298f31805a9753687bcc86dba71252a158de965150e7cdaa142e"
Sep 09 19:48:07 minikube kubelet[3768]: E0909 19:48:07.213970 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:48:21 minikube kubelet[3768]: I0909 19:48:21.212579 3768 scope.go:110] "RemoveContainer" containerID="e51bc6725615298f31805a9753687bcc86dba71252a158de965150e7cdaa142e"
Sep 09 19:48:21 minikube kubelet[3768]: I0909 19:48:21.504823 3768 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ambassador/ambassador-operator-7589b768fc-v8d6n through plugin: invalid network status for"
Sep 09 19:48:23 minikube kubelet[3768]: I0909 19:48:23.525293 3768 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ambassador/ambassador-operator-7589b768fc-v8d6n through plugin: invalid network status for"
Sep 09 19:48:23 minikube kubelet[3768]: I0909 19:48:23.531259 3768 scope.go:110] "RemoveContainer" containerID="e51bc6725615298f31805a9753687bcc86dba71252a158de965150e7cdaa142e"
Sep 09 19:48:23 minikube kubelet[3768]: I0909 19:48:23.531866 3768 scope.go:110] "RemoveContainer" containerID="316fbe24f824d67cc36f0d248b1d92e9fa44f369262486c65014a497ba14ae5a"
Sep 09 19:48:23 minikube kubelet[3768]: E0909 19:48:23.532573 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73
Sep 09 19:48:24 minikube
kubelet[3768]: I0909 19:48:24.538175 3768 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ambassador/ambassador-operator-7589b768fc-v8d6n through plugin: invalid network status for" Sep 09 19:48:36 minikube kubelet[3768]: I0909 19:48:36.213483 3768 scope.go:110] "RemoveContainer" containerID="316fbe24f824d67cc36f0d248b1d92e9fa44f369262486c65014a497ba14ae5a" Sep 09 19:48:36 minikube kubelet[3768]: E0909 19:48:36.215684 3768 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ambassador-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ambassador-operator pod=ambassador-operator-7589b768fc-v8d6n_ambassador(32c0d645-cf07-4349-9fc5-705144feaa73)\"" pod="ambassador/ambassador-operator-7589b768fc-v8d6n" podUID=32c0d645-cf07-4349-9fc5-705144feaa73 Sep 09 19:48:38 minikube kubelet[3768]: E0909 19:48:38.061189 3768 secret.go:195] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found Sep 09 19:48:38 minikube kubelet[3768]: E0909 19:48:38.062006 3768 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/13a5e19d-e807-444c-9313-0155c9a886f0-gcr-creds podName:13a5e19d-e807-444c-9313-0155c9a886f0 nodeName:}" failed. No retries permitted until 2021-09-09 19:50:40.061976252 +0000 UTC m=+507.305873512 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/13a5e19d-e807-444c-9313-0155c9a886f0-gcr-creds") pod "registry-creds-85b974c7d7-qzp5d" (UID: "13a5e19d-e807-444c-9313-0155c9a886f0") : secret "registry-creds-gcr" not found * * ==> storage-provisioner [14742825193c] <== * I0909 19:42:29.425583 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... I0909 19:42:29.435218 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service! 
I0909 19:42:29.435409 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I0909 19:42:29.444830 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0909 19:42:29.445240 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bb7214ef-4294-4ebe-8124-ce3e51268074", APIVersion:"v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_e3e7715f-8246-47a2-9c24-3d00e6749f47 became leader I0909 19:42:29.445398 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_e3e7715f-8246-47a2-9c24-3d00e6749f47! I0909 19:42:29.546491 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_e3e7715f-8246-47a2-9c24-3d00e6749f47! ```
spowelljr commented 3 years ago

Hi @phumpal, thanks for reporting your issue with minikube!

I was able to reproduce your error; this regression was introduced by https://github.com/kubernetes/minikube/pull/12230.

This is something we will have to look into and fix. Thank you very much for bringing it to our attention!

phumpal commented 3 years ago

@spowelljr :+1: thanks!

FWIW, for those looking for a potential workaround while this is being patched:

```shell
cd /usr/local/Homebrew/Library/Taps/homebrew/homebrew-core/Formula
git reset e9f6dbbd7f9ee794feb56ba2a3ce73ffc446ab1b minikube.rb
git checkout -f minikube.rb
HOMEBREW_NO_AUTO_UPDATE=1 brew install minikube
brew link minikube
brew pin minikube # keep it at 1.22
minikube config set kubernetes-version v1.21.2
minikube delete
minikube start
```
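If you want to confirm the rollback took effect before re-enabling the addon, a quick sanity check (the expected versions below are assumptions based on the workaround above, not guaranteed output):

```shell
# Confirm Homebrew is holding the older formula and minikube reports it.
brew list --versions minikube            # expect a 1.22.x version
brew pin --help >/dev/null && brew ls --pinned | grep minikube
minikube version                         # expect v1.22.x
minikube config get kubernetes-version   # expect v1.21.2
```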

P.S. I'm not sure who maintains the Homebrew formula, but installing a specific version doesn't appear to be an option (e.g. `brew install minikube@1.22.0`). That's probably a GH issue for a different org/repo ;)

spowelljr commented 3 years ago

Hi @phumpal, I've done some digging and found the root of the problem. In the 1.23.0 release of minikube we updated the default Kubernetes version to 1.22, and Kubernetes 1.22 removed the deprecated apiextensions.k8s.io/v1beta1 API. Unfortunately, the Ambassador source still uses apiextensions.k8s.io/v1beta1.
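For context on the validation errors in the original report: in apiextensions.k8s.io/v1, the top-level `version`, `validation`, `subresources`, and `additionalPrinterColumns` fields were folded into per-version entries under `spec.versions`, which is why kubectl rejects them as unknown fields. A minimal sketch of the shape v1 expects (a hypothetical `Example` CRD for illustration, not the actual Ambassador manifest):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.getambassador.io   # hypothetical name for illustration
spec:
  group: getambassador.io
  names:
    kind: Example
    plural: examples
  scope: Namespaced
  versions:                    # v1: one entry per served version
    - name: v2
      served: true
      storage: true
      schema:                  # replaces the top-level v1beta1 "validation" field
        openAPIV3Schema:
          type: object
      subresources:            # moved under the version entry
        status: {}
      additionalPrinterColumns:   # moved under the version entry
        - name: Age
          type: date
          jsonPath: .metadata.creationTimestamp
```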

I've opened an issue with Ambassador about it: https://github.com/datawire/ambassador-operator/issues/73

We're unfortunately stuck until that issue is resolved.
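In the meantime, a possible user-side workaround (untested here, and resting on the assumption that the addon manifests still apply cleanly against a 1.21 API server, as phumpal's rollback suggests) is to keep the current minikube but run the cluster on Kubernetes 1.21, where apiextensions.k8s.io/v1beta1 is still served:

```shell
# Assumption: the ambassador addon manifests apply on a 1.21 API server.
minikube delete
minikube start --kubernetes-version=v1.21.2
minikube addons enable ambassador
```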

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale