From the log it looks like there is an internet or server issue. Delete the directory ~/.minikube and try again.
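For reference, a minimal sketch of that cleanup and retry (assuming the default ~/.minikube location and the same kvm2 driver):

```shell
# Remove the existing cluster and its cached state, then start fresh.
minikube delete
rm -rf ~/.minikube

# Retry with the same driver; --v=5 keeps verbose output for debugging.
minikube start --vm-driver kvm2 --v=5
```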
I deleted the directory ~/.minikube and tried again.

The exact command to reproduce the issue:

minikube start --vm-driver kvm2 --v=5
The full output of the command that failed:
docker-machine-driver-kvm2.sha256: 65 B / 65 B [-------] 100.00% ? p/s 0s
docker-machine-driver-kvm2: 32.00 MiB / 32.00 MiB 100.00% 12.09 MiB p/s
Downloading VM boot image ...
minikube-v1.4.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
minikube-v1.4.0.iso: 135.73 MiB / 135.73 MiB [] 100.00% 11.22 MiB p/s 13s
Creating kvm2 VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
Downloading kubeadm v1.16.0
Downloading kubelet v1.16.0
Pulling images ...
Unable to pull images, which may be OK: running cmd: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
stdout:
stderr: failed to pull image "k8s.gcr.io/kube-apiserver:v1.16.0": output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: loo eout
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher
: Process exited with status 1
Launching Kubernetes ...
Error starting cluster: cmd failed: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm. minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable- s-etcd.yaml,Port-10250,Swap
[init] Using Kubernetes version: v1.16.0
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.39.45 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.39.45 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can t
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 75.505942 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase.
Please see --upload-certs
[mark-control-plane] Marking the node minikube as control-plane by adding the label "node-role.kubernetes.io/master=''"
[bootstrap-token] Using token: q78ib7.f2y9dub9z8zlklv3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
error execution phase addon/coredns: unable to create serviceaccount: etcdserver: request timed out
To see the stack trace of this error execute with --v=5 or higher
: Process exited with status 1
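One notable detail in the output above is that the image pull from inside the VM failed with a DNS lookup timeout before the coredns phase timed out against etcd. A rough way to narrow down the network side is to check resolution and the registry from the guest itself; a sketch, assuming the kvm2 VM is still running and that nslookup is available in the guest image:

```shell
# On the host: open a shell inside the minikube VM.
minikube ssh

# Inside the VM: check that the registry name resolves, then retry the exact
# pull that kubeadm attempted during preflight.
nslookup k8s.gcr.io
sudo docker pull k8s.gcr.io/kube-apiserver:v1.16.0
```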
The output of the minikube logs command:
==> container status <==
CONTAINER       IMAGE           CREATED         STATE    NAME                     ATTEMPT  POD ID
5a56798553d33   4689081edb103   35 seconds ago  Exited   storage-provisioner      5        d71b4d9e4ff07
fffc136d538e4   301ddc62b80b1   5 minutes ago   Running  kube-scheduler           1        2007ca2796849
66127a4938d27   06a629a7e51cd   5 minutes ago   Running  kube-controller-manager  2        9d86d22a06f66
2b82118d18264   06a629a7e51cd   7 minutes ago   Exited   kube-controller-manager  1        9d86d22a06f66
a9ed9b3361ba3   b2756210eeabf   8 minutes ago   Running  etcd                     0        3308e69d40dee
b40a0ef26b566   301ddc62b80b1   8 minutes ago   Exited   kube-scheduler           0        2007ca2796849
895b491d1c3c7   b305571ca60a5   8 minutes ago   Running  kube-apiserver           0        3a5dfcfe911c2
1dc64293486a6   bd12a212f9dcb   8 minutes ago   Running  kube-addon-manager       0        d9d5b465d44d6
==> dmesg <== [Oct14 19:09] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. [ +0.103397] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11 [ +19.979304] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10 [ +0.022410] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11 [ +0.022759] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10 [ +0.122865] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 [ +1.267517] systemd-fstab-generator[1088]: Ignoring "noauto" for root device [ +0.010210] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling. [ +0.000005] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.) [ +0.660525] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack. [ +3.538365] vboxguest: loading out-of-tree module taints kernel. [ +0.004185] vboxguest: PCI device not found, probably running on physical hardware. [ +1.802692] systemd-fstab-generator[1935]: Ignoring "noauto" for root device [Oct14 19:11] NFSD: Unable to end grace period: -110 [Oct14 19:16] systemd-fstab-generator[2588]: Ignoring "noauto" for root device [Oct14 19:17] systemd-fstab-generator[3637]: Ignoring "noauto" for root device [Oct14 19:18] kauditd_printk_skb: 68 callbacks suppressed
==> kernel <== 19:26:56 up 17 min, 0 users, load average: 0.74, 1.27, 1.10 Linux minikube 4.15.0 #1 SMP Wed Sep 18 07:44:58 PDT 2019 x86_64 GNU/Linux PRETTY_NAME="Buildroot 2018.05.3"
==> kube-addon-manager [1dc64293486a] <== INFO: == Kubernetes addon ensure completed at 2019-10-14T19:26:13+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label == serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-10-14T19:26:15+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-10-14T19:26:19+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label == serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-10-14T19:26:24+00:00 == error: no objects passed to apply INFO: Leader election disabled. error: no objects passed to apply INFO: == Kubernetes addon ensure completed at 2019-10-14T19:26:24+00:00 == error: no objects passed to apply INFO: == Reconciling with deprecated label == error: no objects passed to apply INFO: == Reconciling with addon-manager label == error: no objects passed to apply error: no objects passed to apply error: no objects passed to apply serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-10-14T19:26:26+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-10-14T19:26:30+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label == serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-10-14T19:26:31+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-10-14T19:26:35+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label == serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-10-14T19:26:37+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-10-14T19:26:39+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label == serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-10-14T19:26:41+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-10-14T19:26:45+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label == serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-10-14T19:26:47+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-10-14T19:26:50+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label == serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-10-14T19:26:51+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-10-14T19:26:55+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label ==
==> kube-apiserver [895b491d1c3c] <== Trace[504477144]: [5.733798352s] [5.733696245s] About to write a response I1014 19:24:03.190482 1 trace.go:116] Trace[853053590]: "List etcd3" key:/jobs,resourceVersion:,limit:500,continue: (started: 2019-10-14 19:23:57.883175042 +0000 UTC m=+313.431015358) (total time: 5.307283142s): Trace[853053590]: [5.307283142s] [5.307283142s] END I1014 19:24:03.190746 1 trace.go:116] Trace[1299838644]: "List" url:/apis/batch/v1/jobs (started: 2019-10-14 19:23:57.882841766 +0000 UTC m=+313.430682076) (total time: 5.307882476s): Trace[1299838644]: [5.307811449s] [5.307493442s] Listing from storage done I1014 19:24:03.216203 1 trace.go:116] Trace[2041465023]: "GuaranteedUpdate etcd3" type:core.Event (started: 2019-10-14 19:24:02.313218573 +0000 UTC m=+317.861058964) (total time: 902.951234ms): Trace[2041465023]: [876.424902ms] [876.424902ms] initial value restored I1014 19:24:03.216326 1 trace.go:116] Trace[2055079722]: "Patch" url:/api/v1/namespaces/kube-system/events/etcd-minikube.15cd99acc3205462 (started: 2019-10-14 19:24:02.31283834 +0000 UTC m=+317.860678660) (total time: 903.467702ms): Trace[2055079722]: [876.806965ms] [876.637054ms] About to apply patch I1014 19:24:31.199910 1 trace.go:116] Trace[760048711]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler (started: 2019-10-14 19:24:25.343157127 +0000 UTC m=+340.890997512) (total time: 5.856723053s): Trace[760048711]: [5.856642985s] [5.85645047s] About to write a response I1014 19:24:31.200543 1 trace.go:116] Trace[210278051]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager (started: 2019-10-14 19:24:25.486107417 +0000 UTC m=+341.033947726) (total time: 5.714416371s): Trace[210278051]: [5.71438987s] [5.714238226s] About to write a response I1014 19:24:31.203500 1 trace.go:116] Trace[2096366644]: "List etcd3" key:/minions,resourceVersion:,limit:0,continue: (started: 2019-10-14 19:24:27.716501397 +0000 UTC m=+343.264341650) (total time: 3.4869583s): Trace[2096366644]: [3.4869583s] [3.4869583s] END I1014 19:24:31.203947 1 trace.go:116] Trace[70163926]: "Get" url:/api/v1/namespaces/default (started: 2019-10-14 19:24:27.456724347 +0000 UTC m=+343.004564658) (total time: 3.747190282s): Trace[70163926]: [3.74713081s] [3.747027495s] About to write a response I1014 19:24:31.204167 1 trace.go:116] Trace[1085853979]: "List" url:/api/v1/nodes (started: 2019-10-14 19:24:27.716426216 +0000 UTC m=+343.264266473) (total time: 3.48772645s): Trace[1085853979]: [3.487523189s] [3.487457578s] Listing from storage done I1014 19:24:39.515455 1 trace.go:116] Trace[594849275]: "Create" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings (started: 2019-10-14 19:24:38.811945459 +0000 UTC m=+354.359785668) (total time: 703.463297ms): Trace[594849275]: [703.463297ms] [703.26373ms] END E1014 19:25:16.970928 1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"} I1014 19:25:16.971882 1 trace.go:116] Trace[276970036]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager (started: 2019-10-14 19:25:09.966276703 +0000 UTC m=+385.514117076) (total time: 7.005541147s): Trace[276970036]: [7.005541147s] [7.005295141s] END I1014 19:25:17.324296 1 trace.go:116] Trace[530164247]: "GuaranteedUpdate etcd3" type:coordination.Lease (started: 2019-10-14 19:25:13.233977527 +0000 UTC m=+388.781817823) (total time: 4.090205233s): Trace[530164247]: [4.090159868s] [4.089681304s] Transaction committed 
I1014 19:25:17.324482 1 trace.go:116] Trace[1375421231]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube (started: 2019-10-14 19:25:13.233662064 +0000 UTC m=+388.781502357) (total time: 4.090771854s): Trace[1375421231]: [4.090672584s] [4.090435794s] Object stored in database I1014 19:25:17.330881 1 trace.go:116] Trace[537467480]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler (started: 2019-10-14 19:25:11.696338194 +0000 UTC m=+387.244178490) (total time: 5.634469132s): Trace[537467480]: [5.634353924s] [5.634205552s] About to write a response I1014 19:25:17.333810 1 trace.go:116] Trace[1687286059]: "List etcd3" key:/minions,resourceVersion:,limit:0,continue: (started: 2019-10-14 19:25:11.215934872 +0000 UTC m=+386.763775180) (total time: 6.117785357s): Trace[1687286059]: [6.117785357s] [6.117785357s] END I1014 19:25:17.334325 1 trace.go:116] Trace[983203750]: "List" url:/api/v1/nodes (started: 2019-10-14 19:25:11.21582215 +0000 UTC m=+386.763662446) (total time: 6.118452846s): Trace[983203750]: [6.118193321s] [6.118093768s] Listing from storage done I1014 19:25:17.339291 1 trace.go:116] Trace[101433671]: "List etcd3" key:/jobs,resourceVersion:,limit:500,continue: (started: 2019-10-14 19:25:13.386752084 +0000 UTC m=+388.934592395) (total time: 3.95248274s): Trace[101433671]: [3.95248274s] [3.95248274s] END I1014 19:25:17.339577 1 trace.go:116] Trace[1824976761]: "List" url:/apis/batch/v1/jobs (started: 2019-10-14 19:25:13.38646578 +0000 UTC m=+388.934306090) (total time: 3.95306623s): Trace[1824976761]: [3.952853978s] [3.952582603s] Listing from storage done I1014 19:25:17.339928 1 trace.go:116] Trace[1399053890]: "Get" url:/api/v1/namespaces/kube-system/pods/storage-provisioner (started: 2019-10-14 19:25:13.075414592 +0000 UTC m=+388.623254881) (total time: 4.26449203s): Trace[1399053890]: [4.264417507s] [4.264323739s] About to write a response I1014 19:25:17.381150 1 trace.go:116] Trace[1478917580]: "GuaranteedUpdate etcd3" type:core.Event (started: 2019-10-14 19:25:12.313869912 +0000 UTC m=+387.861710333) (total time: 5.067219715s): Trace[1478917580]: [5.029569965s] [5.029569965s] initial value restored I1014 19:25:17.381844 1 trace.go:116] Trace[2111065475]: "Patch" url:/api/v1/namespaces/kube-system/events/etcd-minikube.15cd99acc3205462 (started: 2019-10-14 19:25:12.313275094 +0000 UTC m=+387.861115386) (total time: 5.068526204s): Trace[2111065475]: [5.030166275s] [5.029825601s] About to apply patch I1014 19:26:22.568208 1 trace.go:116] Trace[729090554]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager (started: 2019-10-14 19:26:21.291075891 +0000 UTC m=+456.838916358) (total time: 1.277076741s): Trace[729090554]: [1.27688658s] [1.276691683s] About to write a response I1014 19:26:22.578819 1 trace.go:116] Trace[498151572]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler (started: 2019-10-14 19:26:21.935404257 +0000 UTC m=+457.483244616) (total time: 643.381565ms): Trace[498151572]: [643.335635ms] [643.110472ms] About to write a response I1014 19:26:23.347216 1 trace.go:116] Trace[300968029]: "GuaranteedUpdate etcd3" type:core.Endpoints (started: 2019-10-14 19:26:22.580320054 +0000 UTC m=+458.128160279) (total time: 766.862876ms): Trace[300968029]: [766.843391ms] [766.399004ms] Transaction committed I1014 19:26:23.347479 1 trace.go:116] Trace[1275983935]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler (started: 2019-10-14 19:26:22.580185779 +0000 UTC 
m=+458.128026019) (total time: 767.273412ms): Trace[1275983935]: [767.175878ms] [767.084355ms] Object stored in database I1014 19:26:23.349600 1 trace.go:116] Trace[1984833583]: "Get" url:/api/v1/namespaces/kube-system/pods/storage-provisioner (started: 2019-10-14 19:26:22.579605865 +0000 UTC m=+458.127446091) (total time: 769.965275ms): Trace[1984833583]: [769.810443ms] [769.769101ms] About to write a response I1014 19:26:23.351259 1 trace.go:116] Trace[2111354685]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2019-10-14 19:26:22.312427614 +0000 UTC m=+457.860267963) (total time: 1.038797173s): Trace[2111354685]: [265.116738ms] [265.116738ms] initial value restored Trace[2111354685]: [1.03276175s] [767.645012ms] Transaction prepared I1014 19:26:23.351502 1 trace.go:116] Trace[2089399199]: "Patch" url:/api/v1/namespaces/kube-system/events/etcd-minikube.15cd99acc3205462 (started: 2019-10-14 19:26:22.312175077 +0000 UTC m=+457.860015412) (total time: 1.0393015s): Trace[2089399199]: [265.372335ms] [265.234714ms] About to apply patch Trace[2089399199]: [1.039220576s] [773.512169ms] Object stored in database
==> kube-controller-manager [2b82118d1826] <== I1014 19:19:49.778062 1 graph_builder.go:282] GraphBuilder running I1014 19:19:49.778970 1 controllermanager.go:534] Started "garbagecollector" I1014 19:19:50.519956 1 controllermanager.go:534] Started "ttl" I1014 19:19:50.520022 1 ttl_controller.go:116] Starting TTL controller I1014 19:19:50.520250 1 shared_informer.go:197] Waiting for caches to sync for TTL E1014 19:19:50.718865 1 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail W1014 19:19:50.718898 1 controllermanager.go:526] Skipping "service" I1014 19:19:50.777202 1 controllermanager.go:534] Started "persistentvolume-expander" I1014 19:19:50.778568 1 expand_controller.go:300] Starting expand controller I1014 19:19:50.778762 1 shared_informer.go:197] Waiting for caches to sync for expand I1014 19:19:51.591939 1 controllermanager.go:534] Started "deployment" I1014 19:19:51.592231 1 deployment_controller.go:152] Starting deployment controller I1014 19:19:51.592402 1 shared_informer.go:197] Waiting for caches to sync for deployment I1014 19:19:51.646178 1 node_lifecycle_controller.go:77] Sending events to api server E1014 19:19:51.646219 1 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided W1014 19:19:51.646390 1 controllermanager.go:526] Skipping "cloud-node-lifecycle" I1014 19:19:51.799253 1 controllermanager.go:534] Started "pv-protection" I1014 19:19:51.799274 1 pv_protection_controller.go:81] Starting PV protection controller I1014 19:19:51.799541 1 shared_informer.go:197] Waiting for caches to sync for PV protection I1014 19:19:51.799916 1 shared_informer.go:197] Waiting for caches to sync for resource quota I1014 19:19:51.811867 1 shared_informer.go:197] Waiting for caches to sync for garbage collector W1014 19:19:51.823273 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I1014 19:19:51.845635 1 shared_informer.go:204] Caches are synced for daemon sets I1014 19:19:51.846555 1 shared_informer.go:204] Caches are synced for bootstrap_signer I1014 19:19:51.855910 1 shared_informer.go:204] Caches are synced for endpoint I1014 19:19:51.858777 1 shared_informer.go:204] Caches are synced for service account I1014 19:19:51.860954 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator I1014 19:19:51.879104 1 shared_informer.go:204] Caches are synced for GC I1014 19:19:51.882406 1 shared_informer.go:204] Caches are synced for HPA I1014 19:19:51.891654 1 shared_informer.go:204] Caches are synced for namespace I1014 19:19:51.899808 1 shared_informer.go:204] Caches are synced for ReplicationController I1014 19:19:51.899990 1 shared_informer.go:204] Caches are synced for PV protection I1014 19:19:51.899801 1 shared_informer.go:204] Caches are synced for certificate I1014 19:19:51.908312 1 shared_informer.go:204] Caches are synced for certificate I1014 19:19:51.920700 1 shared_informer.go:204] Caches are synced for TTL I1014 19:19:52.012415 1 shared_informer.go:204] Caches are synced for attach detach I1014 19:19:52.026549 1 shared_informer.go:204] Caches are synced for PVC protection I1014 19:19:52.069878 1 shared_informer.go:204] Caches are synced for persistent volume I1014 19:19:52.078999 1 shared_informer.go:204] Caches are synced for expand I1014 19:19:52.091151 1 shared_informer.go:204] Caches are synced for stateful set I1014 
19:19:52.132305 1 shared_informer.go:204] Caches are synced for taint I1014 19:19:52.132591 1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone: W1014 19:19:52.132714 1 node_lifecycle_controller.go:903] Missing timestamp for Node minikube. Assuming now as a timestamp. I1014 19:19:52.133185 1 node_lifecycle_controller.go:1108] Controller detected that zone is now in state Normal. I1014 19:19:52.132972 1 taint_manager.go:186] Starting NoExecuteTaintManager I1014 19:19:52.132999 1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"1b87bdb0-0a36-49e0-bed1-a44b1d9253b6", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller I1014 19:19:52.217071 1 shared_informer.go:204] Caches are synced for job I1014 19:19:52.378448 1 shared_informer.go:204] Caches are synced for garbage collector I1014 19:19:52.378765 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I1014 19:19:52.391347 1 shared_informer.go:204] Caches are synced for ReplicaSet I1014 19:19:52.393276 1 shared_informer.go:204] Caches are synced for deployment I1014 19:19:52.400344 1 shared_informer.go:204] Caches are synced for resource quota I1014 19:19:52.407404 1 shared_informer.go:204] Caches are synced for disruption I1014 19:19:52.407503 1 disruption.go:341] Sending events to api server. I1014 19:19:52.413474 1 shared_informer.go:204] Caches are synced for garbage collector I1014 19:19:52.431306 1 shared_informer.go:204] Caches are synced for resource quota E1014 19:19:52.825720 1 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again E1014 19:19:53.004226 1 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again I1014 19:20:46.492656 1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: failed to tryAcquireOrRenew context deadline exceeded F1014 19:20:46.493300 1 controllermanager.go:279] leaderelection lost
==> kube-controller-manager [66127a4938d2] <== I1014 19:21:21.656276 1 controllermanager.go:534] Started "persistentvolume-expander" I1014 19:21:21.656600 1 expand_controller.go:300] Starting expand controller I1014 19:21:21.657906 1 shared_informer.go:197] Waiting for caches to sync for expand I1014 19:21:21.809918 1 controllermanager.go:534] Started "replicationcontroller" I1014 19:21:21.811213 1 replica_set.go:182] Starting replicationcontroller controller I1014 19:21:21.812595 1 shared_informer.go:197] Waiting for caches to sync for ReplicationController I1014 19:21:21.954688 1 controllermanager.go:534] Started "podgc" I1014 19:21:21.955196 1 gc_controller.go:75] Starting GC controller I1014 19:21:21.955312 1 shared_informer.go:197] Waiting for caches to sync for GC I1014 19:21:22.103363 1 node_lifecycle_controller.go:294] Sending events to api server. I1014 19:21:22.104934 1 node_lifecycle_controller.go:327] Controller is using taint based evictions. I1014 19:21:22.106300 1 taint_manager.go:162] Sending events to api server. I1014 19:21:22.106946 1 node_lifecycle_controller.go:421] Controller will reconcile labels. I1014 19:21:22.107449 1 node_lifecycle_controller.go:434] Controller will taint node by condition. I1014 19:21:22.107835 1 controllermanager.go:534] Started "nodelifecycle" I1014 19:21:22.108282 1 node_lifecycle_controller.go:458] Starting node controller I1014 19:21:22.108876 1 shared_informer.go:197] Waiting for caches to sync for taint E1014 19:21:22.256869 1 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail W1014 19:21:22.256912 1 controllermanager.go:526] Skipping "service" I1014 19:21:22.403206 1 controllermanager.go:534] Started "pv-protection" I1014 19:21:22.403254 1 pv_protection_controller.go:81] Starting PV protection controller I1014 19:21:22.404288 1 shared_informer.go:197] Waiting for caches to sync for PV protection I1014 19:21:22.408067 1 shared_informer.go:197] Waiting for caches to sync for garbage collector W1014 19:21:22.425487 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I1014 19:21:22.435676 1 shared_informer.go:204] Caches are synced for PVC protection I1014 19:21:22.442428 1 shared_informer.go:204] Caches are synced for deployment I1014 19:21:22.451676 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator I1014 19:21:22.455808 1 shared_informer.go:204] Caches are synced for GC I1014 19:21:22.456194 1 shared_informer.go:204] Caches are synced for TTL I1014 19:21:22.457739 1 shared_informer.go:204] Caches are synced for certificate I1014 19:21:22.460357 1 shared_informer.go:204] Caches are synced for job I1014 19:21:22.469712 1 shared_informer.go:204] Caches are synced for bootstrap_signer I1014 19:21:22.496081 1 shared_informer.go:204] Caches are synced for endpoint I1014 19:21:22.501453 1 shared_informer.go:204] Caches are synced for ReplicaSet I1014 19:21:22.502123 1 shared_informer.go:204] Caches are synced for certificate I1014 19:21:22.506280 1 shared_informer.go:204] Caches are synced for HPA I1014 19:21:22.513168 1 shared_informer.go:204] Caches are synced for ReplicationController I1014 19:21:22.518553 1 shared_informer.go:204] Caches are synced for namespace I1014 19:21:22.526388 1 shared_informer.go:204] Caches are synced for service account I1014 19:21:22.590166 1 shared_informer.go:204] Caches are synced 
for persistent volume I1014 19:21:22.604813 1 shared_informer.go:204] Caches are synced for PV protection I1014 19:21:22.625690 1 shared_informer.go:204] Caches are synced for attach detach I1014 19:21:22.658670 1 shared_informer.go:204] Caches are synced for expand I1014 19:21:22.726491 1 shared_informer.go:204] Caches are synced for stateful set I1014 19:21:22.753367 1 shared_informer.go:204] Caches are synced for disruption I1014 19:21:22.753543 1 disruption.go:341] Sending events to api server. I1014 19:21:22.807794 1 shared_informer.go:204] Caches are synced for daemon sets I1014 19:21:22.809605 1 shared_informer.go:204] Caches are synced for taint I1014 19:21:22.809921 1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone: W1014 19:21:22.810155 1 node_lifecycle_controller.go:903] Missing timestamp for Node minikube. Assuming now as a timestamp. I1014 19:21:22.810243 1 node_lifecycle_controller.go:1108] Controller detected that zone is now in state Normal. I1014 19:21:22.810708 1 taint_manager.go:186] Starting NoExecuteTaintManager I1014 19:21:22.811060 1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"1b87bdb0-0a36-49e0-bed1-a44b1d9253b6", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller I1014 19:21:23.008522 1 shared_informer.go:204] Caches are synced for garbage collector I1014 19:21:23.018626 1 shared_informer.go:204] Caches are synced for garbage collector I1014 19:21:23.018712 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I1014 19:21:23.018628 1 shared_informer.go:204] Caches are synced for resource quota I1014 19:21:23.106984 1 shared_informer.go:197] Waiting for caches to sync for resource quota I1014 19:21:23.207280 1 shared_informer.go:204] Caches are synced for resource quota E1014 19:25:16.974253 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: etcdserver: request timed out
==> kube-scheduler [b40a0ef26b56] <== E1014 19:18:56.524958 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E1014 19:18:57.502510 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1014 19:18:57.505691 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E1014 19:18:57.509622 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E1014 19:18:57.512804 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E1014 19:18:57.513756 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E1014 19:18:57.515210 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E1014 19:18:57.516024 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E1014 19:18:57.516870 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E1014 19:18:57.518615 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E1014 19:18:57.532169 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E1014 19:18:57.532309 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E1014 19:18:58.504969 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1014 19:18:58.508707 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource 
"persistentvolumeclaims" in API group "" at the cluster scope E1014 19:18:58.511368 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E1014 19:18:58.514514 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E1014 19:18:58.515850 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E1014 19:18:58.517527 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E1014 19:18:58.519685 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E1014 19:18:58.521092 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E1014 19:18:58.521657 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E1014 19:18:58.535345 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E1014 19:18:58.536217 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E1014 19:18:59.508961 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1014 19:18:59.511340 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E1014 19:18:59.513534 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E1014 19:18:59.516429 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E1014 19:18:59.517581 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.Service: 
services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E1014 19:18:59.519196 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E1014 19:18:59.521337 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E1014 19:18:59.521898 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E1014 19:18:59.523151 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E1014 19:18:59.537851 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E1014 19:18:59.539635 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E1014 19:19:00.516582 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1014 19:19:00.517952 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E1014 19:19:00.521482 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E1014 19:19:00.522320 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E1014 19:19:00.522494 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E1014 19:19:00.522923 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E1014 19:19:00.523203 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E1014 19:19:00.524059 1 reflector.go:123] 
k8s.io/client-go/informers/factory.go:134: Failed to list v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E1014 19:19:00.524666 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E1014 19:19:00.546603 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E1014 19:19:00.546986 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E1014 19:19:01.523401 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1014 19:19:01.526332 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E1014 19:19:01.531018 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E1014 19:19:01.531098 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E1014 19:19:01.532698 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E1014 19:19:01.532748 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E1014 19:19:01.532794 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E1014 19:19:01.532838 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E1014 19:19:01.535213 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E1014 19:19:01.547954 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E1014 
19:19:01.551592 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope I1014 19:19:03.530031 1 leaderelection.go:241] attempting to acquire leader lease kube-system/kube-scheduler... I1014 19:19:03.602447 1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler I1014 19:20:47.802507 1 leaderelection.go:287] failed to renew lease kube-system/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded F1014 19:20:47.802558 1 server.go:264] leaderelection lost
==> kube-scheduler [fffc136d538e] <== I1014 19:21:01.563397 1 serving.go:319] Generated self-signed cert in-memory I1014 19:21:02.835266 1 server.go:143] Version: v1.16.0 I1014 19:21:02.835519 1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory W1014 19:21:02.836563 1 authorization.go:47] Authorization is disabled W1014 19:21:02.836589 1 authentication.go:79] Authentication is disabled I1014 19:21:02.836599 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 I1014 19:21:02.836963 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259 I1014 19:21:03.844835 1 leaderelection.go:241] attempting to acquire leader lease kube-system/kube-scheduler... I1014 19:21:19.505296 1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler
==> kubelet <== -- Logs begin at Mon 2019-10-14 19:06:59 UTC, end at Mon 2019-10-14 19:26:56 UTC. -- Oct 14 19:18:49 minikube kubelet[3701]: E1014 19:18:49.953626 3701 kubelet.go:2267] node "minikube" not found Oct 14 19:18:50 minikube kubelet[3701]: E1014 19:18:50.053914 3701 kubelet.go:2267] node "minikube" not found Oct 14 19:18:50 minikube kubelet[3701]: E1014 19:18:50.154146 3701 kubelet.go:2267] node "minikube" not found Oct 14 19:18:50 minikube kubelet[3701]: E1014 19:18:50.254504 3701 kubelet.go:2267] node "minikube" not found Oct 14 19:18:50 minikube kubelet[3701]: E1014 19:18:50.354640 3701 kubelet.go:2267] node "minikube" not found Oct 14 19:18:50 minikube kubelet[3701]: E1014 19:18:50.428824 3701 controller.go:220] failed to get node "minikube" when trying to set owner ref to the node lease: nodes "minikube" not found Oct 14 19:18:50 minikube kubelet[3701]: E1014 19:18:50.454914 3701 kubelet.go:2267] node "minikube" not found Oct 14 19:18:50 minikube kubelet[3701]: I1014 19:18:50.494508 3701 reconciler.go:154] Reconciler: start to sync state Oct 14 19:18:50 minikube kubelet[3701]: I1014 19:18:50.504596 3701 kubelet_node_status.go:75] Successfully registered node minikube Oct 14 19:18:50 minikube kubelet[3701]: E1014 19:18:50.530698 3701 controller.go:135] failed to ensure node lease exists, will retry in 7s, error: namespaces "kube-node-lease" not found Oct 14 19:18:51 minikube kubelet[3701]: E1014 19:18:51.351299 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd9990ffeacb2a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d442d792a, ext:20521122846, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d442d792a, ext:20521122846, loc:(time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Oct 14 19:18:51 minikube kubelet[3701]: E1014 19:18:51.635301 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8db77", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb8977, ext:20584959060, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb8977, ext:20584959060, loc:(time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Oct 14 19:18:52 minikube kubelet[3701]: E1014 19:18:52.620772 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8c3c7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb71c7, ext:20584952995, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb71c7, ext:20584952995, loc:(time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Oct 14 19:18:52 minikube kubelet[3701]: E1014 19:18:52.729986 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8e7bb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb95bb, ext:20584962200, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb95bb, ext:20584962200, loc:(time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Oct 14 19:18:53 minikube kubelet[3701]: E1014 19:18:53.488225 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999104c8029b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d490ab09b, ext:20602729335, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d490ab09b, ext:20602729335, loc:(time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Oct 14 19:18:54 minikube kubelet[3701]: E1014 19:18:54.519405 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8e7bb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb95bb, ext:20584962200, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d4a81902a, ext:20627297072, loc:(time.Location)(0x797f100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Oct 14 19:18:54 minikube kubelet[3701]: E1014 19:18:54.608252 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8c3c7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb71c7, ext:20584952995, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d4a814a14, ext:20627279128, loc:(time.Location)(0x797f100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Oct 14 19:18:54 minikube kubelet[3701]: E1014 19:18:54.743191 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8db77", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb8977, ext:20584959060, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d4a8175fc, ext:20627290374, loc:(time.Location)(0x797f100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Oct 14 19:18:55 minikube kubelet[3701]: E1014 19:18:55.586960 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8c3c7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb71c7, ext:20584952995, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d4acc8210, ext:20632208660, loc:(time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Oct 14 19:18:55 minikube kubelet[3701]: E1014 19:18:55.661462 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8db77", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb8977, ext:20584959060, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d4acd4fd0, ext:20632261334, loc:(time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Oct 14 19:18:55 minikube kubelet[3701]: E1014 19:18:55.781468 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8e7bb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb95bb, ext:20584962200, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d4acd6bed, ext:20632268533, loc:(time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Oct 14 19:18:55 minikube kubelet[3701]: E1014 19:18:55.868413 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8c3c7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb71c7, ext:20584952995, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d570ae4e2, ext:20837623748, loc:(time.Location)(0x797f100)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Oct 14 19:18:56 minikube kubelet[3701]: E1014 19:18:56.658749 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8db77", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb8977, ext:20584959060, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d570afdd1, ext:20837630130, loc:(time.Location)(0x797f100)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Oct 14 19:18:56 minikube kubelet[3701]: E1014 19:18:56.824951 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8e7bb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb95bb, ext:20584962200, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d570b0b77, ext:20837633623, loc:(time.Location)(0x797f100)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Oct 14 19:18:57 minikube kubelet[3701]: E1014 19:18:57.720174 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8e7bb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb95bb, ext:20584962200, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d6f3ec2f8, ext:21243676156, loc:(time.Location)(0x797f100)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Oct 14 19:18:57 minikube kubelet[3701]: E1014 19:18:57.790534 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8c3c7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb71c7, ext:20584952995, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d6f3e7fe7, ext:21243658988, loc:(time.Location)(0x797f100)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Oct 14 19:18:58 minikube kubelet[3701]: E1014 19:18:58.587588 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8db77", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb8977, ext:20584959060, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d6f3ea86f, ext:21243669358, loc:(time.Location)(0x797f100)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Oct 14 19:18:58 minikube kubelet[3701]: E1014 19:18:58.667298 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8c3c7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb71c7, ext:20584952995, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505da3f4819a, ext:22054260349, loc:(time.Location)(0x797f100)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Oct 14 19:18:58 minikube kubelet[3701]: E1014 19:18:58.847430 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8db77", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb8977, ext:20584959060, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505da3f49f51, ext:22054267952, loc:(time.Location)(0x797f100)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Oct 14 19:18:59 minikube kubelet[3701]: E1014 19:18:59.676383 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8e7bb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb95bb, ext:20584962200, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505da3f4af01, ext:22054271969, loc:(time.Location)(0x797f100)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Oct 14 19:18:59 minikube kubelet[3701]: E1014 19:18:59.819402 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8c3c7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb71c7, ext:20584952995, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505e0c7ce918, ext:23660546560, loc:(time.Location)(0x797f100)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Oct 14 19:19:00 minikube kubelet[3701]: E1014 19:19:00.597825 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8db77", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb8977, ext:20584959060, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505e0c7d06cf, ext:23660554168, loc:(time.Location)(0x797f100)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Oct 14 19:19:00 minikube kubelet[3701]: E1014 19:19:00.685742 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8e7bb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb95bb, ext:20584962200, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505e0c7d18b1, ext:23660558780, loc:(time.Location)(0x797f100)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Oct 14 19:19:00 minikube kubelet[3701]: E1014 19:19:00.769308 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8c3c7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb71c7, ext:20584952995, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505ed8fc38e6, ext:26870216781, loc:(time.Location)(0x797f100)}}, Count:8, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Oct 14 19:19:00 minikube kubelet[3701]: E1014 19:19:00.833235 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cd999103b8db77", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf61505d47fb8977, ext:20584959060, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf61505ed8fcc06f, ext:26870251373, loc:(time.Location)(0x797f100)}}, Count:8, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Oct 14 19:20:06 minikube kubelet[3701]: I1014 19:20:06.189818 3701 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/d130f810-0604-4278-96d1-20fb66b4f8b7-tmp") pod "storage-provisioner" (UID: "d130f810-0604-4278-96d1-20fb66b4f8b7") Oct 14 19:20:06 minikube kubelet[3701]: I1014 19:20:06.189877 3701 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-k29jc" (UniqueName: "kubernetes.io/secret/d130f810-0604-4278-96d1-20fb66b4f8b7-storage-provisioner-token-k29jc") pod "storage-provisioner" (UID: "d130f810-0604-4278-96d1-20fb66b4f8b7") Oct 14 19:20:17 minikube kubelet[3701]: W1014 19:20:17.480014 3701 pod_container_deletor.go:75] Container "d71b4d9e4ff077f7c695a0fb6dfb27bc71fa397f882baccef574acbbf7a84d32" not found in pod's containers Oct 14 19:20:49 minikube kubelet[3701]: E1014 19:20:49.934286 3701 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-minikube.15cd99acc3205462", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"372", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-minikube", UID:"6161e72fedd3e055b187e2d72a66101b", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{etcd}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 503", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63706677612, loc:(time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6150829277542a, ext:169760844127, loc:(time.Location)(0x797f100)}}, Count:2, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!) 
Oct 14 19:20:51 minikube kubelet[3701]: E1014 19:20:51.469177 3701 controller.go:170] failed to update node lease, error: etcdserver: request timed out Oct 14 19:20:56 minikube kubelet[3701]: E1014 19:20:56.768682 3701 controller.go:170] failed to update node lease, error: Operation cannot be fulfilled on leases.coordination.k8s.io "minikube": the object has been modified; please apply your changes to the latest version and try again Oct 14 19:20:59 minikube kubelet[3701]: E1014 19:20:59.004518 3701 remote_runtime.go:295] ContainerStatus "fffc136d538e479c9d5b7465b542af8e08fb2eb0114cb023901a53aea961f90b" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: fffc136d538e479c9d5b7465b542af8e08fb2eb0114cb023901a53aea961f90b Oct 14 19:20:59 minikube kubelet[3701]: E1014 19:20:59.005139 3701 kuberuntime_manager.go:935] getPodContainerStatuses for pod "kube-scheduler-minikube_kube-system(c18ee741ac4ad7b2bfda7d88116f3047)" failed: rpc error: code = Unknown desc = Error: No such container: fffc136d538e479c9d5b7465b542af8e08fb2eb0114cb023901a53aea961f90b Oct 14 19:21:33 minikube kubelet[3701]: E1014 19:21:33.309095 3701 pod_workers.go:191] Error syncing pod d130f810-0604-4278-96d1-20fb66b4f8b7 ("storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)" Oct 14 19:22:19 minikube kubelet[3701]: E1014 19:22:19.867523 3701 pod_workers.go:191] Error syncing pod d130f810-0604-4278-96d1-20fb66b4f8b7 ("storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)" Oct 14 19:22:34 minikube kubelet[3701]: E1014 19:22:34.074360 3701 pod_workers.go:191] Error syncing pod d130f810-0604-4278-96d1-20fb66b4f8b7 ("storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)" Oct 14 19:23:28 minikube kubelet[3701]: E1014 19:23:28.982465 3701 pod_workers.go:191] Error syncing pod d130f810-0604-4278-96d1-20fb66b4f8b7 ("storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)" Oct 14 19:23:40 minikube kubelet[3701]: E1014 19:23:40.073230 3701 pod_workers.go:191] Error syncing pod d130f810-0604-4278-96d1-20fb66b4f8b7 ("storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)" Oct 14 19:23:53 minikube kubelet[3701]: E1014 19:23:53.078225 3701 pod_workers.go:191] Error syncing pod d130f810-0604-4278-96d1-20fb66b4f8b7 ("storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)"), skipping: failed to "StartContainer" for 
"storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)" Oct 14 19:24:06 minikube kubelet[3701]: E1014 19:24:06.076880 3701 pod_workers.go:191] Error syncing pod d130f810-0604-4278-96d1-20fb66b4f8b7 ("storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)" Oct 14 19:24:19 minikube kubelet[3701]: E1014 19:24:19.559351 3701 remote_runtime.go:295] ContainerStatus "ceef438a02b2a5d227a7b5e336bee1f6e2f78ff53db93d98fe658d6409ab930d" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: ceef438a02b2a5d227a7b5e336bee1f6e2f78ff53db93d98fe658d6409ab930d Oct 14 19:24:19 minikube kubelet[3701]: E1014 19:24:19.559754 3701 kuberuntime_manager.go:935] getPodContainerStatuses for pod "storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)" failed: rpc error: code = Unknown desc = Error: No such container: ceef438a02b2a5d227a7b5e336bee1f6e2f78ff53db93d98fe658d6409ab930d Oct 14 19:24:50 minikube kubelet[3701]: E1014 19:24:50.945672 3701 pod_workers.go:191] Error syncing pod d130f810-0604-4278-96d1-20fb66b4f8b7 ("storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)" Oct 14 19:25:06 minikube kubelet[3701]: E1014 19:25:06.074647 3701 pod_workers.go:191] Error syncing pod d130f810-0604-4278-96d1-20fb66b4f8b7 ("storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)" Oct 14 19:25:17 minikube kubelet[3701]: E1014 19:25:17.074646 3701 pod_workers.go:191] Error syncing pod d130f810-0604-4278-96d1-20fb66b4f8b7 ("storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)" Oct 14 19:25:28 minikube kubelet[3701]: E1014 19:25:28.073091 3701 pod_workers.go:191] Error syncing pod d130f810-0604-4278-96d1-20fb66b4f8b7 ("storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)" Oct 14 19:25:42 minikube kubelet[3701]: E1014 19:25:42.074611 3701 pod_workers.go:191] Error syncing pod d130f810-0604-4278-96d1-20fb66b4f8b7 ("storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)" Oct 14 19:25:55 minikube kubelet[3701]: E1014 19:25:55.074723 
3701 pod_workers.go:191] Error syncing pod d130f810-0604-4278-96d1-20fb66b4f8b7 ("storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)" Oct 14 19:26:09 minikube kubelet[3701]: E1014 19:26:09.075932 3701 pod_workers.go:191] Error syncing pod d130f810-0604-4278-96d1-20fb66b4f8b7 ("storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)" Oct 14 19:26:54 minikube kubelet[3701]: E1014 19:26:54.385182 3701 pod_workers.go:191] Error syncing pod d130f810-0604-4278-96d1-20fb66b4f8b7 ("storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d130f810-0604-4278-96d1-20fb66b4f8b7)"
==> storage-provisioner [5a56798553d3] <==
F1014 19:26:53.969018 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
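The storage-provisioner failure above means the pod could not reach the in-cluster apiserver service IP (10.96.0.1:443). A minimal, hedged way to probe that same path from inside the VM (assuming curl is present in the minikube ISO; /version is the endpoint main.go:37 was querying):

minikube ssh
# inside the guest: any HTTP response here proves connectivity; a hang reproduces the i/o timeout
curl -k https://10.96.0.1:443/version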
Try to pull the same image manually using the docker command line, and see whether it is a Docker issue.
This one is a bit of a mystery to me.
This message seems to indicate that the VM can't speak to k8s.gcr.io, but it should still come up properly anyway:
failed to pull image "k8s.gcr.io/kube-apiserver:v1.16.0": output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.122.1:53: read udp 192.168.122.41:35360->192.168.122.1:53: i/o timeout
The logs seem to indicate that things eventually settled down; minikube status will probably show a working cluster. I was going to suspect an overloaded machine, but the load average looks fine.
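(For reference, that check is just the command below; the fields shown are illustrative of the v1.x output, not an exact transcript.)

minikube status
# expected when healthy, roughly:
# host: Running
# kubelet: Running
# apiserver: Running
# kubeconfig: Configured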
The craziest part is that this is repeatable on your system. I'm stumped. Help wanted.
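Since the quoted failure is a DNS lookup timing out against the libvirt NAT gateway (192.168.122.1), one hedged way to narrow it down is to test name resolution from inside the guest and inspect the libvirt default network on the host (nslookup being available inside the minikube ISO is an assumption):

# from the host, run a lookup inside the guest against its configured resolver
minikube ssh -- nslookup k8s.gcr.io
# on the host, inspect the libvirt NAT network that provides DNS/DHCP at 192.168.122.1
virsh net-dumpxml default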
Try to pull the same image manually using the docker command line, and see whether it is a Docker issue.
@nanikjava Docker looks fine.
sudo docker pull k8s.gcr.io/kube-apiserver:v1.16.0    17:45:18
[sudo] password for user:
v1.16.0: Pulling from kube-apiserver
39fafc05754f: Pull complete
f7d981e9e2f5: Pull complete
Digest: sha256:f4168527c91289da2708f62ae729fdde5fb484167dd05ffbb7ab666f60de96cd
Status: Downloaded newer image for k8s.gcr.io/kube-apiserver:v1.16.0
k8s.gcr.io/kube-apiserver:v1.16.0
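Since the pull that actually failed ran inside the kvm2 guest rather than on the host, the same check could also be repeated from within the VM (a sketch; it assumes the VM is up and minikube ssh works):

minikube ssh
# inside the guest, this uses the guest's Docker daemon and the guest's DNS (192.168.122.1)
docker pull k8s.gcr.io/kube-apiserver:v1.16.0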
I'm not particularly sure what the problem is; the error points to a connection issue, but I can't be certain. At this stage I would recommend trying to see whether it works using VirtualBox:
minikube start --v=5
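Spelled out with the driver made explicit, that test would look something like the following (assuming VirtualBox is installed; minikube delete clears the existing kvm2 profile first):

minikube delete
minikube start --vm-driver virtualbox --v=5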
@nanikjava Interesting suggestion, but we do not use VirtualBox for our VMs.
Oracle's non-responsiveness to security issues is evident in this ticket:
https://www.virtualbox.org/ticket/17987
I would like to stick to the original problem of why minikube failed on kvm.
This may be related to #5423
Do you mind upgrading to minikube v1.5.2 and sharing the output of minikube start?
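One hedged way to do that upgrade on Linux (the URL follows minikube's usual release layout; adjust if minikube was installed through your distro's packaging instead):

curl -LO https://storage.googleapis.com/minikube/releases/v1.5.2/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
# start from a clean profile, then retry with the same driver and verbosity
minikube delete
minikube start --vm-driver kvm2 --v=5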
@tstromberg I upgraded to minikube v1.5.2 and we also replaced the router/switch box. It's working now. Thanks very much for your help!
@gkwan that's great! Is everything working as expected now? If so, can this issue be closed?
@priyawadhwa Yes, it looks to be working as expected. Please close this issue.
The exact command to reproduce the issue: minikube start --vm-driver kvm2 --v=5
The full output of the command that failed:
minikube v1.4.0 on Arch 2019.1.20
π₯ Creating kvm2 VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
π³ Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
π Pulling images ...
β Unable to pull images, which may be OK: running cmd: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml:
command failed: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
stdout:
stderr: failed to pull image "k8s.gcr.io/kube-apiserver:v1.16.0": output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.122.1:53: read udp 192.168.122.41:35360->192.168.122.1:53: i/o timeout
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher
: Process exited with status 1
π Launching Kubernetes ...
π£ Error starting cluster: cmd failed: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap [init] Using Kubernetes version: v1.16.0 [preflight] Running pre-flight checks [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service' [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Activating the kubelet service [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.39.138 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.39.138 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. 
[apiclient] All control plane components are healthy after 162.756475 seconds [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster [upload-certs] Skipping phase. Please see --upload-certs [mark-control-plane] Marking the node minikube as control-plane by adding the label "node-role.kubernetes.io/master=''" error execution phase mark-control-plane: error patching node "minikube" through apiserver: etcdserver: request timed out To see the stack trace of this error execute with --v=5 or higher
: Process exited with status 1
πΏ Sorry that minikube crashed. If this was unexpected, we would love to hear from you: π https://github.com/kubernetes/minikube/issues/new/choose
The output of the minikube logs command:
==> Docker <==
-- Logs begin at Mon 2019-10-14 02:05:48 UTC, end at Mon 2019-10-14 02:52:37 UTC. --
Oct 14 02:21:01 minikube dockerd[2025]: time="2019-10-14T02:21:01.674209565Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b4dd86142447041116e774f26df8992270fc32fbba3b90d3c552e852922efb93/shim.sock" debug=false pid=6044
Oct 14 02:21:09 minikube dockerd[2025]: time="2019-10-14T02:21:09.315481605Z" level=info msg="shim reaped" id=7f1a5631ff892ef855e0813cc366349eb14ad2b7456ea77b57d19d6343ecd5cb
Oct 14 02:21:09 minikube dockerd[2025]: time="2019-10-14T02:21:09.327407497Z" level=warning msg="7f1a5631ff892ef855e0813cc366349eb14ad2b7456ea77b57d19d6343ecd5cb cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7f1a5631ff892ef855e0813cc366349eb14ad2b7456ea77b57d19d6343ecd5cb/mounts/shm, flags: 0x2: no such file or directory"
Oct 14 02:21:09 minikube dockerd[2025]: time="2019-10-14T02:21:09.326692803Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Oct 14 02:21:30 minikube dockerd[2025]: time="2019-10-14T02:21:30.479714814Z" level=info msg="shim reaped" id=b4dd86142447041116e774f26df8992270fc32fbba3b90d3c552e852922efb93
Oct 14 02:21:30 minikube dockerd[2025]: time="2019-10-14T02:21:30.489957506Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Oct 14 02:21:30 minikube dockerd[2025]: time="2019-10-14T02:21:30.490123217Z" level=warning msg="b4dd86142447041116e774f26df8992270fc32fbba3b90d3c552e852922efb93 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/b4dd86142447041116e774f26df8992270fc32fbba3b90d3c552e852922efb93/mounts/shm, flags: 0x2: no such file or directory"
Oct 14 02:21:35 minikube dockerd[2025]: time="2019-10-14T02:21:35.569530321Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7e17575cf174c1b60116cc61f06088a0e2eeeed0a67a7e07633d2e140e3637d2/shim.sock" debug=false pid=6271
Oct 14 02:22:06 minikube dockerd[2025]: time="2019-10-14T02:22:06.135206874Z" level=info msg="shim reaped" id=7e17575cf174c1b60116cc61f06088a0e2eeeed0a67a7e07633d2e140e3637d2
Oct 14 02:22:06 minikube dockerd[2025]: time="2019-10-14T02:22:06.145369824Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Oct 14 02:22:06 minikube dockerd[2025]: time="2019-10-14T02:22:06.145559254Z" level=warning msg="7e17575cf174c1b60116cc61f06088a0e2eeeed0a67a7e07633d2e140e3637d2 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7e17575cf174c1b60116cc61f06088a0e2eeeed0a67a7e07633d2e140e3637d2/mounts/shm, flags: 0x2: no such file or directory"
Oct 14 02:22:33 minikube dockerd[2025]: time="2019-10-14T02:22:33.942950643Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/73c5c820e7083d77cdc7acdb3bdb8bd88b2d09a80268e7bba692deddb3d5f056/shim.sock" debug=false pid=6543
Oct 14 02:23:03 minikube dockerd[2025]: time="2019-10-14T02:23:03.074450864Z" level=info msg="shim reaped" id=37d33180a7a8c42337586175dd869fb4cccac46a0604052b57032a4e574ae9ad
Oct 14 02:23:03 minikube dockerd[2025]: time="2019-10-14T02:23:03.086386051Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Oct 14 02:23:03 minikube dockerd[2025]: time="2019-10-14T02:23:03.086739318Z" level=warning msg="37d33180a7a8c42337586175dd869fb4cccac46a0604052b57032a4e574ae9ad cleanup: failed to unmount IPC: umount /var/lib/docker/containers/37d33180a7a8c42337586175dd869fb4cccac46a0604052b57032a4e574ae9ad/mounts/shm, flags: 0x2: no such file or directory"
Oct 14 02:23:04 minikube dockerd[2025]: time="2019-10-14T02:23:04.849710103Z" level=info msg="shim reaped" id=73c5c820e7083d77cdc7acdb3bdb8bd88b2d09a80268e7bba692deddb3d5f056
Oct 14 02:23:04 minikube dockerd[2025]: time="2019-10-14T02:23:04.860881363Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Oct 14 02:23:04 minikube dockerd[2025]: time="2019-10-14T02:23:04.861247023Z" level=warning msg="73c5c820e7083d77cdc7acdb3bdb8bd88b2d09a80268e7bba692deddb3d5f056 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/73c5c820e7083d77cdc7acdb3bdb8bd88b2d09a80268e7bba692deddb3d5f056/mounts/shm, flags: 0x2: no such file or directory"
Oct 14 02:23:07 minikube dockerd[2025]: time="2019-10-14T02:23:07.534958595Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/478b2642b2074bfeae3db79ad1ab74fbfa309f917857c431cd6ce855d83a57b9/shim.sock" debug=false pid=6707
Oct 14 02:23:37 minikube dockerd[2025]: time="2019-10-14T02:23:37.296481756Z" level=info msg="shim reaped" id=478b2642b2074bfeae3db79ad1ab74fbfa309f917857c431cd6ce855d83a57b9
Oct 14 02:23:37 minikube dockerd[2025]: time="2019-10-14T02:23:37.307059064Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Oct 14 02:23:37 minikube dockerd[2025]: time="2019-10-14T02:23:37.307312042Z" level=warning msg="478b2642b2074bfeae3db79ad1ab74fbfa309f917857c431cd6ce855d83a57b9 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/478b2642b2074bfeae3db79ad1ab74fbfa309f917857c431cd6ce855d83a57b9/mounts/shm, flags: 0x2: no such file or directory"
Oct 14 02:23:57 minikube dockerd[2025]: time="2019-10-14T02:23:57.438562201Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/87eb9d08fb8e51518607712081694386bd8b562fdee141876a155e6f49608290/shim.sock" debug=false pid=6979
Oct 14 02:23:57 minikube dockerd[2025]: time="2019-10-14T02:23:57.487894406Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/df9d847c5f0b1671c6a8635707d670054cbd68a1f336a79082f3019289cc5a0f/shim.sock" debug=false pid=7003
Oct 14 02:24:28 minikube dockerd[2025]: time="2019-10-14T02:24:28.145762782Z" level=info msg="shim reaped" id=87eb9d08fb8e51518607712081694386bd8b562fdee141876a155e6f49608290
Oct 14 02:24:28 minikube dockerd[2025]: time="2019-10-14T02:24:28.156209446Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Oct 14 02:24:28 minikube dockerd[2025]: time="2019-10-14T02:24:28.156493534Z" level=warning msg="87eb9d08fb8e51518607712081694386bd8b562fdee141876a155e6f49608290 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/87eb9d08fb8e51518607712081694386bd8b562fdee141876a155e6f49608290/mounts/shm, flags: 0x2: no such file or directory"
Oct 14 02:25:52 minikube dockerd[2025]: time="2019-10-14T02:25:52.022977129Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/24f2c660e498ebdadc26dc11b0162f68d8ab5ebae82f236d27f21c75437e12b0/shim.sock" debug=false pid=7717
Oct 14 02:26:22 minikube dockerd[2025]: time="2019-10-14T02:26:22.842683597Z" level=info msg="shim reaped" id=24f2c660e498ebdadc26dc11b0162f68d8ab5ebae82f236d27f21c75437e12b0
Oct 14 02:26:22 minikube dockerd[2025]: time="2019-10-14T02:26:22.853005543Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Oct 14 02:26:22 minikube dockerd[2025]: time="2019-10-14T02:26:22.853286969Z" level=warning msg="24f2c660e498ebdadc26dc11b0162f68d8ab5ebae82f236d27f21c75437e12b0 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/24f2c660e498ebdadc26dc11b0162f68d8ab5ebae82f236d27f21c75437e12b0/mounts/shm, flags: 0x2: no such file or directory"
Oct 14 02:26:25 minikube dockerd[2025]: time="2019-10-14T02:26:25.182101374Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1b1e298ebd29cf91e59cc64aaf690fc951ad965d11fee46ca4ae773764bd8392/shim.sock" debug=false pid=7978
Oct 14 02:29:14 minikube dockerd[2025]: time="2019-10-14T02:29:14.852772497Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0257d7932b41c1462be315ef26bd66d6cc5d3091d415b4a3d4c71b76bbb63991/shim.sock" debug=false pid=9074
Oct 14 02:29:48 minikube dockerd[2025]: time="2019-10-14T02:29:48.495372097Z" level=info msg="shim reaped" id=0257d7932b41c1462be315ef26bd66d6cc5d3091d415b4a3d4c71b76bbb63991
Oct 14 02:29:48 minikube dockerd[2025]: time="2019-10-14T02:29:48.505670373Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Oct 14 02:29:48 minikube dockerd[2025]: time="2019-10-14T02:29:48.505922548Z" level=warning msg="0257d7932b41c1462be315ef26bd66d6cc5d3091d415b4a3d4c71b76bbb63991 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/0257d7932b41c1462be315ef26bd66d6cc5d3091d415b4a3d4c71b76bbb63991/mounts/shm, flags: 0x2: no such file or directory"
Oct 14 02:34:05 minikube dockerd[2025]: time="2019-10-14T02:34:05.994121427Z" level=info msg="shim reaped" id=df9d847c5f0b1671c6a8635707d670054cbd68a1f336a79082f3019289cc5a0f
Oct 14 02:34:06 minikube dockerd[2025]: time="2019-10-14T02:34:06.004994311Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Oct 14 02:34:06 minikube dockerd[2025]: time="2019-10-14T02:34:06.005102673Z" level=warning msg="df9d847c5f0b1671c6a8635707d670054cbd68a1f336a79082f3019289cc5a0f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/df9d847c5f0b1671c6a8635707d670054cbd68a1f336a79082f3019289cc5a0f/mounts/shm, flags: 0x2: no such file or directory"
Oct 14 02:34:07 minikube dockerd[2025]: time="2019-10-14T02:34:07.105664707Z" level=info msg="shim reaped" id=1b1e298ebd29cf91e59cc64aaf690fc951ad965d11fee46ca4ae773764bd8392
Oct 14 02:34:07 minikube dockerd[2025]: time="2019-10-14T02:34:07.116078934Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Oct 14 02:34:07 minikube dockerd[2025]: time="2019-10-14T02:34:07.116284133Z" level=warning msg="1b1e298ebd29cf91e59cc64aaf690fc951ad965d11fee46ca4ae773764bd8392 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/1b1e298ebd29cf91e59cc64aaf690fc951ad965d11fee46ca4ae773764bd8392/mounts/shm, flags: 0x2: no such file or directory"
Oct 14 02:34:17 minikube dockerd[2025]: time="2019-10-14T02:34:17.544217642Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6168c2e83555af86626d739de10cc3af57d1702b7a429d72ec63f28612a6c87f/shim.sock" debug=false pid=10925
Oct 14 02:34:53 minikube dockerd[2025]: time="2019-10-14T02:34:53.385208683Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ff8359c40fcc4448a112a0e1d4b13c33629f16a82aaacf74fd67cd1254057ac5/shim.sock" debug=false pid=11152
Oct 14 02:35:23 minikube dockerd[2025]: time="2019-10-14T02:35:23.768199898Z" level=info msg="shim reaped" id=ff8359c40fcc4448a112a0e1d4b13c33629f16a82aaacf74fd67cd1254057ac5
Oct 14 02:35:23 minikube dockerd[2025]: time="2019-10-14T02:35:23.781106941Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Oct 14 02:35:23 minikube dockerd[2025]: time="2019-10-14T02:35:23.781418309Z" level=warning msg="ff8359c40fcc4448a112a0e1d4b13c33629f16a82aaacf74fd67cd1254057ac5 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/ff8359c40fcc4448a112a0e1d4b13c33629f16a82aaacf74fd67cd1254057ac5/mounts/shm, flags: 0x2: no such file or directory"
Oct 14 02:36:25 minikube dockerd[2025]: time="2019-10-14T02:36:25.333277469Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/81f2daf254f380df1ae51af9f22f30d1f19b427e6b16af19b8105f5237775930/shim.sock" debug=false pid=11791
Oct 14 02:40:33 minikube dockerd[2025]: time="2019-10-14T02:40:33.574884133Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0a4ccebfb137f6b37d769375431509fdc9f36e77b13c44220334e65278d88339/shim.sock" debug=false pid=13433
Oct 14 02:41:03 minikube dockerd[2025]: time="2019-10-14T02:41:03.985600033Z" level=info msg="shim reaped" id=0a4ccebfb137f6b37d769375431509fdc9f36e77b13c44220334e65278d88339
Oct 14 02:41:03 minikube dockerd[2025]: time="2019-10-14T02:41:03.996208506Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Oct 14 02:41:03 minikube dockerd[2025]: time="2019-10-14T02:41:03.996719818Z" level=warning msg="0a4ccebfb137f6b37d769375431509fdc9f36e77b13c44220334e65278d88339 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/0a4ccebfb137f6b37d769375431509fdc9f36e77b13c44220334e65278d88339/mounts/shm, flags: 0x2: no such file or directory"
Oct 14 02:46:07 minikube dockerd[2025]: time="2019-10-14T02:46:07.305200977Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/566c6f33a1440d5c5f246af9170f5e7f4945ed8eefc29115ab1496eb7beae5b2/shim.sock" debug=false pid=15777
Oct 14 02:46:37 minikube dockerd[2025]: time="2019-10-14T02:46:37.678221166Z" level=info msg="shim reaped" id=566c6f33a1440d5c5f246af9170f5e7f4945ed8eefc29115ab1496eb7beae5b2
Oct 14 02:46:37 minikube dockerd[2025]: time="2019-10-14T02:46:37.688584653Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="events.TaskDelete"
Oct 14 02:46:37 minikube dockerd[2025]: time="2019-10-14T02:46:37.688848082Z" level=warning msg="566c6f33a1440d5c5f246af9170f5e7f4945ed8eefc29115ab1496eb7beae5b2 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/566c6f33a1440d5c5f246af9170f5e7f4945ed8eefc29115ab1496eb7beae5b2/mounts/shm, flags: 0x2: no such file or directory"
Oct 14 02:51:48 minikube dockerd[2025]: time="2019-10-14T02:51:48.343263867Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/597f0c02cf476c9c1f0d52b6fa04e24ea26b62f9ba880e08f2d2cefe8591f1f1/shim.sock" debug=false pid=18046
Oct 14 02:52:18 minikube dockerd[2025]: time="2019-10-14T02:52:18.807465304Z" level=info msg="shim reaped" id=597f0c02cf476c9c1f0d52b6fa04e24ea26b62f9ba880e08f2d2cefe8591f1f1
Oct 14 02:52:18 minikube dockerd[2025]: time="2019-10-14T02:52:18.817175050Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 14 02:52:18 minikube dockerd[2025]: time="2019-10-14T02:52:18.817293492Z" level=warning msg="597f0c02cf476c9c1f0d52b6fa04e24ea26b62f9ba880e08f2d2cefe8591f1f1 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/597f0c02cf476c9c1f0d52b6fa04e24ea26b62f9ba880e08f2d2cefe8591f1f1/mounts/shm, flags: 0x2: no such file or directory"
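The dockerd entries above are the Docker daemon journal from inside the minikube VM; for reference, one way to re-collect just that slice (a sketch only — it assumes the guest exposes Docker as a systemd unit named docker and that minikube ssh accepts a trailing command):

minikube ssh -- "sudo journalctl -u docker --no-pager | tail -n 200"   # dockerd journal from inside the minikube VM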
minikube logs command:
==> container status <==
CONTAINER       IMAGE           CREATED          STATE     NAME                      ATTEMPT   POD ID
597f0c02cf476   4689081edb103   50 seconds ago   Exited    storage-provisioner       10        dc282a94b135c
81f2daf254f38   06a629a7e51cd   16 minutes ago   Running   kube-controller-manager   7         7128ff2c07816
6168c2e83555a   301ddc62b80b1   18 minutes ago   Running   kube-scheduler            5         f6e53078adc6b
1b1e298ebd29c   06a629a7e51cd   26 minutes ago   Exited    kube-controller-manager   6         7128ff2c07816
df9d847c5f0b1   301ddc62b80b1   28 minutes ago   Exited    kube-scheduler            4         f6e53078adc6b
e7bb64d5019ba   b305571ca60a5   36 minutes ago   Running   kube-apiserver            1         349638bddc15a
dc47eb5a410ac   bd12a212f9dcb   37 minutes ago   Running   kube-addon-manager        0         cc79d01079aa8
83ab903f76d57   b2756210eeabf   37 minutes ago   Running   etcd                      0         c7682b64a4c1b
1399d87f9bc57   b305571ca60a5   37 minutes ago   Exited    kube-apiserver            0         349638bddc15a
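The storage-provisioner container in the table above has already exited 10 times and is stuck in CrashLoopBackOff (see the kubelet log further down). A quick way to look at why it keeps dying, sketched here on the assumption that kubectl is pointed at this minikube cluster:

kubectl -n kube-system get pods                              # confirm the CrashLoopBackOff state
kubectl -n kube-system describe pod storage-provisioner      # events and last container state
kubectl -n kube-system logs storage-provisioner --previous   # output of the last failed run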
==> dmesg <==
[Oct14 02:07] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[  +0.118020] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
[Oct14 02:08] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
[  +0.023608] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[  +0.021933] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
[  +0.128260] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +1.206808] systemd-fstab-generator[1089]: Ignoring "noauto" for root device
[  +0.004100] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[  +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  +0.852559] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[  +2.479243] vboxguest: loading out-of-tree module taints kernel.
[  +0.005195] vboxguest: PCI device not found, probably running on physical hardware.
[  +5.030504] systemd-fstab-generator[1934]: Ignoring "noauto" for root device
[Oct14 02:10] NFSD: Unable to end grace period: -110
[Oct14 02:13] systemd-fstab-generator[2595]: Ignoring "noauto" for root device
[Oct14 02:14] systemd-fstab-generator[3607]: Ignoring "noauto" for root device
[ +32.126722] kauditd_printk_skb: 68 callbacks suppressed
[Oct14 02:23] hrtimer: interrupt took 2019149 ns
==> kernel <==
02:52:37 up 44 min, 0 users, load average: 0.62, 0.94, 1.32
Linux minikube 4.15.0 #1 SMP Wed Sep 18 07:44:58 PDT 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2018.05.3"
==> kube-addon-manager [dc47eb5a410a] <==
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-14T02:51:56+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-14T02:51:58+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-14T02:52:00+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-14T02:52:04+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-14T02:52:06+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-14T02:52:08+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-14T02:52:10+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-14T02:52:13+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-14T02:52:16+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-14T02:52:18+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-14T02:52:20+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-14T02:52:23+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-14T02:52:25+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-14T02:52:29+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-14T02:52:30+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-14T02:52:34+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-14T02:52:36+00:00 ==
==> kube-apiserver [1399d87f9bc5] <== Trace[496781739]: [4.70108118s] [4.70108118s] END E1014 02:16:27.559618 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1014 02:16:27.559974 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} I1014 02:16:27.560080 1 trace.go:116] Trace[1740610602]: "List etcd3" key:/limitranges/kube-system,resourceVersion:,limit:0,continue: (started: 2019-10-14 02:16:15.860015705 +0000 UTC m=+84.028846681) (total time: 11.700048262s): Trace[1740610602]: [11.700048262s] [11.700048262s] END E1014 02:16:27.560091 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} I1014 02:16:27.561562 1 trace.go:116] Trace[1633673537]: "Create" url:/api/v1/namespaces/kube-system/pods (started: 2019-10-14 02:16:15.859265833 +0000 UTC m=+84.028096843) (total time: 11.702274737s): Trace[1633673537]: [11.702274737s] [11.702086104s] END I1014 02:16:27.561688 1 trace.go:116] Trace[403938364]: "Create" url:/api/v1/namespaces/kube-system/pods (started: 2019-10-14 02:16:22.857489997 +0000 UTC m=+91.026320991) (total time: 4.70418657s): Trace[403938364]: [4.70418657s] [4.704006554s] END E1014 02:16:27.561740 1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: unexpected EOF I1014 02:16:27.561811 1 trace.go:116] Trace[1125035297]: "Create" url:/api/v1/namespaces/kube-system/pods (started: 2019-10-14 02:16:21.856185278 +0000 UTC m=+90.025016239) (total time: 5.705614655s): Trace[1125035297]: [5.705614655s] [5.705500328s] END I1014 02:16:27.561974 1 trace.go:116] Trace[1339814465]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler (started: 2019-10-14 02:16:22.355923929 +0000 UTC m=+90.524754914) (total time: 5.206038481s): Trace[1339814465]: [5.206038481s] [5.205951746s] END I1014 02:16:27.563084 1 trace.go:116] Trace[274673693]: "List" url:/api/v1/namespaces/kube-system/limitranges (started: 2019-10-14 02:16:22.858472485 +0000 UTC m=+91.027303440) (total time: 4.704593245s): Trace[274673693]: [4.704593245s] [4.70456682s] END I1014 02:16:27.564147 1 trace.go:116] Trace[490956110]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector (started: 2019-10-14 02:16:15.633491954 +0000 UTC m=+83.802323014) (total time: 11.930639619s): Trace[490956110]: [11.930639619s] [11.93055557s] END I1014 02:16:27.565237 1 trace.go:116] Trace[1855547820]: "List" url:/api/v1/namespaces/kube-system/limitranges (started: 2019-10-14 02:16:15.859967258 +0000 UTC m=+84.028798225) (total time: 11.70525344s): Trace[1855547820]: [11.70525344s] [11.705213768s] END E1014 02:16:27.566445 1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.567448 1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: dial tcp [::1]:8443: connect: 
connection refused E1014 02:16:27.568548 1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.569674 1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.570780 1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.571893 1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.573005 1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.574494 1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.575243 1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.576321 1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.577417 1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.578717 1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: dial tcp [::1]:8443: connect: connection refused I1014 02:16:27.579693 1 log.go:172] http2: server connection error from 127.0.0.1:34312: connection error: PROTOCOL_ERROR I1014 02:16:27.579722 1 log.go:172] http2: server connection error from 127.0.0.1:34312: connection error: PROTOCOL_ERROR E1014 02:16:27.579943 1 storage_rbac.go:255] unable to reconcile 
clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.581021 1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.581616 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} I1014 02:16:27.581979 1 log.go:172] http: TLS handshake error from 127.0.0.1:34310: EOF E1014 02:16:27.582079 1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.584276 1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.585380 1 storage_rbac.go:255] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.586563 1 storage_rbac.go:284] unable to reconcile role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.587769 1 storage_rbac.go:284] unable to reconcile role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.589001 1 storage_rbac.go:284] unable to reconcile role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.590206 1 storage_rbac.go:284] unable to reconcile role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.591303 1 storage_rbac.go:284] unable to reconcile role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.592393 1 storage_rbac.go:284] unable to reconcile role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system: Get 
https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.593486 1 storage_rbac.go:284] unable to reconcile role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.594581 1 storage_rbac.go:316] unable to reconcile rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.595806 1 storage_rbac.go:316] unable to reconcile rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.596876 1 storage_rbac.go:316] unable to reconcile rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.597968 1 storage_rbac.go:316] unable to reconcile rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.599144 1 storage_rbac.go:316] unable to reconcile rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.600190 1 storage_rbac.go:316] unable to reconcile rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: dial tcp [::1]:8443: connect: connection refused E1014 02:16:27.601429 1 storage_rbac.go:316] unable to reconcile rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public: Get https://[::1]:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: dial tcp [::1]:8443: connect: connection refused E1014 02:16:28.058609 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} I1014 02:16:28.058793 1 trace.go:116] Trace[342050563]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/default (started: 2019-10-14 02:16:22.838204826 +0000 UTC m=+91.007035814) (total time: 5.220568813s): Trace[342050563]: [5.220568813s] [5.220530453s] END E1014 02:16:28.787895 1 controller.go:185] StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.138, ResourceVersion: 0, AdditionalErrorMsg:
==> kube-apiserver [e7bb64d5019b] <== I1014 02:45:11.180980 1 trace.go:116] Trace[1660931088]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler (started: 2019-10-14 02:45:02.189619028 +0000 UTC m=+1711.547581457) (total time: 8.99132673s): Trace[1660931088]: [8.991275496s] [8.991168169s] About to write a response I1014 02:45:11.192566 1 trace.go:116] Trace[1411217568]: "Get" url:/api/v1/namespaces/default (started: 2019-10-14 02:45:09.432343657 +0000 UTC m=+1718.790306117) (total time: 1.760181333s): Trace[1411217568]: [1.760113052s] [1.760040364s] About to write a response I1014 02:45:11.199672 1 trace.go:116] Trace[738675845]: "GuaranteedUpdate etcd3" type:core.Event (started: 2019-10-14 02:45:03.634130433 +0000 UTC m=+1712.992092878) (total time: 7.565481757s): Trace[738675845]: [7.545284497s] [7.545284497s] initial value restored I1014 02:45:11.200122 1 trace.go:116] Trace[1556526339]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-apiserver-minikube.15cd61cbf4b302fe (started: 2019-10-14 02:45:03.633757244 +0000 UTC m=+1712.991719676) (total time: 7.566320229s): Trace[1556526339]: [7.545660807s] [7.545349114s] About to apply patch I1014 02:45:12.089357 1 trace.go:116] Trace[721935290]: "GuaranteedUpdate etcd3" type:v1.Endpoints (started: 2019-10-14 02:45:11.210401531 +0000 UTC m=+1720.568363965) (total time: 878.919565ms): Trace[721935290]: [878.881882ms] [864.016137ms] Transaction committed I1014 02:49:11.509257 1 trace.go:116] Trace[1379536403]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager (started: 2019-10-14 02:49:04.867549032 +0000 UTC m=+1954.225511516) (total time: 6.641672916s): Trace[1379536403]: [6.641634412s] [6.641469181s] About to write a response I1014 02:49:11.526584 1 trace.go:116] Trace[523006779]: "Get" url:/api/v1/namespaces/default (started: 2019-10-14 02:49:09.439057984 +0000 UTC m=+1958.797020445) (total time: 2.087484962s): Trace[523006779]: [2.087403292s] [2.087354604s] About to write a response I1014 02:49:11.530310 1 trace.go:116] Trace[1045938416]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler (started: 2019-10-14 02:49:05.341544276 +0000 UTC m=+1954.699506722) (total time: 6.188704881s): Trace[1045938416]: [6.188619344s] [6.188485934s] About to write a response I1014 02:49:30.597423 1 trace.go:116] Trace[1113941935]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler (started: 2019-10-14 02:49:29.761357149 +0000 UTC m=+1979.119319585) (total time: 836.026733ms): Trace[1113941935]: [835.96552ms] [835.872784ms] About to write a response I1014 02:49:55.067911 1 trace.go:116] Trace[1467352693]: "Create" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings (started: 2019-10-14 02:49:54.107706428 +0000 UTC m=+2003.465668859) (total time: 960.152785ms): Trace[1467352693]: [960.152785ms] [959.650943ms] END I1014 02:50:22.054393 1 trace.go:116] Trace[467521597]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager (started: 2019-10-14 02:50:15.931149373 +0000 UTC m=+2025.289111776) (total time: 6.123162986s): Trace[467521597]: [6.123119932s] [6.123054601s] About to write a response I1014 02:50:22.055189 1 trace.go:116] Trace[2030738938]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler (started: 2019-10-14 02:50:15.213073953 +0000 UTC m=+2024.571036439) (total time: 6.842093934s): Trace[2030738938]: [6.842057719s] [6.841861556s] About to write a response I1014 02:50:22.055617 1 trace.go:116] Trace[742560437]: "Get" 
url:/api/v1/namespaces/default (started: 2019-10-14 02:50:19.44069664 +0000 UTC m=+2028.798659077) (total time: 2.614899895s): Trace[742560437]: [2.614492636s] [2.614439473s] About to write a response I1014 02:50:22.056417 1 trace.go:116] Trace[905869683]: "List etcd3" key:/minions,resourceVersion:,limit:0,continue: (started: 2019-10-14 02:50:17.968955529 +0000 UTC m=+2027.326918000) (total time: 4.087442998s): Trace[905869683]: [4.087442998s] [4.087442998s] END I1014 02:50:22.056513 1 trace.go:116] Trace[1779429733]: "List" url:/api/v1/nodes (started: 2019-10-14 02:50:17.968872915 +0000 UTC m=+2027.326835381) (total time: 4.08762654s): Trace[1779429733]: [4.087567262s] [4.087494139s] Listing from storage done I1014 02:50:22.232146 1 trace.go:116] Trace[1742676733]: "GuaranteedUpdate etcd3" type:core.Event (started: 2019-10-14 02:50:21.179203061 +0000 UTC m=+2030.537165499) (total time: 1.052914578s): Trace[1742676733]: [876.695474ms] [876.695474ms] initial value restored I1014 02:50:22.232435 1 trace.go:116] Trace[119096774]: "Patch" url:/api/v1/namespaces/kube-system/events/etcd-minikube.15cd61c212665fbd (started: 2019-10-14 02:50:21.179112522 +0000 UTC m=+2030.537074941) (total time: 1.053281122s): Trace[119096774]: [876.788158ms] [876.752299ms] About to apply patch Trace[119096774]: [1.05323188s] [176.312158ms] Object stored in database I1014 02:50:44.991869 1 trace.go:116] Trace[890352664]: "Create" url:/apis/storage.k8s.io/v1/storageclasses (started: 2019-10-14 02:50:44.267516601 +0000 UTC m=+2053.625479051) (total time: 724.282036ms): Trace[890352664]: [724.282036ms] [724.08914ms] END I1014 02:50:44.993032 1 trace.go:116] Trace[1091426874]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler (started: 2019-10-14 02:50:44.48552661 +0000 UTC m=+2053.843489031) (total time: 507.464412ms): Trace[1091426874]: [507.372227ms] [507.278097ms] About to write a response I1014 02:51:36.774804 1 trace.go:116] Trace[1808296975]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler (started: 2019-10-14 02:51:29.2339716 +0000 UTC m=+2098.591934029) (total time: 7.540782423s): Trace[1808296975]: [7.540714161s] [7.540622885s] About to write a response I1014 02:51:36.776124 1 trace.go:116] Trace[1488671393]: "Get" url:/api/v1/namespaces/default (started: 2019-10-14 02:51:29.441987181 +0000 UTC m=+2098.799949594) (total time: 7.334096817s): Trace[1488671393]: [7.334007384s] [7.3339686s] About to write a response I1014 02:51:36.779351 1 trace.go:116] Trace[1213439390]: "GuaranteedUpdate etcd3" type:coordination.Lease (started: 2019-10-14 02:51:32.102947726 +0000 UTC m=+2101.460910206) (total time: 4.676365801s): Trace[1213439390]: [4.67633817s] [4.676011172s] Transaction committed I1014 02:51:36.779450 1 trace.go:116] Trace[641407855]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube (started: 2019-10-14 02:51:32.102595427 +0000 UTC m=+2101.460557935) (total time: 4.676833573s): Trace[641407855]: [4.676776063s] [4.676490735s] Object stored in database I1014 02:51:36.780128 1 trace.go:116] Trace[960542406]: "List etcd3" key:/jobs,resourceVersion:,limit:500,continue: (started: 2019-10-14 02:51:32.298713316 +0000 UTC m=+2101.656675776) (total time: 4.481332974s): Trace[960542406]: [4.481332974s] [4.481332974s] END I1014 02:51:36.780499 1 trace.go:116] Trace[1734816333]: "List" url:/apis/batch/v1/jobs (started: 2019-10-14 02:51:32.298531827 +0000 UTC m=+2101.656494282) (total time: 4.48194387s): Trace[1734816333]: [4.481837884s] [4.481667111s] 
Listing from storage done I1014 02:51:36.783241 1 trace.go:116] Trace[901420411]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager (started: 2019-10-14 02:51:30.427928478 +0000 UTC m=+2099.785890913) (total time: 6.35527359s): Trace[901420411]: [6.355199582s] [6.355095511s] About to write a response I1014 02:51:36.925736 1 trace.go:116] Trace[462761015]: "Create" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings (started: 2019-10-14 02:51:36.053542332 +0000 UTC m=+2105.411504741) (total time: 872.164694ms): Trace[462761015]: [872.164694ms] [871.998382ms] END I1014 02:51:36.934331 1 trace.go:116] Trace[1968946968]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2019-10-14 02:51:33.638807513 +0000 UTC m=+2102.996769971) (total time: 3.29548668s): Trace[1968946968]: [3.139925833s] [3.139925833s] initial value restored I1014 02:51:36.934711 1 trace.go:116] Trace[1249709196]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-apiserver-minikube.15cd61cbf4b302fe (started: 2019-10-14 02:51:33.638523813 +0000 UTC m=+2102.996486264) (total time: 3.29616302s): Trace[1249709196]: [3.140212121s] [3.140022516s] About to apply patch Trace[1249709196]: [3.295850542s] [155.419257ms] Object stored in database
==> kube-controller-manager [1b1e298ebd29] <== I1014 02:26:50.813673 1 controllermanager.go:534] Started "job" I1014 02:26:50.813723 1 job_controller.go:143] Starting job controller I1014 02:26:50.814081 1 shared_informer.go:197] Waiting for caches to sync for job I1014 02:26:51.036013 1 controllermanager.go:534] Started "horizontalpodautoscaling" I1014 02:26:51.036074 1 horizontal.go:156] Starting HPA controller I1014 02:26:51.036317 1 shared_informer.go:197] Waiting for caches to sync for HPA I1014 02:26:51.183550 1 controllermanager.go:534] Started "ttl" I1014 02:26:51.183624 1 ttl_controller.go:116] Starting TTL controller I1014 02:26:51.183641 1 shared_informer.go:197] Waiting for caches to sync for TTL I1014 02:26:51.335476 1 node_lifecycle_controller.go:294] Sending events to api server. I1014 02:26:51.335610 1 node_lifecycle_controller.go:327] Controller is using taint based evictions. I1014 02:26:51.335675 1 taint_manager.go:162] Sending events to api server. I1014 02:26:51.335755 1 node_lifecycle_controller.go:421] Controller will reconcile labels. I1014 02:26:51.335855 1 node_lifecycle_controller.go:434] Controller will taint node by condition. I1014 02:26:51.335914 1 controllermanager.go:534] Started "nodelifecycle" W1014 02:26:51.335929 1 controllermanager.go:513] "endpointslice" is disabled I1014 02:26:51.336090 1 node_lifecycle_controller.go:458] Starting node controller I1014 02:26:51.336133 1 shared_informer.go:197] Waiting for caches to sync for taint I1014 02:26:51.484886 1 controllermanager.go:534] Started "serviceaccount" I1014 02:26:51.485397 1 shared_informer.go:197] Waiting for caches to sync for resource quota I1014 02:26:51.485592 1 serviceaccounts_controller.go:116] Starting service account controller I1014 02:26:51.488301 1 shared_informer.go:197] Waiting for caches to sync for service account I1014 02:26:51.537906 1 shared_informer.go:204] Caches are synced for PVC protection I1014 02:26:51.538877 1 shared_informer.go:204] Caches are synced for HPA W1014 02:26:51.549689 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I1014 02:26:51.560867 1 shared_informer.go:204] Caches are synced for PV protection I1014 02:26:51.567327 1 shared_informer.go:204] Caches are synced for endpoint I1014 02:26:51.580171 1 shared_informer.go:204] Caches are synced for GC I1014 02:26:51.583765 1 shared_informer.go:204] Caches are synced for TTL I1014 02:26:51.587398 1 shared_informer.go:204] Caches are synced for attach detach I1014 02:26:51.588954 1 shared_informer.go:204] Caches are synced for service account I1014 02:26:51.594168 1 shared_informer.go:204] Caches are synced for namespace I1014 02:26:51.608598 1 shared_informer.go:204] Caches are synced for ReplicaSet I1014 02:26:51.614493 1 shared_informer.go:204] Caches are synced for job I1014 02:26:51.628008 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator I1014 02:26:51.635859 1 shared_informer.go:204] Caches are synced for ReplicationController I1014 02:26:51.636652 1 shared_informer.go:204] Caches are synced for taint I1014 02:26:51.636948 1 taint_manager.go:186] Starting NoExecuteTaintManager I1014 02:26:51.637016 1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone: W1014 02:26:51.637454 1 node_lifecycle_controller.go:903] Missing timestamp for Node minikube. Assuming now as a timestamp. 
I1014 02:26:51.637761 1 node_lifecycle_controller.go:1108] Controller detected that zone is now in state Normal. I1014 02:26:51.637090 1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"b5b07e56-5580-44ff-abb8-0992803e1261", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller I1014 02:26:51.743223 1 shared_informer.go:204] Caches are synced for bootstrap_signer I1014 02:26:51.810672 1 shared_informer.go:204] Caches are synced for persistent volume I1014 02:26:51.885796 1 shared_informer.go:204] Caches are synced for certificate I1014 02:26:51.888249 1 shared_informer.go:204] Caches are synced for expand I1014 02:26:51.909124 1 shared_informer.go:204] Caches are synced for certificate I1014 02:26:51.947523 1 shared_informer.go:204] Caches are synced for daemon sets I1014 02:26:52.030721 1 shared_informer.go:204] Caches are synced for stateful set I1014 02:26:52.075652 1 shared_informer.go:204] Caches are synced for disruption I1014 02:26:52.075719 1 disruption.go:341] Sending events to api server. I1014 02:26:52.082294 1 shared_informer.go:204] Caches are synced for resource quota I1014 02:26:52.085937 1 shared_informer.go:204] Caches are synced for resource quota I1014 02:26:52.096098 1 shared_informer.go:204] Caches are synced for garbage collector I1014 02:26:52.096186 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I1014 02:26:52.135669 1 shared_informer.go:204] Caches are synced for deployment I1014 02:26:52.256749 1 shared_informer.go:197] Waiting for caches to sync for garbage collector I1014 02:26:52.357268 1 shared_informer.go:204] Caches are synced for garbage collector I1014 02:34:06.985606 1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: failed to tryAcquireOrRenew context deadline exceeded F1014 02:34:06.985696 1 controllermanager.go:279] leaderelection lost
==> kube-controller-manager [81f2daf254f3] <== I1014 02:36:47.015722 1 shared_informer.go:197] Waiting for caches to sync for bootstrap_signer I1014 02:36:47.167408 1 controllermanager.go:534] Started "persistentvolume-binder" I1014 02:36:47.167526 1 pv_controller_base.go:282] Starting persistent volume controller I1014 02:36:47.168451 1 shared_informer.go:197] Waiting for caches to sync for persistent volume I1014 02:36:47.873044 1 garbagecollector.go:130] Starting garbage collector controller I1014 02:36:47.873093 1 shared_informer.go:197] Waiting for caches to sync for garbage collector I1014 02:36:47.873141 1 graph_builder.go:282] GraphBuilder running I1014 02:36:47.873452 1 controllermanager.go:534] Started "garbagecollector" I1014 02:36:47.889343 1 controllermanager.go:534] Started "replicaset" I1014 02:36:47.889646 1 replica_set.go:182] Starting replicaset controller I1014 02:36:47.889684 1 shared_informer.go:197] Waiting for caches to sync for ReplicaSet E1014 02:36:47.903206 1 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail W1014 02:36:47.903243 1 controllermanager.go:526] Skipping "service" I1014 02:36:47.916211 1 controllermanager.go:534] Started "replicationcontroller" I1014 02:36:47.916292 1 replica_set.go:182] Starting replicationcontroller controller I1014 02:36:47.916549 1 shared_informer.go:197] Waiting for caches to sync for ReplicationController I1014 02:36:48.066182 1 controllermanager.go:534] Started "job" I1014 02:36:48.066622 1 job_controller.go:143] Starting job controller I1014 02:36:48.066654 1 shared_informer.go:197] Waiting for caches to sync for job I1014 02:36:48.666517 1 controllermanager.go:534] Started "horizontalpodautoscaling" W1014 02:36:48.666731 1 controllermanager.go:526] Skipping "ttl-after-finished" I1014 02:36:48.667456 1 shared_informer.go:197] Waiting for caches to sync for resource quota I1014 02:36:48.668091 1 horizontal.go:156] Starting HPA controller I1014 02:36:48.668177 1 shared_informer.go:197] Waiting for caches to sync for HPA W1014 02:36:48.713812 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I1014 02:36:48.717375 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator I1014 02:36:48.721494 1 shared_informer.go:204] Caches are synced for bootstrap_signer I1014 02:36:48.722041 1 shared_informer.go:204] Caches are synced for PVC protection I1014 02:36:48.734548 1 shared_informer.go:204] Caches are synced for GC I1014 02:36:48.742720 1 shared_informer.go:204] Caches are synced for namespace I1014 02:36:48.751896 1 shared_informer.go:204] Caches are synced for taint I1014 02:36:48.752108 1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone: W1014 02:36:48.752237 1 node_lifecycle_controller.go:903] Missing timestamp for Node minikube. Assuming now as a timestamp. I1014 02:36:48.752348 1 node_lifecycle_controller.go:1108] Controller detected that zone is now in state Normal. 
I1014 02:36:48.752482 1 taint_manager.go:186] Starting NoExecuteTaintManager I1014 02:36:48.752932 1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"b5b07e56-5580-44ff-abb8-0992803e1261", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller I1014 02:36:48.767019 1 shared_informer.go:204] Caches are synced for service account I1014 02:36:48.767479 1 shared_informer.go:204] Caches are synced for job I1014 02:36:48.770375 1 shared_informer.go:204] Caches are synced for persistent volume I1014 02:36:48.770753 1 shared_informer.go:204] Caches are synced for daemon sets I1014 02:36:48.770919 1 shared_informer.go:204] Caches are synced for HPA I1014 02:36:48.771005 1 shared_informer.go:204] Caches are synced for expand I1014 02:36:48.771188 1 shared_informer.go:204] Caches are synced for TTL I1014 02:36:48.774908 1 shared_informer.go:204] Caches are synced for PV protection I1014 02:36:48.783394 1 shared_informer.go:204] Caches are synced for stateful set I1014 02:36:48.789958 1 shared_informer.go:204] Caches are synced for ReplicaSet I1014 02:36:48.816416 1 shared_informer.go:204] Caches are synced for deployment I1014 02:36:48.816416 1 shared_informer.go:204] Caches are synced for certificate I1014 02:36:48.866386 1 shared_informer.go:204] Caches are synced for certificate I1014 02:36:48.959612 1 shared_informer.go:204] Caches are synced for disruption I1014 02:36:48.959665 1 disruption.go:341] Sending events to api server. I1014 02:36:49.016963 1 shared_informer.go:204] Caches are synced for ReplicationController I1014 02:36:49.065324 1 shared_informer.go:204] Caches are synced for attach detach I1014 02:36:49.267778 1 shared_informer.go:204] Caches are synced for resource quota I1014 02:36:49.273571 1 shared_informer.go:204] Caches are synced for garbage collector I1014 02:36:49.273790 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I1014 02:36:49.286256 1 shared_informer.go:204] Caches are synced for endpoint I1014 02:36:49.306714 1 shared_informer.go:204] Caches are synced for resource quota I1014 02:36:49.369235 1 shared_informer.go:197] Waiting for caches to sync for garbage collector I1014 02:36:49.469680 1 shared_informer.go:204] Caches are synced for garbage collector
==> kube-scheduler [6168c2e83555] <==
I1014 02:34:18.514910 1 serving.go:319] Generated self-signed cert in-memory
I1014 02:34:19.229910 1 server.go:143] Version: v1.16.0
I1014 02:34:19.230148 1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W1014 02:34:19.231579 1 authorization.go:47] Authorization is disabled
W1014 02:34:19.231607 1 authentication.go:79] Authentication is disabled
I1014 02:34:19.231618 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1014 02:34:19.232224 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
I1014 02:34:20.234420 1 leaderelection.go:241] attempting to acquire leader lease kube-system/kube-scheduler...
E1014 02:34:47.020616 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-scheduler: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1014 02:34:50.299618 1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler
==> kube-scheduler [df9d847c5f0b] <==
I1014 02:24:00.169641 1 serving.go:319] Generated self-signed cert in-memory
I1014 02:24:01.060229 1 server.go:143] Version: v1.16.0
I1014 02:24:01.060641 1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W1014 02:24:01.063649 1 authorization.go:47] Authorization is disabled
W1014 02:24:01.063688 1 authentication.go:79] Authentication is disabled
I1014 02:24:01.063703 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1014 02:24:01.064556 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
I1014 02:24:02.066654 1 leaderelection.go:241] attempting to acquire leader lease kube-system/kube-scheduler...
E1014 02:24:11.978490 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-scheduler: etcdserver: request timed out
I1014 02:24:22.259917 1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler
I1014 02:34:05.925929 1 leaderelection.go:287] failed to renew lease kube-system/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded
F1014 02:34:05.925965 1 server.go:264] leaderelection lost
E1014 02:34:05.928120 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-scheduler: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
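Both this scheduler instance and the controller-manager in the [1b1e298ebd29] section above fail the same way: calls to the apiserver time out ("context deadline exceeded", "etcdserver: request timed out"), the leader lease cannot be renewed, and the component exits with a fatal "leaderelection lost", which is why new container instances keep appearing in the container status table. A quick way to surface all of those failures from a fresh dump (sketch only; a plain grep over the minikube logs output):

minikube logs | grep -iE "leaderelection|context deadline exceeded|request timed out"   # lease-renewal and timeout failures across all components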
==> kubelet <== -- Logs begin at Mon 2019-10-14 02:05:48 UTC, end at Mon 2019-10-14 02:52:37 UTC. -- Oct 14 02:38:13 minikube kubelet[3661]: E1014 02:38:13.857026 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)" Oct 14 02:38:26 minikube kubelet[3661]: E1014 02:38:26.856643 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)" Oct 14 02:38:38 minikube kubelet[3661]: E1014 02:38:38.855989 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)" Oct 14 02:38:52 minikube kubelet[3661]: E1014 02:38:52.856895 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)" Oct 14 02:39:06 minikube kubelet[3661]: E1014 02:39:06.856091 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)" Oct 14 02:39:20 minikube kubelet[3661]: E1014 02:39:20.856315 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)" Oct 14 02:39:31 minikube kubelet[3661]: E1014 02:39:31.856366 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)" Oct 14 02:39:44 minikube kubelet[3661]: E1014 02:39:44.856682 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner 
pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)" Oct 14 02:39:55 minikube kubelet[3661]: E1014 02:39:55.856697 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)" Oct 14 02:40:08 minikube kubelet[3661]: E1014 02:40:08.855765 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)" Oct 14 02:40:21 minikube kubelet[3661]: E1014 02:40:21.857647 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)" Oct 14 02:41:05 minikube kubelet[3661]: E1014 02:41:05.183283 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)" Oct 14 02:41:19 minikube kubelet[3661]: E1014 02:41:19.856718 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)" Oct 14 02:41:30 minikube kubelet[3661]: E1014 02:41:30.855795 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)" Oct 14 02:41:43 minikube kubelet[3661]: E1014 02:41:43.856701 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)" Oct 14 02:41:55 minikube kubelet[3661]: E1014 02:41:55.858130 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner 
pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"
Oct 14 02:42:08 minikube kubelet[3661]: E1014 02:42:08.856811 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"
[... the same pod_workers.go:191 "CrashLoopBackOff: back-off 5m0s restarting failed container=storage-provisioner" entry repeats roughly every 11-15 seconds (with a few longer gaps) until 02:52:19 ...]
Oct 14 02:52:31 minikube kubelet[3661]: E1014 02:52:31.856733 3661 pod_workers.go:191] Error syncing pod 290ada63-f80d-4bc3-97ac-c4520ed3725b ("storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(290ada63-f80d-4bc3-97ac-c4520ed3725b)"
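The kubelet log above only shows that the storage-provisioner container is stuck in CrashLoopBackOff; the container's own log (next section) carries the actual failure reason. For anyone reproducing this, a few commands that may help collect the same information. This is a sketch assuming a default single-node minikube profile, where the pod is literally named storage-provisioner:

kubectl -n kube-system get pods                              # confirm which pods are crash-looping
kubectl -n kube-system describe pod storage-provisioner      # events, restart count, last state
kubectl -n kube-system logs storage-provisioner --previous   # log of the last failed run
minikube logs                                                # aggregated kubelet / apiserver / VM logs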
==> storage-provisioner [597f0c02cf47] <==
F1014 02:52:18.738439 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
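The fatal line above means the provisioner cannot reach the API server through the cluster Service IP: the request to https://10.96.0.1:443/version times out. 10.96.0.1 is normally the ClusterIP of the built-in kubernetes Service, which kube-proxy should translate to the apiserver's real address. A rough way to check this from inside the VM (assuming the default kube-proxy iptables mode):

minikube ssh
curl -k https://10.96.0.1:443/version                       # repeat the exact request that times out
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1   # kube-proxy NAT rule for the Service IP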
The operating system version: Linux 4.19.78-2-lts x86_64
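Since this run used the kvm2 driver, the timeout may also point at host-side networking rather than anything inside the guest. A hedged checklist (minikube-net is the libvirt network the kvm2 driver normally creates; firewalld may or may not be installed on this system):

sudo virsh net-list --all             # both "default" and "minikube-net" should be listed as active
sudo virsh net-dumpxml minikube-net   # inspect the NAT/bridge definition minikube is using
sudo systemctl status firewalld       # if present, host firewall rules can drop the VM's traffic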