kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Minikube crashes when starting #5431

Closed · c0debreaker closed this issue 5 years ago

c0debreaker commented 5 years ago

The exact command to reproduce the issue: minikube start --vm-driver=virtualbox

The full output of the command that failed:

šŸ˜„ minikube v1.4.0 on Darwin 10.13.6
šŸ‘ Upgrading from Kubernetes 1.10.0 to 1.16.0
šŸ’æ Downloading VM boot image ...

minikube-v1.4.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
minikube-v1.4.0.iso: 135.73 MiB / 135.73 MiB [-] 100.00% 6.00 MiB p/s 23s

šŸ’” Tip: Use 'minikube start -p ' to create a new cluster, or 'minikube delete' to delete this one.
šŸ”„ Starting existing virtualbox VM for "minikube" ...
āŒ› Waiting for the host to be provisioned ...
šŸ³ Preparing Kubernetes v1.16.0 on Docker 17.12.1-ce ...
šŸ’¾ Downloading kubelet v1.16.0
šŸ’¾ Downloading kubeadm v1.16.0
šŸšœ Pulling images ...
šŸ”„ Relaunching Kubernetes using kubeadm ...

šŸ’£ Error restarting cluster: addon phase: command failed: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml stdout: stderr: error execution phase addon/coredns: unable to fetch CoreDNS current installed version and ConfigMap.: multiple DNS addon deployments found: [{{ } {coredns kube-system /apis/apps/v1/namespaces/kube-system/deployments/coredns f8e0f295-da58-11e8-85cb-0800279ddd13 12696 2 2018-10-28 02:27:10 +0000 UTC 2019-09-22 04:52:44 +0000 UTC 0xc000779320 map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kube-dns kubernetes.io/name:CoreDNS] map[deployment.kubernetes.io/revision:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-dns","kubernetes.io/name":"CoreDNS"},"name":"coredns","namespace":"kube-system"},"spec":{"replicas":1,"selector":{"matchLabels":{"k8s-app":"kube-dns"}},"strategy":{"rollingUpdate":{"maxUnavailable":1},"type":"RollingUpdate"},"template":{"metadata":{"labels":{"k8s-app":"kube-dns"}},"spec":{"containers":[{"args":["-conf","/etc/coredns/Corefile"],"image":"k8s.gcr.io/coredns:1.2.2","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":5,"httpGet":{"path":"/health","port":8080,"scheme":"HTTP"},"initialDelaySeconds":60,"successThreshold":1,"timeoutSeconds":5},"name":"coredns","ports":[{"containerPort":53,"name":"dns","protocol":"UDP"},{"containerPort":53,"name":"dns-tcp","protocol":"TCP"},{"containerPort":9153,"name":"metrics","protocol":"TCP"}],"resources":{"limits":{"memory":"170Mi"},"requests":{"cpu":"100m","memory":"70Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"add":["NET_BIND_SERVICE"],"drop":["all"]},"readOnlyRootFilesystem":true},"volumeMounts":[{"mountPath":"/etc/coredns","name":"config-volume","readOnly":true}]}],"dnsPolicy":"Default","serviceAccountName":"coredns","
tolerations":[{"key":"CriticalAddonsOnly","operator":"Exists"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"}],"volumes":[{"configMap":{"items":[{"key":"Corefile","path":"Corefile"}],"name":"coredns"},"name":"config-volume"}]}}}} ] [] [foregroundDeletion] []} {0xc000779368 &LabelSelector{MatchLabels:map[string]string{k8s-app: kube-dns,},MatchExpressions:[]LabelSelectorRequirement{},} {{ 0 0001-01-01 00:00:00 +0000 UTC map[k8s-app:kube-dns] map[] [] [] []} {[{config-volume {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:coredns,},Items:[]KeyToPath{KeyToPath{Key:Corefile,Path:Corefile,Mode:nil,},},DefaultMode:420,Optional:nil,} nil nil nil nil nil nil nil nil nil}}] [] [{coredns k8s.gcr.io/coredns:1.2.2 [] [-conf /etc/coredns/Corefile] [{dns 0 53 UDP } {dns-tcp 0 53 TCP } {metrics 0 9153 TCP }] [] [] {map[memory:{{178257920 0} {} 170Mi BinarySI}] map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{73400320 0} {} 70Mi BinarySI}]} [{config-volume true /etc/coredns }] [] &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} nil nil nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:true,AllowPrivilegeEscalation:false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000779530 Default map[] coredns coredns false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [{CriticalAddonsOnly Exists } {node-role.kubernetes.io/master NoSchedule }] [] nil [] 
map[] []}} {RollingUpdate &RollingUpdateDeployment{MaxUnavailable:1,MaxSurge:25%,}} 0 0xc0007795c0 false 0xc0007795c4} {1 1 1 1 1 0 [{Available True 2018-10-28 02:27:10 +0000 UTC 2018-10-28 02:27:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2018-10-28 02:27:51 +0000 UTC 2018-10-28 02:27:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "coredns-c4cffd6dc" has successfully progressed.}] }} {{ } {kube-dns kube-system /apis/apps/v1/namespaces/kube-system/deployments/kube-dns f3ad7a52-da58-11e8-85cb-0800279ddd13 12450 1 2018-10-28 02:27:01 +0000 UTC map[k8s-app:kube-dns] map[deployment.kubernetes.io/revision:1] [] [] []} {0xc00077968c &LabelSelector{MatchLabels:map[string]string{k8s-app: kube-dns,},MatchExpressions:[]LabelSelectorRequirement{},} {{ 0 0001-01-01 00:00:00 +0000 UTC map[k8s-app:kube-dns] map[] [] [] []} {[{kube-dns-config {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:kube-dns,},Items:[]KeyToPath{},DefaultMode:420,Optional:*true,} nil nil nil nil nil nil nil nil nil}}] [] [{kubedns k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8 [] [--domain=cluster.local. 
--dns-port=10053 --config-dir=/kube-dns-config --v=2] [{dns-local 0 10053 UDP } {dns-tcp-local 0 10053 TCP } {metrics 0 10055 TCP }] [] [{PROMETHEUS_PORT 10055 nil}] {map[memory:{{178257920 0} {} 170Mi BinarySI}] map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{73400320 0} {} 70Mi BinarySI}]} [{kube-dns-config false /kube-dns-config }] [] &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck/kubedns,Port:{0 10054 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readiness,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil nil /dev/termination-log File IfNotPresent nil false false false} {dnsmasq k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8 [] [-v=2 -logtostderr -configDir=/etc/k8s/dns/dnsmasq-nanny -restartDnsmasq=true -- -k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] [{dns 0 53 UDP } {dns-tcp 0 53 TCP }] [] [] {map[] map[cpu:{{150 -3} {} 150m DecimalSI} memory:{{20971520 0} {} 20Mi BinarySI}]} [{kube-dns-config false /etc/k8s/dns/dnsmasq-nanny }] [] &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck/dnsmasq,Port:{0 10054 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} nil nil nil /dev/termination-log File IfNotPresent nil false false false} {sidecar k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8 [] [--v=2 --logtostderr --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV] [{metrics 0 10054 TCP }] [] [] {map[] 
map[cpu:{{10 -3} {} 10m DecimalSI} memory:{{20971520 0} {} 20Mi BinarySI}]} [] [] &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{0 10054 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0007799e0 Default map[] kube-dns kube-dns false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] &Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:beta.kubernetes.io/arch,Operator:In,Values:[amd64],},},MatchFields:[]NodeSelectorRequirement{},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,} default-scheduler [{CriticalAddonsOnly Exists } {node-role.kubernetes.io/master NoSchedule }] [] nil [] map[] []}} {RollingUpdate &RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:10%,}} 0 0xc000779a80 false 0xc000779a84} {1 1 1 0 0 1 [{Progressing True 2018-10-28 02:28:10 +0000 UTC 2018-10-28 02:27:08 +0000 UTC NewReplicaSetAvailable ReplicaSet "kube-dns-86f4d74b45" has successfully progressed.} {Available False 2018-10-28 05:27:27 +0000 UTC 2018-10-28 05:27:27 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}] }}] To see the stack trace of this error execute with --v=5 or higher : Process exited with status 1

šŸ˜æ Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
šŸ‘‰ https://github.com/kubernetes/minikube/issues/new/choose
āŒ Problems detected in kube-addon-manager [793c3c8de684]:
error: no objects passed to apply
error: no objects passed to apply
addon reconcile completed at 2018-10-28T05:19:11+0000 == exit:70
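For context, the "multiple DNS addon deployments found" failure above occurs because kubeadm's addon phase finds both a coredns and a kube-dns Deployment in kube-system: the VM was originally created on Kubernetes 1.10, which shipped kube-dns, and the 1.16 upgrade added coredns alongside it. A possible workaround (a sketch, not confirmed in this thread, assuming kubectl is pointed at the minikube context):

```shell
# Both deployments carry the label k8s-app=kube-dns (visible in the error
# dump above), so this should list the two conflicting DNS deployments.
kubectl -n kube-system get deployments -l k8s-app=kube-dns

# Option 1: remove the stale kube-dns deployment, then retry the upgrade.
kubectl -n kube-system delete deployment kube-dns
minikube start --vm-driver=virtualbox

# Option 2: discard the old VM and its state entirely and start fresh.
minikube delete
minikube start --vm-driver=virtualbox
```

Option 2 loses anything deployed in the old cluster, but avoids carrying 1.10-era addon state through a large version jump.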

The output of the minikube logs command:

==> Docker <==
-- Logs begin at Sun 2019-09-22 04:51:23 UTC, end at Sun 2019-09-22 05:20:26 UTC. --
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." module=containerd type=io.containerd.grpc.v1
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." module=containerd type=io.containerd.monitor.v1
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." module=containerd type=io.containerd.runtime.v1
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." module=containerd type=io.containerd.grpc.v1
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." module=containerd type=io.containerd.grpc.v1
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." module=containerd type=io.containerd.grpc.v1
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23Z" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd-debug.sock" module="containerd/debug"
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23Z" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd.sock" module="containerd/grpc"
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23Z" level=info msg="containerd successfully booted in 0.007839s" module=containerd
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23.415673944Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23.544029287Z" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23.544396955Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23.544450462Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23.544485973Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23.544518536Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23.544550774Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23.544586111Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23.545361575Z" level=info msg="Loading containers: start."
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23.980981562Z" level=info msg="Removing stale sandbox 7d1f07c0d264994883dc6d33c80b246495f45638308e76748156b60f0ab9980f (893937870a1c3f564c92d669f5663eba1c54a92c3e221f1a7cfb750c4645ae90)"
Sep 22 04:51:23 minikube dockerd[2419]: time="2019-09-22T04:51:23.991587248Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint a8bf9f6e69c9894f476e2ac28e4fef4c7d67d14e1ac635add65e822a69695198 91762a681ab382372682008e9afd9da64a64fa9e3bb1f76019b3b619b0eef8de], retrying...."
Sep 22 04:51:24 minikube dockerd[2419]: time="2019-09-22T04:51:24.086514481Z" level=info msg="Removing stale sandbox bae29a9ddd2d9d15e83a3fa64cc82de1b7a065b03d2a19f94dec77b821f19e4e (4dd84674e869ffac9a86e1f91ea027b13060d12aec3319870b74c59bc2023085)"
Sep 22 04:51:24 minikube dockerd[2419]: time="2019-09-22T04:51:24.087418910Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint a8bf9f6e69c9894f476e2ac28e4fef4c7d67d14e1ac635add65e822a69695198 fed29bfba086342914b641e7e1c9774816d9960014b9f5ad74e7a408fac23154], retrying...."
Sep 22 04:51:24 minikube dockerd[2419]: time="2019-09-22T04:51:24.116569254Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 22 04:51:24 minikube dockerd[2419]: time="2019-09-22T04:51:24.204908718Z" level=info msg="Loading containers: done."
Sep 22 04:51:24 minikube dockerd[2419]: time="2019-09-22T04:51:24.222665784Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Sep 22 04:51:24 minikube dockerd[2419]: time="2019-09-22T04:51:24.222753924Z" level=info msg="Docker daemon" commit=7390fc6 graphdriver(s)=overlay2 version=17.12.1-ce
Sep 22 04:51:24 minikube dockerd[2419]: time="2019-09-22T04:51:24.223309045Z" level=info msg="Daemon has completed initialization"
Sep 22 04:51:24 minikube dockerd[2419]: time="2019-09-22T04:51:24.229321722Z" level=info msg="API listen on [::]:2376"
Sep 22 04:51:24 minikube systemd[1]: Started Docker Application Container Engine.
Sep 22 04:51:24 minikube dockerd[2419]: time="2019-09-22T04:51:24.229294919Z" level=info msg="API listen on /var/run/docker.sock"
Sep 22 04:52:29 minikube dockerd[2419]: time="2019-09-22T04:52:29.219687819Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Sep 22 04:52:29 minikube dockerd[2419]: time="2019-09-22T04:52:29.233955305Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Sep 22 04:52:29 minikube dockerd[2419]: time="2019-09-22T04:52:29.302759963Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Sep 22 04:52:29 minikube dockerd[2419]: time="2019-09-22T04:52:29.313950265Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Sep 22 04:52:34 minikube dockerd[2419]: time="2019-09-22T04:52:34.435117539Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Sep 22 04:52:35 minikube dockerd[2419]: time="2019-09-22T04:52:35Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/1568a1baa9b533fe34ac95f26973ccc199759af06f2c27af0f3d6eae56b45bfe/shim.sock" debug=false module="containerd/tasks" pid=3285
Sep 22 04:52:35 minikube dockerd[2419]: time="2019-09-22T04:52:35Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/9d68c4e6129d7afe0293586578d646a22f134fcf54bce0906f065089eeeef7e3/shim.sock" debug=false module="containerd/tasks" pid=3293
Sep 22 04:52:35 minikube dockerd[2419]: time="2019-09-22T04:52:35Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/6053a41d4ee370ec48e596f916c1f2124ff8d1cef9ad2e3ef8fca0f6e3882627/shim.sock" debug=false module="containerd/tasks" pid=3303
Sep 22 04:52:35 minikube dockerd[2419]: time="2019-09-22T04:52:35Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/4e08de375ae9bd5d7c165c7c013705d58d5944574ae52d7e3bec36acabf2166f/shim.sock" debug=false module="containerd/tasks" pid=3337
Sep 22 04:52:35 minikube dockerd[2419]: time="2019-09-22T04:52:35Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/358b95236af99fc6a32d7bf7160ecbc33bb0b61693cbc39ae0c958ecb59cfc58/shim.sock" debug=false module="containerd/tasks" pid=3351
Sep 22 04:52:35 minikube dockerd[2419]: time="2019-09-22T04:52:35Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/db2e41d84eea98e6659e64a9af254f8153222cfe4dd645f1fc24bc46c0c3dc4d/shim.sock" debug=false module="containerd/tasks" pid=3462
Sep 22 04:52:35 minikube dockerd[2419]: time="2019-09-22T04:52:35Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/803cadd2fd4db12545086d22b753ce300487acb74bedf4d8c0bde6289faa7cf9/shim.sock" debug=false module="containerd/tasks" pid=3487
Sep 22 04:52:35 minikube dockerd[2419]: time="2019-09-22T04:52:35Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/431c86851c72743c96a553b1fc608da6d2cd532283de0cce499bd2f68645b04b/shim.sock" debug=false module="containerd/tasks" pid=3498
Sep 22 04:52:35 minikube dockerd[2419]: time="2019-09-22T04:52:35Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/a493120a86ca2b45674da8d02679f533f4857fd173193b92fa1a27f4b735eaa0/shim.sock" debug=false module="containerd/tasks" pid=3514
Sep 22 04:52:35 minikube dockerd[2419]: time="2019-09-22T04:52:35Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/e16057b3b1515ce1e05f2f84dffafa524812bfda0e8d43119771fd8d7a302a9a/shim.sock" debug=false module="containerd/tasks" pid=3543
Sep 22 04:52:43 minikube dockerd[2419]: time="2019-09-22T04:52:43Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/0a8e170b27e0eb9dc1bc634889cdeb1b7a4976e0d14929484e08807e0241fd0d/shim.sock" debug=false module="containerd/tasks" pid=3728
Sep 22 04:52:44 minikube dockerd[2419]: time="2019-09-22T04:52:44Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/b19a0343ee7c3b73f22a7d8860404ceb95ddabbd8b1488f60e2e01d0a4dca646/shim.sock" debug=false module="containerd/tasks" pid=3762
Sep 22 04:52:44 minikube dockerd[2419]: time="2019-09-22T04:52:44.818691226Z" level=error msg="Error streaming logs: EOF" container=db96d3ee3a82 method="(*Daemon).ContainerLogs" module=daemon
Sep 22 04:52:46 minikube dockerd[2419]: time="2019-09-22T04:52:46Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/f51993a61a0922800806b806dab42125ed3bcd6fb0801b8b6d12d7e6dd5a0997/shim.sock" debug=false module="containerd/tasks" pid=4023
Sep 22 04:52:46 minikube dockerd[2419]: time="2019-09-22T04:52:46Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/fe82317697638714ae568244af89b3b8e42305e20b0f5d2f3466b296993896ca/shim.sock" debug=false module="containerd/tasks" pid=4060
Sep 22 04:52:47 minikube dockerd[2419]: time="2019-09-22T04:52:47Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/13a5a7ade31627678f5b052f5a80f8abed718218877c80690a46cd573495bb48/shim.sock" debug=false module="containerd/tasks" pid=4131
Sep 22 04:52:47 minikube dockerd[2419]: time="2019-09-22T04:52:47Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/d6fec807d76eb7339c75a0ca061594ab28809a3b4fc97681194aa6f1dd8a24b5/shim.sock" debug=false module="containerd/tasks" pid=4140
Sep 22 04:52:47 minikube dockerd[2419]: time="2019-09-22T04:52:47Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/fab5ba9c1160361387423b0180989cef967a0e934b7b0957af8d997c440c4b15/shim.sock" debug=false module="containerd/tasks" pid=4145
Sep 22 04:52:47 minikube dockerd[2419]: time="2019-09-22T04:52:47Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/4c6d7b5af36972847d6903d6f450eca87d7aeb70dd53d0c02bdeb1b40f4d9a03/shim.sock" debug=false module="containerd/tasks" pid=4176
Sep 22 04:52:47 minikube dockerd[2419]: time="2019-09-22T04:52:47Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/2fff3f92cbcf1a116e9560409f0249d9a07d3db359ef0d549368bc92e135993e/shim.sock" debug=false module="containerd/tasks" pid=4230
Sep 22 04:52:47 minikube dockerd[2419]: time="2019-09-22T04:52:47Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/3b587e024589949ffb6177ffa585ca1050214dbc62bc2a81bf8af38f7f19ff15/shim.sock" debug=false module="containerd/tasks" pid=4400
Sep 22 04:52:48 minikube dockerd[2419]: time="2019-09-22T04:52:48Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/f8a5ae05eb9aad45dfb7de02b161e90be760dd15ac8789b1d3070b99d961fdd7/shim.sock" debug=false module="containerd/tasks" pid=4431
Sep 22 04:52:48 minikube dockerd[2419]: time="2019-09-22T04:52:48Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/2a476e72fd4b229b7057086c2aafcbf08720ccdaf530962d74e18a5c0020cd8a/shim.sock" debug=false module="containerd/tasks" pid=4467
Sep 22 04:52:48 minikube dockerd[2419]: time="2019-09-22T04:52:48Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/da456ca8b0305e82344d5b5238bbe69a3fcceef701e17a3b67e42d25817171d0/shim.sock" debug=false module="containerd/tasks" pid=4516
Sep 22 04:52:49 minikube dockerd[2419]: time="2019-09-22T04:52:49Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/9dd19c3e20af08755b1442209f91a122965628d38f0980071b6ccbeaccb9b94f/shim.sock" debug=false module="containerd/tasks" pid=4581

==> container status <==
CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT
9dd19c3e20af0 sha256:6f7f2dc7fab5d7e7f99dc4ac176683a981a9ff911d643b9f29ffa146838deda3 27 minutes ago Running sidecar 1
da456ca8b0305 sha256:c2ce1ffb51ed60c54057f53b8756231f5b4b792ce04113c6755339a1beb25943 27 minutes ago Running dnsmasq 1
2a476e72fd4b2 sha256:365ec60129c5426b4cf160257c06f6ad062c709e0576c8b3d9a5dcc488f5252d 27 minutes ago Running hello-minikube 1
f8a5ae05eb9aa sha256:0dab2435c100b32892e676b9709978617a5472390ac951f764c292950b902b1f 27 minutes ago Running kubernetes-dashboard 1
2fff3f92cbcf1 sha256:80cc5ea4b547abe174d7550b82825ace40769e977cde90495df3427b3a0f4e75 27 minutes ago Running kubedns 1
3b587e0245899 sha256:367cdc8433a4581c3c9d01501b92ecd79666d23c41232013a7b11e5f7acb7eed 27 minutes ago Running coredns 1
4c6d7b5af3697 sha256:4689081edb103a9e8174bf23a255bfbe0b2d9ed82edc907abab6989d1c60f02c 27 minutes ago Running storage-provisioner 1
b19a0343ee7c3 sha256:bfc21aadc7d3e20e34cec769d697f93543938e9151c653591861ec5f2429676b 27 minutes ago Running kube-proxy 0
db2e41d84eea9 sha256:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d 27 minutes ago Running kube-controller-manager 0
e16057b3b1515 sha256:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed 27 minutes ago Running etcd 0
a493120a86ca2 sha256:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a 27 minutes ago Running kube-scheduler 0
803cadd2fd4db sha256:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e 27 minutes ago Running kube-apiserver 0
431c86851c727 sha256:bd12a212f9dcbafe64323774c6b937dec3099d65f39a8d29896cf0d1d0c906cf 27 minutes ago Running kube-addon-manager 0
554dbef1ac38e k8s.gcr.io/echoserver@sha256:cb5c1bddd1b5665e1867a7fa1b5fa843a47ee433bbb75d4293888b71def53229 10 months ago Exited hello-minikube 0
bb8b880df7906 k8s.gcr.io/k8s-dns-sidecar-amd64@sha256:23df717980b4aa08d2da6c4cfa327f1b730d92ec9cf740959d2d5911830d82fb 10 months ago Exited sidecar 0
554a092a2f2c9 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64@sha256:93c827f018cf3322f1ff2aa80324a0306048b0a69bc274e423071fb0d2d29d8b 10 months ago Exited dnsmasq 0
07722a8e78d6d k8s.gcr.io/coredns@sha256:3e2be1cec87aca0b74b7668bbe8c02964a95a402e45ceb51b2252629d608d03a 10 months ago Exited coredns 0
9659c436f3e23 k8s.gcr.io/kubernetes-dashboard-amd64@sha256:1d2e1229a918f4bc38b5a3f9f5f11302b3e71f8397b492afac7f273a0008776a 10 months ago Exited kubernetes-dashboard 0
db96d3ee3a825 gcr.io/k8s-minikube/storage-provisioner@sha256:088daa9fcbccf04c3f415d77d5a6360d2803922190b675cb7fc88a9d2d91985a 10 months ago Exited storage-provisioner 0
b53b1405937bd k8s.gcr.io/k8s-dns-kube-dns-amd64@sha256:6d8e0da4fb46e9ea2034a3f4cab0e095618a2ead78720c12e791342738e5f85d 10 months ago Exited kubedns 0

==> coredns [07722a8e78d6] <==
.:53
2018/10/28 02:27:51 [INFO] CoreDNS-1.2.2
2018/10/28 02:27:51 [INFO] linux/amd64, go1.11, eb51e8b
CoreDNS-1.2.2
linux/amd64, go1.11, eb51e8b
2018/10/28 02:27:51 [INFO] plugin/reload: Running configuration MD5 = 486384b491cef6cb69c1f57a02087363
127.0.0.1:58291 - [28/Oct/2018:02:27:51 +0000] 6108 "HINFO IN 1201721109107356390.2404236465156073141. udp 57 false 512" NXDOMAIN qr,rd,ra 133 0.044289024s
E1028 05:25:58.884733 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=153, ErrCode=NO_ERROR, debug=""
E1028 05:25:58.885027 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=153, ErrCode=NO_ERROR, debug=""
E1028 05:25:58.885152 1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:348: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=5437&timeoutSeconds=513&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1028 05:25:58.885184 1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:355: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=268&timeoutSeconds=507&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1028 05:25:58.885208 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=153, ErrCode=NO_ERROR, debug=""
E1028 05:25:58.885494 1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:350: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=12368&timeoutSeconds=469&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
2018/10/28 05:25:58 [INFO] SIGTERM: Shutting down servers then terminating

==> coredns [3b587e024589] <==
.:53
2019/09/22 04:52:49 [INFO] CoreDNS-1.2.2
2019/09/22 04:52:49 [INFO] linux/amd64, go1.11, eb51e8b
CoreDNS-1.2.2
linux/amd64, go1.11, eb51e8b
2019/09/22 04:52:49 [INFO] plugin/reload: Running configuration MD5 = 486384b491cef6cb69c1f57a02087363
127.0.0.1:56906 - [22/Sep/2019:04:52:49 +0000] 43727 "HINFO IN 2264783324256400193.7384743145613610690. udp 57 false 512" NXDOMAIN qr,rd,ra 133 0.034994782s

==> dmesg <==
[ +0.000018] vboxvideo: Unknown symbol ttm_bo_kunmap (err 0)
[ +0.000005] vboxvideo: Unknown symbol ttm_bo_del_sub_from_lru (err 0)
[ +0.000011] vboxvideo: Unknown symbol ttm_bo_device_init (err 0)
[ +0.000002] vboxvideo: Unknown symbol ttm_bo_init_mm (err 0)
[ +0.000002] vboxvideo: Unknown symbol ttm_bo_dma_acc_size (err 0)
[ +0.000006] vboxvideo: Unknown symbol ttm_tt_init (err 0)
[ +0.000003] vboxvideo: Unknown symbol ttm_bo_kmap (err 0)
[ +0.000011] vboxvideo: Unknown symbol ttm_bo_add_to_lru (err 0)
[ +0.000003] vboxvideo: Unknown symbol ttm_bo_unref (err 0)
[ +0.000003] vboxvideo: Unknown symbol ttm_mem_global_release (err 0)
[ +0.000004] vboxvideo: Unknown symbol ttm_mem_global_init (err 0)
[ +0.000015] vboxvideo: Unknown symbol ttm_bo_init (err 0)
[ +0.000003] vboxvideo: Unknown symbol ttm_bo_validate (err 0)
[ +0.000012] vboxvideo: Unknown symbol ttm_tt_fini (err 0)
[ +0.000003] vboxvideo: Unknown symbol ttm_bo_eviction_valuable (err 0)
[ +0.000002] vboxvideo: Unknown symbol ttm_pool_populate (err 0)
[ +0.266618] vboxvideo: Unknown symbol ttm_bo_mmap (err 0)
[ +0.000017] vboxvideo: Unknown symbol ttm_bo_global_release (err 0)
[ +0.000006] vboxvideo: Unknown symbol ttm_pool_unpopulate (err 0)
[ +0.000006] vboxvideo: Unknown symbol ttm_bo_manager_func (err 0)
[ +0.000004] vboxvideo: Unknown symbol ttm_bo_global_init (err 0)
[ +0.000001] vboxvideo: Unknown symbol ttm_bo_default_io_mem_pfn (err 0)
[ +0.000010] vboxvideo: Unknown symbol ttm_bo_device_release (err 0)
[ +0.000018] vboxvideo: Unknown symbol ttm_bo_kunmap (err 0)
[ +0.000005] vboxvideo: Unknown symbol ttm_bo_del_sub_from_lru (err 0)
[ +0.000011] vboxvideo: Unknown symbol ttm_bo_device_init (err 0)
[ +0.000002] vboxvideo: Unknown symbol ttm_bo_init_mm (err 0)
[ +0.000002] vboxvideo: Unknown symbol ttm_bo_dma_acc_size (err 0)
[ +0.000007] vboxvideo: Unknown symbol ttm_tt_init (err 0)
[ +0.000003] vboxvideo: Unknown symbol ttm_bo_kmap (err 0)
[ +0.000011] vboxvideo: Unknown symbol ttm_bo_add_to_lru (err 0)
[ +0.000003] vboxvideo: Unknown symbol ttm_bo_unref (err 0)
[ +0.000004] vboxvideo: Unknown symbol ttm_mem_global_release (err 0)
[ +0.000003] vboxvideo: Unknown symbol ttm_mem_global_init (err 0)
[ +0.000015] vboxvideo: Unknown symbol ttm_bo_init (err 0)
[ +0.000004] vboxvideo: Unknown symbol ttm_bo_validate (err 0)
[ +0.000012] vboxvideo: Unknown symbol ttm_tt_fini (err 0)
[ +0.000003] vboxvideo: Unknown symbol ttm_bo_eviction_valuable (err 0)
[ +0.000002] vboxvideo: Unknown symbol ttm_pool_populate (err 0)
[ +0.036221] hpet1: lost 670 rtc interrupts
[ +0.014836] VBoxService 5.1.38 r122592 (verbosity: 0) linux.amd64 (May 9 2018 12:22:30) release log 00:00:00.000245 main Log opened 2019-09-22T04:51:24.156021000Z
[ +0.000104] 00:00:00.000406 main OS Product: Linux
[ +0.000068] 00:00:00.000491 main OS Release: 4.15.0
[ +0.000078] 00:00:00.000556 main OS Version: #1 SMP Fri Oct 5 20:44:14 UTC 2018
[ +0.000098] 00:00:00.000634 main Executable: /usr/sbin/VBoxService 00:00:00.000635 main Process ID: 2058 00:00:00.000636 main Package type: LINUX_64BITS_GENERIC
[ +0.002452] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +0.001418] 00:00:00.004431 main 5.1.38 r122592 started. Verbose level = 0
[ +4.920046] systemd-fstab-generator[2338]: Ignoring "noauto" for root device
[ +0.107046] hpet1: lost 275 rtc interrupts
[ +5.004847] hpet_rtc_timer_reinit: 81 callbacks suppressed
[ +0.000001] hpet1: lost 318 rtc interrupts
[ +4.989710] hpet1: lost 312 rtc interrupts
[Sep22 04:52] systemd-fstab-generator[2922]: Ignoring "noauto" for root device
[ +24.818578] kauditd_printk_skb: 20 callbacks suppressed
[Sep22 04:53] kauditd_printk_skb: 50 callbacks suppressed
[ +13.403993] NFSD: Unable to end grace period: -110
[ +13.555220] kauditd_printk_skb: 14 callbacks suppressed

==> kernel <==
05:20:26 up 30 min, 0 users, load average: 0.09, 0.17, 0.16
Linux minikube 4.15.0 #1 SMP Fri Oct 5 20:44:14 UTC 2018 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2018.05"

==> kube-addon-manager [431c86851c72] <==
INFO: == Kubernetes addon reconcile completed at 2019-09-22T05:19:50+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-09-22T05:19:54+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
deployment.apps/coredns pruned
deployment.apps/kubernetes-dashboard pruned
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
INFO: == Kubernetes addon reconcile completed at 2019-09-22T05:19:56+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-09-22T05:19:59+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
deployment.apps/coredns pruned
deployment.apps/kubernetes-dashboard pruned
INFO: == Kubernetes addon reconcile completed at 2019-09-22T05:20:01+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-09-22T05:20:04+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
deployment.apps/coredns pruned
deployment.apps/kubernetes-dashboard pruned
INFO: == Kubernetes addon reconcile completed at 2019-09-22T05:20:06+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-09-22T05:20:09+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
deployment.apps/coredns pruned
deployment.apps/kubernetes-dashboard pruned
INFO: == Kubernetes addon reconcile completed at 2019-09-22T05:20:11+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-09-22T05:20:14+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
deployment.apps/coredns pruned
deployment.apps/kubernetes-dashboard pruned
INFO: == Kubernetes addon reconcile completed at 2019-09-22T05:20:16+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-09-22T05:20:19+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
deployment.apps/coredns pruned
deployment.apps/kubernetes-dashboard pruned
INFO: == Kubernetes addon reconcile completed at 2019-09-22T05:20:21+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-09-22T05:20:24+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==

==> kube-apiserver [803cadd2fd4d] <==
I0922 04:52:39.647107 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0922 04:52:39.657818 1 client.go:361] parsed scheme: "endpoint"
I0922 04:52:39.657855 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0922 04:52:39.670149 1 client.go:361] parsed scheme: "endpoint"
I0922 04:52:39.670202 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0922 04:52:39.682551 1 client.go:361] parsed scheme: "endpoint"
I0922 04:52:39.682590 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0922 04:52:39.700564 1 client.go:361] parsed scheme: "endpoint"
I0922 04:52:39.700601 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0922 04:52:39.716805 1 client.go:361] parsed scheme: "endpoint"
I0922 04:52:39.716840 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0922 04:52:39.729564 1 client.go:361] parsed scheme: "endpoint"
I0922 04:52:39.729604 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
W0922 04:52:39.892804 1 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
W0922 04:52:39.911620 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0922 04:52:39.934299 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0922 04:52:39.938336 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0922 04:52:39.953547 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0922 04:52:39.977029 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0922 04:52:39.977137 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0922 04:52:39.989166 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0922 04:52:39.989186 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0922 04:52:39.991106 1 client.go:361] parsed scheme: "endpoint"
I0922 04:52:39.991124 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0922 04:52:40.002182 1 client.go:361] parsed scheme: "endpoint"
I0922 04:52:40.002216 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0922 04:52:42.393788 1 secure_serving.go:123] Serving securely on [::]:8443
I0922 04:52:42.394947 1 crd_finalizer.go:274] Starting CRDFinalizer
I0922 04:52:42.395283 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0922 04:52:42.395392 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0922 04:52:42.395458 1 available_controller.go:383] Starting AvailableConditionController
I0922 04:52:42.395516 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0922 04:52:42.395576 1 controller.go:81] Starting OpenAPI AggregationController
I0922 04:52:42.395906 1 autoregister_controller.go:140] Starting autoregister controller
I0922 04:52:42.396058 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0922 04:52:42.470242 1 controller.go:85] Starting OpenAPI controller
I0922 04:52:42.470520 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0922 04:52:42.470606 1 naming_controller.go:288] Starting NamingConditionController
I0922 04:52:42.470676 1 establishing_controller.go:73] Starting EstablishingController
I0922 04:52:42.470737 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0922 04:52:42.470795 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0922 04:52:42.471135 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0922 04:52:42.471201 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
E0922 04:52:42.480769 1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.100, ResourceVersion: 0, AdditionalErrorMsg:
I0922 04:52:42.571716 1 shared_informer.go:204] Caches are synced for crd-autoregister
I0922 04:52:42.617944 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0922 04:52:42.633686 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0922 04:52:42.633894 1 cache.go:39] Caches are synced for autoregister controller
I0922 04:52:43.394843 1 controller.go:107] OpenAPI AggregationController: Processing item
I0922 04:52:43.395245 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0922 04:52:43.395720 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0922 04:52:43.407696 1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0922 04:52:43.426094 1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0922 04:52:43.426627 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0922 04:52:44.218650 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0922 04:52:44.350846 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0922 04:52:45.920007 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0922 04:53:03.151496 1 controller.go:606] quota admission added evaluator for: endpoints
I0922 04:53:03.770228 1 controller.go:606] quota admission added evaluator for: serviceaccounts
E0922 05:07:37.777133 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted

==> kube-controller-manager [db2e41d84eea] <==
I0922 04:53:06.396792 1 controllermanager.go:534] Started "pvc-protection"
W0922 04:53:06.396829 1 controllermanager.go:526] Skipping "ttl-after-finished"
I0922 04:53:06.396861 1 pvc_protection_controller.go:100] Starting PVC protection controller
I0922 04:53:06.396877 1 shared_informer.go:197] Waiting for caches to sync for PVC protection
I0922 04:53:06.537913 1 controllermanager.go:534] Started "replicationcontroller"
I0922 04:53:06.537971 1 replica_set.go:182] Starting replicationcontroller controller
I0922 04:53:06.538041 1 shared_informer.go:197] Waiting for caches to sync for ReplicationController
I0922 04:53:06.700534 1 controllermanager.go:534] Started "namespace"
I0922 04:53:06.700756 1 namespace_controller.go:186] Starting namespace controller
I0922 04:53:06.701145 1 shared_informer.go:197] Waiting for caches to sync for namespace
I0922 04:53:06.837111 1 controllermanager.go:534] Started "tokencleaner"
I0922 04:53:06.837204 1 tokencleaner.go:117] Starting token cleaner controller
I0922 04:53:06.837220 1 shared_informer.go:197] Waiting for caches to sync for token_cleaner
I0922 04:53:06.937406 1 shared_informer.go:204] Caches are synced for token_cleaner
W0922 04:53:07.001019 1 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0922 04:53:07.002051 1 controllermanager.go:534] Started "attachdetach"
I0922 04:53:07.002198 1 attach_detach_controller.go:334] Starting attach detach controller
I0922 04:53:07.002313 1 shared_informer.go:197] Waiting for caches to sync for attach detach
I0922 04:53:07.136914 1 controllermanager.go:534] Started "csrapproving"
I0922 04:53:07.137007 1 certificate_controller.go:113] Starting certificate controller
I0922 04:53:07.137024 1 shared_informer.go:197] Waiting for caches to sync for certificate
I0922 04:53:07.287331 1 controllermanager.go:534] Started "bootstrapsigner"
I0922 04:53:07.287561 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I0922 04:53:07.288259 1 shared_informer.go:197] Waiting for caches to sync for bootstrap_signer
I0922 04:53:07.300103 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0922 04:53:07.317242 1 shared_informer.go:204] Caches are synced for service account
W0922 04:53:07.321293 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0922 04:53:07.337779 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I0922 04:53:07.338882 1 shared_informer.go:204] Caches are synced for certificate
I0922 04:53:07.337715 1 shared_informer.go:204] Caches are synced for ReplicaSet
I0922 04:53:07.344915 1 shared_informer.go:204] Caches are synced for ReplicationController
I0922 04:53:07.346120 1 shared_informer.go:204] Caches are synced for HPA
I0922 04:53:07.346472 1 shared_informer.go:204] Caches are synced for job
I0922 04:53:07.365004 1 shared_informer.go:204] Caches are synced for stateful set
I0922 04:53:07.385232 1 shared_informer.go:204] Caches are synced for TTL
I0922 04:53:07.388350 1 shared_informer.go:204] Caches are synced for certificate
I0922 04:53:07.388701 1 shared_informer.go:204] Caches are synced for GC
I0922 04:53:07.390487 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I0922 04:53:07.396546 1 shared_informer.go:204] Caches are synced for daemon sets
I0922 04:53:07.398007 1 shared_informer.go:204] Caches are synced for PVC protection
I0922 04:53:07.402042 1 shared_informer.go:204] Caches are synced for namespace
I0922 04:53:07.403714 1 shared_informer.go:204] Caches are synced for expand
I0922 04:53:07.437492 1 shared_informer.go:204] Caches are synced for endpoint
I0922 04:53:07.441208 1 shared_informer.go:204] Caches are synced for persistent volume
I0922 04:53:07.488271 1 shared_informer.go:204] Caches are synced for taint
I0922 04:53:07.488776 1 taint_manager.go:186] Starting NoExecuteTaintManager
I0922 04:53:07.489349 1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"eefed653-da58-11e8-85cb-0800279ddd13", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0922 04:53:07.489551 1 shared_informer.go:204] Caches are synced for PV protection
I0922 04:53:07.489683 1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone:
W0922 04:53:07.489934 1 node_lifecycle_controller.go:903] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0922 04:53:07.490053 1 node_lifecycle_controller.go:1108] Controller detected that zone is now in state Normal.
I0922 04:53:07.673472 1 shared_informer.go:204] Caches are synced for disruption
I0922 04:53:07.673895 1 disruption.go:341] Sending events to api server.
I0922 04:53:07.687875 1 shared_informer.go:204] Caches are synced for deployment
I0922 04:53:07.815374 1 shared_informer.go:204] Caches are synced for resource quota
I0922 04:53:07.852790 1 shared_informer.go:204] Caches are synced for garbage collector
I0922 04:53:07.852820 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0922 04:53:07.887930 1 shared_informer.go:204] Caches are synced for resource quota
I0922 04:53:07.900471 1 shared_informer.go:204] Caches are synced for garbage collector
I0922 04:53:07.902985 1 shared_informer.go:204] Caches are synced for attach detach

==> kube-proxy [b19a0343ee7c] <==
E0922 00:20:26.746836 21680 style.go:167] unable to parse "I0922 04:52:44.497187 1 feature_gate.go:226] feature gates: &{{} map[]}\n": template: I0922 04:52:44.497187 1 feature_gate.go:226] feature gates: &{{} map[]} :1: unexpected "}" in command - returning raw string.
I0922 04:52:44.497187 1 feature_gate.go:226] feature gates: &{{} map[]}
W0922 04:52:44.669466 1 server_others.go:290] Can't use ipvs proxier, trying iptables proxier
I0922 04:52:44.670817 1 server_others.go:140] Using iptables Proxier.
W0922 04:52:44.706766 1 proxier.go:311] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0922 04:52:44.706884 1 server_others.go:174] Tearing down inactive rules.
I0922 04:52:44.845083 1 server.go:444] Version: v1.10.0
I0922 04:52:44.856803 1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0922 04:52:44.857012 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0922 04:52:44.859278 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0922 04:52:44.863647 1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0922 04:52:44.863899 1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0922 04:52:44.864883 1 config.go:102] Starting endpoints config controller
I0922 04:52:44.864906 1 controller_utils.go:1019] Waiting for caches to sync for endpoints config controller
I0922 04:52:44.865074 1 config.go:202] Starting service config controller
I0922 04:52:44.865129 1 controller_utils.go:1019] Waiting for caches to sync for service config controller
I0922 04:52:44.965276 1 controller_utils.go:1026] Caches are synced for endpoints config controller
I0922 04:52:44.965377 1 controller_utils.go:1026] Caches are synced for service config controller

==> kube-scheduler [a493120a86ca] <==
I0922 04:52:37.411976 1 serving.go:319] Generated self-signed cert in-memory
W0922 04:52:42.499574 1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0922 04:52:42.500121 1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0922 04:52:42.500284 1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
W0922 04:52:42.500344 1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0922 04:52:42.508204 1 server.go:143] Version: v1.16.0
I0922 04:52:42.508718 1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0922 04:52:42.520446 1 authorization.go:47] Authorization is disabled
W0922 04:52:42.520526 1 authentication.go:79] Authentication is disabled
I0922 04:52:42.520541 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0922 04:52:42.521290 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
E0922 04:52:42.689700 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0922 04:52:43.692006 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I0922 04:52:45.630687 1 leaderelection.go:241] attempting to acquire leader lease kube-system/kube-scheduler...
I0922 04:53:04.819641 1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Sun 2019-09-22 04:51:23 UTC, end at Sun 2019-09-22 05:20:26 UTC. --
Sep 22 04:52:42 minikube kubelet[3168]: I0922 04:52:42.685099 3168 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/44103dc6-da72-11e8-aa86-0800279ddd13-kube-proxy") pod "kube-proxy-skr82" (UID: "44103dc6-da72-11e8-aa86-0800279ddd13")
Sep 22 04:52:42 minikube kubelet[3168]: I0922 04:52:42.685147 3168 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/44103dc6-da72-11e8-aa86-0800279ddd13-lib-modules") pod "kube-proxy-skr82" (UID: "44103dc6-da72-11e8-aa86-0800279ddd13")
Sep 22 04:52:42 minikube kubelet[3168]: I0922 04:52:42.685344 3168 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-t2tfd" (UniqueName: "kubernetes.io/secret/f8feb645-da58-11e8-85cb-0800279ddd13-storage-provisioner-token-t2tfd") pod "storage-provisioner" (UID: "f8feb645-da58-11e8-85cb-0800279ddd13")
Sep 22 04:52:42 minikube kubelet[3168]: I0922 04:52:42.685405 3168 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f97fc1d0-da58-11e8-85cb-0800279ddd13-config-volume") pod "coredns-c4cffd6dc-6xwhw" (UID: "f97fc1d0-da58-11e8-85cb-0800279ddd13")
Sep 22 04:52:42 minikube kubelet[3168]: I0922 04:52:42.685458 3168 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/f8feb645-da58-11e8-85cb-0800279ddd13-tmp") pod "storage-provisioner" (UID: "f8feb645-da58-11e8-85cb-0800279ddd13")
Sep 22 04:52:42 minikube kubelet[3168]: I0922 04:52:42.685505 3168 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-5h67z" (UniqueName: "kubernetes.io/secret/f8efdbed-da58-11e8-85cb-0800279ddd13-default-token-5h67z") pod "kubernetes-dashboard-6f4cfc5d87-mfbsx" (UID: "f8efdbed-da58-11e8-85cb-0800279ddd13")
Sep 22 04:52:42 minikube kubelet[3168]: I0922 04:52:42.685552 3168 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-bhvdw" (UniqueName: "kubernetes.io/secret/340bda50-da63-11e8-85cb-0800279ddd13-default-token-bhvdw") pod "hello-minikube-7c77b68cff-dg9g8" (UID: "340bda50-da63-11e8-85cb-0800279ddd13")
Sep 22 04:52:42 minikube kubelet[3168]: I0922 04:52:42.685597 3168 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/f7b880c4-da58-11e8-85cb-0800279ddd13-kube-dns-config") pod "kube-dns-86f4d74b45-zps4f" (UID: "f7b880c4-da58-11e8-85cb-0800279ddd13")
Sep 22 04:52:42 minikube kubelet[3168]: I0922 04:52:42.685641 3168 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-dns-token-vblc5" (UniqueName: "kubernetes.io/secret/f7b880c4-da58-11e8-85cb-0800279ddd13-kube-dns-token-vblc5") pod "kube-dns-86f4d74b45-zps4f" (UID: "f7b880c4-da58-11e8-85cb-0800279ddd13")
Sep 22 04:52:42 minikube kubelet[3168]: E0922 04:52:42.712351 3168 controller.go:135] failed to ensure node lease exists, will retry in 3.2s, error: namespaces "kube-node-lease" not found
Sep 22 04:52:42 minikube kubelet[3168]: W0922 04:52:42.717829 3168 kubelet.go:1651] Deleted mirror pod "kube-scheduler-minikube_kube-system(f6f41a75-da58-11e8-85cb-0800279ddd13)" because it is outdated
Sep 22 04:52:42 minikube kubelet[3168]: W0922 04:52:42.736397 3168 kubelet.go:1651] Deleted mirror pod "kube-addon-manager-minikube_kube-system(0b851f23-da59-11e8-85cb-0800279ddd13)" because it is outdated
Sep 22 04:52:42 minikube kubelet[3168]: I0922 04:52:42.786229 3168 reconciler.go:154] Reconciler: start to sync state
Sep 22 04:52:42 minikube kubelet[3168]: W0922 04:52:42.928810 3168 kubelet.go:1651] Deleted mirror pod "etcd-minikube_kube-system(42dd4dfd-da72-11e8-aa86-0800279ddd13)" because it is outdated
Sep 22 04:52:43 minikube kubelet[3168]: W0922 04:52:43.345603 3168 kubelet.go:1651] Deleted mirror pod "kube-apiserver-minikube_kube-system(4393b710-da72-11e8-aa86-0800279ddd13)" because it is outdated
Sep 22 04:52:43 minikube kubelet[3168]: W0922 04:52:43.747686 3168 kubelet.go:1651] Deleted mirror pod "kube-controller-manager-minikube_kube-system(42fb0713-da72-11e8-aa86-0800279ddd13)" because it is outdated
Sep 22 04:52:43 minikube kubelet[3168]: E0922 04:52:43.786798 3168 secret.go:198] Couldn't get secret kube-system/coredns-token-f7pnh: failed to sync secret cache: timed out waiting for the condition
Sep 22 04:52:43 minikube kubelet[3168]: E0922 04:52:43.786798 3168 secret.go:198] Couldn't get secret kube-system/default-token-5h67z: failed to sync secret cache: timed out waiting for the condition
Sep 22 04:52:43 minikube kubelet[3168]: E0922 04:52:43.786813 3168 secret.go:198] Couldn't get secret kube-system/kube-dns-token-vblc5: failed to sync secret cache: timed out waiting for the condition
Sep 22 04:52:43 minikube kubelet[3168]: E0922 04:52:43.786824 3168 configmap.go:203] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Sep 22 04:52:43 minikube kubelet[3168]: E0922 04:52:43.786840 3168 secret.go:198] Couldn't get secret default/default-token-bhvdw: failed to sync secret cache: timed out waiting for the condition
Sep 22 04:52:43 minikube kubelet[3168]: E0922 04:52:43.786854 3168 configmap.go:203] Couldn't get configMap kube-system/kube-dns: failed to sync configmap cache: timed out waiting for the condition
Sep 22 04:52:43 minikube kubelet[3168]: E0922 04:52:43.787224 3168 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/f97fc1d0-da58-11e8-85cb-0800279ddd13-coredns-token-f7pnh\" (\"f97fc1d0-da58-11e8-85cb-0800279ddd13\")" failed. No retries permitted until 2019-09-22 04:52:44.287193007 +0000 UTC m=+15.185864204 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"coredns-token-f7pnh\" (UniqueName: \"kubernetes.io/secret/f97fc1d0-da58-11e8-85cb-0800279ddd13-coredns-token-f7pnh\") pod \"coredns-c4cffd6dc-6xwhw\" (UID: \"f97fc1d0-da58-11e8-85cb-0800279ddd13\") : failed to sync secret cache: timed out waiting for the condition"
Sep 22 04:52:43 minikube kubelet[3168]: E0922 04:52:43.788371 3168 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/f7b880c4-da58-11e8-85cb-0800279ddd13-kube-dns-token-vblc5\" (\"f7b880c4-da58-11e8-85cb-0800279ddd13\")" failed. No retries permitted until 2019-09-22 04:52:44.288345893 +0000 UTC m=+15.187017099 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-dns-token-vblc5\" (UniqueName: \"kubernetes.io/secret/f7b880c4-da58-11e8-85cb-0800279ddd13-kube-dns-token-vblc5\") pod \"kube-dns-86f4d74b45-zps4f\" (UID: \"f7b880c4-da58-11e8-85cb-0800279ddd13\") : failed to sync secret cache: timed out waiting for the condition"
Sep 22 04:52:43 minikube kubelet[3168]: E0922 04:52:43.789635 3168 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/f8efdbed-da58-11e8-85cb-0800279ddd13-default-token-5h67z\" (\"f8efdbed-da58-11e8-85cb-0800279ddd13\")" failed. No retries permitted until 2019-09-22 04:52:44.289610703 +0000 UTC m=+15.188281899 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"default-token-5h67z\" (UniqueName: \"kubernetes.io/secret/f8efdbed-da58-11e8-85cb-0800279ddd13-default-token-5h67z\") pod \"kubernetes-dashboard-6f4cfc5d87-mfbsx\" (UID: \"f8efdbed-da58-11e8-85cb-0800279ddd13\") : failed to sync secret cache: timed out waiting for the condition"
Sep 22 04:52:43 minikube kubelet[3168]: E0922 04:52:43.789400 3168 secret.go:198] Couldn't get secret kube-system/storage-provisioner-token-t2tfd: failed to sync secret cache: timed out waiting for the condition
Sep 22 04:52:43 minikube kubelet[3168]: E0922 04:52:43.790869 3168 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/340bda50-da63-11e8-85cb-0800279ddd13-default-token-bhvdw\" (\"340bda50-da63-11e8-85cb-0800279ddd13\")" failed. No retries permitted until 2019-09-22 04:52:44.290847547 +0000 UTC m=+15.189518749 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"default-token-bhvdw\" (UniqueName: \"kubernetes.io/secret/340bda50-da63-11e8-85cb-0800279ddd13-default-token-bhvdw\") pod \"hello-minikube-7c77b68cff-dg9g8\" (UID: \"340bda50-da63-11e8-85cb-0800279ddd13\") : failed to sync secret cache: timed out waiting for the condition"
Sep 22 04:52:43 minikube kubelet[3168]: E0922 04:52:43.790974 3168 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/f97fc1d0-da58-11e8-85cb-0800279ddd13-config-volume\" (\"f97fc1d0-da58-11e8-85cb-0800279ddd13\")" failed. No retries permitted until 2019-09-22 04:52:44.290952442 +0000 UTC m=+15.189623629 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f97fc1d0-da58-11e8-85cb-0800279ddd13-config-volume\") pod \"coredns-c4cffd6dc-6xwhw\" (UID: \"f97fc1d0-da58-11e8-85cb-0800279ddd13\") : failed to sync configmap cache: timed out waiting for the condition"
Sep 22 04:52:43 minikube kubelet[3168]: E0922 04:52:43.791001 3168 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/f7b880c4-da58-11e8-85cb-0800279ddd13-kube-dns-config\" (\"f7b880c4-da58-11e8-85cb-0800279ddd13\")" failed. No retries permitted until 2019-09-22 04:52:44.290980945 +0000 UTC m=+15.189652131 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-dns-config\" (UniqueName: \"kubernetes.io/configmap/f7b880c4-da58-11e8-85cb-0800279ddd13-kube-dns-config\") pod \"kube-dns-86f4d74b45-zps4f\" (UID: \"f7b880c4-da58-11e8-85cb-0800279ddd13\") : failed to sync configmap cache: timed out waiting for the condition"
Sep 22 04:52:43 minikube kubelet[3168]: E0922 04:52:43.791020 3168 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/f8feb645-da58-11e8-85cb-0800279ddd13-storage-provisioner-token-t2tfd\" (\"f8feb645-da58-11e8-85cb-0800279ddd13\")" failed. No retries permitted until 2019-09-22 04:52:44.29100587 +0000 UTC m=+15.189677057 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"storage-provisioner-token-t2tfd\" (UniqueName: \"kubernetes.io/secret/f8feb645-da58-11e8-85cb-0800279ddd13-storage-provisioner-token-t2tfd\") pod \"storage-provisioner\" (UID: \"f8feb645-da58-11e8-85cb-0800279ddd13\") : failed to sync secret cache: timed out waiting for the condition"
Sep 22 04:52:44 minikube kubelet[3168]: I0922 04:52:44.120393 3168 kubelet_node_status.go:114] Node minikube was previously registered
Sep 22 04:52:44 minikube kubelet[3168]: W0922 04:52:44.344318 3168 status_manager.go:545] Failed to update status for pod "kube-apiserver-minikube_kube-system(4393b710-da72-11e8-aa86-0800279ddd13)": failed to patch status "{\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"Initialized\"},{\"type\":\"Ready\"},{\"type\":\"ContainersReady\"},{\"type\":\"PodScheduled\"}],\"conditions\":[{\"lastTransitionTime\":\"2019-09-22T04:52:34Z\",\"type\":\"Initialized\"},{\"lastTransitionTime\":\"2019-09-22T04:52:37Z\",\"type\":\"Ready\"},{\"lastProbeTime\":null,\"lastTransitionTime\":\"2019-09-22T04:52:37Z\",\"status\":\"True\",\"type\":\"ContainersReady\"},{\"lastTransitionTime\":\"2019-09-22T04:52:34Z\",\"type\":\"PodScheduled\"}],\"containerStatuses\":[{\"containerID\":\"docker://803cadd2fd4db12545086d22b753ce300487acb74bedf4d8c0bde6289faa7cf9\",\"image\":\"k8s.gcr.io/kube-apiserver:v1.16.0\",\"imageID\":\"docker-pullable://k8s.gcr.io/kube-apiserver@sha256:f4168527c91289da2708f62ae729fdde5fb484167dd05ffbb7ab666f60de96cd\",\"lastState\":{},\"name\":\"kube-apiserver\",\"ready\":true,\"restartCount\":0,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2019-09-22T04:52:35Z\"}}}],\"podIPs\":null,\"startTime\":\"2019-09-22T04:52:34Z\"}}" for pod "kube-system"/"kube-apiserver-minikube": pods "kube-apiserver-minikube" not found
Sep 22 04:52:45 minikube kubelet[3168]: E0922 04:52:45.312550 3168 secret.go:198] Couldn't get secret default/default-token-bhvdw: failed to sync secret cache: timed out waiting for the condition
Sep 22 04:52:45 minikube kubelet[3168]: E0922 04:52:45.312654 3168 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/340bda50-da63-11e8-85cb-0800279ddd13-default-token-bhvdw\" (\"340bda50-da63-11e8-85cb-0800279ddd13\")" failed. No retries permitted until 2019-09-22 04:52:46.312627356 +0000 UTC m=+17.211298550 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"default-token-bhvdw\" (UniqueName: \"kubernetes.io/secret/340bda50-da63-11e8-85cb-0800279ddd13-default-token-bhvdw\") pod \"hello-minikube-7c77b68cff-dg9g8\" (UID: \"340bda50-da63-11e8-85cb-0800279ddd13\") : failed to sync secret cache: timed out waiting for the condition"
Sep 22 04:52:45 minikube kubelet[3168]: E0922 04:52:45.312680 3168 secret.go:198] Couldn't get secret kube-system/kube-dns-token-vblc5: failed to sync secret cache: timed out waiting for the condition
Sep 22 04:52:45 minikube kubelet[3168]: E0922 04:52:45.312714 3168 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/f7b880c4-da58-11e8-85cb-0800279ddd13-kube-dns-token-vblc5\" (\"f7b880c4-da58-11e8-85cb-0800279ddd13\")" failed. No retries permitted until 2019-09-22 04:52:46.31269858 +0000 UTC m=+17.211369773 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"kube-dns-token-vblc5\" (UniqueName: \"kubernetes.io/secret/f7b880c4-da58-11e8-85cb-0800279ddd13-kube-dns-token-vblc5\") pod \"kube-dns-86f4d74b45-zps4f\" (UID: \"f7b880c4-da58-11e8-85cb-0800279ddd13\") : failed to sync secret cache: timed out waiting for the condition"
Sep 22 04:52:45 minikube kubelet[3168]: E0922 04:52:45.312728 3168 secret.go:198] Couldn't get secret kube-system/coredns-token-f7pnh: failed to sync secret cache: timed out waiting for the condition
Sep 22 04:52:45 minikube kubelet[3168]: E0922 04:52:45.312832 3168 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/f97fc1d0-da58-11e8-85cb-0800279ddd13-coredns-token-f7pnh\" (\"f97fc1d0-da58-11e8-85cb-0800279ddd13\")" failed. No retries permitted until 2019-09-22 04:52:46.312743073 +0000 UTC m=+17.211414267 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"coredns-token-f7pnh\" (UniqueName: \"kubernetes.io/secret/f97fc1d0-da58-11e8-85cb-0800279ddd13-coredns-token-f7pnh\") pod \"coredns-c4cffd6dc-6xwhw\" (UID: \"f97fc1d0-da58-11e8-85cb-0800279ddd13\") : failed to sync secret cache: timed out waiting for the condition"
Sep 22 04:52:45 minikube kubelet[3168]: E0922 04:52:45.312868 3168 secret.go:198] Couldn't get secret kube-system/storage-provisioner-token-t2tfd: failed to sync secret cache: timed out waiting for the condition
Sep 22 04:52:45 minikube kubelet[3168]: E0922 04:52:45.312913 3168 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/f8feb645-da58-11e8-85cb-0800279ddd13-storage-provisioner-token-t2tfd\" (\"f8feb645-da58-11e8-85cb-0800279ddd13\")" failed. No retries permitted until 2019-09-22 04:52:46.312895033 +0000 UTC m=+17.211566227 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"storage-provisioner-token-t2tfd\" (UniqueName: \"kubernetes.io/secret/f8feb645-da58-11e8-85cb-0800279ddd13-storage-provisioner-token-t2tfd\") pod \"storage-provisioner\" (UID: \"f8feb645-da58-11e8-85cb-0800279ddd13\") : failed to sync secret cache: timed out waiting for the condition"
Sep 22 04:52:45 minikube kubelet[3168]: E0922 04:52:45.312929 3168 secret.go:198] Couldn't get secret kube-system/default-token-5h67z: failed to sync secret cache: timed out waiting for the condition
Sep 22 04:52:45 minikube kubelet[3168]: E0922 04:52:45.312962 3168 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/f8efdbed-da58-11e8-85cb-0800279ddd13-default-token-5h67z\" (\"f8efdbed-da58-11e8-85cb-0800279ddd13\")" failed. No retries permitted until 2019-09-22 04:52:46.312945755 +0000 UTC m=+17.211616949 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"default-token-5h67z\" (UniqueName: \"kubernetes.io/secret/f8efdbed-da58-11e8-85cb-0800279ddd13-default-token-5h67z\") pod \"kubernetes-dashboard-6f4cfc5d87-mfbsx\" (UID: \"f8efdbed-da58-11e8-85cb-0800279ddd13\") : failed to sync secret cache: timed out waiting for the condition"
Sep 22 04:52:45 minikube kubelet[3168]: E0922 04:52:45.312978 3168 configmap.go:203] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Sep 22 04:52:45 minikube kubelet[3168]: E0922 04:52:45.313008 3168 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/f97fc1d0-da58-11e8-85cb-0800279ddd13-config-volume\" (\"f97fc1d0-da58-11e8-85cb-0800279ddd13\")" failed. No retries permitted until 2019-09-22 04:52:46.312994228 +0000 UTC m=+17.211665421 (durationBeforeRetry 1s).
Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f97fc1d0-da58-11e8-85cb-0800279ddd13-config-volume\") pod \"coredns-c4cffd6dc-6xwhw\" (UID: \"f97fc1d0-da58-11e8-85cb-0800279ddd13\") : failed to sync configmap cache: timed out waiting for the condition" Sep 22 04:52:46 minikube kubelet[3168]: I0922 04:52:46.759955 3168 kubelet_node_status.go:75] Successfully registered node minikube Sep 22 04:52:47 minikube kubelet[3168]: W0922 04:52:47.115770 3168 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/kube-dns-86f4d74b45-zps4f through plugin: invalid network status for Sep 22 04:52:47 minikube kubelet[3168]: W0922 04:52:47.131654 3168 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/kube-dns-86f4d74b45-zps4f through plugin: invalid network status for Sep 22 04:52:47 minikube kubelet[3168]: W0922 04:52:47.171136 3168 pod_container_deletor.go:75] Container "f51993a61a0922800806b806dab42125ed3bcd6fb0801b8b6d12d7e6dd5a0997" not found in pod's containers Sep 22 04:52:47 minikube kubelet[3168]: W0922 04:52:47.503316 3168 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-c4cffd6dc-6xwhw through plugin: invalid network status for Sep 22 04:52:47 minikube kubelet[3168]: W0922 04:52:47.616784 3168 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/kubernetes-dashboard-6f4cfc5d87-mfbsx through plugin: invalid network status for Sep 22 04:52:47 minikube kubelet[3168]: W0922 04:52:47.877257 3168 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-minikube-7c77b68cff-dg9g8 through plugin: invalid network status for Sep 22 04:52:48 minikube kubelet[3168]: W0922 04:52:48.332727 3168 docker_sandbox.go:394] failed to read pod IP from 
plugin/docker: Couldn't find network status for kube-system/kube-dns-86f4d74b45-zps4f through plugin: invalid network status for Sep 22 04:52:48 minikube kubelet[3168]: W0922 04:52:48.421517 3168 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-minikube-7c77b68cff-dg9g8 through plugin: invalid network status for Sep 22 04:52:49 minikube kubelet[3168]: W0922 04:52:49.419260 3168 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-c4cffd6dc-6xwhw through plugin: invalid network status for Sep 22 04:52:49 minikube kubelet[3168]: W0922 04:52:49.438482 3168 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/kubernetes-dashboard-6f4cfc5d87-mfbsx through plugin: invalid network status for Sep 22 04:52:49 minikube kubelet[3168]: I0922 04:52:49.456664 3168 kubelet.go:1647] Trying to delete pod kube-controller-manager-minikube_kube-system 42fb0713-da72-11e8-aa86-0800279ddd13 Sep 22 04:52:50 minikube kubelet[3168]: W0922 04:52:50.482247 3168 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-minikube-7c77b68cff-dg9g8 through plugin: invalid network status for Sep 22 04:52:50 minikube kubelet[3168]: W0922 04:52:50.503806 3168 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-c4cffd6dc-6xwhw through plugin: invalid network status for Sep 22 04:52:50 minikube kubelet[3168]: W0922 04:52:50.538634 3168 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/kubernetes-dashboard-6f4cfc5d87-mfbsx through plugin: invalid network status for Sep 22 04:52:50 minikube kubelet[3168]: W0922 04:52:50.554243 3168 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for 
kube-system/kube-dns-86f4d74b45-zps4f through plugin: invalid network status for

==> kubernetes-dashboard [9659c436f3e2] <== 2018/10/28 04:56:17 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds. [identical entry repeated every 30 seconds through 2018/10/28 05:25:47]

==> kubernetes-dashboard [f8a5ae05eb9a] <== 2019/09/22 04:52:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system 2019/09/22 04:52:49 Initializing JWE encryption key from synchronized object 2019/09/22 04:52:49 Creating in-cluster Heapster client 2019/09/22 04:52:49 Serving insecurely on HTTP port: 9090 2019/09/22 04:52:49 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds. [identical entry repeated every 30 seconds through 2019/09/22 05:20:19]

==> storage-provisioner [4c6d7b5af369] <== E0922 00:20:26.890011 21680 logs.go:135] failed: Process exited with status 1

==> storage-provisioner [db96d3ee3a82] <== E0922 00:20:26.914851 21680 logs.go:135] failed: Process exited with status 1

šŸ’£ Error getting machine logs: unable to fetch logs for: storage-provisioner [4c6d7b5af369], storage-provisioner [db96d3ee3a82]

šŸ˜æ Sorry that minikube crashed. If this was unexpected, we would love to hear from you: šŸ‘‰ https://github.com/kubernetes/minikube/issues/new/choose exit:70

The operating system version: Darwin M-C02SQ8WFGTF1 17.7.0 Darwin Kernel Version 17.7.0: Thu Jun 21 22:53:14 PDT 2018; root:xnu-4570.71.2~1/RELEASE_X86_64 x86_64

macOS High Sierra 10.13.6

c0debreaker commented 5 years ago

It's fixed. Running minikube delete fixed it.
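For anyone hitting the same "multiple DNS addon deployments found" error after an upgrade, a minimal sketch of the recovery described above (assuming the virtualbox driver from the original report; the kubectl check is an optional extra, not something the reporter ran):

```shell
# Optional: confirm the conflict kubeadm complained about. After a
# 1.10 -> 1.16 upgrade, both the old kube-dns and the new coredns
# Deployments can carry the k8s-app=kube-dns label, so both show up here.
kubectl -n kube-system get deployments -l k8s-app=kube-dns

# The fix that worked: wipe the stale VM and its old addon state,
# then start a fresh cluster.
minikube delete
minikube start --vm-driver=virtualbox
```

Note that `minikube delete` destroys the whole local cluster, including any workloads deployed to it, so export anything you need first.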

vetbijaya commented 4 years ago

minikube start crashed my Ubuntu VM on VirtualBox. Command to reproduce: $ minikube start --driver=virtualbox --cpus 2 --memory 2024 --no-vtx-check

attached screenshot:
