genuinetools / img

Standalone, daemon-less, unprivileged Dockerfile and OCI compatible container image builder.
https://blog.jessfraz.com/post/building-container-images-securely-on-kubernetes/
MIT License

img crashes kubernetes control plane #248

Open sh0rez opened 5 years ago

sh0rez commented 5 years ago

Hi! I have no idea how I triggered this, but img seems to be crashing my Kubernetes control plane.

Context

Cluster: Newly created Minikube (Kubernetes 1.15)

img pod:

apiVersion: v1
kind: Pod
metadata:
  name: img
spec:
  containers:
    - image: r.j3ss.co/img
      name: img
      command: ["tail", "-f", "/dev/null"]
      securityContext:
        privileged: true
      resources:
        requests:
          ephemeral-storage: "8Gi"

Dockerfile:

FROM node

I then kubectl exec into the pod and run img build -t a . to build the image.
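
For reference, the whole repro boils down to roughly the following (a sketch only; img-pod.yaml is just a placeholder name for the manifest above, and /tmp/ctx is an arbitrary scratch directory for the build context):

# Create the pod from the manifest above, then build the one-line Dockerfile inside it.
kubectl apply -f img-pod.yaml
kubectl exec -it img -- sh -c '
  mkdir -p /tmp/ctx &&
  printf "FROM node\n" > /tmp/ctx/Dockerfile &&
  cd /tmp/ctx &&
  img build -t a .
'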

Result

Expected: The image builds and everyone is happy

Actual: img starts pulling the node base image and, once the download is done, starts extracting it. That is as far as it gets: during the extraction the control plane goes down.

It starts with kube-scheduler crashing:

E0702 20:51:23.913594       1 leaderelection.go:306] error retrieving resource lock kube-system/kube-scheduler: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0702 20:51:23.929173       1 event.go:247] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'minikube_d0b53ea5-9d09-11e9-a78f-080027fc66d9 stopped leading'
I0702 20:51:23.976794       1 leaderelection.go:263] failed to renew lease kube-system/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded
E0702 20:51:23.982605       1 server.go:252] lost master
lost lease

!! container crashes

And kube-controller-manager follows quickly behind:

I0702 20:51:23.911071       1 leaderelection.go:263] failed to renew lease kube-system/kube-controller-manager: failed to tryAcquireOrRenew context deadline exceeded
F0702 20:51:23.912212       1 controllermanager.go:260] leaderelection lost

!! container crashes

During this timespan, kube-apiserver emits the following:

I0702 20:51:23.912609       1 log.go:172] http: TLS handshake error from 192.168.99.100:60252: EOF
E0702 20:51:23.922791       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E0702 20:51:23.923646       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I0702 20:51:23.925130       1 trace.go:81] Trace[992874036]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2019-07-02 20:51:05.785664319 +0000 UTC m=+558.540460279) (total time: 18.138655921s):
Trace[992874036]: [18.138655921s] [18.138619809s] END
E0702 20:51:24.080845       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E0702 20:51:25.006893       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I0702 20:51:25.007031       1 trace.go:81] Trace[243044746]: "List /api/v1/nodes" (started: 2019-07-02 20:51:23.921349527 +0000 UTC m=+576.676145483) (total time: 1.085666037s):
Trace[243044746]: [1.085666037s] [1.085658843s] END
E0702 20:51:25.007211       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I0702 20:51:25.008056       1 trace.go:81] Trace[399813541]: "List /apis/batch/v1/jobs" (started: 2019-07-02 20:51:23.918424562 +0000 UTC m=+576.673220513) (total time: 1.089619254s):
Trace[399813541]: [1.089619254s] [1.089578733s] END
I0702 20:51:25.019174       1 trace.go:81] Trace[114325482]: "Get /api/v1/namespaces/kube-system/configmaps/ingress-controller-leader-nginx" (started: 2019-07-02 20:51:06.668019028 +0000 UTC m=+559.422814991) (total time: 18.351132144s):
Trace[114325482]: [18.351047391s] [18.351036528s] About to write a response
I0702 20:51:25.032302       1 trace.go:81] Trace[1273773393]: "Create /api/v1/namespaces/kube-system/events" (started: 2019-07-02 20:51:23.914353179 +0000 UTC m=+576.669149147) (total time: 1.117930515s):
Trace[1273773393]: [1.117440808s] [1.115287725s] Object stored in database
I0702 20:51:25.036747       1 trace.go:81] Trace[757186810]: "List /api/v1/namespaces/kube-system/pods" (started: 2019-07-02 20:51:23.994785206 +0000 UTC m=+576.749581160) (total time: 1.041938093s):
Trace[757186810]: [1.041856736s] [1.041750817s] Listing from storage done
I0702 20:51:25.037969       1 trace.go:81] Trace[493499326]: "Get /api/v1/namespaces/kube-system/configmaps/ingress-controller-leader-nginx" (started: 2019-07-02 20:51:23.994419117 +0000 UTC m=+576.749215086) (total time: 1.043530729s):
Trace[493499326]: [1.043456513s] [1.043445607s] About to write a response
I0702 20:51:25.038415       1 trace.go:81] Trace[1923350878]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/minikube" (started: 2019-07-02 20:51:24.281548356 +0000 UTC m=+577.036344322) (total time: 756.853885ms):
Trace[1923350878]: [756.802096ms] [756.76723ms] About to write a response
I0702 20:51:25.040113       1 trace.go:81] Trace[540937284]: "Get /api/v1/namespaces/default" (started: 2019-07-02 20:51:23.925442589 +0000 UTC m=+576.680238543) (total time: 1.114654243s):
Trace[540937284]: [1.114598008s] [1.114589589s] About to write a response
I0702 20:51:28.397734       1 trace.go:81] Trace[2106000279]: "Get /api/v1/namespaces/kube-system/pods/kube-scheduler-minikube" (started: 2019-07-02 20:51:27.184404505 +0000 UTC m=+579.939200442) (total time: 1.213295764s):
Trace[2106000279]: [1.212418427s] [1.21241074s] About to write a response
I0702 20:51:28.444375       1 trace.go:81] Trace[602498507]: "GuaranteedUpdate etcd3: *core.Event" (started: 2019-07-02 20:51:27.18408933 +0000 UTC m=+579.938885273) (total time: 1.26023579s):
Trace[602498507]: [1.2222699s] [1.216699375s] Transaction prepared
I0702 20:51:28.455839       1 trace.go:81] Trace[1405743360]: "Patch /api/v1/namespaces/kube-system/events/kube-scheduler-minikube.15adb24c342e405f" (started: 2019-07-02 20:51:27.184007913 +0000 UTC m=+579.938803879) (total time: 1.260994019s):
Trace[1405743360]: [1.222189201s] [1.216535592s] About to check admission control

The apiserver stays online, though. Nevertheless, the kubectl connection is interrupted and I have to wait until the control plane becomes responsive again.

There are no events, neither on the pod nor in kubectl get events.

To me it looks like img is causing some network problems that lead to the leader election failing; however, I am totally lost when it comes to debugging this.
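
In case it is useful, a rough way to dump all kube-system component logs for correlation (not necessarily how the concatenated log further down was produced) would be:

# Dump the logs of every kube-system pod into per-pod files, with timestamps,
# so the control-plane components can be lined up against each other.
for p in $(kubectl -n kube-system get pods -o name); do
  kubectl -n kube-system logs --timestamps "$p" > "${p#pod/}.log"
done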

AkihiroSuda commented 5 years ago

Can you try without securityContext.privileged?
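
Concretely, that would be the same Pod from the issue description with the privileged securityContext removed, something like the sketch below (img may still need its documented non-privileged security settings on top of this):

kubectl delete pod img
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: img
spec:
  containers:
    - image: r.j3ss.co/img
      name: img
      command: ["tail", "-f", "/dev/null"]
      resources:
        requests:
          ephemeral-storage: "8Gi"
EOF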

AkihiroSuda commented 5 years ago

Couldn't reproduce the issue on minikube 1.2.0 (Kubernetes 1.15.0)

sh0rez commented 5 years ago

Okay, I cannot reproduce it on EC2 either. Nevertheless, it crashes on my local machine and on my local server:

https://asciinema.org/a/QJ06jv15Eu7oW0zFFZQjBjK1T?speed=10

sh0rez commented 5 years ago

This is the concatenated log of all services during that timespan:

etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:22.742461 W | etcdserver: timed out waiting for read index response (local node might have slow network)
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:24.711687 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "error:etcdserver: request timed out" took too long (24.298664393s) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:24.712390 W | wal: sync duration of 22.39820622s, expected less than 1s
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:24.713188 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (1.962133833s) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:24.713877 W | etcdserver: read-only range request "key:\"/registry/deployments\" range_end:\"/registry/deploymentt\" count_only:true " with result "range_response_count:0 size:7" took too long (1.967992983s) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:24.714499 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:172" took too long (1.968650461s) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:24.715253 W | etcdserver: read-only range request "key:\"/registry/replicasets\" range_end:\"/registry/replicasett\" count_only:true " with result "range_response_count:0 size:7" took too long (1.98021819s) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:24.715368 W | etcdserver: read-only range request "key:\"/registry/daemonsets\" range_end:\"/registry/daemonsett\" count_only:true " with result "range_response_count:0 size:7" took too long (2.011170335s) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:24.717233 W | etcdserver: read-only range request "key:\"/registry/services/endpoints\" range_end:\"/registry/services/endpointt\" count_only:true " with result "range_response_count:0 size:7" took too long (23.554606648s) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:24.853469 I | embed: rejected connection from "127.0.0.1:58644" (error "EOF", ServerName "")
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:24.894702 I | embed: rejected connection from "127.0.0.1:58628" (error "EOF", ServerName "")
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:24.907287 I | embed: rejected connection from "127.0.0.1:58630" (error "EOF", ServerName "")
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:25.230338 W | etcdserver: read-only range request "key:\"/registry/minions/minikube\" " with result "range_response_count:1 size:4072" took too long (136.445534ms) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:25.231766 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/ingress-controller-leader-nginx\" " with result "range_response_count:1 size:453" took too long (125.538935ms) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:25.232842 W | etcdserver: read-only range request "key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" " with result "range_response_count:4 size:842" took too long (122.87067ms) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:25.233195 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:15 size:29536" took too long (123.412886ms) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:25.234718 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses\" range_end:\"/registry/runtimeclasset\" count_only:true " with result "range_response_count:0 size:5" took too long (128.426093ms) to execute
influxdb-grafana-8qttq_kube-system_influxdb-93c62b6c680b99c9633fdd84c4e89b4f5dba58ea3c90ac6246b640a2d36f96db.log    [I] 2019-07-03T17:16:24Z failed to store statistics: timeout service=monitor
influxdb-grafana-8qttq_kube-system_influxdb-93c62b6c680b99c9633fdd84c4e89b4f5dba58ea3c90ac6246b640a2d36f96db.log    [httpd] 172.17.0.5 - root [03/Jul/2019:17:16:24 +0000] "POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1" 204 0 "-" "heapster/v1.5.3" 492eced1-9db6-11e9-8016-000000000000 30223
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.693690 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.694568 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log E0703 17:16:24.773591 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log E0703 17:16:24.776628 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.809843 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.809880 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.809935 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.809946 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.809953 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.809958 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log W0703 17:16:24.812870 1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.813115 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.813976 1 trace.go:81] Trace[18906199]: "Get /api/v1/namespaces/default" (started: 2019-07-03 17:16:01.077805519 +0000 UTC m=+1253.281577606) (total time: 23.736002193s):
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log Trace[18906199]: [23.735814594s] [23.735805133s] About to write a response
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log E0703 17:16:24.814195 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.814673 1 trace.go:81] Trace[2105881521]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2019-07-03 17:16:00.411289333 +0000 UTC m=+1252.615061451) (total time: 24.403371043s):
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log Trace[2105881521]: [24.403371043s] [24.403330263s] END
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.816796 1 clientconn.go:1016] blockingPicker: the picked transport is not ready, loop back to repick
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.816892 1 clientconn.go:1016] blockingPicker: the picked transport is not ready, loop back to repick
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.817015 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.817031 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.817223 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.817247 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.817388 1 clientconn.go:1016] blockingPicker: the picked transport is not ready, loop back to repick
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.817450 1 clientconn.go:1016] blockingPicker: the picked transport is not ready, loop back to repick
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.817702 1 trace.go:81] Trace[305836536]: "Get /api/v1/namespaces/kube-system/configmaps/ingress-controller-leader-nginx" (started: 2019-07-03 17:16:03.880287587 +0000 UTC m=+1256.084059686) (total time: 20.937397551s):
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log Trace[305836536]: [20.937366678s] [20.937356917s] About to write a response
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.817935 1 clientconn.go:1016] blockingPicker: the picked transport is not ready, loop back to repick
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.820695 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.820761 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log W0703 17:16:24.828700 1 clientconn.go:960] grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log W0703 17:16:24.828781 1 clientconn.go:960] grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.829794 1 trace.go:81] Trace[723750187]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/minikube" (started: 2019-07-03 17:16:01.334434814 +0000 UTC m=+1253.538206910) (total time: 23.495343127s):
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log Trace[723750187]: [23.495343127s] [23.495287201s] END
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.834670 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.834702 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.834713 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.834719 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.834728 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.834733 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.834742 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.834747 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.834757 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.834762 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.834771 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.834776 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.834784 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.834789 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log W0703 17:16:24.835321 1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:24.974208 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:25.025715 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:25.032642 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:25.037149 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:25.037765 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:25.038339 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:25.054468 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:25.092221 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log E0703 17:16:25.105077 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log W0703 17:16:25.105607 1 reflector.go:302] storage/cacher.go:/apiextensions.k8s.io/customresourcedefinitions: watch of *apiextensions.CustomResourceDefinition ended with: The resourceVersion for the provided watch is too old.
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log E0703 17:16:25.106359 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log W0703 17:16:25.106389 1 reflector.go:302] storage/cacher.go:/validatingwebhookconfigurations: watch of *admissionregistration.ValidatingWebhookConfiguration ended with: The resourceVersion for the provided watch is too old.
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log E0703 17:16:25.106403 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log W0703 17:16:25.106424 1 reflector.go:302] storage/cacher.go:/apiregistration.k8s.io/apiservices: watch of *apiregistration.APIService ended with: The resourceVersion for the provided watch is too old.
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log E0703 17:16:25.106478 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log W0703 17:16:25.106504 1 reflector.go:302] storage/cacher.go:/storageclasses: watch of *storage.StorageClass ended with: The resourceVersion for the provided watch is too old.
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log E0703 17:16:25.282889 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log W0703 17:16:25.282937 1 reflector.go:302] storage/cacher.go:/runtimeclasses: watch of *node.RuntimeClass ended with: The resourceVersion for the provided watch is too old.
nginx-ingress-controller-7b465d9cf8-q4nvl_kube-system_nginx-ingress-controller-24e936d8fdd4d6cdab2ce93832d7de193d8bb4dbbb1872dfc105d812fc6a9e4c.log I0703 17:16:24.772771 6 leaderelection.go:249] failed to renew lease kube-system/ingress-controller-leader-nginx: failed to tryAcquireOrRenew context deadline exceeded
nginx-ingress-controller-7b465d9cf8-q4nvl_kube-system_nginx-ingress-controller-24e936d8fdd4d6cdab2ce93832d7de193d8bb4dbbb1872dfc105d812fc6a9e4c.log I0703 17:16:24.889443 6 leaderelection.go:205] attempting to acquire leader lease kube-system/ingress-controller-leader-nginx...
nginx-ingress-controller-7b465d9cf8-q4nvl_kube-system_nginx-ingress-controller-24e936d8fdd4d6cdab2ce93832d7de193d8bb4dbbb1872dfc105d812fc6a9e4c.log I0703 17:16:25.091777 6 leaderelection.go:214] successfully acquired lease kube-system/ingress-controller-leader-nginx
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log W0703 17:16:26.110270 1 cacher.go:154] Terminating all watchers from cacher *apiextensions.CustomResourceDefinition
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log W0703 17:16:26.110521 1 cacher.go:154] Terminating all watchers from cacher *admissionregistration.ValidatingWebhookConfiguration
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log W0703 17:16:26.128813 1 reflector.go:302] k8s.io/client-go/informers/factory.go:133: watch of *v1beta1.ValidatingWebhookConfiguration ended with: too old resource version: 1 (2277)
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log W0703 17:16:26.151107 1 reflector.go:302] k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:117: watch of *apiextensions.CustomResourceDefinition ended with: too old resource version: 1 (2277)
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log W0703 17:16:26.287242 1 cacher.go:154] Terminating all watchers from cacher *node.RuntimeClass
kube-controller-manager-minikube_kube-system_kube-controller-manager-8d01ad9ef939ea03be6c3a068db44376663106fbe99a023f37dc3f39031bb2cf.log   W0703 17:16:26.129063 1 reflector.go:302] k8s.io/client-go/informers/factory.go:133: watch of *v1beta1.ValidatingWebhookConfiguration ended with: too old resource version: 1 (2277)
kube-controller-manager-minikube_kube-system_kube-controller-manager-8d01ad9ef939ea03be6c3a068db44376663106fbe99a023f37dc3f39031bb2cf.log   W0703 17:16:26.199718 1 reflector.go:302] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: watch of *unstructured.Unstructured ended with: too old resource version: 1 (2277)
kube-controller-manager-minikube_kube-system_kube-controller-manager-8d01ad9ef939ea03be6c3a068db44376663106fbe99a023f37dc3f39031bb2cf.log   W0703 17:16:26.324228 1 reflector.go:302] k8s.io/client-go/informers/factory.go:133: watch of *v1beta1.RuntimeClass ended with: too old resource version: 1 (2279)
kube-scheduler-minikube_kube-system_kube-scheduler-c45e17d3f23025a2fe6462f89033ca41f5a798e008144cce605e1290f2f38aed.log E0703 17:16:24.704286 1 leaderelection.go:324] error retrieving resource lock kube-system/kube-scheduler: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
kube-scheduler-minikube_kube-system_kube-scheduler-c45e17d3f23025a2fe6462f89033ca41f5a798e008144cce605e1290f2f38aed.log E0703 17:16:22.736302 1 event.go:296] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'minikube_bbe40ff4-f748-4c66-8bf1-5ebc28160526 stopped leading'
kube-scheduler-minikube_kube-system_kube-scheduler-c45e17d3f23025a2fe6462f89033ca41f5a798e008144cce605e1290f2f38aed.log I0703 17:16:24.705573 1 leaderelection.go:281] failed to renew lease kube-system/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded
kube-scheduler-minikube_kube-system_kube-scheduler-c45e17d3f23025a2fe6462f89033ca41f5a798e008144cce605e1290f2f38aed.log E0703 17:16:24.705652 1 server.go:254] lost master
kube-scheduler-minikube_kube-system_kube-scheduler-c45e17d3f23025a2fe6462f89033ca41f5a798e008144cce605e1290f2f38aed.log lost lease
kube-scheduler-minikube_kube-system_kube-scheduler-a95f87d441ed0c89fe4ff58cd6f27c6f2edd0b45e9f48f8c2b3eaac0a668a332.log I0703 17:16:27.199463 1 serving.go:319] Generated self-signed cert in-memory
kube-scheduler-minikube_kube-system_kube-scheduler-a95f87d441ed0c89fe4ff58cd6f27c6f2edd0b45e9f48f8c2b3eaac0a668a332.log W0703 17:16:27.787592 1 authentication.go:387] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
kube-scheduler-minikube_kube-system_kube-scheduler-a95f87d441ed0c89fe4ff58cd6f27c6f2edd0b45e9f48f8c2b3eaac0a668a332.log W0703 17:16:27.787664 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
kube-scheduler-minikube_kube-system_kube-scheduler-a95f87d441ed0c89fe4ff58cd6f27c6f2edd0b45e9f48f8c2b3eaac0a668a332.log W0703 17:16:27.787679 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
kube-scheduler-minikube_kube-system_kube-scheduler-a95f87d441ed0c89fe4ff58cd6f27c6f2edd0b45e9f48f8c2b3eaac0a668a332.log W0703 17:16:27.787742 1 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
kube-scheduler-minikube_kube-system_kube-scheduler-a95f87d441ed0c89fe4ff58cd6f27c6f2edd0b45e9f48f8c2b3eaac0a668a332.log W0703 17:16:27.787769 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
kube-scheduler-minikube_kube-system_kube-scheduler-a95f87d441ed0c89fe4ff58cd6f27c6f2edd0b45e9f48f8c2b3eaac0a668a332.log I0703 17:16:27.924855 1 server.go:142] Version: v1.15.0
kube-scheduler-minikube_kube-system_kube-scheduler-a95f87d441ed0c89fe4ff58cd6f27c6f2edd0b45e9f48f8c2b3eaac0a668a332.log I0703 17:16:27.925342 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
kube-scheduler-minikube_kube-system_kube-scheduler-a95f87d441ed0c89fe4ff58cd6f27c6f2edd0b45e9f48f8c2b3eaac0a668a332.log W0703 17:16:27.926423 1 authorization.go:47] Authorization is disabled
kube-scheduler-minikube_kube-system_kube-scheduler-a95f87d441ed0c89fe4ff58cd6f27c6f2edd0b45e9f48f8c2b3eaac0a668a332.log W0703 17:16:27.926471 1 authentication.go:55] Authentication is disabled
kube-scheduler-minikube_kube-system_kube-scheduler-a95f87d441ed0c89fe4ff58cd6f27c6f2edd0b45e9f48f8c2b3eaac0a668a332.log I0703 17:16:27.926605 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
kube-scheduler-minikube_kube-system_kube-scheduler-a95f87d441ed0c89fe4ff58cd6f27c6f2edd0b45e9f48f8c2b3eaac0a668a332.log I0703 17:16:27.927106 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
kube-scheduler-minikube_kube-system_kube-scheduler-a95f87d441ed0c89fe4ff58cd6f27c6f2edd0b45e9f48f8c2b3eaac0a668a332.log I0703 17:16:29.030350 1 leaderelection.go:235] attempting to acquire leader lease kube-system/kube-scheduler...
kube-scheduler-minikube_kube-system_kube-scheduler-a95f87d441ed0c89fe4ff58cd6f27c6f2edd0b45e9f48f8c2b3eaac0a668a332.log E0703 17:16:57.940063 1 leaderelection.go:324] error retrieving resource lock kube-system/kube-scheduler: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:58.022872 1 log.go:172] http: TLS handshake error from 192.168.99.100:53534: EOF
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log E0703 17:16:58.028242 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log E0703 17:16:58.057896 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:58.058461 1 trace.go:81] Trace[1764234468]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-07-03 17:16:41.132278562 +0000 UTC m=+1293.336050636) (total time: 16.925746514s):
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log Trace[1764234468]: [16.925746514s] [16.92570812s] END
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log E0703 17:16:58.059643 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:58.061881 1 trace.go:81] Trace[1883032059]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2019-07-03 17:16:41.731389459 +0000 UTC m=+1293.935161544) (total time: 16.330470469s):
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log Trace[1883032059]: [16.330470469s] [16.330424481s] END
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log E0703 17:16:58.177441 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log E0703 17:16:58.177995 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log E0703 17:16:58.178046 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:59.259408 1 trace.go:81] Trace[1941983561]: "GuaranteedUpdate etcd3: *coordination.Lease" (started: 2019-07-03 17:16:58.251882563 +0000 UTC m=+1310.455654665) (total time: 1.007500113s):
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log Trace[1941983561]: [1.007479137s] [1.0070256s] Transaction committed
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:59.259555 1 trace.go:81] Trace[1948855516]: "Update /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/minikube" (started: 2019-07-03 17:16:58.251774626 +0000 UTC m=+1310.455546706) (total time: 1.007767395s):
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log Trace[1948855516]: [1.007725448s] [1.007664507s] Object stored in database
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:59.262326 1 trace.go:81] Trace[833092943]: "List etcd3: key=/namespaces, resourceVersion=, limit: 0, continue: " (started: 2019-07-03 17:16:58.326867793 +0000 UTC m=+1310.530639866) (total time: 935.437068ms):
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log Trace[833092943]: [935.437068ms] [935.437068ms] END
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:59.262553 1 trace.go:81] Trace[1097032844]: "List /api/v1/namespaces/" (started: 2019-07-03 17:16:58.326856655 +0000 UTC m=+1310.530628750) (total time: 935.613815ms):
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log Trace[1097032844]: [935.5547ms] [935.550517ms] Listing from storage done
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:59.263195 1 trace.go:81] Trace[219239420]: "GuaranteedUpdate etcd3: *core.Event" (started: 2019-07-03 17:16:58.035523571 +0000 UTC m=+1310.239295690) (total time: 1.227656549s):
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log Trace[219239420]: [268.331673ms] [268.331673ms] initial value restored
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log Trace[219239420]: [1.173042983s] [904.71131ms] Transaction prepared
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:59.263321 1 trace.go:81] Trace[1509506141]: "Patch /api/v1/namespaces/kube-system/events/nginx-ingress-controller-7b465d9cf8-q4nvl.15adf4382ad09e4e" (started: 2019-07-03 17:16:58.032639955 +0000 UTC m=+1310.236412060) (total time: 1.230665783s):
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log Trace[1509506141]: [271.21756ms] [271.19166ms] About to apply patch
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log Trace[1509506141]: [1.175773019s] [904.555459ms] About to check admission control
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log I0703 17:16:59.445112 1 trace.go:81] Trace[1568777634]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2019-07-03 17:16:58.302014283 +0000 UTC m=+1310.505786381) (total time: 1.143055238s):
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log Trace[1568777634]: [960.018097ms] [960.018097ms] initial value restored
kube-apiserver-minikube_kube-system_kube-apiserver-6951cee2700e8febaaafa14b69b243a9bc8eb40ad10804e450d9de4b6cefb0e8.log Trace[1568777634]: [1.099152011s] [139.133914ms] Transaction prepared
kube-controller-manager-minikube_kube-system_kube-controller-manager-8d01ad9ef939ea03be6c3a068db44376663106fbe99a023f37dc3f39031bb2cf.log   E0703 17:16:57.942704 1 leaderelection.go:324] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
kube-controller-manager-minikube_kube-system_kube-controller-manager-8d01ad9ef939ea03be6c3a068db44376663106fbe99a023f37dc3f39031bb2cf.log   I0703 17:16:57.947831 1 leaderelection.go:281] failed to renew lease kube-system/kube-controller-manager: failed to tryAcquireOrRenew context deadline exceeded
kube-controller-manager-minikube_kube-system_kube-controller-manager-8d01ad9ef939ea03be6c3a068db44376663106fbe99a023f37dc3f39031bb2cf.log   F0703 17:16:57.948846 1 controllermanager.go:281] leaderelection lost
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.111673 W | etcdserver: timed out waiting for read index response (local node might have slow network)
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.111738 W | wal: sync duration of 17.762819145s, expected less than 1s
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.179238 W | etcdserver: failed to revoke 34a76bb8c2e5a4e3 ("etcdserver: request timed out")
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.182840 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "error:etcdserver: request timed out" took too long (17.050173145s) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.184234 W | etcdserver: failed to revoke 34a76bb8c2e5a4e3 ("lease not found")
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.188621 W | etcdserver: failed to revoke 34a76bb8c2e5a4e3 ("lease not found")
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.188665 W | etcdserver: failed to revoke 34a76bb8c2e5a4e3 ("lease not found")
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.188673 W | etcdserver: failed to revoke 34a76bb8c2e5a4e3 ("lease not found")
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.188679 W | etcdserver: failed to revoke 34a76bb8c2e5a4e3 ("lease not found")
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.188685 W | etcdserver: failed to revoke 34a76bb8c2e5a4e3 ("lease not found")
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.188691 W | etcdserver: failed to revoke 34a76bb8c2e5a4e3 ("lease not found")
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.206022 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (100.215891ms) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.209040 W | etcdserver: read-only range request "key:\"/registry/daemonsets\" range_end:\"/registry/daemonsett\" count_only:true " with result "range_response_count:0 size:7" took too long (16.180949681s) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.209691 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "error:context canceled" took too long (16.477862644s) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.213183 W | etcdserver: read-only range request "key:\"/registry/volumeattachments\" range_end:\"/registry/volumeattachmentt\" count_only:true " with result "range_response_count:0 size:5" took too long (16.65251128s) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.213460 W | etcdserver: read-only range request "key:\"/registry/jobs\" range_end:\"/registry/jobt\" count_only:true " with result "range_response_count:0 size:5" took too long (16.780377528s) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.213753 W | etcdserver: read-only range request "key:\"/registry/podtemplates\" range_end:\"/registry/podtemplatet\" count_only:true " with result "range_response_count:0 size:5" took too long (16.853631663s) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:58.298138 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/heapster\" " with result "range_response_count:1 size:554" took too long (113.758592ms) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:59.258952 W | etcdserver: read-only range request "key:\"/registry/masterleases/192.168.99.100\" " with result "range_response_count:0 size:5" took too long (941.678248ms) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:59.260201 W | etcdserver: read-only range request "key:\"/registry/roles\" range_end:\"/registry/rolet\" count_only:true " with result "range_response_count:0 size:7" took too long (234.201308ms) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:59.261168 W | etcdserver: read-only range request "key:\"foo\" " with result "range_response_count:0 size:5" took too long (898.997654ms) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:16:59.261325 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/heapster-9spdr\" " with result "range_response_count:1 size:1456" took too long (958.657899ms) to execute
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log INFO: Leader is minikube
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log INFO: == Kubernetes addon ensure completed at 2019-07-03T17:17:00+00:00 ==
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log INFO: == Reconciling with deprecated label ==
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log error: no objects passed to apply
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log INFO: == Reconciling with addon-manager label ==
kube-scheduler-minikube_kube-system_kube-scheduler-a95f87d441ed0c89fe4ff58cd6f27c6f2edd0b45e9f48f8c2b3eaac0a668a332.log I0703 17:17:00.566200 1 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log deployment.apps/kubernetes-dashboard unchanged
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log service/kubernetes-dashboard unchanged
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log service/monitoring-grafana unchanged
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log replicationcontroller/heapster unchanged
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log service/heapster unchanged
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log replicationcontroller/influxdb-grafana unchanged
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log service/monitoring-influxdb unchanged
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log deployment.extensions/default-http-backend unchanged
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log deployment.extensions/nginx-ingress-controller unchanged
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log serviceaccount/nginx-ingress unchanged
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log service/default-http-backend unchanged
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log serviceaccount/storage-provisioner unchanged
kube-addon-manager-minikube_kube-system_kube-addon-manager-c2761a80019048b67abe122ab586b07f64454439dacc6ea3812c708fbbbcbb08.log INFO: == Kubernetes addon reconcile completed at 2019-07-03T17:17:02+00:00 ==
influxdb-grafana-8qttq_kube-system_influxdb-93c62b6c680b99c9633fdd84c4e89b4f5dba58ea3c90ac6246b640a2d36f96db.log    [httpd] 172.17.0.5 - root [03/Jul/2019:17:17:05 +0000] "POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1" 204 0 "-" "heapster/v1.5.3" 6144d617-9db6-11e9-8017-000000000000 23364
kube-controller-manager-minikube_kube-system_kube-controller-manager-bae3f706ec7d8f922ebb3c3ac7ef36e41f8c08190ad3e0d7b25f2a9c6855f50e.log   I0703 17:17:01.200634 1 serving.go:319] Generated self-signed cert in-memory
kube-controller-manager-minikube_kube-system_kube-controller-manager-bae3f706ec7d8f922ebb3c3ac7ef36e41f8c08190ad3e0d7b25f2a9c6855f50e.log   I0703 17:17:01.669353 1 controllermanager.go:164] Version: v1.15.0
kube-controller-manager-minikube_kube-system_kube-controller-manager-bae3f706ec7d8f922ebb3c3ac7ef36e41f8c08190ad3e0d7b25f2a9c6855f50e.log   I0703 17:17:01.670206 1 secure_serving.go:116] Serving securely on 127.0.0.1:10257
kube-controller-manager-minikube_kube-system_kube-controller-manager-bae3f706ec7d8f922ebb3c3ac7ef36e41f8c08190ad3e0d7b25f2a9c6855f50e.log   I0703 17:17:01.679730 1 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
kube-controller-manager-minikube_kube-system_kube-controller-manager-bae3f706ec7d8f922ebb3c3ac7ef36e41f8c08190ad3e0d7b25f2a9c6855f50e.log   I0703 17:17:01.689218 1 leaderelection.go:235] attempting to acquire leader lease kube-system/kube-controller-manager...
kube-controller-manager-minikube_kube-system_kube-controller-manager-bae3f706ec7d8f922ebb3c3ac7ef36e41f8c08190ad3e0d7b25f2a9c6855f50e.log   I0703 17:17:21.286470 1 leaderelection.go:245] successfully acquired lease kube-system/kube-controller-manager
kube-controller-manager-minikube_kube-system_kube-controller-manager-bae3f706ec7d8f922ebb3c3ac7ef36e41f8c08190ad3e0d7b25f2a9c6855f50e.log   I0703 17:17:21.334428 1 event.go:258] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"06ff15c3-86a2-40c4-996d-e2b6a2529864", APIVersion:"v1", ResourceVersion:"2331", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_a38606d3-f28b-4ba2-a7c4-6afc53c07fd8 became leader
kube-controller-manager-minikube_kube-system_kube-controller-manager-bae3f706ec7d8f922ebb3c3ac7ef36e41f8c08190ad3e0d7b25f2a9c6855f50e.log   I0703 17:17:21.615253 1 plugins.go:103] No cloud provider specified.
kube-controller-manager-minikube_kube-system_kube-controller-manager-bae3f706ec7d8f922ebb3c3ac7ef36e41f8c08190ad3e0d7b25f2a9c6855f50e.log   I0703 17:17:21.681522 1 controller_utils.go:1029] Waiting for caches to sync for tokens controller
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:17:21.283175 W | etcdserver: read-only range request "key:\"/registry/deployments\" range_end:\"/registry/deploymentt\" count_only:true " with result "range_response_count:0 size:7" took too long (111.996391ms) to execute
etcd-minikube_kube-system_etcd-16d7bde44de8470c0e43c713b18e707774efd80b06b452859d6f2f6e6fe4c474.log 2019-07-03 17:17:21.283987 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/ingress-controller-leader-nginx\" " with result "range_response_count:1 size:453" took too long (201.452597ms) to execute
ikorolev93 commented 5 years ago

@sh0rez do you have etcd on the same machine as img? etcd is known to be sensitive to disk (and network, if you have it clustered) latency, so you should either tune its thresholds to be larger or run it on dedicated machines.
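
A sketch of what "tune the thresholds" could look like on minikube: first measure fsync latency on the disk backing etcd while img is extracting, then raise etcd's heartbeat interval and election timeout if the disk is the culprit. The fio parameters follow etcd's usual disk benchmark; whether minikube actually forwards these flags via --extra-config=etcd.* is an assumption worth verifying, and fio needs to be installed on the VM.

# 1) Measure fsync latency on the node (pick a directory on the same disk as
#    etcd's data dir; /tmp may be tmpfs, so avoid it).
minikube ssh
sudo mkdir -p /var/lib/etcd-disk-check
sudo fio --rw=write --ioengine=sync --fdatasync=1 --size=22m --bs=2300 \
    --name=etcd-disk-check --directory=/var/lib/etcd-disk-check

# 2) Recreate the cluster with larger etcd thresholds (etcd defaults are 100ms / 1000ms).
minikube start \
  --extra-config=etcd.heartbeat-interval=500 \
  --extra-config=etcd.election-timeout=5000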

sh0rez commented 5 years ago

Yes, it's Minikube, so everything runs on the same machine. I can try that later and see if it helps.

This would at least explain why it works on EC2; those machines have much better disks than mine.