kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

ingress addon incorrectly round-robining traffic to another service #8328

Closed · ericwooley closed 4 years ago

ericwooley commented 4 years ago

Repository to reproduce the issue · Related Stack Overflow question

I am creating two services/deployments and trying to send traffic to only one of them.

The ingress addon seems to be incorrectly round-robining traffic to another service (please note, I'm new to k8s and might just be making a really stupid error of some kind).

Full YAML declaration:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: repro-ingress-issue
  name: service-one
spec:
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: http
  selector:
    app: repro-ingress-issue
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: repro-ingress-issue
  name: service-two
spec:
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: http
  selector:
    app: repro-ingress-issue
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: repro-ingress-issue
  name: service-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: repro-ingress-issue
  template:
    metadata:
      labels:
        app: repro-ingress-issue
    spec:
      containers:
        - env:
            - name: NAME
              value: service-one
            - name: PORT
              value: "8080"
          image: issue-repro/express
          imagePullPolicy: Never
          name: service-one
          ports:
            - containerPort: 8080
              name: http
      restartPolicy: Always
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: repro-ingress-issue
  name: service-two
spec:
  replicas: 1
  selector:
    matchLabels:
      app: repro-ingress-issue
  template:
    metadata:
      labels:
        app: repro-ingress-issue
    spec:
      containers:
        - env:
            - name: NAME
              value: service-two
            - name: PORT
              value: "8080"
          image: issue-repro/express
          imagePullPolicy: Never
          name: service-two
          ports:
            - containerPort: 8080
              name: http
      restartPolicy: Always
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: repro-ingress-issue
  name: api
  namespace: default
spec:
  rules:
    - host: api.dev.tengable.com
      http:
        paths:
          - backend:
              serviceName: service-one
              servicePort: http
            path: /
            pathType: Prefix
```
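Note that both Services above use the same selector (`app: repro-ingress-issue`), so each Service would be expected to resolve to the Pods of both Deployments, which may be related to what I'm seeing. A quick way to check which Pods each Service actually picks up (a diagnostic sketch with plain kubectl, using the names from the manifests above):

```bash
# List all pods with their labels; both deployments' pods carry app=repro-ingress-issue
kubectl get pods --show-labels

# Show the endpoints each Service resolves to; with the shared selector,
# both services are expected to list both pod IPs
kubectl get endpoints service-one service-two -o wide
```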

Steps to reproduce the issue:

  1. Create a minikube instance just for this
    1. minikube start --driver=kvm2 -p issue-repro
    2. minikube addons enable ingress -p issue-repro
    3. minikube addons enable dashboard -p issue-repro
  2. connect docker to minikube
    1. eval $(minikube docker-env -p issue-repro)
  3. docker build . --tag issue-repro/express
  4. connect your local port 8080 to minikube's port 80
    1. ssh -N -f -M -S /tmp/issue-repro-minikube-sock -L 8080:127.0.0.1:80 -i $(minikube ssh-key -p issue-repro) docker@$(minikube ip -p issue-repro)
    2. ssh -S /tmp/issue-repro-minikube-sock -O exit -i $(minikube ssh-key -p issue-repro) docker@$(minikube ip -p issue-repro) (to disconnect)
  5. apply the k8s config
    1. `kubectl apply -k k8s/base`
  6. Open a browser and navigate to api.dev.tengable.com:8080
  7. Refresh repeatedly, and you will see that the requests go to both service-one and service-two (see the curl sketch below)
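To make step 7 easier to observe than refreshing a browser, a small curl loop through the tunnel works as well. This is just a sketch: it assumes the express app from the repro repository answers with its NAME environment variable, and that requests to 127.0.0.1:8080 with the matching Host header reach the ingress via the tunnel from step 4.

```bash
# Send ten requests through the local tunnel; the Host header matches the Ingress rule,
# so nginx should route them to the service-one backend only.
for i in $(seq 1 10); do
  curl -s -H "Host: api.dev.tengable.com" http://127.0.0.1:8080/
  echo
done
```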

Full output of failed command:

Full output of minikube start command used, if not already included:

```
➜ minikube start -p issue-repro
😄  [issue-repro] minikube v1.10.1 on Debian bullseye/sid
    ▪ MINIKUBE_ACTIVE_DOCKERD=issue-repro
✨  Using the kvm2 driver based on existing profile
👍  Starting control plane node issue-repro in cluster issue-repro
🔄  Restarting existing kvm2 VM for "issue-repro" ...
🐳  Preparing Kubernetes v1.18.2 on Docker 19.03.8 ...
🌟  Enabled addons: dashboard, default-storageclass, ingress, storage-provisioner
🏄  Done! kubectl is now configured to use "issue-repro"
```

Optional: Full output of minikube logs command:

``` ==> Docker <== -- Logs begin at Sat 2020-05-30 00:01:18 UTC, end at Sat 2020-05-30 00:15:47 UTC. -- May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.023642543Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1 May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.023698898Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1 May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.023759494Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1 May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.023798880Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1 May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.023838904Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1 May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.023893196Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1 May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.024052463Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock" May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.024137067Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock" May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.024185126Z" level=info msg="containerd successfully booted in 0.008229s" May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.033447724Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.033540825Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.033589733Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.033639770Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.034228510Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.034244053Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.034254883Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.034261890Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.154134670Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.154252497Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.154297734Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.154337475Z" level=warning 
msg="Your kernel does not support cgroup blkio throttle.write_bps_device" May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.154376095Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.154414157Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.154592954Z" level=info msg="Loading containers: start." May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.496569235Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.651992192Z" level=info msg="Loading containers: done." May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.677405154Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8 May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.677927433Z" level=info msg="Daemon has completed initialization" May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.717309566Z" level=info msg="API listen on /var/run/docker.sock" May 30 00:01:21 issue-repro dockerd[1971]: time="2020-05-30T00:01:21.717368347Z" level=info msg="API listen on [::]:2376" May 30 00:01:21 issue-repro systemd[1]: Started Docker Application Container Engine. May 30 00:01:25 issue-repro dockerd[1971]: time="2020-05-30T00:01:25.140371714Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/889b2ac764bdf5524c93e388084c84b40a43b43fddc9b412f63c168f739ea0de/shim.sock" debug=false pid=3015 May 30 00:01:25 issue-repro dockerd[1971]: time="2020-05-30T00:01:25.167169091Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d5d0d038275f9da195902d299f85c23ed2e195bddd1f1d0541a5b7937745c1f4/shim.sock" debug=false pid=3033 May 30 00:01:25 issue-repro dockerd[1971]: time="2020-05-30T00:01:25.173678675Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cdfd0fbe79b8074f91f292d7f55992e7634f0e6ef35a7d2cfbcb37c106779337/shim.sock" debug=false pid=3042 May 30 00:01:25 issue-repro dockerd[1971]: time="2020-05-30T00:01:25.173974826Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/559e28054864be5358fa93d553bb1beb61a54eb712a48f34c366aee1e3392469/shim.sock" debug=false pid=3046 May 30 00:01:25 issue-repro dockerd[1971]: time="2020-05-30T00:01:25.405562383Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1aa4e2c67431d204d04fdeb48186219ecb2e01c16292a4bda562819bbd65dc4f/shim.sock" debug=false pid=3164 May 30 00:01:25 issue-repro dockerd[1971]: time="2020-05-30T00:01:25.430602739Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f719408b0969d7c29da2b7f342760c36c879484a58a6bb437b27a10e25020b9b/shim.sock" debug=false pid=3182 May 30 00:01:25 issue-repro dockerd[1971]: time="2020-05-30T00:01:25.481283252Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/55e125eefd8a3d8fa9b4cd5036f72a0936f1334ab9af3b02a135440d38295295/shim.sock" debug=false pid=3211 May 30 00:01:25 issue-repro dockerd[1971]: time="2020-05-30T00:01:25.485779414Z" level=info msg="shim containerd-shim started" 
address="/containerd-shim/moby/6f680c7cd85ef47024156193907954ab0a5ffd1f79ea00d1240677ee66d7628c/shim.sock" debug=false pid=3221 May 30 00:01:30 issue-repro dockerd[1971]: time="2020-05-30T00:01:30.048708560Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c48090c51a104208c3c5ce3e6598589dc0fe727cdf0460b6b78413e29a67ddc0/shim.sock" debug=false pid=3608 May 30 00:01:30 issue-repro dockerd[1971]: time="2020-05-30T00:01:30.190265100Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/19b51979e2f8969cf737c666840056242a147ae416d3b613678ecc4300dc0eab/shim.sock" debug=false pid=3675 May 30 00:01:30 issue-repro dockerd[1971]: time="2020-05-30T00:01:30.202366949Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6667bc1443814bef7ff63bc8eaf049a131037b067a55edd39bc6e6aa2241624d/shim.sock" debug=false pid=3687 May 30 00:01:30 issue-repro dockerd[1971]: time="2020-05-30T00:01:30.245036472Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d04f82d83dc56dafb201fbe9e7e5db454d5f95168e66fcca707f640c5c346230/shim.sock" debug=false pid=3738 May 30 00:01:30 issue-repro dockerd[1971]: time="2020-05-30T00:01:30.314084386Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/471c1642ad3366be282b83cea17ba733aa8a440f3b637a602c4fe7e43f2cfb27/shim.sock" debug=false pid=3800 May 30 00:01:30 issue-repro dockerd[1971]: time="2020-05-30T00:01:30.502711260Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d2fa5a229592a110f3389165dd54184cb9e7d88889e8ee6a6f619bdf41c979e9/shim.sock" debug=false pid=3861 May 30 00:01:30 issue-repro dockerd[1971]: time="2020-05-30T00:01:30.848452233Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c6c6eaf76a5834e8a90dcc3a2c05f57827176c78d45a461be61d91f31cc80062/shim.sock" debug=false pid=3953 May 30 00:01:30 issue-repro dockerd[1971]: time="2020-05-30T00:01:30.849960471Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7213c5711e8835d66c804d909b3d8994d09f2d0f2496686e1f12a3dcea56274f/shim.sock" debug=false pid=3957 May 30 00:01:30 issue-repro dockerd[1971]: time="2020-05-30T00:01:30.858729519Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/55149cef34ef8ae80b10cb9a18ec9a6594aed979a9ec4a638041bbe348aea932/shim.sock" debug=false pid=3963 May 30 00:01:31 issue-repro dockerd[1971]: time="2020-05-30T00:01:31.174043960Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/232a501b55d92b5d9f03a00515cc8fa7b27401158629b771c260a6e60aaa9817/shim.sock" debug=false pid=4088 May 30 00:01:31 issue-repro dockerd[1971]: time="2020-05-30T00:01:31.594683914Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7f877b4c636defcc442555ab2a140b58c74069c7e3fb6dfad053bfb97c4abafe/shim.sock" debug=false pid=4190 May 30 00:01:31 issue-repro dockerd[1971]: time="2020-05-30T00:01:31.598705871Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d81b00a45a460a42503c16cd2b6f9555660aaf007a5e868f62b5a6cdbea79f10/shim.sock" debug=false pid=4194 May 30 00:01:31 issue-repro dockerd[1971]: time="2020-05-30T00:01:31.611716098Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/726bb542208c930216ac1ac957173909c057b56cd2457e11007ae1a80e86b636/shim.sock" debug=false pid=4205 May 30 00:01:31 issue-repro dockerd[1971]: time="2020-05-30T00:01:31.629273787Z" level=info msg="shim 
containerd-shim started" address="/containerd-shim/moby/5ba343dd4d78d0fce7b0ba38bfab3506f2e2b6e1f54fd8ce71622d32a16cb880/shim.sock" debug=false pid=4229 May 30 00:01:32 issue-repro dockerd[1971]: time="2020-05-30T00:01:32.044672141Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c0fdf9c9236f994f8672f8aa95ca4b5b36fd7b8cbb3db2266f1c182aa41ec596/shim.sock" debug=false pid=4402 May 30 00:01:32 issue-repro dockerd[1971]: time="2020-05-30T00:01:32.416689414Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9b1aa95eeedd2d88ec77ce82880e9bb78044da65ba89f7251322e16c5246ea29/shim.sock" debug=false pid=4494 May 30 00:01:32 issue-repro dockerd[1971]: time="2020-05-30T00:01:32.442704413Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0eb33c39594a95e8bb5cd39611ae54a0cafd2c5fb506632701e11ffc60ebb020/shim.sock" debug=false pid=4509 May 30 00:01:32 issue-repro dockerd[1971]: time="2020-05-30T00:01:32.685257714Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4c533009716ca22ac8e1ab0220303375dad66b992c8d6c7aee3cd92cebf01625/shim.sock" debug=false pid=4591 May 30 00:02:01 issue-repro dockerd[1971]: time="2020-05-30T00:02:01.520679883Z" level=info msg="shim reaped" id=55149cef34ef8ae80b10cb9a18ec9a6594aed979a9ec4a638041bbe348aea932 May 30 00:02:01 issue-repro dockerd[1971]: time="2020-05-30T00:02:01.530783166Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 30 00:02:17 issue-repro dockerd[1971]: time="2020-05-30T00:02:17.355187193Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cee32e06fc8f85772db9cb569362fa5e3f76aa54c1b1e066368d57046b41d35b/shim.sock" debug=false pid=5011 ==> container status <== CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID cee32e06fc8f8 8b32422733b3a 13 minutes ago Running kubernetes-dashboard 2 d04f82d83dc56 9b1aa95eeedd2 67da37a9a360e 14 minutes ago Running coredns 1 5ba343dd4d78d 0eb33c39594a9 67da37a9a360e 14 minutes ago Running coredns 1 d81b00a45a460 4c533009716ca 4689081edb103 14 minutes ago Running storage-provisioner 2 c0fdf9c9236f9 726bb542208c9 3b08661dc379d 14 minutes ago Running dashboard-metrics-scraper 1 19b51979e2f89 7f877b4c636de 0d40868643c69 14 minutes ago Running kube-proxy 1 7213c5711e883 d2fa5a229592a 7719f5a0ec639 14 minutes ago Running service-one 1 c48090c51a104 232a501b55d92 70144d369cb28 14 minutes ago Running controller 1 471c1642ad336 c6c6eaf76a583 7719f5a0ec639 14 minutes ago Running service-two 1 6667bc1443814 55149cef34ef8 8b32422733b3a 14 minutes ago Exited kubernetes-dashboard 1 d04f82d83dc56 6f680c7cd85ef ace0a8c17ba90 14 minutes ago Running kube-controller-manager 2 d5d0d038275f9 55e125eefd8a3 6ed75ad404bdd 14 minutes ago Running kube-apiserver 1 559e28054864b f719408b0969d a3099161e1375 14 minutes ago Running kube-scheduler 2 cdfd0fbe79b80 1aa4e2c67431d 303ce5db0e90d 14 minutes ago Running etcd 1 889b2ac764bdf d74300e79d380 7719f5a0ec639 31 minutes ago Exited service-one 0 c44e2f67fd523 44589a3fa3b18 7719f5a0ec639 31 minutes ago Exited service-two 0 272d310d98642 2c989d56ca9b2 quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:251e733bf41cdf726092e079d32eed51791746560fff4d59cf067508ed635287 38 minutes ago Exited controller 0 a673c7fc2885a 09ab7b7f59bbb 3b08661dc379d 39 minutes ago Exited dashboard-metrics-scraper 0 11f97f0fb3e18 a310ba967904b 
jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 39 minutes ago Exited patch 0 a18511ffb6591 2af0ed7fcc571 jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 39 minutes ago Exited create 0 3b6c521686fd0 044503980b7a4 a3099161e1375 40 minutes ago Exited kube-scheduler 1 dc5661b63d285 2bee36cfb700c ace0a8c17ba90 40 minutes ago Exited kube-controller-manager 1 988a7c4c75a82 c66154e1ba5b2 4689081edb103 42 minutes ago Exited storage-provisioner 1 ba0c2007a7bf0 c26e0546e6e84 67da37a9a360e 42 minutes ago Exited coredns 0 0ca834fdd7b2e 1f580dc7b62a4 67da37a9a360e 42 minutes ago Exited coredns 0 d9ba08ee70ad9 dcaea65e3c544 0d40868643c69 42 minutes ago Exited kube-proxy 0 43397fb07994e bc03c4008856a 303ce5db0e90d 43 minutes ago Exited etcd 0 ec998f445eec1 1f79499e941c1 6ed75ad404bdd 43 minutes ago Exited kube-apiserver 0 31e3c9d9c8d90 ==> coredns [0eb33c39594a] <== .:53 [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 CoreDNS-1.6.7 linux/amd64, go1.13.6, da7f65b ==> coredns [1f580dc7b62a] <== .:53 [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 CoreDNS-1.6.7 linux/amd64, go1.13.6, da7f65b [INFO] SIGTERM: Shutting down servers then terminating [INFO] plugin/health: Going into lameduck mode for 5s ==> coredns [9b1aa95eeedd] <== .:53 [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 CoreDNS-1.6.7 linux/amd64, go1.13.6, da7f65b ==> coredns [c26e0546e6e8] <== .:53 [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 CoreDNS-1.6.7 linux/amd64, go1.13.6, da7f65b ==> describe nodes <== Name: issue-repro Roles: master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=issue-repro kubernetes.io/os=linux minikube.k8s.io/commit=63ab801ac27e5742ae442ce36dff7877dcccb278 minikube.k8s.io/name=issue-repro minikube.k8s.io/updated_at=2020_05_29T16_32_42_0700 minikube.k8s.io/version=v1.10.1 node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Fri, 29 May 2020 23:32:26 +0000 Taints: Unschedulable: false Lease: HolderIdentity: issue-repro AcquireTime: RenewTime: Sat, 30 May 2020 00:15:40 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Sat, 30 May 2020 00:11:33 +0000 Fri, 29 May 2020 23:32:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Sat, 30 May 2020 00:11:33 +0000 Fri, 29 May 2020 23:32:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Sat, 30 May 2020 00:11:33 +0000 Fri, 29 May 2020 23:32:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Sat, 30 May 2020 00:11:33 +0000 Fri, 29 May 2020 23:32:36 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.39.148 Hostname: issue-repro Capacity: cpu: 2 ephemeral-storage: 16954224Ki hugepages-2Mi: 0 memory: 5674536Ki pods: 110 Allocatable: cpu: 2 ephemeral-storage: 16954224Ki hugepages-2Mi: 0 memory: 5674536Ki pods: 110 System Info: Machine ID: bf304dc216ec441e8fbb3eac142182b3 System UUID: bf304dc2-16ec-441e-8fbb-3eac142182b3 Boot ID: 
92e4e06a-5984-4f55-90a6-963eef13285a Kernel Version: 4.19.107 OS Image: Buildroot 2019.02.10 Operating System: linux Architecture: amd64 Container Runtime Version: docker://19.3.8 Kubelet Version: v1.18.2 Kube-Proxy Version: v1.18.2 Non-terminated Pods: (13 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- default service-one-857f9557f5-lh6jg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31m default service-two-5664d4c575-wcxsp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31m kube-system coredns-66bff467f8-7vdm8 100m (5%) 0 (0%) 70Mi (1%) 170Mi (3%) 42m kube-system coredns-66bff467f8-k9dnt 100m (5%) 0 (0%) 70Mi (1%) 170Mi (3%) 42m kube-system etcd-issue-repro 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43m kube-system ingress-nginx-controller-7bb4c67d67-vns67 100m (5%) 0 (0%) 90Mi (1%) 0 (0%) 39m kube-system kube-apiserver-issue-repro 250m (12%) 0 (0%) 0 (0%) 0 (0%) 43m kube-system kube-controller-manager-issue-repro 200m (10%) 0 (0%) 0 (0%) 0 (0%) 43m kube-system kube-proxy-r2b28 0 (0%) 0 (0%) 0 (0%) 0 (0%) 42m kube-system kube-scheduler-issue-repro 100m (5%) 0 (0%) 0 (0%) 0 (0%) 43m kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43m kubernetes-dashboard dashboard-metrics-scraper-84bfdf55ff-tkrwc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 39m kubernetes-dashboard kubernetes-dashboard-696dbcc666-xphg5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 39m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 850m (42%) 0 (0%) memory 230Mi (4%) 340Mi (6%) ephemeral-storage 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 43m kubelet, issue-repro Starting kubelet. Normal NodeHasSufficientMemory 43m kubelet, issue-repro Node issue-repro status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 43m kubelet, issue-repro Node issue-repro status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 43m kubelet, issue-repro Node issue-repro status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 43m kubelet, issue-repro Updated Node Allocatable limit across pods Normal Starting 42m kube-proxy, issue-repro Starting kube-proxy. Normal Starting 14m kubelet, issue-repro Starting kubelet. Normal NodeHasSufficientMemory 14m (x8 over 14m) kubelet, issue-repro Node issue-repro status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 14m (x8 over 14m) kubelet, issue-repro Node issue-repro status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 14m (x7 over 14m) kubelet, issue-repro Node issue-repro status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 14m kubelet, issue-repro Updated Node Allocatable limit across pods Normal Starting 14m kube-proxy, issue-repro Starting kube-proxy. ==> dmesg <== [May30 00:01] You have booted with nomodeset. This means your GPU drivers are DISABLED [ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly [ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it [ +0.034695] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
[ +2.079149] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 [ +0.483059] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument [ +0.005910] systemd-fstab-generator[1146]: Ignoring "noauto" for root device [ +0.002148] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling. [ +0.000001] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.) [ +1.033148] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack. [ +0.014084] vboxguest: loading out-of-tree module taints kernel. [ +0.002815] vboxguest: PCI device not found, probably running on physical hardware. [ +2.007577] systemd-fstab-generator[1951]: Ignoring "noauto" for root device [ +0.061186] systemd-fstab-generator[1961]: Ignoring "noauto" for root device [ +1.194668] systemd-fstab-generator[2197]: Ignoring "noauto" for root device [ +0.239901] systemd-fstab-generator[2271]: Ignoring "noauto" for root device [ +7.490649] kauditd_printk_skb: 149 callbacks suppressed [ +19.283860] kauditd_printk_skb: 77 callbacks suppressed [May30 00:02] kauditd_printk_skb: 2 callbacks suppressed [May30 00:03] NFSD: Unable to end grace period: -110 ==> etcd [1aa4e2c67431] <== [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2020-05-30 00:01:25.721919 I | etcdmain: etcd Version: 3.4.3 2020-05-30 00:01:25.721950 I | etcdmain: Git SHA: 3cf2f69b5 2020-05-30 00:01:25.721953 I | etcdmain: Go Version: go1.12.12 2020-05-30 00:01:25.721956 I | etcdmain: Go OS/Arch: linux/amd64 2020-05-30 00:01:25.721959 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2 2020-05-30 00:01:25.722002 N | etcdmain: the server is already initialized as member before, starting as etcd member... [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2020-05-30 00:01:25.722024 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-05-30 00:01:25.740308 I | embed: name = issue-repro 2020-05-30 00:01:25.740324 I | embed: data dir = /var/lib/minikube/etcd 2020-05-30 00:01:25.740328 I | embed: member dir = /var/lib/minikube/etcd/member 2020-05-30 00:01:25.740331 I | embed: heartbeat = 100ms 2020-05-30 00:01:25.740333 I | embed: election = 1000ms 2020-05-30 00:01:25.740336 I | embed: snapshot count = 10000 2020-05-30 00:01:25.740343 I | embed: advertise client URLs = https://192.168.39.148:2379 2020-05-30 00:01:25.740347 I | embed: initial advertise peer URLs = https://192.168.39.148:2380 2020-05-30 00:01:25.740350 I | embed: initial cluster = 2020-05-30 00:01:25.781166 I | etcdserver: restarting member 942851562e2254aa in cluster 8bee001f44ea94 at commit index 4745 raft2020/05/30 00:01:25 INFO: 942851562e2254aa switched to configuration voters=() raft2020/05/30 00:01:25 INFO: 942851562e2254aa became follower at term 2 raft2020/05/30 00:01:25 INFO: newRaft 942851562e2254aa [peers: [], term: 2, commit: 4745, applied: 0, lastindex: 4745, lastterm: 2] 2020-05-30 00:01:25.801625 I | mvcc: restore compact to 3343 2020-05-30 00:01:25.810853 W | auth: simple token is not cryptographically signed 2020-05-30 00:01:25.817749 I | etcdserver: starting server... 
[version: 3.4.3, cluster version: to_be_decided] raft2020/05/30 00:01:25 INFO: 942851562e2254aa switched to configuration voters=(10675872347264799914) 2020-05-30 00:01:25.826929 I | etcdserver/membership: added member 942851562e2254aa [https://192.168.39.148:2380] to cluster 8bee001f44ea94 2020-05-30 00:01:25.827083 N | etcdserver/membership: set the initial cluster version to 3.4 2020-05-30 00:01:25.827252 I | etcdserver/api: enabled capabilities for version 3.4 2020-05-30 00:01:25.827527 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-05-30 00:01:25.827755 I | embed: listening for metrics on http://127.0.0.1:2381 2020-05-30 00:01:25.830883 I | embed: listening for peers on 192.168.39.148:2380 raft2020/05/30 00:01:26 INFO: 942851562e2254aa is starting a new election at term 2 raft2020/05/30 00:01:26 INFO: 942851562e2254aa became candidate at term 3 raft2020/05/30 00:01:26 INFO: 942851562e2254aa received MsgVoteResp from 942851562e2254aa at term 3 raft2020/05/30 00:01:26 INFO: 942851562e2254aa became leader at term 3 raft2020/05/30 00:01:26 INFO: raft.node: 942851562e2254aa elected leader 942851562e2254aa at term 3 2020-05-30 00:01:26.980224 I | etcdserver: published {Name:issue-repro ClientURLs:[https://192.168.39.148:2379]} to cluster 8bee001f44ea94 2020-05-30 00:01:27.002188 I | embed: ready to serve client requests 2020-05-30 00:01:27.027850 I | embed: serving client requests on 192.168.39.148:2379 2020-05-30 00:01:27.029556 I | embed: ready to serve client requests 2020-05-30 00:01:27.031569 I | embed: serving client requests on 127.0.0.1:2379 2020-05-30 00:02:23.121696 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:511" took too long (159.527996ms) to execute 2020-05-30 00:02:25.665591 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:484" took too long (154.477643ms) to execute 2020-05-30 00:02:27.843756 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:484" took too long (162.880564ms) to execute 2020-05-30 00:02:30.023388 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:484" took too long (164.487123ms) to execute 2020-05-30 00:02:36.242697 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:483" took too long (115.896285ms) to execute 2020-05-30 00:02:38.501219 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:483" took too long (244.692103ms) to execute 2020-05-30 00:02:38.501494 W | etcdserver: read-only range request "key:\"/registry/rolebindings\" range_end:\"/registry/rolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (101.312904ms) to execute 2020-05-30 00:02:39.807690 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:511" took too long (253.699072ms) to execute 2020-05-30 00:02:39.807883 W | etcdserver: request "header: txn: success:> failure: >>" with result 
"size:16" took too long (211.782784ms) to execute 2020-05-30 00:02:40.620600 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:483" took too long (104.534572ms) to execute 2020-05-30 00:02:40.789031 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:290" took too long (101.187227ms) to execute 2020-05-30 00:11:27.052901 I | mvcc: store.index: compact 5064 2020-05-30 00:11:27.081300 I | mvcc: finished scheduled compaction at 5064 (took 28.092117ms) ==> etcd [bc03c4008856] <== 2020-05-29 23:35:40.392081 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:482" took too long (122.095263ms) to execute 2020-05-29 23:35:40.593058 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:510" took too long (157.873177ms) to execute 2020-05-29 23:37:01.240572 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:509" took too long (120.481989ms) to execute 2020-05-29 23:37:01.706529 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:16" took too long (208.641371ms) to execute 2020-05-29 23:37:01.708104 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:2 size:6682" took too long (329.590065ms) to execute 2020-05-29 23:37:02.272195 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/ingress-controller-leader-nginx\" " with result "range_response_count:1 size:607" took too long (247.501913ms) to execute 2020-05-29 23:37:02.272983 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:11 size:49528" took too long (248.175586ms) to execute 2020-05-29 23:37:03.182573 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:483" took too long (253.164497ms) to execute 2020-05-29 23:37:03.388708 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:509" took too long (134.692949ms) to execute 2020-05-29 23:37:07.718260 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:510" took too long (226.815025ms) to execute 2020-05-29 23:37:09.851317 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:510" took too long (120.426965ms) to execute 2020-05-29 23:37:10.054370 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/ingress-controller-leader-nginx\" " with result "range_response_count:1 size:607" took too long (186.407752ms) to execute 2020-05-29 23:37:12.076541 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:510" took too long (210.797003ms) to execute 2020-05-29 23:37:12.438505 W | etcdserver: read-only range request "key:\"/registry/csidrivers\" 
range_end:\"/registry/csidrivert\" count_only:true " with result "range_response_count:0 size:5" took too long (100.422524ms) to execute 2020-05-29 23:37:13.698953 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:483" took too long (278.370597ms) to execute 2020-05-29 23:37:14.228756 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (145.127978ms) to execute 2020-05-29 23:37:14.229282 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:605" took too long (146.463261ms) to execute 2020-05-29 23:37:14.646594 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:16" took too long (200.179075ms) to execute 2020-05-29 23:37:15.118932 W | etcdserver: read-only range request "key:\"/registry/clusterroles\" range_end:\"/registry/clusterrolet\" count_only:true " with result "range_response_count:0 size:7" took too long (153.352376ms) to execute 2020-05-29 23:37:26.897738 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:510" took too long (125.327843ms) to execute 2020-05-29 23:37:29.071263 W | etcdserver: read-only range request "key:\"/registry/replicasets\" range_end:\"/registry/replicasett\" count_only:true " with result "range_response_count:0 size:7" took too long (116.850767ms) to execute 2020-05-29 23:37:36.209108 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:483" took too long (143.361914ms) to execute 2020-05-29 23:42:21.597614 I | mvcc: store.index: compact 1118 2020-05-29 23:42:21.617553 I | mvcc: finished scheduled compaction at 1118 (took 19.549529ms) 2020-05-29 23:45:37.783054 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:484" took too long (218.166458ms) to execute 2020-05-29 23:45:38.610344 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:511" took too long (244.236255ms) to execute 2020-05-29 23:45:38.610445 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests\" range_end:\"/registry/certificatesigningrequestt\" count_only:true " with result "range_response_count:0 size:5" took too long (233.654925ms) to execute 2020-05-29 23:45:40.895059 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:511" took too long (222.340784ms) to execute 2020-05-29 23:45:40.895299 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:16" took too long (112.693016ms) to execute 2020-05-29 23:45:41.966956 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:484" took too long (115.654424ms) to execute 2020-05-29 23:45:43.100055 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:511" took too long (192.544189ms) to execute 2020-05-29 23:45:44.101112 W | etcdserver: read-only range request 
"key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:484" took too long (120.047215ms) to execute 2020-05-29 23:45:44.292121 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:290" took too long (110.29215ms) to execute 2020-05-29 23:45:47.428921 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:510" took too long (255.519214ms) to execute 2020-05-29 23:45:49.648947 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:510" took too long (147.297402ms) to execute 2020-05-29 23:45:50.497031 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:483" took too long (249.313252ms) to execute 2020-05-29 23:45:52.630680 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:483" took too long (117.875121ms) to execute 2020-05-29 23:46:00.012176 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:511" took too long (109.318823ms) to execute 2020-05-29 23:46:05.102858 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:484" took too long (157.501679ms) to execute 2020-05-29 23:47:21.602135 I | mvcc: store.index: compact 1876 2020-05-29 23:47:21.616222 I | mvcc: finished scheduled compaction at 1876 (took 13.560542ms) 2020-05-29 23:52:21.615287 I | mvcc: store.index: compact 2645 2020-05-29 23:52:21.629357 I | mvcc: finished scheduled compaction at 2645 (took 13.675776ms) 2020-05-29 23:57:21.619236 I | mvcc: store.index: compact 3343 2020-05-29 23:57:21.632994 I | mvcc: finished scheduled compaction at 3343 (took 13.236719ms) 2020-05-29 23:58:30.491929 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:483" took too long (215.267578ms) to execute 2020-05-29 23:58:31.459141 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:606" took too long (208.120534ms) to execute 2020-05-29 23:58:31.765235 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:510" took too long (295.819461ms) to execute 2020-05-29 23:58:32.643605 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:483" took too long (135.049741ms) to execute 2020-05-29 23:58:33.996546 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:16" took too long (221.410889ms) to execute 2020-05-29 23:58:34.775580 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:484" took too long (110.666566ms) to execute 2020-05-29 23:58:36.167570 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:511" took too long (120.32759ms) to execute 2020-05-29 
23:58:39.051718 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:484" took too long (205.347715ms) to execute 2020-05-29 23:58:40.482190 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:510" took too long (229.294595ms) to execute 2020-05-29 23:58:41.293341 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:16" took too long (174.503314ms) to execute 2020-05-29 23:58:41.508802 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:16" took too long (105.162189ms) to execute 2020-05-29 23:58:42.721122 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:510" took too long (226.224062ms) to execute 2020-05-29 23:58:49.676165 N | pkg/osutil: received terminated signal, shutting down... WARNING: 2020/05/29 23:58:49 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 2020-05-29 23:58:49.676565 I | etcdserver: skipped leadership transfer for single voting member cluster ==> kernel <== 00:15:47 up 14 min, 0 users, load average: 0.20, 0.27, 0.25 Linux issue-repro 4.19.107 #1 SMP Mon May 11 14:51:04 PDT 2020 x86_64 GNU/Linux PRETTY_NAME="Buildroot 2019.02.10" ==> kube-apiserver [1f79499e941c] <== Trace[1724710680]: [996.005992ms] [995.953411ms] END E0529 23:34:56.506201 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} I0529 23:34:56.506676 1 trace.go:116] Trace[1703325790]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.18.2 (linux/amd64) kubernetes/52c56ce/leader-election,client:192.168.39.148 (started: 2020-05-29 23:34:46.505680288 +0000 UTC m=+147.688404970) (total time: 10.000932532s): Trace[1703325790]: [10.000932532s] [10.000908248s] END I0529 23:34:56.596494 1 trace.go:116] Trace[353112152]: "List etcd3" key:/jobs,resourceVersion:,limit:500,continue: (started: 2020-05-29 23:34:51.939064529 +0000 UTC m=+153.121789211) (total time: 4.657391129s): Trace[353112152]: [4.657391129s] [4.657391129s] END E0529 23:34:56.596530 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} I0529 23:34:56.596741 1 trace.go:116] Trace[258557226]: "List" url:/apis/batch/v1/jobs,user-agent:kube-controller-manager/v1.18.2 (linux/amd64) kubernetes/52c56ce/system:serviceaccount:kube-system:cronjob-controller,client:192.168.39.148 (started: 2020-05-29 23:34:51.939035908 +0000 UTC m=+153.121760595) (total time: 4.657679267s): Trace[258557226]: [4.657679267s] [4.657657907s] END E0529 23:34:56.597011 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} I0529 23:34:56.746374 1 trace.go:116] Trace[796804468]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-05-29 23:34:51.920122903 +0000 UTC m=+153.102847602) (total time: 4.826206032s): Trace[796804468]: [4.826156202s] [4.825627899s] Transaction committed I0529 23:34:56.746546 1 trace.go:116] Trace[1026711363]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/issue-repro,user-agent:kubelet/v1.18.2 
(linux/amd64) kubernetes/52c56ce,client:192.168.39.148 (started: 2020-05-29 23:34:51.919984372 +0000 UTC m=+153.102709054) (total time: 4.826524056s): Trace[1026711363]: [4.826442554s] [4.826343572s] Object stored in database I0529 23:34:56.747152 1 trace.go:116] Trace[291487804]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.18.2 (linux/amd64) kubernetes/52c56ce,client:192.168.39.148 (started: 2020-05-29 23:34:53.811620289 +0000 UTC m=+154.994345048) (total time: 2.93550088s): Trace[291487804]: [2.935456243s] [2.934835716s] Object stored in database I0529 23:34:57.636822 1 trace.go:116] Trace[737442087]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.18.2 (linux/amd64) kubernetes/52c56ce,client:127.0.0.1 (started: 2020-05-29 23:34:54.082541872 +0000 UTC m=+155.265266628) (total time: 3.554172036s): Trace[737442087]: [3.554040261s] [3.554022918s] About to write a response I0529 23:35:10.417786 1 trace.go:116] Trace[213727656]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-05-29 23:35:06.753773579 +0000 UTC m=+167.936498266) (total time: 3.663986265s): Trace[213727656]: [3.663969537s] [3.663440371s] Transaction committed I0529 23:35:10.417892 1 trace.go:116] Trace[1019622382]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/issue-repro,user-agent:kubelet/v1.18.2 (linux/amd64) kubernetes/52c56ce,client:192.168.39.148 (started: 2020-05-29 23:35:06.753619476 +0000 UTC m=+167.936344175) (total time: 3.664233342s): Trace[1019622382]: [3.664185552s] [3.664073385s] Object stored in database I0529 23:35:10.419175 1 trace.go:116] Trace[1208410454]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.18.2 (linux/amd64) kubernetes/52c56ce,client:127.0.0.1 (started: 2020-05-29 23:35:04.08183937 +0000 UTC m=+165.264564040) (total time: 6.337314218s): Trace[1208410454]: [6.33728391s] [6.337275504s] About to write a response I0529 23:35:10.421541 1 trace.go:116] Trace[1070200330]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2020-05-29 23:35:03.80736294 +0000 UTC m=+164.990087611) (total time: 6.614158172s): Trace[1070200330]: [6.612580751s] [6.612580751s] initial value restored I0529 23:35:10.421617 1 trace.go:116] Trace[1937231356]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-apiserver-issue-repro.1613a3e42dd4e20b,user-agent:kubelet/v1.18.2 (linux/amd64) kubernetes/52c56ce,client:192.168.39.148 (started: 2020-05-29 23:35:03.807290073 +0000 UTC m=+164.990014744) (total time: 6.614308487s): Trace[1937231356]: [6.612655222s] [6.612631907s] About to apply patch I0529 23:35:10.972447 1 trace.go:116] Trace[1758244507]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-05-29 23:35:10.420792243 +0000 UTC m=+171.603516937) (total time: 551.627005ms): Trace[1758244507]: [401.058479ms] [401.058479ms] initial value restored I0529 23:35:27.929919 1 trace.go:116] Trace[72672120]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.18.2 (linux/amd64) kubernetes/52c56ce/leader-election,client:192.168.39.148 (started: 2020-05-29 23:35:27.397469398 +0000 UTC m=+188.580194070) (total time: 532.377201ms): Trace[72672120]: [532.205654ms] [532.194475ms] About to write a response I0529 23:36:29.854000 1 controller.go:606] quota admission added evaluator for: events.events.k8s.io I0529 23:36:29.897031 1 controller.go:606] quota admission added evaluator for: jobs.batch I0529 23:40:14.868165 1 controller.go:606] quota admission added 
evaluator for: ingresses.extensions I0529 23:44:20.323959 1 trace.go:116] Trace[455720676]: "Get" url:/api/v1/namespaces/default/pods/service-one-88f7bf775-dcksx/log,user-agent:kubectl/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.39.1 (started: 2020-05-29 23:43:52.795148502 +0000 UTC m=+693.977873176) (total time: 27.528762877s): Trace[455720676]: [27.528761007s] [27.525401927s] Transformed response object W0529 23:56:32.835318 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted I0529 23:58:49.597612 1 controller.go:181] Shutting down kubernetes service endpoint reconciler I0529 23:58:49.597588 1 dynamic_cafile_content.go:182] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt I0529 23:58:49.598009 1 controller.go:123] Shutting down OpenAPI controller I0529 23:58:49.598017 1 available_controller.go:399] Shutting down AvailableConditionController I0529 23:58:49.598027 1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller I0529 23:58:49.598033 1 crdregistration_controller.go:142] Shutting down crd-autoregister controller I0529 23:58:49.598039 1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController I0529 23:58:49.598046 1 nonstructuralschema_controller.go:198] Shutting down NonStructuralSchemaConditionController I0529 23:58:49.598052 1 establishing_controller.go:87] Shutting down EstablishingController I0529 23:58:49.598058 1 naming_controller.go:302] Shutting down NamingConditionController I0529 23:58:49.598063 1 customresource_discovery_controller.go:220] Shutting down DiscoveryController I0529 23:58:49.598073 1 autoregister_controller.go:165] Shutting down autoregister controller I0529 23:58:49.598080 1 apiservice_controller.go:106] Shutting down APIServiceRegistrationController I0529 23:58:49.598086 1 crd_finalizer.go:278] Shutting down CRDFinalizer I0529 23:58:49.598217 1 dynamic_cafile_content.go:182] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt I0529 23:58:49.598225 1 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt I0529 23:58:49.598238 1 controller.go:87] Shutting down OpenAPI AggregationController I0529 23:58:49.598307 1 tlsconfig.go:255] Shutting down DynamicServingCertificateController I0529 23:58:49.598316 1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key I0529 23:58:49.598324 1 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt I0529 23:58:49.603271 1 secure_serving.go:222] Stopped listening on [::]:8443 E0529 23:58:49.604951 1 controller.go:184] Get https://localhost:8443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:8443: connect: connection refused ==> kube-apiserver [55e125eefd8a] <== I0530 00:01:27.716671 1 client.go:361] parsed scheme: "endpoint" I0530 00:01:27.716818 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] W0530 00:01:27.822640 1 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources. W0530 00:01:27.828615 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. W0530 00:01:27.835569 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources. 
W0530 00:01:27.846377 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W0530 00:01:27.848350 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W0530 00:01:27.857409 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W0530 00:01:27.872505 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources. W0530 00:01:27.872543 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources. I0530 00:01:27.879679 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I0530 00:01:27.879694 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I0530 00:01:27.880860 1 client.go:361] parsed scheme: "endpoint" I0530 00:01:27.880878 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I0530 00:01:27.885770 1 client.go:361] parsed scheme: "endpoint" I0530 00:01:27.885839 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I0530 00:01:29.333233 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt I0530 00:01:29.333286 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt I0530 00:01:29.333558 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key I0530 00:01:29.334006 1 secure_serving.go:178] Serving securely on [::]:8443 I0530 00:01:29.334032 1 available_controller.go:387] Starting AvailableConditionController I0530 00:01:29.334036 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0530 00:01:29.334045 1 tlsconfig.go:240] Starting DynamicServingCertificateController I0530 00:01:29.336013 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0530 00:01:29.336023 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller I0530 00:01:29.339208 1 controller.go:81] Starting OpenAPI AggregationController I0530 00:01:29.340760 1 autoregister_controller.go:141] Starting autoregister controller I0530 00:01:29.340776 1 cache.go:32] Waiting for caches to sync for autoregister controller I0530 00:01:29.340828 1 crd_finalizer.go:266] Starting CRDFinalizer I0530 00:01:29.340850 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt I0530 00:01:29.340869 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt I0530 00:01:29.341052 1 apiservice_controller.go:94] Starting APIServiceRegistrationController I0530 00:01:29.341130 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0530 00:01:29.341190 1 customresource_discovery_controller.go:209] Starting DiscoveryController I0530 00:01:29.341204 1 naming_controller.go:291] Starting NamingConditionController 
I0530 00:01:29.341211 1 establishing_controller.go:76] Starting EstablishingController I0530 00:01:29.341222 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController I0530 00:01:29.341233 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController E0530 00:01:29.361381 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.148, ResourceVersion: 0, AdditionalErrorMsg: I0530 00:01:29.341132 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0530 00:01:29.377561 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister I0530 00:01:29.377568 1 shared_informer.go:230] Caches are synced for crd-autoregister I0530 00:01:29.341150 1 controller.go:86] Starting OpenAPI controller I0530 00:01:29.435331 1 cache.go:39] Caches are synced for AvailableConditionController controller I0530 00:01:29.436135 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller I0530 00:01:29.440880 1 cache.go:39] Caches are synced for autoregister controller I0530 00:01:29.441304 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0530 00:01:29.513538 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io I0530 00:01:30.332977 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0530 00:01:30.333118 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0530 00:01:30.343021 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist. 
I0530 00:01:31.201157 1 controller.go:606] quota admission added evaluator for: serviceaccounts I0530 00:01:31.237321 1 controller.go:606] quota admission added evaluator for: deployments.apps I0530 00:01:31.360254 1 controller.go:606] quota admission added evaluator for: daemonsets.apps I0530 00:01:31.390982 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0530 00:01:31.403562 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0530 00:01:32.956936 1 controller.go:606] quota admission added evaluator for: jobs.batch I0530 00:01:44.649097 1 controller.go:606] quota admission added evaluator for: endpoints I0530 00:01:48.871169 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io W0530 00:15:06.893922 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted ==> kube-controller-manager [2bee36cfb700] <== I0529 23:36:37.945210 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-patch", UID:"c3a7f85e-a32a-4a26-b644-8855abe5ef3d", APIVersion:"batch/v1", ResourceVersion:"875", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed I0529 23:40:14.759517 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"service-one", UID:"cd0a8020-acb3-4ec1-b3eb-66f0c0196592", APIVersion:"apps/v1", ResourceVersion:"1530", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set service-one-88f7bf775 to 1 I0529 23:40:14.781155 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"service-two", UID:"d738b7dc-c4b3-4096-acaa-3445a00e5ff3", APIVersion:"apps/v1", ResourceVersion:"1534", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set service-two-54d6ccbf99 to 1 I0529 23:40:14.792892 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"service-one-88f7bf775", UID:"467f7071-5120-473b-a704-90eb3b915185", APIVersion:"apps/v1", ResourceVersion:"1532", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: service-one-88f7bf775-dcksx I0529 23:40:14.808237 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"service-two-54d6ccbf99", UID:"9955a2dc-1532-45cf-9039-f753707d4dca", APIVersion:"apps/v1", ResourceVersion:"1537", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: service-two-54d6ccbf99-qjb7k I0529 23:44:23.691165 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"service-one", UID:"cd0a8020-acb3-4ec1-b3eb-66f0c0196592", APIVersion:"apps/v1", ResourceVersion:"2159", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set service-one-857f9557f5 to 1 I0529 23:44:23.705988 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"service-one-857f9557f5", UID:"4d13528b-5013-4b5c-9d33-6770b16b1315", APIVersion:"apps/v1", ResourceVersion:"2160", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: service-one-857f9557f5-lh6jg I0529 23:44:23.706060 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"service-two", UID:"d738b7dc-c4b3-4096-acaa-3445a00e5ff3", APIVersion:"apps/v1", ResourceVersion:"2161", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set service-two-5664d4c575 to 1 I0529 23:44:23.711224 1 
event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"service-two-5664d4c575", UID:"ddfa4c7d-861c-48c2-b5a3-a0843f20c1a9", APIVersion:"apps/v1", ResourceVersion:"2163", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: service-two-5664d4c575-wcxsp I0529 23:44:25.510414 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"service-one", UID:"cd0a8020-acb3-4ec1-b3eb-66f0c0196592", APIVersion:"apps/v1", ResourceVersion:"2183", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set service-one-88f7bf775 to 0 I0529 23:44:25.549294 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"service-one-88f7bf775", UID:"467f7071-5120-473b-a704-90eb3b915185", APIVersion:"apps/v1", ResourceVersion:"2202", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: service-one-88f7bf775-dcksx I0529 23:44:25.589373 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"service-two", UID:"d738b7dc-c4b3-4096-acaa-3445a00e5ff3", APIVersion:"apps/v1", ResourceVersion:"2180", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set service-two-54d6ccbf99 to 0 I0529 23:44:25.621078 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"service-two-54d6ccbf99", UID:"9955a2dc-1532-45cf-9039-f753707d4dca", APIVersion:"apps/v1", ResourceVersion:"2216", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: service-two-54d6ccbf99-qjb7k E0529 23:58:49.599282 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRoleBinding: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?allowWatchBookmarks=true&resourceVersion=912&timeout=7m4s&timeoutSeconds=424&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.599334 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.LimitRange: Get https://control-plane.minikube.internal:8443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=1&timeout=8m31s&timeoutSeconds=511&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601283 1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://control-plane.minikube.internal:8443/apis/apiregistration.k8s.io/v1/apiservices?allowWatchBookmarks=true&resourceVersion=38&timeout=6m59s&timeoutSeconds=419&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601306 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=2313&timeout=6m53s&timeoutSeconds=413&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601323 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.VolumeAttachment: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/volumeattachments?allowWatchBookmarks=true&resourceVersion=1&timeout=5m20s&timeoutSeconds=320&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601340 1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get 
https://control-plane.minikube.internal:8443/apis/apiextensions.k8s.io/v1/customresourcedefinitions?allowWatchBookmarks=true&resourceVersion=1&timeout=6m39s&timeoutSeconds=399&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601355 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CronJob: Get https://control-plane.minikube.internal:8443/apis/batch/v1beta1/cronjobs?allowWatchBookmarks=true&resourceVersion=1&timeout=8m53s&timeoutSeconds=533&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601372 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.HorizontalPodAutoscaler: Get https://control-plane.minikube.internal:8443/apis/autoscaling/v1/horizontalpodautoscalers?allowWatchBookmarks=true&resourceVersion=1&timeout=5m44s&timeoutSeconds=344&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601390 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.MutatingWebhookConfiguration: Get https://control-plane.minikube.internal:8443/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=1&timeout=6m8s&timeoutSeconds=368&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601408 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://control-plane.minikube.internal:8443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=4243&timeout=6m38s&timeoutSeconds=398&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601424 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.RoleBinding: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=933&timeout=8m25s&timeoutSeconds=505&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601439 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ServiceAccount: Get https://control-plane.minikube.internal:8443/api/v1/serviceaccounts?allowWatchBookmarks=true&resourceVersion=940&timeout=6m26s&timeoutSeconds=386&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601457 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/configmaps?allowWatchBookmarks=true&resourceVersion=4242&timeout=7m59s&timeoutSeconds=479&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601474 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.DaemonSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/daemonsets?allowWatchBookmarks=true&resourceVersion=438&timeout=9m22s&timeoutSeconds=562&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601489 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodSecurityPolicy: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/podsecuritypolicies?allowWatchBookmarks=true&resourceVersion=1&timeout=8m51s&timeoutSeconds=531&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601505 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: 
Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=9m3s&timeoutSeconds=543&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601520 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Lease: Get https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=4244&timeout=7m43s&timeoutSeconds=463&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601537 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRole: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/clusterroles?allowWatchBookmarks=true&resourceVersion=911&timeout=5m5s&timeoutSeconds=305&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601552 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Event: Get https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1beta1/events?allowWatchBookmarks=true&resourceVersion=3926&timeout=8m4s&timeoutSeconds=484&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601570 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Job: Get https://control-plane.minikube.internal:8443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=995&timeout=7m20s&timeoutSeconds=440&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601585 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=2232&timeout=7m53s&timeoutSeconds=473&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601601 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=8m34s&timeoutSeconds=514&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601616 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Role: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=930&timeout=5m0s&timeoutSeconds=300&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601635 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=1&timeout=5m3s&timeoutSeconds=303&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601649 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Deployment: Get https://control-plane.minikube.internal:8443/apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=2233&timeout=5m50s&timeoutSeconds=350&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601666 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.EndpointSlice: Get 
https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1beta1/endpointslices?allowWatchBookmarks=true&resourceVersion=2227&timeout=6m49s&timeoutSeconds=409&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601682 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=264&timeout=7m19s&timeoutSeconds=439&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601697 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://control-plane.minikube.internal:8443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=907&timeout=6m56s&timeoutSeconds=416&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601713 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=1&timeout=9m7s&timeoutSeconds=547&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601730 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=5m34s&timeoutSeconds=334&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601754 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=21&timeout=7m48s&timeoutSeconds=468&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601772 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ValidatingWebhookConfiguration: Get https://control-plane.minikube.internal:8443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=985&timeout=9m27s&timeoutSeconds=567&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601795 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PriorityClass: Get https://control-plane.minikube.internal:8443/apis/scheduling.k8s.io/v1/priorityclasses?allowWatchBookmarks=true&resourceVersion=42&timeout=8m23s&timeoutSeconds=503&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601811 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.IngressClass: Get https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1beta1/ingressclasses?allowWatchBookmarks=true&resourceVersion=1&timeout=8m20s&timeoutSeconds=500&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601830 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CertificateSigningRequest: Get https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=1&timeout=9m19s&timeoutSeconds=559&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601847 1 reflector.go:382] 
k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Secret: Get https://control-plane.minikube.internal:8443/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=971&timeout=9m7s&timeoutSeconds=547&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.601864 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://control-plane.minikube.internal:8443/apis/extensions/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=1688&timeout=9m8s&timeoutSeconds=548&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.602922 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=8m35s&timeoutSeconds=515&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused W0529 23:58:49.602944 1 reflector.go:404] k8s.io/client-go/informers/factory.go:135: watch of *v1beta1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received E0529 23:58:49.602977 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: Get https://control-plane.minikube.internal:8443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=1&timeout=8m57s&timeoutSeconds=537&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.602998 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=1688&timeout=9m54s&timeoutSeconds=594&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.603016 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ControllerRevision: Get https://control-plane.minikube.internal:8443/apis/apps/v1/controllerrevisions?allowWatchBookmarks=true&resourceVersion=374&timeout=7m56s&timeoutSeconds=476&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.603033 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=4026&timeout=5m13s&timeoutSeconds=313&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.603052 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1527&timeout=6m8s&timeoutSeconds=368&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.603073 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ResourceQuota: Get https://control-plane.minikube.internal:8443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=1&timeout=8m56s&timeoutSeconds=536&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.602949 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: Get 
https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?allowWatchBookmarks=true&resourceVersion=1&timeout=5m7s&timeoutSeconds=307&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.603126 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?resourceVersion=1: dial tcp 192.168.39.148:8443: connect: connection refused ==> kube-controller-manager [6f680c7cd85e] <== I0530 00:01:47.916565 1 shared_informer.go:223] Waiting for caches to sync for ReplicationController I0530 00:01:48.077433 1 controllermanager.go:533] Started "namespace" I0530 00:01:48.077521 1 namespace_controller.go:200] Starting namespace controller I0530 00:01:48.077718 1 shared_informer.go:223] Waiting for caches to sync for namespace I0530 00:01:48.217425 1 controllermanager.go:533] Started "deployment" I0530 00:01:48.217642 1 deployment_controller.go:153] Starting deployment controller I0530 00:01:48.217682 1 shared_informer.go:223] Waiting for caches to sync for deployment I0530 00:01:48.367145 1 controllermanager.go:533] Started "replicaset" I0530 00:01:48.367297 1 replica_set.go:181] Starting replicaset controller I0530 00:01:48.367305 1 shared_informer.go:223] Waiting for caches to sync for ReplicaSet I0530 00:01:48.517138 1 controllermanager.go:533] Started "statefulset" I0530 00:01:48.517327 1 stateful_set.go:146] Starting stateful set controller I0530 00:01:48.517483 1 shared_informer.go:223] Waiting for caches to sync for stateful set I0530 00:01:48.666469 1 controllermanager.go:533] Started "csrsigning" I0530 00:01:48.666495 1 core.go:239] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true. 
W0530 00:01:48.666502 1 controllermanager.go:525] Skipping "route" I0530 00:01:48.666572 1 certificate_controller.go:119] Starting certificate controller "csrsigning" I0530 00:01:48.666581 1 shared_informer.go:223] Waiting for caches to sync for certificate-csrsigning I0530 00:01:48.666613 1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key I0530 00:01:48.817738 1 controllermanager.go:533] Started "persistentvolume-expander" I0530 00:01:48.818113 1 shared_informer.go:223] Waiting for caches to sync for resource quota I0530 00:01:48.818353 1 expand_controller.go:319] Starting expand controller I0530 00:01:48.818428 1 shared_informer.go:223] Waiting for caches to sync for expand W0530 00:01:48.846147 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="issue-repro" does not exist I0530 00:01:48.853282 1 shared_informer.go:230] Caches are synced for PVC protection I0530 00:01:48.856689 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator I0530 00:01:48.861778 1 shared_informer.go:230] Caches are synced for GC I0530 00:01:48.867318 1 shared_informer.go:230] Caches are synced for PV protection I0530 00:01:48.867845 1 shared_informer.go:230] Caches are synced for persistent volume I0530 00:01:48.869252 1 shared_informer.go:230] Caches are synced for endpoint_slice I0530 00:01:48.878297 1 shared_informer.go:230] Caches are synced for daemon sets I0530 00:01:48.916726 1 shared_informer.go:230] Caches are synced for ReplicationController I0530 00:01:48.918181 1 shared_informer.go:230] Caches are synced for stateful set I0530 00:01:48.918274 1 shared_informer.go:230] Caches are synced for taint I0530 00:01:48.918428 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: W0530 00:01:48.918578 1 node_lifecycle_controller.go:1048] Missing timestamp for Node issue-repro. Assuming now as a timestamp. I0530 00:01:48.918681 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal. 
I0530 00:01:48.919023 1 taint_manager.go:187] Starting NoExecuteTaintManager I0530 00:01:48.919470 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"issue-repro", UID:"0a10713b-6bff-48cb-80d8-717c5fb44fd7", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node issue-repro event: Registered Node issue-repro in Controller I0530 00:01:48.919728 1 shared_informer.go:230] Caches are synced for expand I0530 00:01:48.933415 1 shared_informer.go:230] Caches are synced for TTL I0530 00:01:48.941066 1 shared_informer.go:230] Caches are synced for attach detach I0530 00:01:48.966876 1 shared_informer.go:230] Caches are synced for certificate-csrsigning I0530 00:01:49.016549 1 shared_informer.go:230] Caches are synced for certificate-csrapproving I0530 00:01:49.018456 1 shared_informer.go:223] Waiting for caches to sync for garbage collector I0530 00:01:49.118773 1 shared_informer.go:230] Caches are synced for endpoint I0530 00:01:49.125973 1 shared_informer.go:230] Caches are synced for job I0530 00:01:49.177929 1 shared_informer.go:230] Caches are synced for namespace I0530 00:01:49.265550 1 shared_informer.go:230] Caches are synced for service account I0530 00:01:49.267325 1 shared_informer.go:230] Caches are synced for HPA I0530 00:01:49.367095 1 shared_informer.go:230] Caches are synced for disruption I0530 00:01:49.367186 1 disruption.go:339] Sending events to api server. I0530 00:01:49.368163 1 shared_informer.go:230] Caches are synced for ReplicaSet I0530 00:01:49.417946 1 shared_informer.go:230] Caches are synced for deployment I0530 00:01:49.467078 1 shared_informer.go:230] Caches are synced for bootstrap_signer I0530 00:01:49.471646 1 shared_informer.go:230] Caches are synced for resource quota I0530 00:01:49.518466 1 shared_informer.go:230] Caches are synced for resource quota I0530 00:01:49.518654 1 shared_informer.go:230] Caches are synced for garbage collector I0530 00:01:49.527079 1 shared_informer.go:230] Caches are synced for garbage collector I0530 00:01:49.527158 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage ==> kube-proxy [7f877b4c636d] <== W0530 00:01:32.154491 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy I0530 00:01:32.194021 1 node.go:136] Successfully retrieved node IP: 192.168.39.148 I0530 00:01:32.194044 1 server_others.go:186] Using iptables Proxier. 
W0530 00:01:32.194054 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined I0530 00:01:32.194062 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local I0530 00:01:32.197637 1 server.go:583] Version: v1.18.2 I0530 00:01:32.199580 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 I0530 00:01:32.199609 1 conntrack.go:52] Setting nf_conntrack_max to 131072 I0530 00:01:32.199656 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I0530 00:01:32.199683 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I0530 00:01:32.209283 1 config.go:315] Starting service config controller I0530 00:01:32.209303 1 shared_informer.go:223] Waiting for caches to sync for service config I0530 00:01:32.210138 1 config.go:133] Starting endpoints config controller I0530 00:01:32.210181 1 shared_informer.go:223] Waiting for caches to sync for endpoints config I0530 00:01:32.337234 1 shared_informer.go:230] Caches are synced for endpoints config I0530 00:01:32.337290 1 shared_informer.go:230] Caches are synced for service config ==> kube-proxy [dcaea65e3c54] <== W0529 23:33:00.263495 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy I0529 23:33:00.279004 1 node.go:136] Successfully retrieved node IP: 192.168.39.148 I0529 23:33:00.279062 1 server_others.go:186] Using iptables Proxier. W0529 23:33:00.279112 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined I0529 23:33:00.279131 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local I0529 23:33:00.279764 1 server.go:583] Version: v1.18.2 I0529 23:33:00.280618 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 I0529 23:33:00.280779 1 conntrack.go:52] Setting nf_conntrack_max to 131072 I0529 23:33:00.281090 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I0529 23:33:00.281306 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I0529 23:33:00.282462 1 config.go:315] Starting service config controller I0529 23:33:00.282648 1 shared_informer.go:223] Waiting for caches to sync for service config I0529 23:33:00.282819 1 config.go:133] Starting endpoints config controller I0529 23:33:00.283008 1 shared_informer.go:223] Waiting for caches to sync for endpoints config I0529 23:33:00.383013 1 shared_informer.go:230] Caches are synced for service config I0529 23:33:00.383364 1 shared_informer.go:230] Caches are synced for endpoints config ==> kube-scheduler [044503980b7a] <== I0529 23:35:11.353976 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0529 23:35:11.354008 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0529 23:35:11.925072 1 serving.go:313] Generated self-signed cert in-memory I0529 23:35:12.302992 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0529 23:35:12.303066 1 registry.go:150] Registering EvenPodsSpread predicate and priority function W0529 23:35:12.303981 1 authorization.go:47] Authorization is disabled W0529 23:35:12.303991 1 authentication.go:40] Authentication is disabled I0529 23:35:12.303997 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 I0529 23:35:12.304719 1 configmap_cafile_content.go:202] Starting 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0529 23:35:12.304741 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0529 23:35:12.304768 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0529 23:35:12.304797 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0529 23:35:12.305002 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 I0529 23:35:12.305027 1 tlsconfig.go:240] Starting DynamicServingCertificateController I0529 23:35:12.404935 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0529 23:35:12.404946 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0529 23:35:12.405333 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler... I0529 23:35:29.490114 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler E0529 23:58:49.599129 1 reflector.go:382] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=3719&timeout=7m33s&timeoutSeconds=453&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.599170 1 reflector.go:382] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=3073&timeout=5m29s&timeoutSeconds=329&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.602473 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=4026&timeout=6m8s&timeoutSeconds=368&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.602512 1 reflector.go:382] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=2313&timeoutSeconds=326&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.602553 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=6m39s&timeoutSeconds=399&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.602592 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1527&timeout=8m16s&timeoutSeconds=496&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 
23:58:49.602813 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=6m8s&timeoutSeconds=368&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.602848 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=21&timeout=5m56s&timeoutSeconds=356&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.603265 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=9m55s&timeoutSeconds=595&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused E0529 23:58:49.603516 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=264&timeout=6m29s&timeoutSeconds=389&watch=true: dial tcp 192.168.39.148:8443: connect: connection refused ==> kube-scheduler [f719408b0969] <== I0530 00:01:25.981555 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0530 00:01:25.981608 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0530 00:01:26.625118 1 serving.go:313] Generated self-signed cert in-memory W0530 00:01:29.402379 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0530 00:01:29.402402 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0530 00:01:29.402410 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous. 
W0530 00:01:29.402414 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0530 00:01:29.452431 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0530 00:01:29.452444 1 registry.go:150] Registering EvenPodsSpread predicate and priority function W0530 00:01:29.459567 1 authorization.go:47] Authorization is disabled W0530 00:01:29.459606 1 authentication.go:40] Authentication is disabled I0530 00:01:29.459624 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 I0530 00:01:29.463728 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0530 00:01:29.463748 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0530 00:01:29.464024 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 I0530 00:01:29.464056 1 tlsconfig.go:240] Starting DynamicServingCertificateController I0530 00:01:29.564053 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0530 00:01:29.564332 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler... I0530 00:01:45.180912 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler ==> kubelet <== -- Logs begin at Sat 2020-05-30 00:01:18 UTC, end at Sat 2020-05-30 00:15:47 UTC. -- May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.457749 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/38797624-7089-4336-952a-81eec3fda5d5-kube-proxy") pod "kube-proxy-r2b28" (UID: "38797624-7089-4336-952a-81eec3fda5d5") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.457773 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/7cecac2c-84df-47cb-82a6-c7b6a2890944-tmp-volume") pod "kubernetes-dashboard-696dbcc666-xphg5" (UID: "7cecac2c-84df-47cb-82a6-c7b6a2890944") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.457786 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/6f7398f8-b80f-4aa1-8a72-f6c344dbf3c5-webhook-cert") pod "ingress-nginx-controller-7bb4c67d67-vns67" (UID: "6f7398f8-b80f-4aa1-8a72-f6c344dbf3c5") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.457798 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0e20ea52-be7c-40d6-bb81-db925a0bbe78-config-volume") pod "coredns-66bff467f8-k9dnt" (UID: "0e20ea52-be7c-40d6-bb81-db925a0bbe78") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.457841 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-dn2nq" (UniqueName: "kubernetes.io/secret/33c1a292-93a8-4190-be5f-6c26df8e2a0b-default-token-dn2nq") pod "service-one-857f9557f5-lh6jg" (UID: "33c1a292-93a8-4190-be5f-6c26df8e2a0b") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.457859 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/38797624-7089-4336-952a-81eec3fda5d5-xtables-lock") pod "kube-proxy-r2b28" (UID: 
"38797624-7089-4336-952a-81eec3fda5d5") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.457871 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-dxkm7" (UniqueName: "kubernetes.io/secret/38797624-7089-4336-952a-81eec3fda5d5-kube-proxy-token-dxkm7") pod "kube-proxy-r2b28" (UID: "38797624-7089-4336-952a-81eec3fda5d5") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.457894 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/38797624-7089-4336-952a-81eec3fda5d5-lib-modules") pod "kube-proxy-r2b28" (UID: "38797624-7089-4336-952a-81eec3fda5d5") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.457908 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-7s8hf" (UniqueName: "kubernetes.io/secret/f400c99d-a043-4745-903f-dc549b305ec8-storage-provisioner-token-7s8hf") pod "storage-provisioner" (UID: "f400c99d-a043-4745-903f-dc549b305ec8") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.457920 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-dbccp" (UniqueName: "kubernetes.io/secret/18c6fe39-ea9b-4247-a243-664d49bf6be9-kubernetes-dashboard-token-dbccp") pod "dashboard-metrics-scraper-84bfdf55ff-tkrwc" (UID: "18c6fe39-ea9b-4247-a243-664d49bf6be9") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.457931 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-nmqdp" (UniqueName: "kubernetes.io/secret/0e20ea52-be7c-40d6-bb81-db925a0bbe78-coredns-token-nmqdp") pod "coredns-66bff467f8-k9dnt" (UID: "0e20ea52-be7c-40d6-bb81-db925a0bbe78") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.457944 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/18c6fe39-ea9b-4247-a243-664d49bf6be9-tmp-volume") pod "dashboard-metrics-scraper-84bfdf55ff-tkrwc" (UID: "18c6fe39-ea9b-4247-a243-664d49bf6be9") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.457956 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-dn2nq" (UniqueName: "kubernetes.io/secret/97eeb939-dd72-4ef7-8215-1bdca8cb034e-default-token-dn2nq") pod "service-two-5664d4c575-wcxsp" (UID: "97eeb939-dd72-4ef7-8215-1bdca8cb034e") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.457977 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9ac494f8-3561-40cc-88cc-7d1f7eff0ea9-config-volume") pod "coredns-66bff467f8-7vdm8" (UID: "9ac494f8-3561-40cc-88cc-7d1f7eff0ea9") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.457988 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-nmqdp" (UniqueName: "kubernetes.io/secret/9ac494f8-3561-40cc-88cc-7d1f7eff0ea9-coredns-token-nmqdp") pod "coredns-66bff467f8-7vdm8" (UID: "9ac494f8-3561-40cc-88cc-7d1f7eff0ea9") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.458011 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-dbccp" (UniqueName: "kubernetes.io/secret/7cecac2c-84df-47cb-82a6-c7b6a2890944-kubernetes-dashboard-token-dbccp") pod 
"kubernetes-dashboard-696dbcc666-xphg5" (UID: "7cecac2c-84df-47cb-82a6-c7b6a2890944") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.458048 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ingress-nginx-token-9kz6x" (UniqueName: "kubernetes.io/secret/6f7398f8-b80f-4aa1-8a72-f6c344dbf3c5-ingress-nginx-token-9kz6x") pod "ingress-nginx-controller-7bb4c67d67-vns67" (UID: "6f7398f8-b80f-4aa1-8a72-f6c344dbf3c5") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.458088 2358 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/f400c99d-a043-4745-903f-dc549b305ec8-tmp") pod "storage-provisioner" (UID: "f400c99d-a043-4745-903f-dc549b305ec8") May 30 00:01:29 issue-repro kubelet[2358]: I0530 00:01:29.458097 2358 reconciler.go:157] Reconciler: start to sync state May 30 00:01:30 issue-repro kubelet[2358]: W0530 00:01:30.331471 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/service-one-857f9557f5-lh6jg through plugin: invalid network status for May 30 00:01:30 issue-repro kubelet[2358]: E0530 00:01:30.612202 2358 secret.go:195] Couldn't get secret kube-system/coredns-token-nmqdp: failed to sync secret cache: timed out waiting for the condition May 30 00:01:30 issue-repro kubelet[2358]: E0530 00:01:30.612307 2358 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/0e20ea52-be7c-40d6-bb81-db925a0bbe78-coredns-token-nmqdp podName:0e20ea52-be7c-40d6-bb81-db925a0bbe78 nodeName:}" failed. No retries permitted until 2020-05-30 00:01:31.11228749 +0000 UTC m=+7.355296797 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"coredns-token-nmqdp\" (UniqueName: \"kubernetes.io/secret/0e20ea52-be7c-40d6-bb81-db925a0bbe78-coredns-token-nmqdp\") pod \"coredns-66bff467f8-k9dnt\" (UID: \"0e20ea52-be7c-40d6-bb81-db925a0bbe78\") : failed to sync secret cache: timed out waiting for the condition" May 30 00:01:30 issue-repro kubelet[2358]: E0530 00:01:30.612324 2358 secret.go:195] Couldn't get secret kube-system/coredns-token-nmqdp: failed to sync secret cache: timed out waiting for the condition May 30 00:01:30 issue-repro kubelet[2358]: E0530 00:01:30.612347 2358 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/9ac494f8-3561-40cc-88cc-7d1f7eff0ea9-coredns-token-nmqdp podName:9ac494f8-3561-40cc-88cc-7d1f7eff0ea9 nodeName:}" failed. No retries permitted until 2020-05-30 00:01:31.11233828 +0000 UTC m=+7.355347586 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"coredns-token-nmqdp\" (UniqueName: \"kubernetes.io/secret/9ac494f8-3561-40cc-88cc-7d1f7eff0ea9-coredns-token-nmqdp\") pod \"coredns-66bff467f8-7vdm8\" (UID: \"9ac494f8-3561-40cc-88cc-7d1f7eff0ea9\") : failed to sync secret cache: timed out waiting for the condition" May 30 00:01:30 issue-repro kubelet[2358]: E0530 00:01:30.612358 2358 secret.go:195] Couldn't get secret kube-system/storage-provisioner-token-7s8hf: failed to sync secret cache: timed out waiting for the condition May 30 00:01:30 issue-repro kubelet[2358]: E0530 00:01:30.612394 2358 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/f400c99d-a043-4745-903f-dc549b305ec8-storage-provisioner-token-7s8hf podName:f400c99d-a043-4745-903f-dc549b305ec8 nodeName:}" failed. 
No retries permitted until 2020-05-30 00:01:31.112383486 +0000 UTC m=+7.355392796 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"storage-provisioner-token-7s8hf\" (UniqueName: \"kubernetes.io/secret/f400c99d-a043-4745-903f-dc549b305ec8-storage-provisioner-token-7s8hf\") pod \"storage-provisioner\" (UID: \"f400c99d-a043-4745-903f-dc549b305ec8\") : failed to sync secret cache: timed out waiting for the condition" May 30 00:01:30 issue-repro kubelet[2358]: W0530 00:01:30.625749 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/service-two-5664d4c575-wcxsp through plugin: invalid network status for May 30 00:01:30 issue-repro kubelet[2358]: W0530 00:01:30.675266 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-696dbcc666-xphg5 through plugin: invalid network status for May 30 00:01:30 issue-repro kubelet[2358]: W0530 00:01:30.862295 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/ingress-nginx-controller-7bb4c67d67-vns67 through plugin: invalid network status for May 30 00:01:30 issue-repro kubelet[2358]: W0530 00:01:30.867777 2358 pod_container_deletor.go:77] Container "471c1642ad3366be282b83cea17ba733aa8a440f3b637a602c4fe7e43f2cfb27" not found in pod's containers May 30 00:01:30 issue-repro kubelet[2358]: W0530 00:01:30.872766 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/service-one-857f9557f5-lh6jg through plugin: invalid network status for May 30 00:01:30 issue-repro kubelet[2358]: W0530 00:01:30.942518 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-696dbcc666-xphg5 through plugin: invalid network status for May 30 00:01:30 issue-repro kubelet[2358]: I0530 00:01:30.952205 2358 kubelet_node_status.go:112] Node issue-repro was previously registered May 30 00:01:30 issue-repro kubelet[2358]: I0530 00:01:30.952268 2358 kubelet_node_status.go:73] Successfully registered node issue-repro May 30 00:01:31 issue-repro kubelet[2358]: W0530 00:01:31.190924 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/service-two-5664d4c575-wcxsp through plugin: invalid network status for May 30 00:01:31 issue-repro kubelet[2358]: W0530 00:01:31.239210 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-tkrwc through plugin: invalid network status for May 30 00:01:31 issue-repro kubelet[2358]: W0530 00:01:31.377677 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-tkrwc through plugin: invalid network status for May 30 00:01:31 issue-repro kubelet[2358]: W0530 00:01:31.443518 2358 pod_container_deletor.go:77] Container "19b51979e2f8969cf737c666840056242a147ae416d3b613678ecc4300dc0eab" not found in pod's containers May 30 00:01:32 issue-repro kubelet[2358]: W0530 00:01:32.271039 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-k9dnt through plugin: invalid network status for May 30 00:01:32 issue-repro kubelet[2358]: W0530 00:01:32.271807 2358 
docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-7vdm8 through plugin: invalid network status for May 30 00:01:32 issue-repro kubelet[2358]: W0530 00:01:32.521562 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-7vdm8 through plugin: invalid network status for May 30 00:01:32 issue-repro kubelet[2358]: W0530 00:01:32.748665 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-696dbcc666-xphg5 through plugin: invalid network status for May 30 00:01:32 issue-repro kubelet[2358]: W0530 00:01:32.806337 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/service-two-5664d4c575-wcxsp through plugin: invalid network status for May 30 00:01:32 issue-repro kubelet[2358]: W0530 00:01:32.847835 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-tkrwc through plugin: invalid network status for May 30 00:01:32 issue-repro kubelet[2358]: W0530 00:01:32.879041 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-k9dnt through plugin: invalid network status for May 30 00:01:32 issue-repro kubelet[2358]: W0530 00:01:32.945921 2358 pod_container_deletor.go:77] Container "c0fdf9c9236f994f8672f8aa95ca4b5b36fd7b8cbb3db2266f1c182aa41ec596" not found in pod's containers May 30 00:01:32 issue-repro kubelet[2358]: W0530 00:01:32.951914 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/ingress-nginx-controller-7bb4c67d67-vns67 through plugin: invalid network status for May 30 00:01:33 issue-repro kubelet[2358]: W0530 00:01:33.979931 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-7vdm8 through plugin: invalid network status for May 30 00:01:33 issue-repro kubelet[2358]: W0530 00:01:33.989725 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-k9dnt through plugin: invalid network status for May 30 00:01:33 issue-repro kubelet[2358]: W0530 00:01:33.997915 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-tkrwc through plugin: invalid network status for May 30 00:02:02 issue-repro kubelet[2358]: W0530 00:02:02.420310 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-696dbcc666-xphg5 through plugin: invalid network status for May 30 00:02:02 issue-repro kubelet[2358]: I0530 00:02:02.426143 2358 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: e0aa5408ea75d74cb004dfa5bf16f1765c8f5d95ba747d06e3a5e459dabde87d May 30 00:02:02 issue-repro kubelet[2358]: I0530 00:02:02.426537 2358 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 55149cef34ef8ae80b10cb9a18ec9a6594aed979a9ec4a638041bbe348aea932 May 30 00:02:02 issue-repro kubelet[2358]: E0530 00:02:02.427257 2358 pod_workers.go:191] Error syncing pod 7cecac2c-84df-47cb-82a6-c7b6a2890944 
("kubernetes-dashboard-696dbcc666-xphg5_kubernetes-dashboard(7cecac2c-84df-47cb-82a6-c7b6a2890944)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-696dbcc666-xphg5_kubernetes-dashboard(7cecac2c-84df-47cb-82a6-c7b6a2890944)" May 30 00:02:03 issue-repro kubelet[2358]: W0530 00:02:03.440674 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-696dbcc666-xphg5 through plugin: invalid network status for May 30 00:02:05 issue-repro kubelet[2358]: I0530 00:02:05.044246 2358 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 55149cef34ef8ae80b10cb9a18ec9a6594aed979a9ec4a638041bbe348aea932 May 30 00:02:05 issue-repro kubelet[2358]: E0530 00:02:05.045566 2358 pod_workers.go:191] Error syncing pod 7cecac2c-84df-47cb-82a6-c7b6a2890944 ("kubernetes-dashboard-696dbcc666-xphg5_kubernetes-dashboard(7cecac2c-84df-47cb-82a6-c7b6a2890944)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-696dbcc666-xphg5_kubernetes-dashboard(7cecac2c-84df-47cb-82a6-c7b6a2890944)" May 30 00:02:17 issue-repro kubelet[2358]: I0530 00:02:17.279338 2358 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 55149cef34ef8ae80b10cb9a18ec9a6594aed979a9ec4a638041bbe348aea932 May 30 00:02:17 issue-repro kubelet[2358]: W0530 00:02:17.657003 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-696dbcc666-xphg5 through plugin: invalid network status for May 30 00:02:24 issue-repro kubelet[2358]: W0530 00:02:24.749472 2358 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-696dbcc666-xphg5 through plugin: invalid network status for ==> kubernetes-dashboard [55149cef34ef] <== 2020/05/30 00:01:31 Starting overwatch panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial 2cp 10.2906/.005./13:4 30:0:01 3i1/ oU tinmge onuatm espace: guobreorunteitnees -1d a[srhubnonairndg ]: github.com/kub/e3r0n e0t0e:s0/1d:a3s1h bUoaird/ rnc/capup/tbar config to connect tor fa.p(i*scesrrfToekr nManag2e0r2)0.init(0xc00035a9a0) /05/30 00:01:3 Us/home/travit/build/kubernetes/dashboar2/sr0/app/bac0end/client/csraf/manager.go:41 +n0x446m kuberneitthubdcosehrbnoetes/dashboard/srcr/app/backend/client/csrf.NewCsrfTokenManager(...) /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:66 github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc0000fa100) /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:501 +0xc6 github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc0000fa100) /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:469 +0x47 github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...) 
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:550 main.main() /home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x20d ==> kubernetes-dashboard [cee32e06fc8f] <== 2020/05/30 00:02:17 Starting overwatch 2020/05/30 00:02:17 Using namespace: kubernetes-dashboard 2020/05/30 00:02:17 Using in-cluster config to connect to apiserver 2020/05/30 00:02:17 Using secret token for csrf signing 2020/05/30 00:02:17 Initializing csrf token from kubernetes-dashboard-csrf secret 2020/05/30 00:02:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf 2020/05/30 00:02:17 Successful initial request to the apiserver, version: v1.18.2 2020/05/30 00:02:17 Generating JWE encryption key 2020/05/30 00:02:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting 2020/05/30 00:02:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard 2020/05/30 00:02:17 Initializing JWE encryption key from synchronized object 2020/05/30 00:02:17 Creating in-cluster Sidecar client 2020/05/30 00:02:17 Serving insecurely on HTTP port: 9090 2020/05/30 00:02:17 Successful request to sidecar ==> storage-provisioner [4c533009716c] <== ==> storage-provisioner [c66154e1ba5b] <== ```
ericwooley commented 4 years ago

I deployed this to GKE and am hitting the same problem there, so I no longer think it's an issue with minikube.

It's probably some stupid mistake I'm making; I can't imagine a bug that big going unnoticed.
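
One thing I want to rule out, since it reproduces on GKE too: both Services in the manifest select on the same label (`app: repro-ingress-issue`), and both Deployments put that same label on their pods, so each Service's endpoints would contain the pods from both Deployments and the ingress would simply balance across all of them. Below is a minimal sketch of what a per-workload label might look like; the `component` key is just an illustrative name, and the Deployments' `matchLabels` and pod template labels would need the same addition (with `component: service-two` for the other pair).

```yaml
# Sketch only: give each workload its own distinguishing label and select on it.
# "component" is a hypothetical label key; any per-workload key/value pair works.
apiVersion: v1
kind: Service
metadata:
  name: service-one
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: http
  selector:
    app: repro-ingress-issue
    component: service-one   # without this, the Service matches both Deployments' pods
```

If that is what's happening, `kubectl get endpoints service-one` should currently list two pod IPs, and only one after the label change.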