kubernetes / website

Kubernetes website and documentation repo:
https://kubernetes.io
Creative Commons Attribution 4.0 International

Issue with k8s.io/docs/tutorials/hello-minikube/ - hello-node service is not accessible #20364

Closed: Zerotask closed this issue 4 years ago

Zerotask commented 4 years ago

This is a Bug Report

Problem: I'm using Minikube (as a Docker container) and followed the steps up to Create a Service -> step 3, "Run the following command": minikube service hello-node
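For reference, the tutorial's steps before that point boil down to creating the Deployment and exposing it (commands paraphrased from the tutorial as it stood at the time, so the exact flags and image may differ):

# Paraphrased from the tutorial; image and port as the tutorial listed them then.
$ kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
$ kubectl expose deployment hello-node --type=LoadBalancer --port=8080
# This is the step that fails for me:
$ minikube service hello-node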

[screenshot]

A new tab opened in Firefox and tried to load the page for about 20 seconds, then Firefox's timeout page appeared. The tutorial says I should see a "Hello World" message, though.

On the Kubernetes dashboard everything is green except the hello-node service, which is grey.
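As a starting point for triage, standard checks like these should show whether the Service actually has a ready endpoint behind it (object names assume the tutorial's hello-node Deployment and Service):

$ kubectl get pods -l app=hello-node     # is the pod Running and Ready?
$ kubectl get service hello-node         # type and port mapping of the Service
$ kubectl get endpoints hello-node       # does the Service select any pod?
$ minikube service hello-node --url      # print the URL instead of opening the browser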

[screenshot]

Page to Update: https://kubernetes.io/docs/tutorials/

Kubernetes Version: v1.18

Output of minikube logs:

$ minikube logs
* ==> Docker <==
* -- Logs begin at Wed 2020-04-15 21:42:45 UTC, end at Wed 2020-04-15 21:55:41 UTC. --
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.533670036Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.533703536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.533733536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.533742336Z" level=info msg="containerd successfully booted in 0.025776s"
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.544198136Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00005a190, READY" module=grpc
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.546215436Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.546240936Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.546256336Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.546263836Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.546309636Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00005adb0, CONNECTING" module=grpc
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.546529336Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00005adb0, READY" module=grpc
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.547418436Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.547458636Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.547477336Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.547487336Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.547536736Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0005fab50, CONNECTING" module=grpc
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.547795636Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0005fab50, READY" module=grpc
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.550211336Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
* Apr 15 21:42:45 minikube dockerd[84]: time="2020-04-15T21:42:45.720303736Z" level=info msg="Loading containers: start."
* Apr 15 21:42:46 minikube dockerd[84]: time="2020-04-15T21:42:46.264704033Z" level=warning msg="caa5e2d3e84560c68c1f6f14e94d904f7ad80d3858f217cdba69b16dc1346d8c cleanup: failed to unmount IPC: umount /var/lib/docker/containers/caa5e2d3e84560c68c1f6f14e94d904f7ad80d3858f217cdba69b16dc1346d8c/mounts/shm, flags: 0x2: no such file or directory"
* Apr 15 21:42:47 minikube dockerd[84]: time="2020-04-15T21:42:47.035700330Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Apr 15 21:42:47 minikube dockerd[84]: time="2020-04-15T21:42:47.397639028Z" level=info msg="Loading containers: done."
* Apr 15 21:42:47 minikube dockerd[84]: time="2020-04-15T21:42:47.484376828Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
* Apr 15 21:42:47 minikube dockerd[84]: time="2020-04-15T21:42:47.484502928Z" level=info msg="Daemon has completed initialization"
* Apr 15 21:42:47 minikube dockerd[84]: time="2020-04-15T21:42:47.628172127Z" level=info msg="API listen on /var/run/docker.sock"
* Apr 15 21:42:47 minikube systemd[1]: Started Docker Application Container Engine.
* Apr 15 21:42:47 minikube dockerd[84]: time="2020-04-15T21:42:47.629142627Z" level=info msg="API listen on [::]:2376"
* Apr 15 21:42:51 minikube systemd[1]: Stopping Docker Application Container Engine...
* Apr 15 21:42:51 minikube dockerd[84]: time="2020-04-15T21:42:51.042543511Z" level=info msg="Processing signal 'terminated'"
* Apr 15 21:42:51 minikube dockerd[84]: time="2020-04-15T21:42:51.045880811Z" level=info msg="Daemon shutdown complete"
* Apr 15 21:42:51 minikube dockerd[84]: time="2020-04-15T21:42:51.046072611Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
* Apr 15 21:42:51 minikube dockerd[84]: time="2020-04-15T21:42:51.046154411Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
* Apr 15 21:42:51 minikube dockerd[84]: time="2020-04-15T21:42:51.046274711Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
* Apr 15 21:42:51 minikube dockerd[84]: time="2020-04-15T21:42:51.047897211Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0005fab50, TRANSIENT_FAILURE" module=grpc
* Apr 15 21:42:51 minikube dockerd[84]: time="2020-04-15T21:42:51.048184211Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0005fab50, CONNECTING" module=grpc
* Apr 15 21:42:52 minikube systemd[1]: docker.service: Succeeded.
* Apr 15 21:42:52 minikube systemd[1]: Stopped Docker Application Container Engine.
* Apr 15 21:42:52 minikube systemd[1]: Starting Docker Application Container Engine...
* Apr 15 21:42:53 minikube dockerd[388]: time="2020-04-15T21:42:53.009752702Z" level=info msg="Starting up"
* Apr 15 21:42:53 minikube dockerd[388]: time="2020-04-15T21:42:53.011375702Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Apr 15 21:42:53 minikube dockerd[388]: time="2020-04-15T21:42:53.011420502Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Apr 15 21:42:53 minikube dockerd[388]: time="2020-04-15T21:42:53.011444302Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
* Apr 15 21:42:53 minikube dockerd[388]: time="2020-04-15T21:42:53.011453802Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Apr 15 21:42:53 minikube dockerd[388]: time="2020-04-15T21:42:53.011505302Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00012bb10, CONNECTING" module=grpc
* Apr 15 21:42:53 minikube dockerd[388]: time="2020-04-15T21:42:53.011831302Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00012bb10, READY" module=grpc
* Apr 15 21:42:53 minikube dockerd[388]: time="2020-04-15T21:42:53.012646302Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Apr 15 21:42:53 minikube dockerd[388]: time="2020-04-15T21:42:53.012680302Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Apr 15 21:42:53 minikube dockerd[388]: time="2020-04-15T21:42:53.012695702Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
* Apr 15 21:42:53 minikube dockerd[388]: time="2020-04-15T21:42:53.012712402Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Apr 15 21:42:53 minikube dockerd[388]: time="2020-04-15T21:42:53.012744302Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00062d7a0, CONNECTING" module=grpc
* Apr 15 21:42:53 minikube dockerd[388]: time="2020-04-15T21:42:53.013024602Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00062d7a0, READY" module=grpc
* Apr 15 21:42:53 minikube dockerd[388]: time="2020-04-15T21:42:53.015839102Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
* Apr 15 21:42:53 minikube dockerd[388]: time="2020-04-15T21:42:53.121256302Z" level=info msg="Loading containers: start."
* Apr 15 21:42:54 minikube dockerd[388]: time="2020-04-15T21:42:54.194722497Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Apr 15 21:42:54 minikube dockerd[388]: time="2020-04-15T21:42:54.562342296Z" level=info msg="Loading containers: done."
* Apr 15 21:42:54 minikube dockerd[388]: time="2020-04-15T21:42:54.676303595Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
* Apr 15 21:42:54 minikube dockerd[388]: time="2020-04-15T21:42:54.676376095Z" level=info msg="Daemon has completed initialization"
* Apr 15 21:42:54 minikube dockerd[388]: time="2020-04-15T21:42:54.827718495Z" level=info msg="API listen on /var/run/docker.sock"
* Apr 15 21:42:54 minikube systemd[1]: Started Docker Application Container Engine.
* Apr 15 21:42:54 minikube dockerd[388]: time="2020-04-15T21:42:54.828481895Z" level=info msg="API listen on [::]:2376"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* 0f1f037604ac4 gcr.io/hello-minikube-zero-install/hello-node@sha256:9cf82733f7278ae7ae899d432f8d3b3bb0fcb54e673c67496a9f76bb58f30a1c 12 minutes ago Running hello-node 1 f0419c6794169
* f6523b8a29ab8 cdc71b5a8a0ee 12 minutes ago Running kubernetes-dashboard 1 56393d9d8aac8
* e788d0de42385 3b08661dc379d 12 minutes ago Running dashboard-metrics-scraper 1 df45ff0b9c4f4
* 6b6acba1c0744 4689081edb103 12 minutes ago Running storage-provisioner 1 81a9e08b832e2
* 54d8ef7593bdd 67da37a9a360e 12 minutes ago Running coredns 1 4a7e6d176760f
* 0f419d5b2619f 67da37a9a360e 12 minutes ago Running coredns 1 ea82106baaf1e
* b46c015c832c5 43940c34f24f3 12 minutes ago Running kube-proxy 1 9201c4d54dd1f
* 72edc8595cf4f aa67fec7d7ef7 12 minutes ago Running kindnet-cni 1 a078a8214664f
* 86232abd49509 303ce5db0e90d 12 minutes ago Running etcd 1 4342200aa7897
* 7225f1e244fe3 d3e55153f52fb 12 minutes ago Running kube-controller-manager 1 3fcfe387b9efc
* 98862ecc7ce64 a31f78c7c8ce1 12 minutes ago Running kube-scheduler 1 517adeef7788e
* 29612627c97b7 74060cea7f704 12 minutes ago Running kube-apiserver 1 14d87fe8bf591
* caa5e2d3e8456 gcr.io/hello-minikube-zero-install/hello-node@sha256:9cf82733f7278ae7ae899d432f8d3b3bb0fcb54e673c67496a9f76bb58f30a1c 24 minutes ago Exited hello-node 0 19eccc96d66b6
* b0f3922373cc9 3b08661dc379d 30 minutes ago Exited dashboard-metrics-scraper 0 78813adda293d
* 64b5cece638e8 cdc71b5a8a0ee 30 minutes ago Exited kubernetes-dashboard 0 7d9520d4bed1b
* 4e008013cf059 67da37a9a360e 39 minutes ago Exited coredns 0 59c38b76ef06e
* c6d8b9ae50bc6 67da37a9a360e 39 minutes ago Exited coredns 0 81841837dc79e
* ad40664472902 4689081edb103 39 minutes ago Exited storage-provisioner 0 ae937d28a326e
* fc41d89e4faf0 aa67fec7d7ef7 39 minutes ago Exited kindnet-cni 0 f7470e02e6dfe
* ddecb6626bf5f 43940c34f24f3 39 minutes ago Exited kube-proxy 0 7c860cc620d4a
* 5e4db3f010082 d3e55153f52fb 40 minutes ago Exited kube-controller-manager 0 aa82126ee2788
* 8a09399bc17ef 303ce5db0e90d 40 minutes ago Exited etcd 0 4633fbcacc76f
* d4057e9b60671 a31f78c7c8ce1 40 minutes ago Exited kube-scheduler 0 bcbe06cd45565
* 4fa7dadff2320 74060cea7f704 40 minutes ago Exited kube-apiserver 0 6069598e591a9
*
* ==> coredns [0f419d5b2619] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.7
* linux/amd64, go1.13.6, da7f65b
*
* ==> coredns [4e008013cf05] <==
* E0415 21:42:13.807849 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=3887&timeout=7m20s&timeoutSeconds=440&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
* E0415 21:42:13.807919 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1552&timeout=6m54s&timeoutSeconds=414&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
* E0415 21:42:13.808132 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=3237&timeout=6m35s&timeoutSeconds=395&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
*
* ==> coredns [54d8ef7593bd] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.7
* linux/amd64, go1.13.6, da7f65b
*
* ==> coredns [c6d8b9ae50bc] <==
* E0415 21:42:13.806888 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1552&timeout=6m54s&timeoutSeconds=414&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
* E0415 21:42:13.809884 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=3237&timeout=6m35s&timeoutSeconds=395&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
* E0415 21:42:13.810718 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=3887&timeout=7m20s&timeoutSeconds=440&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
*
* ==> describe nodes <==
* Name: minikube
* Roles: master
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=minikube
* kubernetes.io/os=linux
* minikube.k8s.io/commit=93af9c1e43cab9618e301bc9fa720c63d5efa393
* minikube.k8s.io/name=minikube
* minikube.k8s.io/updated_at=2020_04_15T23_15_59_0700
* minikube.k8s.io/version=v1.9.2
* node-role.kubernetes.io/master=
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Wed, 15 Apr 2020 21:15:37 +0000
* Taints:
* Unschedulable: false
* Lease:
* HolderIdentity: minikube
* AcquireTime:
* RenewTime: Wed, 15 Apr 2020 21:55:44 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Wed, 15 Apr 2020 21:53:14 +0000 Wed, 15 Apr 2020 21:15:33 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Wed, 15 Apr 2020 21:53:14 +0000 Wed, 15 Apr 2020 21:15:33 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Wed, 15 Apr 2020 21:53:14 +0000 Wed, 15 Apr 2020 21:15:33 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Wed, 15 Apr 2020 21:53:14 +0000 Wed, 15 Apr 2020 21:15:47 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 172.17.0.2
* Hostname: minikube
* Capacity:
* cpu: 2
* ephemeral-storage: 61255652Ki
* hugepages-1Gi: 0
* hugepages-2Mi: 0
* memory: 8155660Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 61255652Ki
* hugepages-1Gi: 0
* hugepages-2Mi: 0
* memory: 8155660Ki
* pods: 110
* System Info:
* Machine ID: 3f8f63100f544a5e978790eced4e8b4d
* System UUID: 94b6b36c-5dfa-463f-b3c7-7f524584feb7
* Boot ID: 9e90ad13-6948-4494-9d9e-475544ca3261
* Kernel Version: 4.19.76-linuxkit
* OS Image: Ubuntu 19.10
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.2
* Kubelet Version: v1.18.0
* Kube-Proxy Version: v1.18.0
* PodCIDR: 10.244.0.0/24
* PodCIDRs: 10.244.0.0/24
* Non-terminated Pods: (12 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* default hello-node-677b9cfc6b-hbmc6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25m
* kube-system coredns-66bff467f8-2cgdf 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 39m
* kube-system coredns-66bff467f8-hc5xl 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 39m
* kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 39m
* kube-system kindnet-xgm9b 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 39m
* kube-system kube-apiserver-minikube 250m (12%) 0 (0%) 0 (0%) 0 (0%) 39m
* kube-system kube-controller-manager-minikube 200m (10%) 0 (0%) 0 (0%) 0 (0%) 39m
* kube-system kube-proxy-7j7zn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 39m
* kube-system kube-scheduler-minikube 100m (5%) 0 (0%) 0 (0%) 0 (0%) 39m
* kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 39m
* kubernetes-dashboard dashboard-metrics-scraper-84bfdf55ff-6tt8n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31m
* kubernetes-dashboard kubernetes-dashboard-bc446cc64-ncxv6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31m
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 850m (42%) 100m (5%)
* memory 190Mi (2%) 390Mi (4%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-1Gi 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal Starting 39m kubelet, minikube Starting kubelet.
* Normal NodeHasSufficientMemory 39m kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 39m kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 39m kubelet, minikube Node minikube status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 39m kubelet, minikube Updated Node Allocatable limit across pods
* Normal Starting 39m kube-proxy, minikube Starting kube-proxy.
* Normal Starting 12m kubelet, minikube Starting kubelet.
* Normal NodeAllocatableEnforced 12m kubelet, minikube Updated Node Allocatable limit across pods
* Normal NodeHasSufficientMemory 12m (x8 over 12m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 12m (x8 over 12m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 12m (x7 over 12m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
* Normal Starting 12m kube-proxy, minikube Starting kube-proxy.
*
* ==> dmesg <==
* [ +0.000000] R10: 0000000000000022 R11: 0000000000000246 R12: 00007f5b8e2812c0
* [ +0.000001] R13: 00007f5b8e2812c0 R14: 0000000000000002 R15: 00007f5b7e7ef020
* [ +0.000068] Memory cgroup out of memory: Kill process 13965 (stress) score 1968 or sacrifice child
* [ +0.004933] Killed process 13965 (stress) total-vm:256780kB, anon-rss:100080kB, file-rss:248kB, shmem-rss:0kB
* [ +18.362687] stress invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=994
* [ +0.000005] CPU: 1 PID: 14061 Comm: stress Tainted: G T 4.19.76-linuxkit #1
* [ +0.000001] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.0 11/01/2019
* [ +0.000000] Call Trace:
* [ +0.000006] dump_stack+0x5a/0x6f
* [ +0.000003] dump_header+0x67/0x272
* [ +0.000002] ? _raw_spin_unlock_irqrestore+0x16/0x18
* [ +0.000002] oom_kill_process+0x94/0x213
* [ +0.000001] out_of_memory+0x242/0x26a
* [ +0.000003] mem_cgroup_out_of_memory+0x5c/0x8b
* [ +0.000001] try_charge+0x1b2/0x596
* [ +0.000002] mem_cgroup_try_charge+0xc1/0xe1
* [ +0.000002] mem_cgroup_try_charge_delay+0x16/0x2e
* [ +0.000002] __handle_mm_fault+0x52c/0x9fa
* [ +0.000002] handle_mm_fault+0x13a/0x195
* [ +0.000002] __do_page_fault+0x2b1/0x434
* [ +0.000002] ? page_fault+0x8/0x30
* [ +0.000001] page_fault+0x1e/0x30
* [ +0.000002] RIP: 0033:0x55c51bc03e80
* [ +0.000001] Code: 84 a7 02 00 00 8b 54 24 0c 31 c0 85 d2 0f 94 c0 89 44 24 08 41 83 fe 02 0f 8f 46 01 00 00 31 c0 48 85 ed 7e 12 0f 1f 44 00 00 <41> c6 04 07 5a 48 01 d8 48 39 c5 7f f3 48 83 3c 24 00 0f 84 fe 01
* [ +0.000001] RSP: 002b:00007ffca165c860 EFLAGS: 00010206
* [ +0.000001] RAX: 0000000006337000 RBX: 0000000000001000 RCX: 00007faa934e7bd4
* [ +0.000001] RDX: 0000000000000000 RSI: 000000000fa01000 RDI: 00007faa83ab2000
* [ +0.000000] RBP: 000000000fa00000 R08: ffffffffffffffff R09: 0000000000000000
* [ +0.000001] R10: 0000000000000022 R11: 0000000000000246 R12: 00007faa935442c0
* [ +0.000000] R13: 00007faa935442c0 R14: 0000000000000002 R15: 00007faa83ab2020
* [ +0.000092] Memory cgroup out of memory: Kill process 14061 (stress) score 1025 or sacrifice child
* [ +0.004813] Killed process 14061 (stress) total-vm:256780kB, anon-rss:99452kB, file-rss:260kB, shmem-rss:0kB
* [Apr15 20:29] stress invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=994
* [ +0.000005] CPU: 0 PID: 14225 Comm: stress Tainted: G T 4.19.76-linuxkit #1
* [ +0.000001] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.0 11/01/2019
* [ +0.000000] Call Trace:
* [ +0.000006] dump_stack+0x5a/0x6f
* [ +0.000003] dump_header+0x67/0x272
* [ +0.000002] ? _raw_spin_unlock_irqrestore+0x16/0x18
* [ +0.000001] oom_kill_process+0x94/0x213
* [ +0.000002] out_of_memory+0x242/0x26a
* [ +0.000002] mem_cgroup_out_of_memory+0x5c/0x8b
* [ +0.000001] try_charge+0x1b2/0x596
* [ +0.000002] mem_cgroup_try_charge+0xc1/0xe1
* [ +0.000002] mem_cgroup_try_charge_delay+0x16/0x2e
* [ +0.000002] __handle_mm_fault+0x52c/0x9fa
* [ +0.000002] handle_mm_fault+0x13a/0x195
* [ +0.000002] __do_page_fault+0x2b1/0x434
* [ +0.000002] ? page_fault+0x8/0x30
* [ +0.000001] page_fault+0x1e/0x30
* [ +0.000002] RIP: 0033:0x5570af62ee80
* [ +0.000002] Code: 84 a7 02 00 00 8b 54 24 0c 31 c0 85 d2 0f 94 c0 89 44 24 08 41 83 fe 02 0f 8f 46 01 00 00 31 c0 48 85 ed 7e 12 0f 1f 44 00 00 <41> c6 04 07 5a 48 01 d8 48 39 c5 7f f3 48 83 3c 24 00 0f 84 fe 01
* [ +0.000000] RSP: 002b:00007ffcfb80fcd0 EFLAGS: 00010206
* [ +0.000001] RAX: 000000000634c000 RBX: 0000000000001000 RCX: 00007fbebef9abd4
* [ +0.000001] RDX: 0000000000000000 RSI: 000000000fa01000 RDI: 00007fbeaf565000
* [ +0.000001] RBP: 000000000fa00000 R08: ffffffffffffffff R09: 0000000000000000
* [ +0.000000] R10: 0000000000000022 R11: 0000000000000246 R12: 00007fbebeff72c0
* [ +0.000001] R13: 00007fbebeff72c0 R14: 0000000000000002 R15: 00007fbeaf565020
* [ +0.000090] Memory cgroup out of memory: Kill process 14225 (stress) score 1025 or sacrifice child
* [ +0.005531] Killed process 14225 (stress) total-vm:256780kB, anon-rss:99752kB, file-rss:268kB, shmem-rss:0kB
*
* ==> etcd [86232abd4950] <==
* 2020-04-15 21:51:40.755602 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:16" took too long (105.7487ms) to execute
* 2020-04-15 21:51:42.938810 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (105.7227ms) to execute
* 2020-04-15 21:52:03.591226 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (105.739699ms) to execute
* 2020-04-15 21:52:05.716431 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (114.0132ms) to execute
* 2020-04-15 21:52:07.866505 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (138.526ms) to execute
* 2020-04-15 21:52:16.234097 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:506" took too long (131.6179ms) to execute
* 2020-04-15 21:52:18.450701 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/hello-node\" " with result "range_response_count:1 size:617" took too long (170.716699ms) to execute
* 2020-04-15 21:52:18.450878 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:576" took too long (116.953399ms) to execute
* 2020-04-15 21:52:20.642661 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (148.0606ms) to execute
* 2020-04-15 21:52:26.584245 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (129.864901ms) to execute
* 2020-04-15 21:52:28.776167 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (181.332501ms) to execute
* 2020-04-15 21:52:30.903043 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (115.110501ms) to execute
* 2020-04-15 21:52:31.135306 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (146.0018ms) to execute
* 2020-04-15 21:52:33.251845 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (105.1624ms) to execute
* 2020-04-15 21:52:35.418676 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:480" took too long (157.2857ms) to execute
* 2020-04-15 21:52:41.261161 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:506" took too long (139.1056ms) to execute
* 2020-04-15 21:52:43.402332 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:506" took too long (130.3614ms) to execute
* 2020-04-15 21:52:43.778150 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (105.2685ms) to execute
* 2020-04-15 21:52:51.753761 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:16" took too long (106.5966ms) to execute
* 2020-04-15 21:52:53.953302 W | etcdserver: read-only range request "key:\"/registry/events/default/\" range_end:\"/registry/events/default0\" " with result "range_response_count:26 size:17938" took too long (190.7385ms) to execute
* 2020-04-15 21:52:53.953700 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (190.7694ms) to execute
* 2020-04-15 21:52:54.153792 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:286" took too long (132.615999ms) to execute
* 2020-04-15 21:52:54.153818 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:16" took too long (100.295699ms) to execute
* 2020-04-15 21:53:04.054271 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:286" took too long (140.711999ms) to execute
* 2020-04-15 21:53:06.371186 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:506" took too long (107.3225ms) to execute
* 2020-04-15 21:53:09.120396 I | mvcc: store.index: compact 4607
* 2020-04-15 21:53:09.233255 I | mvcc: finished scheduled compaction at 4607 (took 112.445299ms)
* 2020-04-15 21:53:10.763451 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (139.566401ms) to execute
* 2020-04-15 21:53:12.938752 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:1 size:870" took too long (110.049401ms) to execute
* 2020-04-15 21:53:14.622148 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (105.743201ms) to execute
* 2020-04-15 21:53:14.963777 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:1 size:870" took too long (106.163001ms) to execute
* 2020-04-15 21:53:16.963406 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (105.556301ms) to execute
* 2020-04-15 21:53:18.821846 W | etcdserver: read-only range request "key:\"/registry/csinodes\" range_end:\"/registry/csinodet\" count_only:true " with result "range_response_count:0 size:7" took too long (131.048801ms) to execute
* 2020-04-15 21:53:18.822158 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests\" range_end:\"/registry/certificatesigningrequestt\" count_only:true " with result "range_response_count:0 size:7" took too long (147.303801ms) to execute
* 2020-04-15 21:53:18.822442 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (155.144801ms) to execute
* 2020-04-15 21:53:19.355257 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:1 size:870" took too long (108.8396ms) to execute
* 2020-04-15 21:53:25.130735 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (137.4694ms) to execute
* 2020-04-15 21:53:33.631630 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (137.898302ms) to execute
* 2020-04-15 21:53:35.765613 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (123.043704ms) to execute
* 2020-04-15 21:53:43.990879 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:602" took too long (103.7194ms) to execute
* 2020-04-15 21:53:48.257576 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:506" took too long (111.9194ms) to execute
* 2020-04-15 21:53:54.625636 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (114.0487ms) to execute
* 2020-04-15 21:53:58.709383 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" " with result "range_response_count:1 size:891" took too long (125.736114ms) to execute
* 2020-04-15 21:53:58.709474 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (132.563415ms) to execute
* 2020-04-15 21:54:05.117506 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (129.2151ms) to execute
* 2020-04-15 21:54:11.085091 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (115.3375ms) to execute
* 2020-04-15 21:54:17.502595 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (130.8872ms) to execute
* 2020-04-15 21:54:25.544811 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (108.8503ms) to execute
* 2020-04-15 21:54:25.545062 W | etcdserver: read-only range request "key:\"/registry/leases\" range_end:\"/registry/leaset\" count_only:true " with result "range_response_count:0 size:7" took too long (105.5632ms) to execute
* 2020-04-15 21:54:28.011198 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (121.3021ms) to execute
* 2020-04-15 21:54:33.803769 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (106.0312ms) to execute
* 2020-04-15 21:54:40.545409 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (121.6006ms) to execute
* 2020-04-15 21:54:58.655460 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (111.886ms) to execute
* 2020-04-15 21:55:03.389343 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:480" took too long (105.8843ms) to execute
* 2020-04-15 21:55:05.505861 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (103.788ms) to execute
* 2020-04-15 21:55:17.956923 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (105.591999ms) to execute
* 2020-04-15 21:55:25.741208 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (118.933298ms) to execute
* 2020-04-15 21:55:36.185549 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (132.962198ms) to execute
* 2020-04-15 21:55:47.260069 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:480" took too long (137.0146ms) to execute
* 2020-04-15 21:55:48.754885 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (141.7966ms) to execute
*
* ==> etcd [8a09399bc17e] <==
* 2020-04-15 21:39:07.175372 W | etcdserver: read-only range request "key:\"/registry/limitranges\" range_end:\"/registry/limitranget\" count_only:true " with result "range_response_count:0 size:5" took too long (110.0047ms) to execute
* 2020-04-15 21:39:11.093127 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (129.8616ms) to execute
* 2020-04-15 21:39:15.284325 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (130.6699ms) to execute
* 2020-04-15 21:39:15.284492 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:506" took too long (138.8105ms) to execute
* 2020-04-15 21:39:19.484896 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (120.3804ms) to execute
* 2020-04-15 21:39:21.618238 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (117.2135ms) to execute
* 2020-04-15 21:39:23.744040 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (104.507899ms) to execute
* 2020-04-15 21:39:25.894621 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (138.4625ms) to execute
* 2020-04-15 21:39:30.119460 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (115.2712ms) to execute
* 2020-04-15 21:39:30.318964 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" " with result "range_response_count:1 size:891" took too long (104.2109ms) to execute
* 2020-04-15 21:39:36.420722 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:506" took too long (149.582202ms) to execute
* 2020-04-15 21:39:46.912185 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (140.405902ms) to execute
* 2020-04-15 21:39:49.046259 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (122.7687ms) to execute
* 2020-04-15 21:39:55.288308 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:506" took too long (113.958701ms) to execute
* 2020-04-15 21:39:57.996469 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:286" took too long (123.724996ms) to execute
* 2020-04-15 21:40:01.655632 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:576" took too long (120.326196ms) to execute
* 2020-04-15 21:40:07.739455 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (122.5837ms) to execute
* 2020-04-15 21:40:09.897312 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (143.8609ms) to execute
* 2020-04-15 21:40:09.897696 W | etcdserver: read-only range request "key:\"/registry/ingress/default/\" range_end:\"/registry/ingress/default0\" " with result "range_response_count:0 size:5" took too long (147.2027ms) to execute
* 2020-04-15 21:40:12.048152 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (137.878599ms) to execute
* 2020-04-15 21:40:14.182258 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (121.3746ms) to execute
* 2020-04-15 21:40:18.565118 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:480" took too long (139.2567ms) to execute
* 2020-04-15 21:40:22.456676 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (114.1343ms) to execute
* 2020-04-15 21:40:24.606849 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (137.5285ms) to execute
* 2020-04-15 21:40:26.740458 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (122.3685ms) to execute
* 2020-04-15 21:40:28.890487 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (137.717598ms) to execute
* 2020-04-15 21:40:35.382799 I | mvcc: store.index: compact 3039
* 2020-04-15 21:40:35.459750 I | mvcc: finished scheduled compaction at 3039 (took 76.486499ms)
* 2020-04-15 21:40:37.249847 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:506" took too long (123.920898ms) to execute
* 2020-04-15 21:40:39.392919 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:506" took too long (131.531498ms) to execute
* 2020-04-15 21:40:43.825251 W | etcdserver: read-only range request "key:\"/registry/events/default/\" range_end:\"/registry/events/default0\" " with result "range_response_count:14 size:9722" took too long (145.341697ms) to execute
* 2020-04-15 21:40:49.809194 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (161.944199ms) to execute
* 2020-04-15 21:40:49.809551 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:1 size:870" took too long (123.859599ms) to execute
* 2020-04-15 21:40:53.943278 W | etcdserver: read-only range request "key:\"/registry/leases\" range_end:\"/registry/leaset\" count_only:true " with result "range_response_count:0 size:7" took too long (107.375198ms) to execute
* 2020-04-15 21:41:02.402406 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:506" took too long (148.671499ms) to execute
* 2020-04-15 21:41:06.652111 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (123.747999ms) to execute
* 2020-04-15 21:41:08.769571 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (104.4257ms) to execute
* 2020-04-15 21:41:27.487123 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (138.7038ms) to execute
* 2020-04-15 21:41:31.679742 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (127.4291ms) to execute
* 2020-04-15 21:41:31.679934 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations\" range_end:\"/registry/mutatingwebhookconfigurationt\" count_only:true " with result "range_response_count:0 size:5" took too long (118.8128ms) to execute
* 2020-04-15 21:41:39.988837 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (114.4438ms) to execute
* 2020-04-15 21:41:42.147661 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (147.8885ms) to execute
* 2020-04-15 21:41:44.288790 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (130.4526ms) to execute
* 2020-04-15 21:41:48.022835 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:286" took too long (132.8751ms) to execute
* 2020-04-15 21:41:48.488893 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (146.731499ms) to execute
* 2020-04-15 21:41:48.489301 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:5575" took too long (132.350299ms) to execute
* 2020-04-15 21:41:50.822471 W | etcdserver: read-only range request "key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" " with result "range_response_count:5 size:1843" took too long (119.7277ms) to execute
* 2020-04-15 21:41:56.848624 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (138.9233ms) to execute
* 2020-04-15 21:41:56.849080 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:1 size:870" took too long (139.584ms) to execute
* 2020-04-15 21:41:58.964957 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (103.808099ms) to execute
* 2020-04-15 21:42:01.098382 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (119.9257ms) to execute
* 2020-04-15 21:42:01.257503 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (124.4565ms) to execute
* 2020-04-15 21:42:05.299288 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:506" took too long (156.396099ms) to execute
* 2020-04-15 21:42:09.516766 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-6tt8n\" " with result "range_response_count:1 size:3849" took too long (168.359799ms) to execute
* 2020-04-15 21:42:09.516914 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:1 size:870" took too long (118.944199ms) to execute
* 2020-04-15 21:42:09.517030 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:1 size:870" took too long (102.580799ms) to execute
* 2020-04-15 21:42:09.517132 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:1 size:870" took too long (150.877299ms) to execute
* 2020-04-15 21:42:09.517243 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (165.424199ms) to execute
* 2020-04-15 21:42:13.912238 N | pkg/osutil: received terminated signal, shutting down...
* 2020-04-15 21:42:13.912573 I | etcdserver: skipped leadership transfer for single voting member cluster
*
* ==> kernel <==
* 21:55:50 up 1:58, 0 users, load average: 0.76, 0.59, 0.68
* Linux minikube 4.19.76-linuxkit #1 SMP Thu Oct 17 19:31:58 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
* PRETTY_NAME="Ubuntu 19.10"
*
* ==> kube-apiserver [29612627c97b] <==
* k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc000d33050, 0x5147220, 0xc000903730, 0xc0109c9300)
* /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:288 +0xa4f
* k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(...)
* /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199 * k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x455f018, 0xe, 0xc000d33050, 0xc0009b8e00, 0x5147220, 0xc000903730, 0xc0109c9300) * /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:146 +0x4d3 * k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver.(*proxyHandler).ServeHTTP(0xc003006070, 0x5147220, 0xc000903730, 0xc0109c9300) * /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver/handler_proxy.go:121 +0x161 * k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00abd30c0, 0x5147220, 0xc000903730, 0xc0109c9300) * /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:248 +0x38a * k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005fda850, 0x5147220, 0xc000903730, 0xc0109c9300) * /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0x84 * k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x456239b, 0xf, 0xc007297b00, 0xc005fda850, 0x5147220, 0xc000903730, 0xc0109c9300) * /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:154 +0x6b1 * k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x5147220, 0xc000903730, 0xc0109c9300) * /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x512 * net/http.HandlerFunc.ServeHTTP(0xc0072909c0, 0x5147220, 0xc000903730, 0xc0109c9300) * /usr/local/go/src/net/http/server.go:2007 +0x44 * k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x5147220, 0xc000903730, 0xc0109c9300) * /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:126 +0x59f * net/http.HandlerFunc.ServeHTTP(0xc0072ab650, 0x5147220, 0xc000903730, 0xc0109c9300) * /usr/local/go/src/net/http/server.go:2007 +0x44 * k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x5147220, 0xc000903730, 0xc0109c9300) * /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x1fe6 * net/http.HandlerFunc.ServeHTTP(0xc007290a00, 0x5147220, 0xc000903730, 0xc0109c9300) * /usr/local/go/src/net/http/server.go:2007 +0x44 * k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x5147220, 0xc000903730, 0xc0109c9200) * /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:70 +0x5ce * net/http.HandlerFunc.ServeHTTP(0xc007265450, 0x5147220, 0xc000903730, 
0xc0109c9200) * /usr/local/go/src/net/http/server.go:2007 +0x44 * k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc0072b4620, 0x5147220, 0xc000903730, 0xc0109c9200) * /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:92 +0x462 * k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithWaitGroup.func1(0x5147220, 0xc000903730, 0xc0109c9200) * /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:59 +0x121 * net/http.HandlerFunc.ServeHTTP(0xc0072ab680, 0x5147220, 0xc000903730, 0xc0109c9200) * /usr/local/go/src/net/http/server.go:2007 +0x44 * k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x5147220, 0xc000903730, 0xc0109c9100) * /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x274 * net/http.HandlerFunc.ServeHTTP(0xc0072ab6b0, 0x5147220, 0xc000903730, 0xc0109c9100) * /usr/local/go/src/net/http/server.go:2007 +0x44 * k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.WithLogging.func1(0x513a020, 0xc010613630, 0xc0109c9000) * /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:89 +0x2ca * net/http.HandlerFunc.ServeHTTP(0xc0072b4640, 0x513a020, 0xc010613630, 0xc0109c9000) * /usr/local/go/src/net/http/server.go:2007 +0x44 * k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1(0x513a020, 0xc010613630, 0xc0109c9000) * /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:51 +0x13e * net/http.HandlerFunc.ServeHTTP(0xc0072b4660, 0x513a020, 0xc010613630, 0xc0109c9000) * /usr/local/go/src/net/http/server.go:2007 +0x44 * k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(0xc0072ab6e0, 0x513a020, 0xc010613630, 0xc0109c9000) * /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:189 +0x51 * net/http.serverHandler.ServeHTTP(0xc00c4f6620, 0x513a020, 0xc010613630, 0xc0109c9000) * /usr/local/go/src/net/http/server.go:2802 +0xa4 * net/http.initNPNRequest.ServeHTTP(0x51546a0, 0xc0109ba240, 0xc00ac3ca80, 0xc00c4f6620, 0x513a020, 0xc010613630, 0xc0109c9000) * /usr/local/go/src/net/http/server.go:3366 +0x8d * k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).runHandler(0xc01095cd80, 0xc010613630, 0xc0109c9000, 0xc0109c7e40) * /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:2149 +0x9f * created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).processHeaders * /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:1883 +0x4eb * I0415 21:43:30.436695 1 controller.go:606] quota admission added evaluator for: endpoints * I0415 21:43:45.043880 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io * * ==> 
*
* ==> kube-apiserver [4fa7dadff232] <==
* I0415 21:16:09.919773 1 trace.go:116] Trace[1200151747]: "GuaranteedUpdate etcd3" type:*discovery.EndpointSlice (started: 2020-04-15 21:16:09.26744765 +0000 UTC m=+37.629717194) (total time: 652.308678ms):
* Trace[1200151747]: [652.308678ms] [652.291078ms] END
* I0415 21:16:09.919820 1 trace.go:116] Trace[1329429257]: "Update" url:/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices/kube-dns-mttlt,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/9e99141/system:serviceaccount:kube-system:endpointslice-controller,client:172.17.0.2 (started: 2020-04-15 21:16:09.26737275 +0000 UTC m=+37.629642294) (total time: 652.435278ms):
* Trace[1329429257]: [652.410578ms] [652.364178ms] Object stored in database
* I0415 21:24:44.038346 1 trace.go:116] Trace[1810562789]: "GuaranteedUpdate etcd3" type:*apps.ReplicaSet (started: 2020-04-15 21:24:43.498534592 +0000 UTC m=+551.860804236) (total time: 539.797086ms):
* Trace[1810562789]: [539.797086ms] [539.747486ms] END
* I0415 21:24:44.038413 1 trace.go:116] Trace[32913460]: "Update" url:/apis/apps/v1/namespaces/kubernetes-dashboard/replicasets/dashboard-metrics-scraper-84bfdf55ff/status,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/9e99141/system:serviceaccount:kube-system:replicaset-controller,client:172.17.0.2 (started: 2020-04-15 21:24:43.498385892 +0000 UTC m=+551.860655436) (total time: 540.011486ms):
* Trace[32913460]: [540.011486ms] [539.905286ms] END
* I0415 21:30:47.734112 1 trace.go:116] Trace[1471089160]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/9e99141/leader-election,client:172.17.0.2 (started: 2020-04-15 21:30:45.300471259 +0000 UTC m=+913.662740803) (total time: 2.433616808s):
* Trace[1471089160]: [2.433585408s] [2.433569708s] About to write a response
* I0415 21:30:47.736742 1 trace.go:116] Trace[474924240]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.18.0 (linux/amd64) kubernetes/9e99141/leader-election,client:172.17.0.2 (started: 2020-04-15 21:30:46.979415765 +0000 UTC m=+915.341685309) (total time: 757.302202ms):
* Trace[474924240]: [757.264002ms] [757.249002ms] About to write a response
* I0415 21:30:49.113392 1 trace.go:116] Trace[965385791]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-04-15 21:30:47.744309767 +0000 UTC m=+916.106579411) (total time: 1.369063205s):
* Trace[965385791]: [1.369044205s] [1.368758605s] Transaction committed
* I0415 21:30:49.113458 1 trace.go:116] Trace[517154975]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.18.0 (linux/amd64) kubernetes/9e99141/leader-election,client:172.17.0.2 (started: 2020-04-15 21:30:47.744217667 +0000 UTC m=+916.106487211) (total time: 1.369226305s):
* Trace[517154975]: [1.369191505s] [1.369136605s] Object stored in database
* I0415 21:30:49.114243 1 trace.go:116] Trace[98477117]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/9e99141,client:127.0.0.1 (started: 2020-04-15 21:30:47.826713668 +0000 UTC m=+916.188983212) (total time: 1.287512504s):
* Trace[98477117]: [1.287489704s] [1.287483104s] About to write a response
* I0415 21:30:49.114362 1 trace.go:116] Trace[1568408768]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/9e99141/leader-election,client:172.17.0.2 (started: 2020-04-15 21:30:47.755051068 +0000 UTC m=+916.117320612) (total time: 1.359298604s):
* Trace[1568408768]: [1.359279004s] [1.359262004s] About to write a response
* I0415 21:30:50.373899 1 trace.go:116] Trace[1841105848]: "Get" url:/api/v1/namespaces/default/services/kubernetes,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/9e99141,client:127.0.0.1 (started: 2020-04-15 21:30:49.114789072 +0000 UTC m=+917.477058616) (total time: 1.259074004s):
* Trace[1841105848]: [1.259042004s] [1.259038004s] About to write a response
* I0415 21:30:50.374460 1 trace.go:116] Trace[1274570795]: "List etcd3" key:/cronjobs/default,resourceVersion:,limit:0,continue: (started: 2020-04-15 21:30:49.118766372 +0000 UTC m=+917.481035916) (total time: 1.255680304s):
* Trace[1274570795]: [1.255680304s] [1.255680304s] END
* I0415 21:30:50.374633 1 trace.go:116] Trace[1337173384]: "List" url:/apis/batch/v1beta1/namespaces/default/cronjobs,user-agent:dashboard/v2.0.0-rc6,client:172.18.0.4 (started: 2020-04-15 21:30:49.118755472 +0000 UTC m=+917.481025016) (total time: 1.255861204s):
* Trace[1337173384]: [1.255733704s] [1.255727704s] Listing from storage done
* I0415 21:30:50.375327 1 trace.go:116] Trace[709471241]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-04-15 21:30:49.116403372 +0000 UTC m=+917.478673016) (total time: 1.258910304s):
* Trace[709471241]: [1.258894204s] [1.258648604s] Transaction committed
* I0415 21:30:50.375379 1 trace.go:116] Trace[1489816193]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.18.0 (linux/amd64) kubernetes/9e99141/leader-election,client:172.17.0.2 (started: 2020-04-15 21:30:49.116326572 +0000 UTC m=+917.478596116) (total time: 1.259040704s):
* Trace[1489816193]: [1.259013904s] [1.258960004s] Object stored in database
* I0415 21:30:52.017771 1 trace.go:116] Trace[616634965]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-04-15 21:30:50.374396076 +0000 UTC m=+918.736665620) (total time: 1.643353377s):
* Trace[616634965]: [1.643318777s] [1.639870877s] Transaction committed
* I0415 21:30:52.203679 1 trace.go:116] Trace[1928208473]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/9e99141/leader-election,client:172.17.0.2 (started: 2020-04-15 21:30:51.118588579 +0000 UTC m=+919.480858123) (total time: 1.085066848s):
* Trace[1928208473]: [1.085043448s] [1.085028048s] About to write a response
* E0415 21:39:03.175579 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
* E0415 21:39:03.315942 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
* E0415 21:39:03.316391 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
* I0415 21:42:13.800925 1 dynamic_cafile_content.go:182] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0415 21:42:13.800945 1 naming_controller.go:302] Shutting down NamingConditionController
* I0415 21:42:13.800955 1 customresource_discovery_controller.go:220] Shutting down DiscoveryController
* I0415 21:42:13.800968 1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
* I0415 21:42:13.800986 1 nonstructuralschema_controller.go:198] Shutting down NonStructuralSchemaConditionController
* I0415 21:42:13.800994 1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
* I0415 21:42:13.801002 1 establishing_controller.go:87] Shutting down EstablishingController
* I0415 21:42:13.801012 1 available_controller.go:399] Shutting down AvailableConditionController
* I0415 21:42:13.801019 1 apiservice_controller.go:106] Shutting down APIServiceRegistrationController
* I0415 21:42:13.801030 1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
* I0415 21:42:13.801036 1 crd_finalizer.go:278] Shutting down CRDFinalizer
* I0415 21:42:13.801043 1 autoregister_controller.go:165] Shutting down autoregister controller
* I0415 21:42:13.801063 1 dynamic_cafile_content.go:182] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0415 21:42:13.801069 1 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0415 21:42:13.801188 1 controller.go:87] Shutting down OpenAPI AggregationController
* I0415 21:42:13.801365 1 tlsconfig.go:255] Shutting down DynamicServingCertificateController
* I0415 21:42:13.801374 1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
* I0415 21:42:13.801386 1 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0415 21:42:13.800929 1 controller.go:181] Shutting down kubernetes service endpoint reconciler
* I0415 21:42:13.801488 1 controller.go:123] Shutting down OpenAPI controller
* E0415 21:42:13.801999 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
* I0415 21:42:13.806471 1 secure_serving.go:222] Stopped listening on [::]:8443
* E0415 21:42:13.818691 1 controller.go:184] Get https://localhost:8443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:8443: connect: connection refused
*
* ==> kube-controller-manager [5e4db3f01008] <==
* I0415 21:16:08.375224 1 shared_informer.go:230] Caches are synced for ReplicaSet
* W0415 21:16:08.415672 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
* I0415 21:16:08.420624 1 shared_informer.go:230] Caches are synced for GC
* I0415 21:16:08.451178 1 shared_informer.go:230] Caches are synced for TTL
* I0415 21:16:08.454762 1 shared_informer.go:230] Caches are synced for daemon sets
* I0415 21:16:08.466429 1 shared_informer.go:230] Caches are synced for stateful set
* I0415 21:16:08.479956 1 shared_informer.go:230] Caches are synced for node
* I0415 21:16:08.480034 1 range_allocator.go:172] Starting range CIDR allocator
* I0415 21:16:08.480038 1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
* I0415 21:16:08.480042 1 shared_informer.go:230] Caches are synced for cidrallocator
* I0415 21:16:08.486275 1 shared_informer.go:230] Caches are synced for taint
* I0415 21:16:08.486330 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
* W0415 21:16:08.486384 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
* I0415 21:16:08.486406 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal.
* I0415 21:16:08.486605 1 taint_manager.go:187] Starting NoExecuteTaintManager
* I0415 21:16:08.486843 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"bce1f664-8000-471e-99c6-d2f40966110d", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
* I0415 21:16:08.500774 1 shared_informer.go:230] Caches are synced for persistent volume
* I0415 21:16:08.546168 1 request.go:621] Throttling request took 1.006932766s, request: GET:https://172.17.0.2:8443/apis/policy/v1beta1?timeout=32s
* I0415 21:16:08.694206 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"4bdde8c4-3770-42a2-89b1-91c5e9c8820a", APIVersion:"apps/v1", ResourceVersion:"307", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-xgm9b
* I0415 21:16:08.696018 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"bf1b6b95-1982-4116-b82e-c6e2d947b217", APIVersion:"apps/v1", ResourceVersion:"251", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-7j7zn
* I0415 21:16:08.754622 1 shared_informer.go:230] Caches are synced for disruption
* I0415 21:16:08.754738 1 disruption.go:339] Sending events to api server.
* I0415 21:16:08.754828 1 shared_informer.go:230] Caches are synced for deployment
* I0415 21:16:08.864953 1 shared_informer.go:230] Caches are synced for attach detach
* I0415 21:16:08.890953 1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
* I0415 21:16:08.916739 1 shared_informer.go:230] Caches are synced for HPA
* I0415 21:16:08.933905 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"f12ff739-d8c5-4b3d-9279-d7cd381546a1", APIVersion:"apps/v1", ResourceVersion:"238", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
* I0415 21:16:08.941256 1 shared_informer.go:230] Caches are synced for resource quota
* I0415 21:16:08.952701 1 shared_informer.go:230] Caches are synced for garbage collector
* I0415 21:16:08.952721 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* I0415 21:16:08.980396 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"475dfd71-df64-40c6-bbe6-3a39accaf1da", APIVersion:"apps/v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-2cgdf
* I0415 21:16:08.980467 1 shared_informer.go:230] Caches are synced for garbage collector
* I0415 21:16:09.163326 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"475dfd71-df64-40c6-bbe6-3a39accaf1da", APIVersion:"apps/v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-hc5xl
* E0415 21:16:09.163803 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
* I0415 21:16:09.189167 1 shared_informer.go:223] Waiting for caches to sync for resource quota
* I0415 21:16:09.189195 1 shared_informer.go:230] Caches are synced for resource quota
* I0415 21:24:41.006750 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"75e2155c-1c48-4ec7-97a4-8821422f13fc", APIVersion:"apps/v1", ResourceVersion:"1562", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-84bfdf55ff to 1
* I0415 21:24:41.224128 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-84bfdf55ff", UID:"664a2978-7f5d-48b7-9698-deaa9c7869cc", APIVersion:"apps/v1", ResourceVersion:"1563", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-84bfdf55ff-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* E0415 21:24:41.282127 1 replica_set.go:535] sync "kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff" failed with pods "dashboard-metrics-scraper-84bfdf55ff-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I0415 21:24:41.282637 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"34b2a4dd-7b3a-47c6-846d-25c7a3df433c", APIVersion:"apps/v1", ResourceVersion:"1566", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-bc446cc64 to 1
* E0415 21:24:41.465278 1 replica_set.go:535] sync "kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff" failed with pods "dashboard-metrics-scraper-84bfdf55ff-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I0415 21:24:41.465419 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-84bfdf55ff", UID:"664a2978-7f5d-48b7-9698-deaa9c7869cc", APIVersion:"apps/v1", ResourceVersion:"1568", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-84bfdf55ff-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I0415 21:24:41.465519 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-bc446cc64", UID:"189cd48e-d574-4872-af0a-05abde6eee5c", APIVersion:"apps/v1", ResourceVersion:"1567", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-bc446cc64-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* E0415 21:24:41.471515 1 replica_set.go:535] sync "kubernetes-dashboard/kubernetes-dashboard-bc446cc64" failed with pods "kubernetes-dashboard-bc446cc64-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* E0415 21:24:41.472034 1 replica_set.go:535] sync "kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff" failed with pods "dashboard-metrics-scraper-84bfdf55ff-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I0415 21:24:41.472045 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-84bfdf55ff", UID:"664a2978-7f5d-48b7-9698-deaa9c7869cc", APIVersion:"apps/v1", ResourceVersion:"1568", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-84bfdf55ff-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* E0415 21:24:41.697081 1 replica_set.go:535] sync "kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff" failed with pods "dashboard-metrics-scraper-84bfdf55ff-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I0415 21:24:41.697116 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-84bfdf55ff", UID:"664a2978-7f5d-48b7-9698-deaa9c7869cc", APIVersion:"apps/v1", ResourceVersion:"1568", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-84bfdf55ff-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* E0415 21:24:41.699112 1 replica_set.go:535] sync "kubernetes-dashboard/kubernetes-dashboard-bc446cc64" failed with pods "kubernetes-dashboard-bc446cc64-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I0415 21:24:41.699441 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-bc446cc64", UID:"189cd48e-d574-4872-af0a-05abde6eee5c", APIVersion:"apps/v1", ResourceVersion:"1573", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-bc446cc64-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* E0415 21:24:41.731409 1 replica_set.go:535] sync "kubernetes-dashboard/kubernetes-dashboard-bc446cc64" failed with pods "kubernetes-dashboard-bc446cc64-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I0415 21:24:41.731448 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-bc446cc64", UID:"189cd48e-d574-4872-af0a-05abde6eee5c", APIVersion:"apps/v1", ResourceVersion:"1573", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-bc446cc64-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* E0415 21:24:41.788296 1 replica_set.go:535] sync "kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff" failed with pods "dashboard-metrics-scraper-84bfdf55ff-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* E0415 21:24:41.788482 1 replica_set.go:535] sync "kubernetes-dashboard/kubernetes-dashboard-bc446cc64" failed with pods "kubernetes-dashboard-bc446cc64-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I0415 21:24:41.788547 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-84bfdf55ff", UID:"664a2978-7f5d-48b7-9698-deaa9c7869cc", APIVersion:"apps/v1", ResourceVersion:"1568", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-84bfdf55ff-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I0415 21:24:41.788653 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-bc446cc64", UID:"189cd48e-d574-4872-af0a-05abde6eee5c", APIVersion:"apps/v1", ResourceVersion:"1573", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-bc446cc64-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I0415 21:24:43.017302 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-bc446cc64", UID:"189cd48e-d574-4872-af0a-05abde6eee5c", APIVersion:"apps/v1", ResourceVersion:"1573", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-bc446cc64-ncxv6
* I0415 21:24:43.017689 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-84bfdf55ff", UID:"664a2978-7f5d-48b7-9698-deaa9c7869cc", APIVersion:"apps/v1", ResourceVersion:"1568", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-84bfdf55ff-6tt8n
* I0415 21:30:03.931601 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-node", UID:"e0fbefe1-d1f1-4f12-aadc-ad41e10c073f", APIVersion:"apps/v1", ResourceVersion:"2321", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-node-677b9cfc6b to 1
* I0415 21:30:04.108056 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-node-677b9cfc6b", UID:"59495628-c3ef-49ca-b3f6-4aa135b80645", APIVersion:"apps/v1", ResourceVersion:"2322", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-node-677b9cfc6b-hbmc6
*
* ==> kube-controller-manager [7225f1e244fe] <==
* I0415 21:43:44.311484 1 cleaner.go:82] Starting CSR cleaner controller
* I0415 21:43:44.372435 1 controllermanager.go:533] Started "ttl"
* W0415 21:43:44.372507 1 controllermanager.go:525] Skipping "root-ca-cert-publisher"
* I0415 21:43:44.372547 1 ttl_controller.go:118] Starting TTL controller
* I0415 21:43:44.372554 1 shared_informer.go:223] Waiting for caches to sync for TTL
* I0415 21:43:44.502055 1 controllermanager.go:533] Started "persistentvolume-expander"
* I0415 21:43:44.502098 1 expand_controller.go:319] Starting expand controller
* I0415 21:43:44.502205 1 shared_informer.go:223] Waiting for caches to sync for expand
* I0415 21:43:44.652324 1 controllermanager.go:533] Started "statefulset"
* I0415 21:43:44.652501 1 stateful_set.go:146] Starting stateful set controller
* I0415 21:43:44.652870 1 shared_informer.go:223] Waiting for caches to sync for stateful set
* I0415 21:43:44.795058 1 request.go:621] Throttling request took 1.042978396s, request: GET:https://172.17.0.2:8443/apis/storage.k8s.io/v1beta1?timeout=32s
* I0415 21:43:44.801584 1 controllermanager.go:533] Started "csrapproving"
* I0415 21:43:44.801800 1 certificate_controller.go:119] Starting certificate controller "csrapproving"
* I0415 21:43:44.801830 1 shared_informer.go:223] Waiting for caches to sync for certificate-csrapproving
* I0415 21:43:44.951643 1 node_lifecycle_controller.go:78] Sending events to api server
* E0415 21:43:44.951686 1 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided
* W0415 21:43:44.951694 1 controllermanager.go:525] Skipping "cloud-node-lifecycle"
* I0415 21:43:44.974341 1 shared_informer.go:230] Caches are synced for TTL
* W0415 21:43:44.974393 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
* I0415 21:43:44.996413 1 shared_informer.go:230] Caches are synced for HPA
* I0415 21:43:45.002292 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
* I0415 21:43:45.002506 1 shared_informer.go:230] Caches are synced for expand
* I0415 21:43:45.010582 1 shared_informer.go:230] Caches are synced for endpoint
* I0415 21:43:45.024545 1 shared_informer.go:230] Caches are synced for namespace
* I0415 21:43:45.026492 1 shared_informer.go:230] Caches are synced for persistent volume
* I0415 21:43:45.026876 1 shared_informer.go:230] Caches are synced for PVC protection
* I0415 21:43:45.029327 1 shared_informer.go:230] Caches are synced for node
* I0415 21:43:45.029350 1 range_allocator.go:172] Starting range CIDR allocator
* I0415 21:43:45.029353 1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
* I0415 21:43:45.029357 1 shared_informer.go:230] Caches are synced for cidrallocator
* I0415 21:43:45.033500 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
* I0415 21:43:45.035014 1 shared_informer.go:230] Caches are synced for PV protection
* I0415 21:43:45.035691 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
* I0415 21:43:45.042172 1 shared_informer.go:230] Caches are synced for endpoint_slice
* I0415 21:43:45.044921 1 shared_informer.go:230] Caches are synced for service account
* I0415 21:43:45.050738 1 shared_informer.go:230] Caches are synced for ReplicaSet
* I0415 21:43:45.058244 1 shared_informer.go:230] Caches are synced for job
* I0415 21:43:45.077118 1 shared_informer.go:230] Caches are synced for daemon sets
* I0415 21:43:45.077936 1 shared_informer.go:230] Caches are synced for taint
* I0415 21:43:45.078073 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
* W0415 21:43:45.078125 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
* I0415 21:43:45.078174 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal.
* I0415 21:43:45.078206 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"bce1f664-8000-471e-99c6-d2f40966110d", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
* I0415 21:43:45.078223 1 taint_manager.go:187] Starting NoExecuteTaintManager
* I0415 21:43:45.080332 1 shared_informer.go:230] Caches are synced for GC
* I0415 21:43:45.106803 1 shared_informer.go:230] Caches are synced for ReplicationController
* I0415 21:43:45.230631 1 shared_informer.go:230] Caches are synced for deployment
* I0415 21:43:45.246109 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
* I0415 21:43:45.253320 1 shared_informer.go:230] Caches are synced for stateful set
* I0415 21:43:45.376871 1 shared_informer.go:230] Caches are synced for attach detach
* I0415 21:43:45.563744 1 shared_informer.go:230] Caches are synced for disruption
* I0415 21:43:45.563768 1 disruption.go:339] Sending events to api server.
* I0415 21:43:45.604765 1 shared_informer.go:230] Caches are synced for resource quota
* I0415 21:43:45.645255 1 shared_informer.go:230] Caches are synced for bootstrap_signer
* I0415 21:43:45.646434 1 shared_informer.go:230] Caches are synced for garbage collector
* I0415 21:43:45.650576 1 shared_informer.go:230] Caches are synced for garbage collector
* I0415 21:43:45.650612 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* I0415 21:43:46.053258 1 shared_informer.go:223] Waiting for caches to sync for resource quota
* I0415 21:43:46.053304 1 shared_informer.go:230] Caches are synced for resource quota
*
* ==> kube-proxy [b46c015c832c] <==
* W0415 21:43:21.646057 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
* I0415 21:43:21.777005 1 node.go:136] Successfully retrieved node IP: 172.17.0.2
* I0415 21:43:21.777062 1 server_others.go:186] Using iptables Proxier.
* I0415 21:43:21.777516 1 server.go:583] Version: v1.18.0
* I0415 21:43:21.777829 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I0415 21:43:21.777899 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0415 21:43:21.777947 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0415 21:43:21.778285 1 config.go:315] Starting service config controller
* I0415 21:43:21.778328 1 shared_informer.go:223] Waiting for caches to sync for service config
* I0415 21:43:21.778343 1 config.go:133] Starting endpoints config controller
* I0415 21:43:21.778350 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
* I0415 21:43:21.878573 1 shared_informer.go:230] Caches are synced for service config
* I0415 21:43:21.878826 1 shared_informer.go:230] Caches are synced for endpoints config
*
* ==> kube-proxy [ddecb6626bf5] <==
* W0415 21:16:15.679712 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
* I0415 21:16:15.804658 1 node.go:136] Successfully retrieved node IP: 172.17.0.2
* I0415 21:16:15.804682 1 server_others.go:186] Using iptables Proxier.
* I0415 21:16:15.804865 1 server.go:583] Version: v1.18.0
* I0415 21:16:15.805198 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I0415 21:16:15.805278 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0415 21:16:15.805318 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0415 21:16:15.806092 1 config.go:133] Starting endpoints config controller
* I0415 21:16:15.806109 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
* I0415 21:16:15.806877 1 config.go:315] Starting service config controller
* I0415 21:16:15.806888 1 shared_informer.go:223] Waiting for caches to sync for service config
* I0415 21:16:15.906303 1 shared_informer.go:230] Caches are synced for endpoints config
* I0415 21:16:15.910494 1 shared_informer.go:230] Caches are synced for service config
* E0415 21:42:13.805645 1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://172.17.0.2:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3237&timeout=6m37s&timeoutSeconds=397&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused
* E0415 21:42:13.805708 1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://172.17.0.2:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3887&timeout=8m56s&timeoutSeconds=536&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused
*
* ==> kube-scheduler [98862ecc7ce6] <==
* I0415 21:43:07.244700 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0415 21:43:07.244755 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0415 21:43:07.822128 1 serving.go:313] Generated self-signed cert in-memory
* W0415 21:43:11.455878 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W0415 21:43:11.455895 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W0415 21:43:11.455901 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
* W0415 21:43:11.455905 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0415 21:43:11.497051 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0415 21:43:11.497067 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* W0415 21:43:11.501177 1 authorization.go:47] Authorization is disabled
* W0415 21:43:11.501193 1 authentication.go:40] Authentication is disabled
* I0415 21:43:11.501202 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0415 21:43:11.503637 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0415 21:43:11.503772 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0415 21:43:11.503784 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0415 21:43:11.503824 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* I0415 21:43:11.604023 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0415 21:43:11.604096 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
* I0415 21:43:30.643403 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
*
* ==> kube-scheduler [d4057e9b6067] <==
* I0415 21:15:32.783650 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0415 21:15:32.783962 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0415 21:15:33.137564 1 serving.go:313] Generated self-signed cert in-memory
* W0415 21:15:37.304692 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W0415 21:15:37.304706 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W0415 21:15:37.304724 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
* W0415 21:15:37.304729 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0415 21:15:37.333460 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0415 21:15:37.333648 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* W0415 21:15:37.334882 1 authorization.go:47] Authorization is disabled
* W0415 21:15:37.335043 1 authentication.go:40] Authentication is disabled
* I0415 21:15:37.335108 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0415 21:15:37.342107 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0415 21:15:37.342135 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0415 21:15:37.342198 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0415 21:15:37.344237 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E0415 21:15:37.346034 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0415 21:15:37.346334 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0415 21:15:37.350213 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0415 21:15:37.350698 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0415 21:15:37.350883 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0415 21:15:37.353105 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0415 21:15:37.353292 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0415 21:15:37.353445 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0415 21:15:37.353543 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0415 21:15:37.353638 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0415 21:15:37.354143 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0415 21:15:37.354388 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0415 21:15:37.356247 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0415 21:15:37.357629 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0415 21:15:37.359845 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0415 21:15:37.360991 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0415 21:15:37.362834 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0415 21:15:37.365400 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0415 21:15:39.171474 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0415 21:15:39.411722 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0415 21:15:39.523649 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0415 21:15:39.591412 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0415 21:15:39.615241 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0415 21:15:39.789271 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0415 21:15:40.076198 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0415 21:15:40.308745 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0415 21:15:40.360949 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0415 21:15:43.408829 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0415 21:15:44.394195 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* I0415 21:15:52.646385 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0415 21:15:55.654155 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
* I0415 21:15:55.885257 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
* E0415 21:42:13.802960 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-scheduler: Get https://172.17.0.2:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: unexpected EOF
* E0415 21:42:13.803210 1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://172.17.0.2:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=23&timeout=8m56s&timeoutSeconds=536&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused
* E0415 21:42:13.803317 1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://172.17.0.2:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=3727&timeout=7m26s&timeoutSeconds=446&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused
* E0415 21:42:13.803371 1 reflector.go:380] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to watch *v1.Pod: Get https://172.17.0.2:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=2437&timeoutSeconds=551&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused
* E0415 21:42:13.803409 1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://172.17.0.2:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=318&timeout=9m51s&timeoutSeconds=591&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused
* E0415 21:42:13.803520 1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://172.17.0.2:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=3237&timeout=7m10s&timeoutSeconds=430&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused
* E0415 21:42:13.803552 1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://172.17.0.2:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=5m7s&timeoutSeconds=307&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused
* E0415 21:42:13.803599 1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://172.17.0.2:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=5m7s&timeoutSeconds=307&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused
* E0415 21:42:13.803957 1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://172.17.0.2:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=9m43s&timeoutSeconds=583&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused
* E0415 21:42:13.804724 1 reflector.go:380] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://172.17.0.2:8443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=1560&timeout=8m47s&timeoutSeconds=527&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused
*
* ==> kubelet <==
* -- Logs begin at Wed 2020-04-15 21:42:45 UTC, end at Wed 2020-04-15 21:55:53 UTC. --
* Apr 15 21:43:11 minikube kubelet[665]: I0415 21:43:11.596573 665 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Apr 15 21:43:11 minikube kubelet[665]: I0415 21:43:11.619435 665 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-4f2tn" (UniqueName: "kubernetes.io/secret/5a72d807-3f43-47d0-b0b8-d1ae80502d92-coredns-token-4f2tn") pod "coredns-66bff467f8-hc5xl" (UID: "5a72d807-3f43-47d0-b0b8-d1ae80502d92")
* Apr 15 21:43:11 minikube kubelet[665]: I0415 21:43:11.619659 665 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5a72d807-3f43-47d0-b0b8-d1ae80502d92-config-volume") pod "coredns-66bff467f8-hc5xl" (UID: "5a72d807-3f43-47d0-b0b8-d1ae80502d92")
* Apr 15 21:43:11 minikube kubelet[665]: I0415 21:43:11.679450 665 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Apr 15 21:43:11 minikube kubelet[665]: I0415 21:43:11.720226 665 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/b390a7c6-595e-486e-8e91-ee7286300566-cni-cfg") pod "kindnet-xgm9b" (UID: "b390a7c6-595e-486e-8e91-ee7286300566")
* Apr 15 21:43:11 minikube kubelet[665]: I0415 21:43:11.720359 665 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/b390a7c6-595e-486e-8e91-ee7286300566-lib-modules") pod "kindnet-xgm9b" (UID: "b390a7c6-595e-486e-8e91-ee7286300566")
* Apr 15 21:43:11 minikube kubelet[665]: I0415 21:43:11.720432 665 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/b390a7c6-595e-486e-8e91-ee7286300566-xtables-lock") pod "kindnet-xgm9b" (UID: "b390a7c6-595e-486e-8e91-ee7286300566")
* Apr 15 21:43:11 minikube kubelet[665]: I0415 21:43:11.720498 665 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-5qtks" (UniqueName: "kubernetes.io/secret/b390a7c6-595e-486e-8e91-ee7286300566-kindnet-token-5qtks") pod "kindnet-xgm9b" (UID: "b390a7c6-595e-486e-8e91-ee7286300566")
* Apr 15 21:43:11 minikube kubelet[665]: I0415 21:43:11.763541 665 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Apr 15 21:43:11 minikube kubelet[665]: I0415 21:43:11.820770 665 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/77da2b30-04b8-464c-bf55-317929d65824-config-volume") pod "coredns-66bff467f8-2cgdf" (UID: "77da2b30-04b8-464c-bf55-317929d65824")
* Apr 15 21:43:11 minikube kubelet[665]: I0415 21:43:11.820904 665 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-4f2tn" (UniqueName: "kubernetes.io/secret/77da2b30-04b8-464c-bf55-317929d65824-coredns-token-4f2tn") pod "coredns-66bff467f8-2cgdf" (UID: "77da2b30-04b8-464c-bf55-317929d65824")
* Apr 15 21:43:11 minikube kubelet[665]: I0415 21:43:11.904903 665 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Apr 15 21:43:11 minikube kubelet[665]: I0415 21:43:11.916786 665 kubelet_node_status.go:112] Node minikube was previously registered
* Apr 15 21:43:11 minikube kubelet[665]: I0415 21:43:11.921188 665 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/bce690c6-2811-4c7c-b2d2-34834e340fc4-xtables-lock") pod "kube-proxy-7j7zn" (UID: "bce690c6-2811-4c7c-b2d2-34834e340fc4")
* Apr 15 21:43:11 minikube kubelet[665]: I0415 21:43:11.921342 665 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/bce690c6-2811-4c7c-b2d2-34834e340fc4-lib-modules") pod "kube-proxy-7j7zn" (UID: "bce690c6-2811-4c7c-b2d2-34834e340fc4")
* Apr 15 21:43:11 minikube kubelet[665]: I0415 21:43:11.921424 665 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-nrghf" (UniqueName: "kubernetes.io/secret/bce690c6-2811-4c7c-b2d2-34834e340fc4-kube-proxy-token-nrghf") pod "kube-proxy-7j7zn" (UID: "bce690c6-2811-4c7c-b2d2-34834e340fc4")
* Apr 15 21:43:11 minikube kubelet[665]: I0415 21:43:11.921501 665 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/bce690c6-2811-4c7c-b2d2-34834e340fc4-kube-proxy") pod "kube-proxy-7j7zn" (UID: "bce690c6-2811-4c7c-b2d2-34834e340fc4")
* Apr 15 21:43:12 minikube kubelet[665]: I0415 21:43:12.021873 665 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-glzq9" (UniqueName: "kubernetes.io/secret/5eea7787-bfe3-405b-aa4d-2a7399eaa453-kubernetes-dashboard-token-glzq9") pod "kubernetes-dashboard-bc446cc64-ncxv6" (UID: "5eea7787-bfe3-405b-aa4d-2a7399eaa453")
* Apr 15 21:43:12 minikube kubelet[665]: I0415 21:43:12.022063 665 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/5eea7787-bfe3-405b-aa4d-2a7399eaa453-tmp-volume") pod "kubernetes-dashboard-bc446cc64-ncxv6" (UID: "5eea7787-bfe3-405b-aa4d-2a7399eaa453")
* Apr 15 21:43:12 minikube kubelet[665]: I0415 21:43:12.089111 665 kubelet_node_status.go:73] Successfully registered node minikube
* Apr 15 21:43:12 minikube kubelet[665]: I0415 21:43:12.089279 665 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Apr 15 21:43:12 minikube kubelet[665]: I0415 21:43:12.222729 665 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-glzq9" (UniqueName: "kubernetes.io/secret/1012917c-193b-4f46-9d56-2309b5be1ace-kubernetes-dashboard-token-glzq9") pod "dashboard-metrics-scraper-84bfdf55ff-6tt8n" (UID: "1012917c-193b-4f46-9d56-2309b5be1ace")
* Apr 15 21:43:12 minikube kubelet[665]: I0415 21:43:12.223135 665 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/1012917c-193b-4f46-9d56-2309b5be1ace-tmp-volume") pod "dashboard-metrics-scraper-84bfdf55ff-6tt8n" (UID: "1012917c-193b-4f46-9d56-2309b5be1ace")
* Apr 15 21:43:12 minikube kubelet[665]: I0415 21:43:12.363790 665 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Apr 15 21:43:12 minikube kubelet[665]: I0415 21:43:12.524406 665 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-5hx2m" (UniqueName: "kubernetes.io/secret/a993277a-54ed-4ad0-a8a1-69084cacf020-default-token-5hx2m") pod "hello-node-677b9cfc6b-hbmc6" (UID: "a993277a-54ed-4ad0-a8a1-69084cacf020")
* Apr 15 21:43:12 minikube kubelet[665]: E0415 21:43:12.619886 665 secret.go:195] Couldn't get secret kube-system/storage-provisioner-token-kgh5w: failed to sync secret cache: timed out waiting for the condition
* Apr 15 21:43:12 minikube kubelet[665]: E0415 21:43:12.620024 665 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/c67c709e-1ca3-4338-a57f-56249f0ab992-storage-provisioner-token-kgh5w podName:c67c709e-1ca3-4338-a57f-56249f0ab992 nodeName:}" failed. No retries permitted until 2020-04-15 21:43:13.119955122 +0000 UTC m=+14.280100145 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"storage-provisioner-token-kgh5w\" (UniqueName: \"kubernetes.io/secret/c67c709e-1ca3-4338-a57f-56249f0ab992-storage-provisioner-token-kgh5w\") pod \"storage-provisioner\" (UID: \"c67c709e-1ca3-4338-a57f-56249f0ab992\") : failed to sync secret cache: timed out waiting for the condition"
* Apr 15 21:43:12 minikube kubelet[665]: I0415 21:43:12.725110 665 reconciler.go:157] Reconciler: start to sync state
* Apr 15 21:43:14 minikube kubelet[665]: E0415 21:43:14.126991 665 secret.go:195] Couldn't get secret kube-system/storage-provisioner-token-kgh5w: failed to sync secret cache: timed out waiting for the condition
* Apr 15 21:43:14 minikube kubelet[665]: E0415 21:43:14.127094 665 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/c67c709e-1ca3-4338-a57f-56249f0ab992-storage-provisioner-token-kgh5w podName:c67c709e-1ca3-4338-a57f-56249f0ab992 nodeName:}" failed. No retries permitted until 2020-04-15 21:43:15.127073316 +0000 UTC m=+16.287218339 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"storage-provisioner-token-kgh5w\" (UniqueName: \"kubernetes.io/secret/c67c709e-1ca3-4338-a57f-56249f0ab992-storage-provisioner-token-kgh5w\") pod \"storage-provisioner\" (UID: \"c67c709e-1ca3-4338-a57f-56249f0ab992\") : failed to sync secret cache: timed out waiting for the condition"
* Apr 15 21:43:19 minikube kubelet[665]: W0415 21:43:19.549032 665 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-hc5xl through plugin: invalid network status for
* Apr 15 21:43:19 minikube kubelet[665]: W0415 21:43:19.551935 665 pod_container_deletor.go:77] Container "ea82106baaf1eb3d2443d750a44f31448437e8548d25a791904e186d656ca691" not found in pod's containers
* Apr 15 21:43:20 minikube kubelet[665]: W0415 21:43:20.337006 665 pod_container_deletor.go:77] Container "a078a8214664f317a6e3e7dda70d738357008387490a7b9741230d95a11f31e4" not found in pod's containers
* Apr 15 21:43:21 minikube kubelet[665]: W0415 21:43:21.048604 665 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-2cgdf through plugin: invalid network status for
* Apr 15 21:43:21 minikube kubelet[665]: W0415 21:43:21.274449 665 pod_container_deletor.go:77] Container "9201c4d54dd1ff7d67822717a6a3be7a309a4042bcf047dae28fbda008bddc26" not found in pod's containers
* Apr 15 21:43:21 minikube kubelet[665]: E0415 21:43:21.280153 665 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
* Apr 15 21:43:21 minikube kubelet[665]: E0415 21:43:21.280223 665 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
* Apr 15 21:43:25 minikube kubelet[665]: W0415 21:43:25.395446 665 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-6tt8n through plugin: invalid network status for
* Apr 15 21:43:25 minikube kubelet[665]: W0415 21:43:25.633722 665 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-bc446cc64-ncxv6 through plugin: invalid network status for
* Apr 15 21:43:25 minikube kubelet[665]: W0415 21:43:25.635584 665 pod_container_deletor.go:77] Container "56393d9d8aac8ec3bd201650f3b8e9f565fe66711bb23edfa5e04846196b8e54" not found in pod's containers
* Apr 15 21:43:25 minikube kubelet[665]: W0415 21:43:25.642087 665 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-2cgdf through plugin: invalid network status for
* Apr 15 21:43:25 minikube kubelet[665]: W0415 21:43:25.647050 665 pod_container_deletor.go:77] Container "4a7e6d176760f0a81fc2480160884feb8c61b59ec3fc945bd1cbb45e43ad15c7" not found in pod's containers
* Apr 15 21:43:25 minikube kubelet[665]: W0415 21:43:25.889091 665 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-node-677b9cfc6b-hbmc6 through plugin: invalid network status for
* Apr 15 21:43:25 minikube kubelet[665]: W0415 21:43:25.898319 665 pod_container_deletor.go:77] Container "f0419c67941690a529b6e05fcbc7fbd7e89c3753fcf438f3e124fdd8b298281c" not found in pod's containers
* Apr 15 21:43:25 minikube kubelet[665]: W0415 21:43:25.901565 665 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-6tt8n through plugin: invalid network status for
* Apr 15 21:43:25 minikube kubelet[665]: W0415 21:43:25.906905 665 pod_container_deletor.go:77] Container "df45ff0b9c4f4b250ace26469d03c1afb49011f84fc18015105eacc4d32a57f9" not found in pod's containers
* Apr 15 21:43:26 minikube kubelet[665]: W0415 21:43:26.917304 665 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-bc446cc64-ncxv6 through plugin: invalid network status for
* Apr 15 21:43:27 minikube kubelet[665]: W0415 21:43:27.133736 665 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-2cgdf through plugin: invalid network status for
* Apr 15 21:43:27 minikube kubelet[665]: W0415 21:43:27.144536 665 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-node-677b9cfc6b-hbmc6 through plugin: invalid network status for
* Apr 15 21:43:27 minikube kubelet[665]: W0415 21:43:27.151824 665 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-6tt8n through plugin: invalid network status for
* Apr 15 21:43:28 minikube kubelet[665]: W0415 21:43:28.097695 665 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-hc5xl through plugin: invalid network status for
* Apr 15 21:43:29 minikube kubelet[665]: W0415 21:43:29.165104 665 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-node-677b9cfc6b-hbmc6 through plugin: invalid network status for
* Apr 15 21:43:29 minikube kubelet[665]: W0415 21:43:29.180764 665 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-6tt8n through plugin: invalid network status for
* Apr 15 21:43:29 minikube kubelet[665]: W0415 21:43:29.201297 665 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-bc446cc64-ncxv6 through plugin: invalid network status for
* Apr 15 21:43:31 minikube kubelet[665]: E0415 21:43:31.290519 665 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
* Apr 15 21:43:31 minikube kubelet[665]: E0415 21:43:31.290566 665 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
* Apr 15 21:43:41 minikube kubelet[665]: E0415 21:43:41.309681 665 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
* Apr 15 21:43:41 minikube kubelet[665]: E0415 21:43:41.309736 665 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
* Apr 15 21:43:51 minikube kubelet[665]: E0415 21:43:51.318446 665 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
* Apr 15 21:43:51 minikube kubelet[665]: E0415 21:43:51.318501 665 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
*
* ==> kubernetes-dashboard [64b5cece638e] <==
* 2020/04/15 21:42:04 [2020-04-15T21:42:04Z] Incoming HTTP/1.1 GET /api/v1/persistentvolumeclaim/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:34642:
* 2020/04/15 21:42:04 Getting list persistent volumes claims
* 2020/04/15 21:42:04 [2020-04-15T21:42:04Z] Outcoming response to 172.18.0.1:34642 with 200 status code
* 2020/04/15 21:42:04 [2020-04-15T21:42:04Z] Incoming HTTP/1.1 GET /api/v1/secret/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:34642:
* 2020/04/15 21:42:04 Getting list of secrets in &{[default]} namespace
* 2020/04/15 21:42:04 [2020-04-15T21:42:04Z] Outcoming response to 172.18.0.1:34642 with 200 status code
* 2020/04/15 21:42:05 [2020-04-15T21:42:05Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.18.0.1:34642:
* 2020/04/15 21:42:05 Getting list of namespaces
* 2020/04/15 21:42:05 [2020-04-15T21:42:05Z] Outcoming response to 172.18.0.1:34642 with 200 status code
* 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Incoming HTTP/1.1 GET /api/v1/cronjob/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:34642:
* 2020/04/15 21:42:09 Getting list of all cron jobs in the cluster
* 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Outcoming response to 172.18.0.1:34642 with 200 status code
* 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Incoming HTTP/1.1 GET /api/v1/daemonset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:35090:
* 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Outcoming response to 172.18.0.1:35090 with 200 status code
* 2020/04/15
21:42:09 [2020-04-15T21:42:09Z] Incoming HTTP/1.1 GET /api/v1/deployment/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:35090: * 2020/04/15 21:42:09 Getting list of all deployments in the cluster * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Incoming HTTP/1.1 GET /api/v1/statefulset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:35132: * 2020/04/15 21:42:09 Getting list of all pet sets in the cluster * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Incoming HTTP/1.1 GET /api/v1/job/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:34642: * 2020/04/15 21:42:09 Getting list of all jobs in the cluster * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Incoming HTTP/1.1 GET /api/v1/replicationcontroller/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:35128: * 2020/04/15 21:42:09 Getting list of all replication controllers in the cluster * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Incoming HTTP/1.1 GET /api/v1/replicaset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:35130: * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Incoming HTTP/1.1 GET /api/v1/pod/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:35126: * 2020/04/15 21:42:09 Getting list of all pods in the cluster * 2020/04/15 21:42:09 received 0 resources from sidecar instead of 1 * 2020/04/15 21:42:09 Getting list of all replica sets in the cluster * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Outcoming response to 172.18.0.1:35128 with 200 status code * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Outcoming response to 172.18.0.1:34642 with 200 status code * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Outcoming response to 172.18.0.1:35132 with 200 status code * 2020/04/15 21:42:09 received 0 resources from sidecar instead of 1 * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Outcoming response to 172.18.0.1:35090 with 200 status code * 2020/04/15 21:42:09 received 0 resources from sidecar instead of 1 * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Incoming HTTP/1.1 GET /api/v1/ingress/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:34642: * 2020/04/15 21:42:09 received 0 resources from sidecar instead of 1 * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:35128: * 2020/04/15 21:42:09 Getting list of all services in the cluster * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Incoming HTTP/1.1 GET /api/v1/persistentvolumeclaim/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:35136: * 2020/04/15 21:42:09 Getting list persistent volumes claims * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Outcoming response to 172.18.0.1:34642 with 200 status code * 2020/04/15 21:42:09 received 0 resources from sidecar instead of 1 * 2020/04/15 21:42:09 received 0 resources from sidecar instead of 1 * 2020/04/15 21:42:09 Getting pod metrics * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Outcoming response to 172.18.0.1:35128 with 200 status code * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Outcoming response to 172.18.0.1:35136 with 200 status code * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Incoming HTTP/1.1 GET /api/v1/configmap/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:34642: * 2020/04/15 21:42:09 Getting list config maps in the namespace default * 2020/04/15 
21:42:09 [2020-04-15T21:42:09Z] Outcoming response to 172.18.0.1:35130 with 200 status code * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Outcoming response to 172.18.0.1:34642 with 200 status code * 2020/04/15 21:42:09 received 0 resources from sidecar instead of 1 * 2020/04/15 21:42:09 received 0 resources from sidecar instead of 1 * 2020/04/15 21:42:09 Skipping metric because of error: Metric label not set. * 2020/04/15 21:42:09 Skipping metric because of error: Metric label not set. * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Outcoming response to 172.18.0.1:35126 with 200 status code * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Incoming HTTP/1.1 GET /api/v1/secret/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:35136: * 2020/04/15 21:42:09 Getting list of secrets in &{[default]} namespace * 2020/04/15 21:42:09 [2020-04-15T21:42:09Z] Outcoming response to 172.18.0.1:35136 with 200 status code * 2020/04/15 21:42:10 [2020-04-15T21:42:10Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.18.0.1:35136: * 2020/04/15 21:42:10 Getting list of namespaces * 2020/04/15 21:42:10 [2020-04-15T21:42:10Z] Outcoming response to 172.18.0.1:35136 with 200 status code * * ==> kubernetes-dashboard [f6523b8a29ab] <== * 2020/04/15 21:55:05 [2020-04-15T21:55:05Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:48112: * 2020/04/15 21:55:05 Getting list of all services in the cluster * 2020/04/15 21:55:05 [2020-04-15T21:55:05Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:05 [2020-04-15T21:55:05Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.18.0.1:48112: * 2020/04/15 21:55:05 Getting list of namespaces * 2020/04/15 21:55:05 [2020-04-15T21:55:05Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:10 [2020-04-15T21:55:10Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:48112: * 2020/04/15 21:55:10 Getting list of all services in the cluster * 2020/04/15 21:55:10 [2020-04-15T21:55:10Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:11 [2020-04-15T21:55:11Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.18.0.1:48112: * 2020/04/15 21:55:11 Getting list of namespaces * 2020/04/15 21:55:11 [2020-04-15T21:55:11Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:15 [2020-04-15T21:55:15Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:48112: * 2020/04/15 21:55:15 Getting list of all services in the cluster * 2020/04/15 21:55:15 [2020-04-15T21:55:15Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:16 [2020-04-15T21:55:16Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.18.0.1:48112: * 2020/04/15 21:55:16 Getting list of namespaces * 2020/04/15 21:55:16 [2020-04-15T21:55:16Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:20 [2020-04-15T21:55:20Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:48112: * 2020/04/15 21:55:20 Getting list of all services in the cluster * 2020/04/15 21:55:20 [2020-04-15T21:55:20Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:21 [2020-04-15T21:55:21Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 
172.18.0.1:48112: * 2020/04/15 21:55:21 Getting list of namespaces * 2020/04/15 21:55:21 [2020-04-15T21:55:21Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:25 [2020-04-15T21:55:25Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:48112: * 2020/04/15 21:55:25 Getting list of all services in the cluster * 2020/04/15 21:55:25 [2020-04-15T21:55:25Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:26 [2020-04-15T21:55:26Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.18.0.1:48112: * 2020/04/15 21:55:26 Getting list of namespaces * 2020/04/15 21:55:26 [2020-04-15T21:55:26Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:30 [2020-04-15T21:55:30Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:48112: * 2020/04/15 21:55:30 Getting list of all services in the cluster * 2020/04/15 21:55:30 [2020-04-15T21:55:30Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:31 [2020-04-15T21:55:31Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.18.0.1:48112: * 2020/04/15 21:55:31 Getting list of namespaces * 2020/04/15 21:55:31 [2020-04-15T21:55:31Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:35 [2020-04-15T21:55:35Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:48112: * 2020/04/15 21:55:35 Getting list of all services in the cluster * 2020/04/15 21:55:35 [2020-04-15T21:55:35Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:36 [2020-04-15T21:55:36Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.18.0.1:48112: * 2020/04/15 21:55:36 Getting list of namespaces * 2020/04/15 21:55:36 [2020-04-15T21:55:36Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:40 [2020-04-15T21:55:40Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:48112: * 2020/04/15 21:55:40 Getting list of all services in the cluster * 2020/04/15 21:55:41 [2020-04-15T21:55:41Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:41 [2020-04-15T21:55:41Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.18.0.1:48112: * 2020/04/15 21:55:41 Getting list of namespaces * 2020/04/15 21:55:41 [2020-04-15T21:55:41Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:45 [2020-04-15T21:55:45Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:48112: * 2020/04/15 21:55:45 Getting list of all services in the cluster * 2020/04/15 21:55:45 [2020-04-15T21:55:45Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:46 [2020-04-15T21:55:46Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.18.0.1:48112: * 2020/04/15 21:55:46 Getting list of namespaces * 2020/04/15 21:55:46 [2020-04-15T21:55:46Z] Outcoming response to 172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:50 [2020-04-15T21:55:50Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.18.0.1:48112: * 2020/04/15 21:55:50 Getting list of all services in the cluster * 2020/04/15 21:55:50 [2020-04-15T21:55:50Z] Outcoming response to 
172.18.0.1:48112 with 200 status code * 2020/04/15 21:55:51 [2020-04-15T21:55:51Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.18.0.1:48112: * 2020/04/15 21:55:51 Getting list of namespaces * 2020/04/15 21:55:51 [2020-04-15T21:55:51Z] Outcoming response to 172.18.0.1:48112 with 200 status code * * ==> storage-provisioner [6b6acba1c074] <== * * ==> storage-provisioner [ad4066447290] <== * E0415 21:42:13.812523 1 reflector.go:315] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:412: Failed to watch *v1.PersistentVolume: Get https://10.96.0.1:443/api/v1/persistentvolumes?resourceVersion=1&timeoutSeconds=533&watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused * E0415 21:42:13.817572 1 reflector.go:315] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:379: Failed to watch *v1.StorageClass: Get https://10.96.0.1:443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=318&timeoutSeconds=586&watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused * E0415 21:42:13.817758 1 reflector.go:315] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:411: Failed to watch *v1.PersistentVolumeClaim: Get https://10.96.0.1:443/api/v1/persistentvolumeclaims?resourceVersion=1&timeoutSeconds=592&watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused
Kartik494 commented 4 years ago

@Zerotask Can you provide the status of the pod? I think this may be caused by the pod not being in a ready state.
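For anyone following along, a quick way to check this, assuming the `hello-node` deployment from the tutorial (`kubectl create deployment` labels its pods `app=hello-node`):

```shell
# Show the pod's STATUS and READY columns (e.g. Running, 1/1)
kubectl get pods -l app=hello-node

# Show pod details and recent events, useful if the pod is not ready
kubectl describe pods -l app=hello-node
```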

Zerotask commented 4 years ago

@Kartik494 The status of the pod is Running

Here's a screenshot of the Kubernetes Dashboard

grafik
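A useful next check when the pod is `Running` but the service never responds is whether the Service actually has endpoints (object names as in the tutorial):

```shell
# Show the service's TYPE, PORT(S) and EXTERNAL-IP
kubectl get service hello-node

# An empty ENDPOINTS column means the service selector matches no ready pods
kubectl get endpoints hello-node
```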

prasadkatti commented 4 years ago

For issues related to minikube, you might get a better response if you move the issue to https://github.com/kubernetes/minikube

RA489 commented 4 years ago

@Zerotask The image `gcr.io/hello-minikube-zero-install/hello-node` used by the tutorial's `kubectl create deployment hello-node` step has been deprecated. As a workaround, please use `kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4` instead.
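Put together, recreating the tutorial objects with the replacement image looks roughly like this (the `--type=LoadBalancer --port=8080` values below are the ones the hello-minikube tutorial uses for this service):

```shell
# Remove the objects created from the deprecated image
kubectl delete service hello-node
kubectl delete deployment hello-node

# Recreate the deployment with the echoserver image and expose it again
kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment hello-node --type=LoadBalancer --port=8080

# Open the service in a browser
minikube service hello-node
```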

tengqm commented 4 years ago

resolved.

/close

k8s-ci-robot commented 4 years ago

@tengqm: Closing this issue.

In response to [this](https://github.com/kubernetes/website/issues/20364#issuecomment-643725012):

> resolved.
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.