kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Ingress addon stopped working with the 'none' VM driver starting from v1.12.x #9322

Closed · storozhilov closed this issue 3 years ago

storozhilov commented 4 years ago

Hi all, unfortunately the ingress addon stopped working with the 'none' VM driver starting from v1.12.x; v1.11.0 works fine. Please help. Thanks in advance! Ilya

P.S. Great product by the way :+1: Thanks!

Steps to reproduce the issue (a scripted sketch follows the list):

  1. Spin up an EC2 instance running Linux and install minikube v1.12.x or later
  2. sudo minikube start --vm-driver=none
  3. sudo chown -Rv $USER $HOME/.kube $HOME/.minikube
  4. sudo minikube addons enable ingress

Full output of failed command:

W0925 13:45:35.036604   17384 root.go:252] Error reading config file at /home/ubuntu/.minikube/config/config.json: open /home/ubuntu/.minikube/config/config.json: no such file or directory
I0925 13:45:35.037178   17384 addons.go:55] Setting ingress=true in profile "minikube"
I0925 13:45:35.037227   17384 addons.go:131] Setting addon ingress=true in "minikube"
I0925 13:45:35.041719   17384 out.go:109] 

W0925 13:45:35.041851   17384 out.go:145] ❌  Exiting due to MK_USAGE: Due to networking limitations of driver none, ingress addon is not supported. Try using a different driver.
❌  Exiting due to MK_USAGE: Due to networking limitations of driver none, ingress addon is not supported. Try using a different driver.
I0925 13:45:35.045174   17384 out.go:109]
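
The exit message itself points at the workaround: use a driver other than 'none'. A minimal sketch, assuming the Docker driver recommended in the start output below is acceptable for this use case and Docker is usable by the unprivileged user:

```
# Tear down the 'none'-driver cluster, then recreate it with the docker driver,
# which supports the ingress addon.
sudo minikube delete
minikube start --driver=docker
minikube addons enable ingress

# Rough check that the ingress controller came up (its namespace varies by minikube version).
kubectl get pods -A | grep -i ingress
```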

Full output of minikube start command used, if not already included:

😄  minikube v1.13.1 on Ubuntu 18.04 (xen/amd64)
✨  Using the none driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🤹  Running on localhost (CPUs=2, Memory=3933MB, Disk=15817MB) ...
ℹ️  OS release is Ubuntu 18.04.5 LTS
🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.6 ...
    ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
    > kubeadm.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    > kubectl.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    > kubelet.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    > kubeadm: 37.30 MiB / 37.30 MiB [---------------] 100.00% 87.86 MiB p/s 1s
    > kubectl: 41.01 MiB / 41.01 MiB [---------------] 100.00% 62.77 MiB p/s 1s
    > kubelet: 104.88 MiB / 104.88 MiB [-------------] 100.00% 66.09 MiB p/s 2s
🤹  Configuring local host environment ...

❗  The 'none' driver is designed for experts who need to integrate with an existing VM
💡  Most users should use the newer 'docker' driver instead, which does not require root!
📘  For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/

❗  kubectl and minikube configuration will be stored in /home/ubuntu
❗  To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:

    ▪ sudo mv /home/ubuntu/.kube /home/ubuntu/.minikube $HOME
    ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube

💡  This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube" by default
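
As the output above notes, the manual chown step from the repro can be replaced by setting CHANGE_MINIKUBE_NONE_USER before starting. A sketch, assuming sudo is allowed to pass the variable through with -E:

```
# Let minikube chown its kubeconfig and profile to the invoking user automatically.
export CHANGE_MINIKUBE_NONE_USER=true
sudo -E minikube start --vm-driver=none   # -E preserves the exported variable for root
```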

Optional: Full output of minikube logs command:

``` ==> Docker <== -- Logs begin at Fri 2020-09-25 13:24:27 UTC, end at Fri 2020-09-25 13:49:01 UTC. -- Sep 25 13:36:54 ip-172-31-74-0 systemd[1]: Starting Docker Application Container Engine... Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.574325799Z" level=info msg="Starting up" Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.575822579Z" level=info msg="detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf" Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.611722240Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.611755153Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.611780709Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.611794211Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.613415549Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.613452428Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.613477011Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.613495082Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.636860299Z" level=warning msg="Your kernel does not support swap memory limit" Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.638206138Z" level=warning msg="Your kernel does not support cgroup rt period" Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.638352075Z" level=warning msg="Your kernel does not support cgroup rt runtime" Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.638459566Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.638480972Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.638655763Z" level=info msg="Loading containers: start." Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.785315664Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 25 13:36:54 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:54.841978302Z" level=info msg="Loading containers: done." 
Sep 25 13:36:55 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:55.187075300Z" level=info msg="Docker daemon" commit=369ce74a3c graphdriver(s)=overlay2 version=19.03.6 Sep 25 13:36:55 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:55.187228460Z" level=info msg="Daemon has completed initialization" Sep 25 13:36:55 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:36:55.211801226Z" level=info msg="API listen on /var/run/docker.sock" Sep 25 13:36:55 ip-172-31-74-0 systemd[1]: Started Docker Application Container Engine. Sep 25 13:37:48 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:37:48.498716162Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap." Sep 25 13:42:11 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:42:11.182916166Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 25 13:42:11 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:42:11.204467397Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 25 13:42:11 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:42:11.213361873Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 25 13:42:11 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:42:11.237210172Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 25 13:42:11 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:42:11.253984713Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 25 13:42:11 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:42:11.276496696Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 25 13:42:11 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:42:11.281320097Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 25 13:42:11 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:42:11.286363403Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 25 13:42:11 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:42:11.295453038Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 25 13:42:11 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:42:11.300774236Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 25 13:42:11 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:42:11.314848596Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 25 13:42:11 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:42:11.319608475Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 25 13:42:11 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:42:11.484556909Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 25 13:42:15 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:42:15.912873147Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 25 
13:42:33 ip-172-31-74-0 dockerd[10685]: time="2020-09-25T13:42:33.118251745Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap." ==> container status <== sudo: crictl: command not found CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 1004c9073fd5 bfe3a36ebd25 "/coredns -conf /etc…" 6 minutes ago Up 6 minutes k8s_coredns_coredns-f9fd979d6-vjprc_kube-system_989896f1-1362-4721-a4c1-5bdce0c72acf_1 048c485515eb bad58561c4be "/storage-provisioner" 6 minutes ago Up 6 minutes k8s_storage-provisioner_storage-provisioner_kube-system_f84588a6-9cfb-4249-81dd-c387aa64d1c0_1 dd2342ded528 k8s.gcr.io/pause:3.2 "/pause" 6 minutes ago Up 6 minutes k8s_POD_coredns-f9fd979d6-vjprc_kube-system_989896f1-1362-4721-a4c1-5bdce0c72acf_1 86c42595f925 k8s.gcr.io/pause:3.2 "/pause" 6 minutes ago Up 6 minutes k8s_POD_storage-provisioner_kube-system_f84588a6-9cfb-4249-81dd-c387aa64d1c0_1 1eeedba84e04 d373dd5a8593 "/usr/local/bin/kube…" 6 minutes ago Up 6 minutes k8s_kube-proxy_kube-proxy-rqmsf_kube-system_5f68ff80-1bb6-47d7-8ed7-a6edc4922090_1 0ec5a3b67458 k8s.gcr.io/pause:3.2 "/pause" 6 minutes ago Up 6 minutes k8s_POD_kube-proxy-rqmsf_kube-system_5f68ff80-1bb6-47d7-8ed7-a6edc4922090_1 2bcde9d74ebf 2f32d66b884f "kube-scheduler --au…" 6 minutes ago Up 6 minutes k8s_kube-scheduler_kube-scheduler-ip-172-31-74-0_kube-system_ff7d12f9e4f14e202a85a7c5534a3129_1 5c602b363b24 8603821e1a7a "kube-controller-man…" 6 minutes ago Up 6 minutes k8s_kube-controller-manager_kube-controller-manager-ip-172-31-74-0_kube-system_dcc127c185c80a61d90d8e659e768641_1 bb0f7876b8ba 607331163122 "kube-apiserver --ad…" 6 minutes ago Up 6 minutes k8s_kube-apiserver_kube-apiserver-ip-172-31-74-0_kube-system_cf6de9da3c8186402e51071005040791_1 660b84fb611a 0369cf4303ff "etcd --advertise-cl…" 6 minutes ago Up 6 minutes k8s_etcd_etcd-ip-172-31-74-0_kube-system_312cf54d1239047f9e7ce8ca45cac0ec_1 6c003808d5e5 k8s.gcr.io/pause:3.2 "/pause" 6 minutes ago Up 6 minutes k8s_POD_kube-scheduler-ip-172-31-74-0_kube-system_ff7d12f9e4f14e202a85a7c5534a3129_1 40c665c46fea k8s.gcr.io/pause:3.2 "/pause" 6 minutes ago Up 6 minutes k8s_POD_kube-controller-manager-ip-172-31-74-0_kube-system_dcc127c185c80a61d90d8e659e768641_1 944d2f7e8dbd k8s.gcr.io/pause:3.2 "/pause" 6 minutes ago Up 6 minutes k8s_POD_kube-apiserver-ip-172-31-74-0_kube-system_cf6de9da3c8186402e51071005040791_1 5b7fce72b1d0 k8s.gcr.io/pause:3.2 "/pause" 6 minutes ago Up 6 minutes k8s_POD_etcd-ip-172-31-74-0_kube-system_312cf54d1239047f9e7ce8ca45cac0ec_1 f108d98198e9 gcr.io/k8s-minikube/storage-provisioner "/storage-provisioner" 11 minutes ago Exited (2) 6 minutes ago k8s_storage-provisioner_storage-provisioner_kube-system_f84588a6-9cfb-4249-81dd-c387aa64d1c0_0 f192c703e060 k8s.gcr.io/pause:3.2 "/pause" 11 minutes ago Exited (0) 6 minutes ago k8s_POD_storage-provisioner_kube-system_f84588a6-9cfb-4249-81dd-c387aa64d1c0_0 ee2f87366001 bfe3a36ebd25 "/coredns -conf /etc…" 11 minutes ago Exited (0) 6 minutes ago k8s_coredns_coredns-f9fd979d6-vjprc_kube-system_989896f1-1362-4721-a4c1-5bdce0c72acf_0 75a7148d12a9 k8s.gcr.io/pause:3.2 "/pause" 11 minutes ago Exited (0) 6 minutes ago k8s_POD_coredns-f9fd979d6-vjprc_kube-system_989896f1-1362-4721-a4c1-5bdce0c72acf_0 8800a0283a4a d373dd5a8593 "/usr/local/bin/kube…" 11 minutes ago Exited (2) 6 minutes ago k8s_kube-proxy_kube-proxy-rqmsf_kube-system_5f68ff80-1bb6-47d7-8ed7-a6edc4922090_0 56ea084dfdfc k8s.gcr.io/pause:3.2 "/pause" 11 minutes ago Exited (0) 6 minutes ago 
k8s_POD_kube-proxy-rqmsf_kube-system_5f68ff80-1bb6-47d7-8ed7-a6edc4922090_0 fa5e540758ff 607331163122 "kube-apiserver --ad…" 11 minutes ago Exited (0) 6 minutes ago k8s_kube-apiserver_kube-apiserver-ip-172-31-74-0_kube-system_cf6de9da3c8186402e51071005040791_0 f4266ae0bb67 8603821e1a7a "kube-controller-man…" 11 minutes ago Exited (2) 6 minutes ago k8s_kube-controller-manager_kube-controller-manager-ip-172-31-74-0_kube-system_dcc127c185c80a61d90d8e659e768641_0 368757e6ed64 2f32d66b884f "kube-scheduler --au…" 11 minutes ago Exited (2) 6 minutes ago k8s_kube-scheduler_kube-scheduler-ip-172-31-74-0_kube-system_ff7d12f9e4f14e202a85a7c5534a3129_0 16207c3fa1d7 0369cf4303ff "etcd --advertise-cl…" 11 minutes ago Exited (0) 6 minutes ago k8s_etcd_etcd-ip-172-31-74-0_kube-system_312cf54d1239047f9e7ce8ca45cac0ec_0 acf9f4717c18 k8s.gcr.io/pause:3.2 "/pause" 11 minutes ago Exited (0) 6 minutes ago k8s_POD_kube-scheduler-ip-172-31-74-0_kube-system_ff7d12f9e4f14e202a85a7c5534a3129_0 7cf81ceddcbe k8s.gcr.io/pause:3.2 "/pause" 11 minutes ago Exited (0) 6 minutes ago k8s_POD_kube-controller-manager-ip-172-31-74-0_kube-system_dcc127c185c80a61d90d8e659e768641_0 88b7afcab64d k8s.gcr.io/pause:3.2 "/pause" 11 minutes ago Exited (0) 6 minutes ago k8s_POD_kube-apiserver-ip-172-31-74-0_kube-system_cf6de9da3c8186402e51071005040791_0 15b79d8472f5 k8s.gcr.io/pause:3.2 "/pause" 11 minutes ago Exited (0) 6 minutes ago k8s_POD_etcd-ip-172-31-74-0_kube-system_312cf54d1239047f9e7ce8ca45cac0ec_0 ==> coredns [1004c9073fd5] <== .:53 [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 CoreDNS-1.7.0 linux/amd64, go1.14.4, f59c03d ==> coredns [ee2f87366001] <== .:53 [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 CoreDNS-1.7.0 linux/amd64, go1.14.4, f59c03d [INFO] SIGTERM: Shutting down servers then terminating [INFO] plugin/health: Going into lameduck mode for 5s ==> describe nodes <== Name: ip-172-31-74-0 Roles: master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=ip-172-31-74-0 kubernetes.io/os=linux minikube.k8s.io/commit=1fd1f67f338cbab4b3e5a6e4c71c551f522ca138-dirty minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_09_25T13_37_34_0700 minikube.k8s.io/version=v1.13.1 node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Fri, 25 Sep 2020 13:37:31 +0000 Taints: Unschedulable: false Lease: HolderIdentity: ip-172-31-74-0 AcquireTime: RenewTime: Fri, 25 Sep 2020 13:49:00 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Fri, 25 Sep 2020 13:47:30 +0000 Fri, 25 Sep 2020 13:37:28 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Fri, 25 Sep 2020 13:47:30 +0000 Fri, 25 Sep 2020 13:37:28 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Fri, 25 Sep 2020 13:47:30 +0000 Fri, 25 Sep 2020 13:37:28 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Fri, 25 Sep 2020 13:47:30 +0000 Fri, 25 Sep 2020 13:37:45 +0000 KubeletReady kubelet is posting ready status. 
AppArmor enabled Addresses: InternalIP: 172.31.74.0 Hostname: ip-172-31-74-0 Capacity: cpu: 2 ephemeral-storage: 16197480Ki hugepages-2Mi: 0 memory: 4028180Ki pods: 110 Allocatable: cpu: 2 ephemeral-storage: 16197480Ki hugepages-2Mi: 0 memory: 4028180Ki pods: 110 System Info: Machine ID: d3c0e006b82a48f9bd3e29b40bbefa0a System UUID: ec2e2c7a-f238-bc7b-3ed5-a0959bb7c1e3 Boot ID: 61a63aed-2612-4ff9-bf8e-b656c9bf2ef9 Kernel Version: 5.3.0-1035-aws OS Image: Ubuntu 18.04.5 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://19.3.6 Kubelet Version: v1.19.2 Kube-Proxy Version: v1.19.2 Non-terminated Pods: (7 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- kube-system coredns-f9fd979d6-vjprc 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 11m kube-system etcd-ip-172-31-74-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m kube-system kube-apiserver-ip-172-31-74-0 250m (12%) 0 (0%) 0 (0%) 0 (0%) 11m kube-system kube-controller-manager-ip-172-31-74-0 200m (10%) 0 (0%) 0 (0%) 0 (0%) 11m kube-system kube-proxy-rqmsf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m kube-system kube-scheduler-ip-172-31-74-0 100m (5%) 0 (0%) 0 (0%) 0 (0%) 11m kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 650m (32%) 0 (0%) memory 70Mi (1%) 170Mi (4%) ephemeral-storage 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientMemory 11m (x5 over 11m) kubelet Node ip-172-31-74-0 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 11m (x5 over 11m) kubelet Node ip-172-31-74-0 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 11m (x4 over 11m) kubelet Node ip-172-31-74-0 status is now: NodeHasSufficientPID Normal Starting 11m kubelet Starting kubelet. Normal NodeHasSufficientMemory 11m kubelet Node ip-172-31-74-0 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 11m kubelet Node ip-172-31-74-0 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 11m kubelet Node ip-172-31-74-0 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods Normal Starting 11m kube-proxy Starting kube-proxy. Normal NodeReady 11m kubelet Node ip-172-31-74-0 status is now: NodeReady Normal Starting 6m39s kubelet Starting kubelet. Normal NodeHasSufficientMemory 6m39s (x8 over 6m39s) kubelet Node ip-172-31-74-0 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6m39s (x8 over 6m39s) kubelet Node ip-172-31-74-0 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 6m39s (x7 over 6m39s) kubelet Node ip-172-31-74-0 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6m39s kubelet Updated Node Allocatable limit across pods Normal Starting 6m30s kube-proxy Starting kube-proxy. ==> dmesg <== [Sep25 13:24] Cannot get hvm parameter CONSOLE_EVTCHN (18): -22! [ +0.908590] cpu 0 spinlock event irq 53 [ +0.028501] cpu 1 spinlock event irq 59 [ +0.303483] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. [ +0.192061] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug, * this clock source is slow. 
Consider trying other clock sources [ +1.037332] Grant table initialized [ +0.004408] Cannot get hvm parameter CONSOLE_EVTCHN (18): -22! [ +0.324744] platform eisa.0: EISA: Cannot allocate resource for mainboard [ +0.007565] platform eisa.0: Cannot allocate resource for EISA slot 1 [ +0.008085] platform eisa.0: Cannot allocate resource for EISA slot 2 [ +0.008294] platform eisa.0: Cannot allocate resource for EISA slot 3 [ +0.013258] platform eisa.0: Cannot allocate resource for EISA slot 4 [ +0.025411] platform eisa.0: Cannot allocate resource for EISA slot 5 [ +0.009960] platform eisa.0: Cannot allocate resource for EISA slot 6 [ +0.016467] platform eisa.0: Cannot allocate resource for EISA slot 7 [ +0.010821] platform eisa.0: Cannot allocate resource for EISA slot 8 [ +10.918606] new mount options do not match the existing superblock, will be ignored [ +5.430982] kauditd_printk_skb: 5 callbacks suppressed [Sep25 13:34] kauditd_printk_skb: 1 callbacks suppressed [Sep25 13:36] kauditd_printk_skb: 6 callbacks suppressed [ +0.043752] Started bpfilter ==> etcd [16207c3fa1d7] <== 2020-09-25 13:37:25.709159 I | embed: election = 1000ms 2020-09-25 13:37:25.709164 I | embed: snapshot count = 10000 2020-09-25 13:37:25.709171 I | embed: advertise client URLs = https://172.31.74.0:2379 2020-09-25 13:37:25.740635 I | etcdserver: starting member 761f71dd8316c35b in cluster 4c8d62048056ea3 raft2020/09/25 13:37:25 INFO: 761f71dd8316c35b switched to configuration voters=() raft2020/09/25 13:37:25 INFO: 761f71dd8316c35b became follower at term 0 raft2020/09/25 13:37:25 INFO: newRaft 761f71dd8316c35b [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] raft2020/09/25 13:37:25 INFO: 761f71dd8316c35b became follower at term 1 raft2020/09/25 13:37:25 INFO: 761f71dd8316c35b switched to configuration voters=(8511647016954544987) 2020-09-25 13:37:25.743335 W | auth: simple token is not cryptographically signed 2020-09-25 13:37:25.748296 I | etcdserver: starting server... 
[version: 3.4.13, cluster version: to_be_decided] 2020-09-25 13:37:25.755354 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-09-25 13:37:25.755536 I | embed: listening for metrics on http://127.0.0.1:2381 raft2020/09/25 13:37:25 INFO: 761f71dd8316c35b switched to configuration voters=(8511647016954544987) 2020-09-25 13:37:25.755854 I | embed: listening for peers on 172.31.74.0:2380 2020-09-25 13:37:25.756010 I | etcdserver: 761f71dd8316c35b as single-node; fast-forwarding 9 ticks (election ticks 10) 2020-09-25 13:37:25.756106 I | etcdserver/membership: added member 761f71dd8316c35b [https://172.31.74.0:2380] to cluster 4c8d62048056ea3 raft2020/09/25 13:37:26 INFO: 761f71dd8316c35b is starting a new election at term 1 raft2020/09/25 13:37:26 INFO: 761f71dd8316c35b became candidate at term 2 raft2020/09/25 13:37:26 INFO: 761f71dd8316c35b received MsgVoteResp from 761f71dd8316c35b at term 2 raft2020/09/25 13:37:26 INFO: 761f71dd8316c35b became leader at term 2 raft2020/09/25 13:37:26 INFO: raft.node: 761f71dd8316c35b elected leader 761f71dd8316c35b at term 2 2020-09-25 13:37:26.095224 I | etcdserver: published {Name:ip-172-31-74-0 ClientURLs:[https://172.31.74.0:2379]} to cluster 4c8d62048056ea3 2020-09-25 13:37:26.117586 I | etcdserver: setting up the initial cluster version to 3.4 2020-09-25 13:37:26.147149 I | embed: ready to serve client requests 2020-09-25 13:37:26.160447 I | embed: serving client requests on 172.31.74.0:2379 2020-09-25 13:37:26.308834 I | embed: ready to serve client requests 2020-09-25 13:37:26.363149 N | etcdserver/membership: set the initial cluster version to 3.4 2020-09-25 13:37:26.379205 I | etcdserver/api: enabled capabilities for version 3.4 2020-09-25 13:37:26.537317 I | embed: serving client requests on 127.0.0.1:2379 2020-09-25 13:37:43.895070 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:37:51.352214 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:38:01.352196 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:38:11.352112 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:38:21.352178 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:38:31.352168 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:38:41.352184 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:38:51.352158 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:39:01.352211 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:39:11.352103 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:39:21.352197 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:39:31.352132 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:39:41.351997 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:39:51.352195 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:40:01.352006 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:40:11.352395 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:40:21.352139 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:40:31.352118 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:40:41.352197 I | etcdserver/api/etcdhttp: /health OK (status code 200) 
2020-09-25 13:40:51.352167 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:41:01.352377 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:41:11.352207 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:41:21.352186 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:41:31.352118 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:41:41.352310 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:41:51.352039 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:42:01.352382 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:42:10.890546 N | pkg/osutil: received terminated signal, shutting down... WARNING: 2020/09/25 13:42:10 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 2020-09-25 13:42:10.915899 I | etcdserver: skipped leadership transfer for single voting member cluster ==> etcd [660b84fb611a] <== raft2020/09/25 13:42:24 INFO: newRaft 761f71dd8316c35b [peers: [], term: 2, commit: 681, applied: 0, lastindex: 681, lastterm: 2] 2020-09-25 13:42:24.627245 W | auth: simple token is not cryptographically signed 2020-09-25 13:42:24.634855 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided] raft2020/09/25 13:42:24 INFO: 761f71dd8316c35b switched to configuration voters=(8511647016954544987) 2020-09-25 13:42:24.635619 I | etcdserver/membership: added member 761f71dd8316c35b [https://172.31.74.0:2380] to cluster 4c8d62048056ea3 2020-09-25 13:42:24.635705 N | etcdserver/membership: set the initial cluster version to 3.4 2020-09-25 13:42:24.635740 I | etcdserver/api: enabled capabilities for version 3.4 2020-09-25 13:42:24.639832 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-09-25 13:42:24.639998 I | embed: listening for metrics on http://127.0.0.1:2381 2020-09-25 13:42:24.640081 I | embed: listening for peers on 172.31.74.0:2380 raft2020/09/25 13:42:26 INFO: 761f71dd8316c35b is starting a new election at term 2 raft2020/09/25 13:42:26 INFO: 761f71dd8316c35b became candidate at term 3 raft2020/09/25 13:42:26 INFO: 761f71dd8316c35b received MsgVoteResp from 761f71dd8316c35b at term 3 raft2020/09/25 13:42:26 INFO: 761f71dd8316c35b became leader at term 3 raft2020/09/25 13:42:26 INFO: raft.node: 761f71dd8316c35b elected leader 761f71dd8316c35b at term 3 2020-09-25 13:42:26.031732 I | etcdserver: published {Name:ip-172-31-74-0 ClientURLs:[https://172.31.74.0:2379]} to cluster 4c8d62048056ea3 2020-09-25 13:42:26.031846 I | embed: ready to serve client requests 2020-09-25 13:42:26.032735 I | embed: ready to serve client requests 2020-09-25 13:42:26.034516 I | embed: serving client requests on 127.0.0.1:2379 2020-09-25 13:42:26.037928 I | embed: serving client requests on 172.31.74.0:2379 2020-09-25 13:42:37.876529 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:42:41.333052 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:42:51.333178 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:43:01.333369 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:43:11.332951 I | etcdserver/api/etcdhttp: /health OK (status 
code 200) 2020-09-25 13:43:21.333005 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:43:31.333112 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:43:41.333087 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:43:51.333142 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:44:01.333198 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:44:11.333121 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:44:21.333171 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:44:31.333204 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:44:41.333104 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:44:51.333288 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:45:01.332969 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:45:11.333094 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:45:21.333118 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:45:31.333092 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:45:41.333167 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:45:51.333190 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:46:01.333077 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:46:11.333143 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:46:21.333342 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:46:31.333326 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:46:41.333104 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:46:51.333235 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:47:01.333094 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:47:11.333144 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:47:21.333016 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:47:31.333161 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:47:41.333085 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:47:51.333124 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:48:01.333202 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:48:11.333177 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:48:21.332987 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:48:31.333078 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:48:41.333089 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:48:51.333129 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-09-25 13:49:01.333312 I | etcdserver/api/etcdhttp: /health OK (status code 200) ==> kernel <== 13:49:02 up 24 min, 1 user, load average: 0.32, 0.42, 0.32 Linux ip-172-31-74-0 5.3.0-1035-aws #37-Ubuntu SMP Sun Sep 6 01:17:09 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 18.04.5 LTS" ==> kube-apiserver [bb0f7876b8ba] <== I0925 13:42:29.941227 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController I0925 13:42:29.941250 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0925 13:42:29.941277 1 crd_finalizer.go:266] Starting CRDFinalizer I0925 
13:42:29.941675 1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key I0925 13:42:29.941812 1 apiservice_controller.go:97] Starting APIServiceRegistrationController I0925 13:42:29.941909 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0925 13:42:29.942006 1 available_controller.go:404] Starting AvailableConditionController I0925 13:42:29.942103 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0925 13:42:29.942212 1 controller.go:83] Starting OpenAPI AggregationController I0925 13:42:29.942826 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0925 13:42:29.942937 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller I0925 13:42:29.943051 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0925 13:42:29.943142 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister I0925 13:42:29.940668 1 customresource_discovery_controller.go:209] Starting DiscoveryController I0925 13:42:30.018415 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt I0925 13:42:30.019947 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt E0925 13:42:30.073996 1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service I0925 13:42:30.140890 1 cache.go:39] Caches are synced for autoregister controller I0925 13:42:30.142090 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0925 13:42:30.151205 1 cache.go:39] Caches are synced for AvailableConditionController controller I0925 13:42:30.151412 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I0925 13:42:30.151539 1 shared_informer.go:247] Caches are synced for crd-autoregister I0925 13:42:30.153158 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io I0925 13:42:30.939358 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0925 13:42:30.939396 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0925 13:42:30.944165 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist. 
I0925 13:42:31.549680 1 controller.go:606] quota admission added evaluator for: serviceaccounts I0925 13:42:31.568528 1 controller.go:606] quota admission added evaluator for: deployments.apps I0925 13:42:31.603431 1 controller.go:606] quota admission added evaluator for: daemonsets.apps I0925 13:42:31.617042 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0925 13:42:31.622531 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0925 13:42:32.315580 1 controller.go:606] quota admission added evaluator for: endpoints I0925 13:42:36.616643 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io I0925 13:43:08.660929 1 client.go:360] parsed scheme: "passthrough" I0925 13:43:08.660978 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0925 13:43:08.660989 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0925 13:43:46.526131 1 client.go:360] parsed scheme: "passthrough" I0925 13:43:46.526367 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0925 13:43:46.526391 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0925 13:44:28.577666 1 client.go:360] parsed scheme: "passthrough" I0925 13:44:28.577718 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0925 13:44:28.577729 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0925 13:45:12.749971 1 client.go:360] parsed scheme: "passthrough" I0925 13:45:12.750018 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0925 13:45:12.750056 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0925 13:45:51.848827 1 client.go:360] parsed scheme: "passthrough" I0925 13:45:51.848880 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0925 13:45:51.848891 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0925 13:46:34.167815 1 client.go:360] parsed scheme: "passthrough" I0925 13:46:34.167862 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0925 13:46:34.167872 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0925 13:47:14.125254 1 client.go:360] parsed scheme: "passthrough" I0925 13:47:14.125469 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0925 13:47:14.125575 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0925 13:47:50.241154 1 client.go:360] parsed scheme: "passthrough" I0925 13:47:50.241231 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0925 13:47:50.241248 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0925 13:48:32.597742 1 client.go:360] parsed scheme: "passthrough" I0925 13:48:32.597933 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0925 13:48:32.597980 1 clientconn.go:948] ClientConn switching balancer to "pick_first" ==> kube-apiserver [fa5e540758ff] <== W0925 13:42:10.906222 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
W0925 13:42:10.906261 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0925 13:42:10.906295 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0925 13:42:10.906331 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0925 13:42:10.906368 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0925 13:42:10.906582 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... I0925 13:42:10.909283 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.909420 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.909494 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.909560 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.909623 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.909692 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.909762 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.909884 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.909947 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.910010 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.910076 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.910251 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.910377 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.910504 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.910616 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.910734 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.910800 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.910913 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.910983 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.911126 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop 
back to repick W0925 13:42:10.911362 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0925 13:42:10.911631 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0925 13:42:10.911713 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... I0925 13:42:10.912105 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.912180 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.912247 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.912313 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.912420 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.912487 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.912550 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.912613 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.912681 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.912745 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.912808 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.912841 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.912989 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.913118 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.913236 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick W0925 13:42:10.913419 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0925 13:42:10.913470 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0925 13:42:10.913511 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0925 13:42:10.913550 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
I0925 13:42:10.913665 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.913735 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.913801 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.913868 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.913933 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.913997 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.914144 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.914210 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.914294 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.914363 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I0925 13:42:10.914407 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick W0925 13:42:10.914525 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... ==> kube-controller-manager [5c602b363b24] <== I0925 13:42:35.708339 1 controllermanager.go:549] Started "pv-protection" I0925 13:42:35.708466 1 pv_protection_controller.go:83] Starting PV protection controller I0925 13:42:35.708481 1 shared_informer.go:240] Waiting for caches to sync for PV protection I0925 13:42:35.858379 1 controllermanager.go:549] Started "daemonset" I0925 13:42:35.858524 1 daemon_controller.go:285] Starting daemon sets controller I0925 13:42:35.858533 1 shared_informer.go:240] Waiting for caches to sync for daemon sets I0925 13:42:36.008626 1 controllermanager.go:549] Started "deployment" I0925 13:42:36.008724 1 deployment_controller.go:153] Starting deployment controller I0925 13:42:36.008755 1 shared_informer.go:240] Waiting for caches to sync for deployment I0925 13:42:36.158119 1 node_lifecycle_controller.go:77] Sending events to api server E0925 13:42:36.158159 1 core.go:230] failed to start cloud node lifecycle controller: no cloud provider provided W0925 13:42:36.158324 1 controllermanager.go:541] Skipping "cloud-node-lifecycle" I0925 13:42:36.308130 1 controllermanager.go:549] Started "csrapproving" I0925 13:42:36.308194 1 certificate_controller.go:118] Starting certificate controller "csrapproving" I0925 13:42:36.308349 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving I0925 13:42:36.458181 1 controllermanager.go:549] Started "ttl" I0925 13:42:36.458247 1 ttl_controller.go:118] Starting TTL controller I0925 13:42:36.458799 1 shared_informer.go:240] Waiting for caches to sync for TTL I0925 13:42:36.466853 1 shared_informer.go:240] Waiting for caches to sync for resource quota W0925 13:42:36.475415 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="ip-172-31-74-0" does not exist I0925 13:42:36.504426 1 shared_informer.go:247] Caches are synced for ReplicaSet I0925 13:42:36.508585 1 shared_informer.go:247] Caches are synced for 
certificate-csrapproving I0925 13:42:36.508796 1 shared_informer.go:247] Caches are synced for deployment I0925 13:42:36.509135 1 shared_informer.go:247] Caches are synced for PV protection I0925 13:42:36.509149 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving I0925 13:42:36.509164 1 shared_informer.go:247] Caches are synced for GC I0925 13:42:36.509285 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I0925 13:42:36.509964 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I0925 13:42:36.510443 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I0925 13:42:36.517553 1 shared_informer.go:247] Caches are synced for namespace I0925 13:42:36.519951 1 shared_informer.go:247] Caches are synced for disruption I0925 13:42:36.520128 1 disruption.go:339] Sending events to api server. I0925 13:42:36.527261 1 shared_informer.go:247] Caches are synced for bootstrap_signer I0925 13:42:36.533152 1 shared_informer.go:247] Caches are synced for expand I0925 13:42:36.537177 1 shared_informer.go:247] Caches are synced for HPA I0925 13:42:36.553131 1 shared_informer.go:247] Caches are synced for service account I0925 13:42:36.558549 1 shared_informer.go:247] Caches are synced for ReplicationController I0925 13:42:36.558564 1 shared_informer.go:247] Caches are synced for daemon sets I0925 13:42:36.558696 1 shared_informer.go:247] Caches are synced for persistent volume I0925 13:42:36.558823 1 shared_informer.go:247] Caches are synced for TTL I0925 13:42:36.558583 1 shared_informer.go:247] Caches are synced for PVC protection I0925 13:42:36.559427 1 shared_informer.go:247] Caches are synced for taint I0925 13:42:36.559635 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: W0925 13:42:36.559801 1 node_lifecycle_controller.go:1044] Missing timestamp for Node ip-172-31-74-0. Assuming now as a timestamp. I0925 13:42:36.560913 1 node_lifecycle_controller.go:1245] Controller detected that zone is now in state Normal. I0925 13:42:36.561094 1 taint_manager.go:187] Starting NoExecuteTaintManager I0925 13:42:36.561457 1 event.go:291] "Event occurred" object="ip-172-31-74-0" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ip-172-31-74-0 event: Registered Node ip-172-31-74-0 in Controller" I0925 13:42:36.561609 1 shared_informer.go:247] Caches are synced for stateful set I0925 13:42:36.611174 1 shared_informer.go:247] Caches are synced for endpoint_slice I0925 13:42:36.620689 1 shared_informer.go:247] Caches are synced for job I0925 13:42:36.658831 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I0925 13:42:36.658949 1 shared_informer.go:247] Caches are synced for endpoint I0925 13:42:36.667046 1 shared_informer.go:247] Caches are synced for resource quota I0925 13:42:36.696655 1 shared_informer.go:247] Caches are synced for resource quota I0925 13:42:36.746037 1 shared_informer.go:247] Caches are synced for attach detach I0925 13:42:36.808573 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator I0925 13:42:36.815069 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0925 13:42:37.108320 1 shared_informer.go:247] Caches are synced for garbage collector I0925 13:42:37.108364 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage I0925 13:42:37.115338 1 shared_informer.go:247] Caches are synced for garbage collector ==> kube-controller-manager [f4266ae0bb67] <== I0925 13:37:39.682010 1 shared_informer.go:240] Waiting for caches to sync for TTL I0925 13:37:39.931716 1 controllermanager.go:549] Started "bootstrapsigner" I0925 13:37:39.931827 1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer I0925 13:37:40.181661 1 controllermanager.go:549] Started "tokencleaner" I0925 13:37:40.181917 1 tokencleaner.go:118] Starting token cleaner controller I0925 13:37:40.181936 1 shared_informer.go:240] Waiting for caches to sync for token_cleaner I0925 13:37:40.181943 1 shared_informer.go:247] Caches are synced for token_cleaner I0925 13:37:40.431556 1 controllermanager.go:549] Started "pv-protection" I0925 13:37:40.431767 1 shared_informer.go:240] Waiting for caches to sync for resource quota I0925 13:37:40.431818 1 pv_protection_controller.go:83] Starting PV protection controller I0925 13:37:40.431824 1 shared_informer.go:240] Waiting for caches to sync for PV protection W0925 13:37:40.453391 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="ip-172-31-74-0" does not exist I0925 13:37:40.462853 1 shared_informer.go:247] Caches are synced for certificate-csrapproving I0925 13:37:40.476262 1 shared_informer.go:247] Caches are synced for expand I0925 13:37:40.481768 1 shared_informer.go:247] Caches are synced for ReplicationController I0925 13:37:40.481845 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator I0925 13:37:40.481777 1 shared_informer.go:247] Caches are synced for stateful set I0925 13:37:40.481991 1 shared_informer.go:247] Caches are synced for PVC protection I0925 13:37:40.482471 1 shared_informer.go:247] Caches are synced for TTL E0925 13:37:40.496506 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again E0925 13:37:40.497156 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again I0925 13:37:40.509129 1 shared_informer.go:247] Caches are synced for persistent volume I0925 13:37:40.531263 1 shared_informer.go:247] Caches are synced for attach detach I0925 13:37:40.531898 1 shared_informer.go:247] Caches are synced for bootstrap_signer I0925 13:37:40.532015 1 shared_informer.go:247] Caches are synced for service account I0925 13:37:40.532298 1 shared_informer.go:247] Caches are synced for ReplicaSet I0925 13:37:40.532383 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving I0925 13:37:40.532430 1 shared_informer.go:247] Caches are synced for GC I0925 13:37:40.532460 1 shared_informer.go:247] Caches are synced for PV protection I0925 13:37:40.532484 1 shared_informer.go:247] Caches are synced for taint I0925 13:37:40.532543 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: W0925 13:37:40.532592 1 node_lifecycle_controller.go:1044] Missing timestamp for Node ip-172-31-74-0. Assuming now as a timestamp. I0925 13:37:40.532662 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. 
Entering master disruption mode. I0925 13:37:40.532849 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I0925 13:37:40.533247 1 shared_informer.go:247] Caches are synced for disruption I0925 13:37:40.533360 1 disruption.go:339] Sending events to api server. I0925 13:37:40.534048 1 taint_manager.go:187] Starting NoExecuteTaintManager I0925 13:37:40.534203 1 shared_informer.go:247] Caches are synced for daemon sets I0925 13:37:40.539800 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I0925 13:37:40.545402 1 shared_informer.go:247] Caches are synced for namespace I0925 13:37:40.553995 1 event.go:291] "Event occurred" object="ip-172-31-74-0" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ip-172-31-74-0 event: Registered Node ip-172-31-74-0 in Controller" I0925 13:37:40.554048 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I0925 13:37:40.555154 1 shared_informer.go:247] Caches are synced for job I0925 13:37:40.570625 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-ip-172-31-74-0" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready" I0925 13:37:40.581853 1 shared_informer.go:247] Caches are synced for endpoint I0925 13:37:40.585296 1 shared_informer.go:247] Caches are synced for deployment I0925 13:37:40.614494 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rqmsf" I0925 13:37:40.614535 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 1" I0925 13:37:40.634008 1 shared_informer.go:247] Caches are synced for endpoint_slice I0925 13:37:40.634047 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I0925 13:37:40.656119 1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-vjprc" E0925 13:37:40.678014 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"3e225167-5abe-42d9-bd55-2e4d3e1ce93c", ResourceVersion:"225", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63736637854, loc:(*time.Location)(0x6a59c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000ec8cc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000ec8ce0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000ec8d00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000ecab40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000ec8d20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000ec8d40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000ec8d80)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000f70e40), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000171608), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000a4b340), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), 
TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0002fdab0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000171708)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again I0925 13:37:40.731665 1 shared_informer.go:247] Caches are synced for HPA I0925 13:37:40.743461 1 shared_informer.go:247] Caches are synced for resource quota I0925 13:37:40.745323 1 shared_informer.go:247] Caches are synced for resource quota I0925 13:37:40.799280 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0925 13:37:41.082111 1 shared_informer.go:247] Caches are synced for garbage collector I0925 13:37:41.082135 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0925 13:37:41.099438 1 shared_informer.go:247] Caches are synced for garbage collector I0925 13:37:45.532972 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode. ==> kube-proxy [1eeedba84e04] <== I0925 13:42:32.588763 1 node.go:136] Successfully retrieved node IP: 172.31.74.0 I0925 13:42:32.588831 1 server_others.go:111] kube-proxy node IP is an IPv4 address (172.31.74.0), assume IPv4 operation W0925 13:42:32.630275 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy I0925 13:42:32.630355 1 server_others.go:186] Using iptables Proxier. W0925 13:42:32.630367 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined I0925 13:42:32.630372 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local I0925 13:42:32.630620 1 server.go:650] Version: v1.19.2 I0925 13:42:32.630997 1 conntrack.go:52] Setting nf_conntrack_max to 131072 I0925 13:42:32.631739 1 config.go:315] Starting service config controller I0925 13:42:32.631753 1 shared_informer.go:240] Waiting for caches to sync for service config I0925 13:42:32.631772 1 config.go:224] Starting endpoint slice config controller I0925 13:42:32.631777 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0925 13:42:32.731943 1 shared_informer.go:247] Caches are synced for endpoint slice config I0925 13:42:32.732003 1 shared_informer.go:247] Caches are synced for service config ==> kube-proxy [8800a0283a4a] <== I0925 13:37:41.348699 1 node.go:136] Successfully retrieved node IP: 172.31.74.0 I0925 13:37:41.348777 1 server_others.go:111] kube-proxy node IP is an IPv4 address (172.31.74.0), assume IPv4 operation W0925 13:37:41.397946 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy I0925 13:37:41.398036 1 server_others.go:186] Using iptables Proxier. 
W0925 13:37:41.398051 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined I0925 13:37:41.398056 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local I0925 13:37:41.398289 1 server.go:650] Version: v1.19.2 I0925 13:37:41.398646 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 I0925 13:37:41.398680 1 conntrack.go:52] Setting nf_conntrack_max to 131072 I0925 13:37:41.398949 1 conntrack.go:83] Setting conntrack hashsize to 32768 I0925 13:37:41.419390 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I0925 13:37:41.419464 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I0925 13:37:41.419648 1 config.go:315] Starting service config controller I0925 13:37:41.419657 1 shared_informer.go:240] Waiting for caches to sync for service config I0925 13:37:41.419683 1 config.go:224] Starting endpoint slice config controller I0925 13:37:41.419688 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0925 13:37:41.519872 1 shared_informer.go:247] Caches are synced for endpoint slice config I0925 13:37:41.519874 1 shared_informer.go:247] Caches are synced for service config ==> kube-scheduler [2bcde9d74ebf] <== I0925 13:42:24.972742 1 registry.go:173] Registering SelectorSpread plugin I0925 13:42:24.972801 1 registry.go:173] Registering SelectorSpread plugin I0925 13:42:25.689918 1 serving.go:331] Generated self-signed cert in-memory W0925 13:42:30.049317 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0925 13:42:30.049348 1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0925 13:42:30.049380 1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous. 
W0925 13:42:30.049389 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0925 13:42:30.082321 1 registry.go:173] Registering SelectorSpread plugin I0925 13:42:30.082349 1 registry.go:173] Registering SelectorSpread plugin I0925 13:42:30.086791 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 I0925 13:42:30.090949 1 tlsconfig.go:240] Starting DynamicServingCertificateController I0925 13:42:30.091014 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0925 13:42:30.091022 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0925 13:42:30.191278 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file ==> kube-scheduler [368757e6ed64] <== I0925 13:37:25.986086 1 registry.go:173] Registering SelectorSpread plugin I0925 13:37:25.987118 1 registry.go:173] Registering SelectorSpread plugin I0925 13:37:27.637356 1 serving.go:331] Generated self-signed cert in-memory W0925 13:37:31.120068 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0925 13:37:31.120291 1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0925 13:37:31.120438 1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous. 
W0925 13:37:31.120538 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0925 13:37:31.142871 1 registry.go:173] Registering SelectorSpread plugin I0925 13:37:31.143336 1 registry.go:173] Registering SelectorSpread plugin I0925 13:37:31.147695 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 I0925 13:37:31.147937 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0925 13:37:31.148028 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0925 13:37:31.148144 1 tlsconfig.go:240] Starting DynamicServingCertificateController E0925 13:37:31.151437 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0925 13:37:31.152005 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0925 13:37:31.152325 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0925 13:37:31.152568 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0925 13:37:31.153153 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0925 13:37:31.160082 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0925 13:37:31.160465 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0925 13:37:31.160741 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0925 13:37:31.160987 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0925 13:37:31.161229 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps 
"extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0925 13:37:31.163181 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0925 13:37:31.163401 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0925 13:37:31.163650 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0925 13:37:32.020456 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0925 13:37:32.110011 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0925 13:37:32.146411 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0925 13:37:32.148031 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0925 13:37:32.150497 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0925 13:37:32.234033 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" I0925 13:37:34.048346 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file ==> kubelet <== -- Logs begin at Fri 2020-09-25 13:24:27 UTC, end at Fri 2020-09-25 13:49:03 UTC. 
-- Sep 25 13:42:27 ip-172-31-74-0 kubelet[14956]: E0925 13:42:27.424126 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:27 ip-172-31-74-0 kubelet[14956]: E0925 13:42:27.524343 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:27 ip-172-31-74-0 kubelet[14956]: E0925 13:42:27.624529 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:27 ip-172-31-74-0 kubelet[14956]: E0925 13:42:27.724749 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:27 ip-172-31-74-0 kubelet[14956]: E0925 13:42:27.824963 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:27 ip-172-31-74-0 kubelet[14956]: E0925 13:42:27.925065 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:28 ip-172-31-74-0 kubelet[14956]: E0925 13:42:28.025639 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:28 ip-172-31-74-0 kubelet[14956]: E0925 13:42:28.125853 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:28 ip-172-31-74-0 kubelet[14956]: E0925 13:42:28.226596 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:28 ip-172-31-74-0 kubelet[14956]: E0925 13:42:28.327135 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:28 ip-172-31-74-0 kubelet[14956]: E0925 13:42:28.427686 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:28 ip-172-31-74-0 kubelet[14956]: E0925 13:42:28.528206 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:28 ip-172-31-74-0 kubelet[14956]: E0925 13:42:28.628716 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:28 ip-172-31-74-0 kubelet[14956]: E0925 13:42:28.728910 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:28 ip-172-31-74-0 kubelet[14956]: E0925 13:42:28.829077 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:28 ip-172-31-74-0 kubelet[14956]: E0925 13:42:28.929291 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:29 ip-172-31-74-0 kubelet[14956]: E0925 13:42:29.029498 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:29 ip-172-31-74-0 kubelet[14956]: E0925 13:42:29.129685 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:29 ip-172-31-74-0 kubelet[14956]: E0925 13:42:29.230461 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:29 ip-172-31-74-0 kubelet[14956]: E0925 13:42:29.330667 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:29 ip-172-31-74-0 kubelet[14956]: E0925 13:42:29.430746 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:29 ip-172-31-74-0 kubelet[14956]: E0925 13:42:29.530969 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:29 ip-172-31-74-0 kubelet[14956]: E0925 13:42:29.631214 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:29 ip-172-31-74-0 kubelet[14956]: E0925 13:42:29.731448 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:29 ip-172-31-74-0 kubelet[14956]: E0925 13:42:29.831655 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:29 ip-172-31-74-0 kubelet[14956]: E0925 13:42:29.931849 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: E0925 13:42:30.032348 14956 kubelet.go:2183] node "ip-172-31-74-0" not found Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: I0925 13:42:30.039936 14956 topology_manager.go:233] [topologymanager] Topology Admit Handler Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: I0925 
13:42:30.043230 14956 topology_manager.go:233] [topologymanager] Topology Admit Handler Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: E0925 13:42:30.046097 14956 reflector.go:127] object-"kube-system"/"storage-provisioner-token-qtrcb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-qtrcb" is forbidden: User "system:node:ip-172-31-74-0" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-74-0' and this object Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: E0925 13:42:30.052121 14956 reflector.go:127] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-74-0" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-74-0' and this object Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: E0925 13:42:30.052446 14956 reflector.go:127] object-"kube-system"/"coredns-token-qnvff": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-qnvff" is forbidden: User "system:node:ip-172-31-74-0" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-74-0' and this object Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: I0925 13:42:30.059358 14956 topology_manager.go:233] [topologymanager] Topology Admit Handler Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: E0925 13:42:30.072233 14956 reflector.go:127] object-"kube-system"/"kube-proxy-token-xl87j": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-xl87j" is forbidden: User "system:node:ip-172-31-74-0" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-74-0' and this object Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: E0925 13:42:30.072550 14956 reflector.go:127] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-74-0" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-74-0' and this object Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: I0925 13:42:30.127159 14956 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/f84588a6-9cfb-4249-81dd-c387aa64d1c0-tmp") pod "storage-provisioner" (UID: "f84588a6-9cfb-4249-81dd-c387aa64d1c0") Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: I0925 13:42:30.127215 14956 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-qnvff" (UniqueName: "kubernetes.io/secret/989896f1-1362-4721-a4c1-5bdce0c72acf-coredns-token-qnvff") pod "coredns-f9fd979d6-vjprc" (UID: "989896f1-1362-4721-a4c1-5bdce0c72acf") Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: I0925 13:42:30.127241 14956 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/5f68ff80-1bb6-47d7-8ed7-a6edc4922090-xtables-lock") pod "kube-proxy-rqmsf" (UID: "5f68ff80-1bb6-47d7-8ed7-a6edc4922090") Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: I0925 13:42:30.127261 14956 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-xl87j" (UniqueName: 
"kubernetes.io/secret/5f68ff80-1bb6-47d7-8ed7-a6edc4922090-kube-proxy-token-xl87j") pod "kube-proxy-rqmsf" (UID: "5f68ff80-1bb6-47d7-8ed7-a6edc4922090") Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: I0925 13:42:30.127283 14956 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-qtrcb" (UniqueName: "kubernetes.io/secret/f84588a6-9cfb-4249-81dd-c387aa64d1c0-storage-provisioner-token-qtrcb") pod "storage-provisioner" (UID: "f84588a6-9cfb-4249-81dd-c387aa64d1c0") Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: I0925 13:42:30.127302 14956 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/989896f1-1362-4721-a4c1-5bdce0c72acf-config-volume") pod "coredns-f9fd979d6-vjprc" (UID: "989896f1-1362-4721-a4c1-5bdce0c72acf") Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: I0925 13:42:30.127324 14956 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/5f68ff80-1bb6-47d7-8ed7-a6edc4922090-lib-modules") pod "kube-proxy-rqmsf" (UID: "5f68ff80-1bb6-47d7-8ed7-a6edc4922090") Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: I0925 13:42:30.127344 14956 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/5f68ff80-1bb6-47d7-8ed7-a6edc4922090-kube-proxy") pod "kube-proxy-rqmsf" (UID: "5f68ff80-1bb6-47d7-8ed7-a6edc4922090") Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: I0925 13:42:30.127357 14956 reconciler.go:157] Reconciler: start to sync state Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: I0925 13:42:30.157928 14956 kubelet_node_status.go:108] Node ip-172-31-74-0 was previously registered Sep 25 13:42:30 ip-172-31-74-0 kubelet[14956]: I0925 13:42:30.158527 14956 kubelet_node_status.go:73] Successfully registered node ip-172-31-74-0 Sep 25 13:42:31 ip-172-31-74-0 kubelet[14956]: E0925 13:42:31.228411 14956 secret.go:195] Couldn't get secret kube-system/kube-proxy-token-xl87j: failed to sync secret cache: timed out waiting for the condition Sep 25 13:42:31 ip-172-31-74-0 kubelet[14956]: E0925 13:42:31.228411 14956 secret.go:195] Couldn't get secret kube-system/storage-provisioner-token-qtrcb: failed to sync secret cache: timed out waiting for the condition Sep 25 13:42:31 ip-172-31-74-0 kubelet[14956]: E0925 13:42:31.228438 14956 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Sep 25 13:42:31 ip-172-31-74-0 kubelet[14956]: E0925 13:42:31.228450 14956 secret.go:195] Couldn't get secret kube-system/coredns-token-qnvff: failed to sync secret cache: timed out waiting for the condition Sep 25 13:42:31 ip-172-31-74-0 kubelet[14956]: E0925 13:42:31.228465 14956 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 25 13:42:31 ip-172-31-74-0 kubelet[14956]: E0925 13:42:31.229028 14956 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/5f68ff80-1bb6-47d7-8ed7-a6edc4922090-kube-proxy-token-xl87j podName:5f68ff80-1bb6-47d7-8ed7-a6edc4922090 nodeName:}" failed. No retries permitted until 2020-09-25 13:42:31.728995189 +0000 UTC m=+8.761553026 (durationBeforeRetry 500ms). 
Error: "MountVolume.SetUp failed for volume \"kube-proxy-token-xl87j\" (UniqueName: \"kubernetes.io/secret/5f68ff80-1bb6-47d7-8ed7-a6edc4922090-kube-proxy-token-xl87j\") pod \"kube-proxy-rqmsf\" (UID: \"5f68ff80-1bb6-47d7-8ed7-a6edc4922090\") : failed to sync secret cache: timed out waiting for the condition" Sep 25 13:42:31 ip-172-31-74-0 kubelet[14956]: E0925 13:42:31.229365 14956 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/f84588a6-9cfb-4249-81dd-c387aa64d1c0-storage-provisioner-token-qtrcb podName:f84588a6-9cfb-4249-81dd-c387aa64d1c0 nodeName:}" failed. No retries permitted until 2020-09-25 13:42:31.729306869 +0000 UTC m=+8.761864688 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"storage-provisioner-token-qtrcb\" (UniqueName: \"kubernetes.io/secret/f84588a6-9cfb-4249-81dd-c387aa64d1c0-storage-provisioner-token-qtrcb\") pod \"storage-provisioner\" (UID: \"f84588a6-9cfb-4249-81dd-c387aa64d1c0\") : failed to sync secret cache: timed out waiting for the condition" Sep 25 13:42:31 ip-172-31-74-0 kubelet[14956]: E0925 13:42:31.229429 14956 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/5f68ff80-1bb6-47d7-8ed7-a6edc4922090-kube-proxy podName:5f68ff80-1bb6-47d7-8ed7-a6edc4922090 nodeName:}" failed. No retries permitted until 2020-09-25 13:42:31.729409107 +0000 UTC m=+8.761966928 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5f68ff80-1bb6-47d7-8ed7-a6edc4922090-kube-proxy\") pod \"kube-proxy-rqmsf\" (UID: \"5f68ff80-1bb6-47d7-8ed7-a6edc4922090\") : failed to sync configmap cache: timed out waiting for the condition" Sep 25 13:42:31 ip-172-31-74-0 kubelet[14956]: E0925 13:42:31.229474 14956 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/989896f1-1362-4721-a4c1-5bdce0c72acf-coredns-token-qnvff podName:989896f1-1362-4721-a4c1-5bdce0c72acf nodeName:}" failed. No retries permitted until 2020-09-25 13:42:31.729437323 +0000 UTC m=+8.761995134 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"coredns-token-qnvff\" (UniqueName: \"kubernetes.io/secret/989896f1-1362-4721-a4c1-5bdce0c72acf-coredns-token-qnvff\") pod \"coredns-f9fd979d6-vjprc\" (UID: \"989896f1-1362-4721-a4c1-5bdce0c72acf\") : failed to sync secret cache: timed out waiting for the condition" Sep 25 13:42:31 ip-172-31-74-0 kubelet[14956]: E0925 13:42:31.229500 14956 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/989896f1-1362-4721-a4c1-5bdce0c72acf-config-volume podName:989896f1-1362-4721-a4c1-5bdce0c72acf nodeName:}" failed. No retries permitted until 2020-09-25 13:42:31.729484249 +0000 UTC m=+8.762042127 (durationBeforeRetry 500ms). 
Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/989896f1-1362-4721-a4c1-5bdce0c72acf-config-volume\") pod \"coredns-f9fd979d6-vjprc\" (UID: \"989896f1-1362-4721-a4c1-5bdce0c72acf\") : failed to sync configmap cache: timed out waiting for the condition" Sep 25 13:42:32 ip-172-31-74-0 kubelet[14956]: W0925 13:42:32.235130 14956 pod_container_deletor.go:79] Container "0ec5a3b6745805aab51449b6c0e63f41d51d7cc9dc56f7339a95137532da9808" not found in pod's containers Sep 25 13:42:33 ip-172-31-74-0 kubelet[14956]: W0925 13:42:33.110780 14956 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-vjprc through plugin: invalid network status for Sep 25 13:42:33 ip-172-31-74-0 kubelet[14956]: W0925 13:42:33.305594 14956 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-vjprc through plugin: invalid network status for Sep 25 13:42:34 ip-172-31-74-0 kubelet[14956]: W0925 13:42:34.329528 14956 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-vjprc through plugin: invalid network status for ==> storage-provisioner [048c485515eb] <== I0925 13:42:33.038791 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I0925 13:42:50.433539 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0925 13:42:50.434043 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2db40a63-ef4d-435d-a589-65ce0cfa2b4d", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-172-31-74-0_bc0494d3-4380-43ee-9d7e-216d32b43e8e became leader I0925 13:42:50.434891 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_ip-172-31-74-0_bc0494d3-4380-43ee-9d7e-216d32b43e8e! I0925 13:42:50.535094 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_ip-172-31-74-0_bc0494d3-4380-43ee-9d7e-216d32b43e8e! ==> storage-provisioner [f108d98198e9] <== I0925 13:37:55.448109 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I0925 13:37:55.459575 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0925 13:37:55.460010 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2db40a63-ef4d-435d-a589-65ce0cfa2b4d", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-172-31-74-0_5405136a-51cf-4703-862b-cbc80cb5c763 became leader I0925 13:37:55.460040 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_ip-172-31-74-0_5405136a-51cf-4703-862b-cbc80cb5c763! I0925 13:37:55.560487 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_ip-172-31-74-0_5405136a-51cf-4703-862b-cbc80cb5c763! ```
afbjorklund commented 4 years ago

This was intentional, #8841 (b27440d4aeabe0657f77a02f80acd8a300d4ac4d). It doesn't really say why, though.

@sharifelgamal might have more details on what the problem was (in 1.11)?
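
For readers hitting this, the failure comes from a hard gate added in that commit: enabling the ingress addon is rejected outright when the cluster was started with the none driver. A minimal Go sketch of that kind of gate (an illustration only, not the actual minikube source; the function name and lookup table are hypothetical):

```go
package main

import "fmt"

// validateAddon mimics a driver-based gate: certain addons are rejected
// outright when the cluster was started with the "none" driver.
func validateAddon(addon, driver string) error {
	// Hypothetical lookup table of addon/driver combinations to reject.
	unsupported := map[string]map[string]bool{
		"ingress": {"none": true},
	}
	if unsupported[addon][driver] {
		return fmt.Errorf("due to networking limitations of driver %s, %s addon is not supported", driver, addon)
	}
	return nil
}

func main() {
	if err := validateAddon("ingress", "none"); err != nil {
		// minikube surfaces this as the MK_USAGE exit seen in the report.
		fmt.Println("❌  Exiting due to MK_USAGE:", err)
		return
	}
	fmt.Println("addon enabled")
}
```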

tstromberg commented 4 years ago

@sharifelgamal - any more context on this? Surprisingly, it seems like this once worked.

As an unrelated bug, this check should respect --force, but it does not appear to do so yet.
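
Extending the hypothetical validateAddon sketch above, respecting a force flag could look roughly like this (the flag name, plumbing, and warning text are assumptions, not the real minikube implementation):

```go
// enableAddon downgrades the driver gate from a fatal error to a warning
// when force is set, so expert users can proceed at their own risk.
func enableAddon(addon, driver string, force bool) error {
	if err := validateAddon(addon, driver); err != nil {
		if !force {
			return err // current behavior: hard MK_USAGE exit
		}
		fmt.Printf("⚠️  %v (continuing because --force was given)\n", err)
	}
	// ... apply the addon's manifests here ...
	return nil
}
```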

sharifelgamal commented 4 years ago

Interesting, that would be my fault then. I always assumed it never worked, so I added that error message. I'll look into fixing it. If anyone wants to take a crack at it in the interim, feel free to take the assignment from me.

storozhilov commented 4 years ago

Hi to all, thank you so much for your participation!

> This was intentional, #8841 (b27440d). It doesn't really say why, though.

We were facing the same behavior as mentioned in #8841. It was caused by the pull of some ingress-related images taking too much time: the ingress addon would eventually start, but the `minikube addons enable ingress` command fails after its 3 minute timeout.

Perhaps it makes sense to revert b27440d and re-open #8841, where some sort of timeout increase could be done. Not sure. Any ideas?
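
If slow image pulls are indeed the culprit, one possible fix on the minikube side would be to wait longer for the ingress controller deployment to become ready rather than rejecting the driver. A rough client-go sketch of such a wait, assuming a 10-minute budget; the kubeconfig path, namespace, deployment name, and timeout values are assumptions, not what minikube actually uses:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the local kubeconfig (path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", "/home/ubuntu/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Poll until the ingress controller deployment reports a ready replica,
	// with a deliberately generous timeout to allow for slow image pulls.
	err = wait.PollImmediate(5*time.Second, 10*time.Minute, func() (bool, error) {
		d, err := client.AppsV1().Deployments("kube-system").Get(
			context.TODO(), "ingress-nginx-controller", metav1.GetOptions{})
		if err != nil {
			return false, nil // not created yet; keep waiting
		}
		return d.Status.ReadyReplicas > 0, nil
	})
	if err != nil {
		fmt.Println("ingress controller did not become ready in time:", err)
		return
	}
	fmt.Println("ingress controller is ready")
}
```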

sharifelgamal commented 4 years ago

@storozhilov That sounds like the right idea. I'll eventually get around to opening a PR to revert that change, but if someone gets to it before me, I'll be happy to review.

storozhilov commented 3 years ago

Hi @sharifelgamal, I've opened https://github.com/kubernetes/minikube/pull/9574, but it requires completing the CLA signing procedure, which I would like to avoid for now if possible. Would you be so kind as to re-issue another PR with the same changes yourself? Thanks in advance! Ilya

sharifelgamal commented 3 years ago

Sure thing @storozhilov, thanks for the help.

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

storozhilov commented 3 years ago

/remove-lifecycle stale