kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/

This container is having trouble accessing https://k8s.gcr.io #9798

Closed: OkayJosh closed this issue 1 year ago

OkayJosh commented 3 years ago

Steps to reproduce the issue:

  1. minikube start --driver=docker

Full output of failed command:

Full output of minikube start command used, if not already included:

Deleting "minikube" in docker ...
🔥 Deleting container "minikube" ...
🔥 Removing /home/cloudsigma/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.

[cloudsigma@Fedora-32 django]$ minikube start --driver=docker --image-repository=auto
😄 minikube v1.15.1 on Fedora 32
✨ Using the docker driver based on user configuration
✅ Using image repository
👍 Starting control plane node minikube in cluster minikube
🔥 Creating docker container (CPUs=2, Memory=2200MB) ...
❗ This container is having trouble accessing https://k8s.gcr.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
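
The 💡 hint above links to the minikube proxy guide. For readers hitting the same warning from behind a proxy, minikube picks up the standard proxy environment variables at start time and passes them into the cluster. A minimal sketch, with a placeholder proxy address and NO_PROXY ranges matching the docker driver defaults seen in these logs (none of this comes from the reporter's environment):

    # hypothetical proxy endpoint; substitute your own
    export HTTP_PROXY=http://proxy.example.com:3128
    export HTTPS_PROXY=http://proxy.example.com:3128
    # keep cluster-internal traffic off the proxy: the docker driver's node
    # subnet (192.168.49.0/24) and the default service CIDR (10.96.0.0/12)
    export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.49.0/24
    minikube start --driver=docker

If no proxy is in play, the warning usually means the container simply cannot resolve or reach k8s.gcr.io; the coredns errors in the logs below point the same way.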

Optional: Full output of minikube logs command:

==> Docker <==
-- Logs begin at Sun 2020-11-29 09:30:46 UTC, end at Sun 2020-11-29 09:37:02 UTC. --
Nov 29 09:30:46 minikube systemd[1]: Starting Docker Application Container Engine...
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.338566925Z" level=info msg="Starting up"
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.339932857Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.339959716Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.339976556Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.339995703Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.348324553Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.348463779Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.348539167Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.348548228Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.365955417Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.384459784Z" level=warning msg="Your kernel does not support cgroup rt period"
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.384558438Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.384615570Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.384667184Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.384877950Z" level=info msg="Loading containers: start."
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.469200751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.506675387Z" level=info msg="Loading containers: done."
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.520682027Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.520835403Z" level=info msg="Daemon has completed initialization"
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.548158008Z" level=info msg="API listen on /run/docker.sock"
Nov 29 09:30:46 minikube systemd[1]: Started Docker Application Container Engine.
Nov 29 09:30:48 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Nov 29 09:30:48 minikube systemd[1]: Stopping Docker Application Container Engine...
Nov 29 09:30:48 minikube dockerd[180]: time="2020-11-29T09:30:48.961096281Z" level=info msg="Processing signal 'terminated'"
Nov 29 09:30:48 minikube dockerd[180]: time="2020-11-29T09:30:48.962680319Z" level=info msg="Daemon shutdown complete"
Nov 29 09:30:48 minikube systemd[1]: docker.service: Succeeded.
Nov 29 09:30:48 minikube systemd[1]: Stopped Docker Application Container Engine.
Nov 29 09:30:48 minikube systemd[1]: Starting Docker Application Container Engine...
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.014400568Z" level=info msg="Starting up"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.016256603Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.016287167Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.016304428Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.016316195Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.020699031Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.020808482Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.020879195Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.020941302Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.056812347Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.062913662Z" level=warning msg="Your kernel does not support cgroup rt period"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.063008264Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.063084137Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.063137926Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.063332836Z" level=info msg="Loading containers: start."
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.181413651Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.253328047Z" level=info msg="Loading containers: done."
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.266634246Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.266701024Z" level=info msg="Daemon has completed initialization"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.281139385Z" level=info msg="API listen on /var/run/docker.sock"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.281198634Z" level=info msg="API listen on [::]:2376"
Nov 29 09:30:49 minikube systemd[1]: Started Docker Application Container Engine.

==> container status <==
CONTAINER       IMAGE           CREATED         STATE     NAME                      ATTEMPT   POD ID
c814c6f73d688   bad58561c4be7   5 minutes ago   Running   storage-provisioner       0         264237d87eafc
f8a59f80375ce   bfe3a36ebd252   5 minutes ago   Running   coredns                   0         993fa3c98bacf
fc55aa1fcd37a   635b36f4d89f0   5 minutes ago   Running   kube-proxy                0         384c9ebc92709
ec995506eee9a   0369cf4303ffd   5 minutes ago   Running   etcd                      0         554b138fb4d5b
115eec88b6801   14cd22f7abe78   5 minutes ago   Running   kube-scheduler            0         c5f649e41e7b1
3c477bc86e4c7   4830ab6185860   5 minutes ago   Running   kube-controller-manager   0         a71a49519e607
92c80200590e3   b15c6247777d7   5 minutes ago   Running   kube-apiserver            0         93c49fa40df0e

==> coredns [f8a59f80375c] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:40272->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:54760->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:55188->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:49012->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:46902->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:57478->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:36878->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:37883->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:54437->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:56933->192.168.49.1:53: i/o timeout

==> describe nodes <==
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=23f40a012abb52eff365ff99a709501a61ac5876
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_11_29T09_31_19_0700
                    minikube.k8s.io/version=v1.15.1
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 29 Nov 2020 09:31:16 +0000
Taints:
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:
  RenewTime:       Sun, 29 Nov 2020 09:37:00 +0000
Conditions:
  Type             Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
  ----             ------  -----------------                ------------------               ------                      -------
  MemoryPressure   False   Sun, 29 Nov 2020 09:36:31 +0000  Sun, 29 Nov 2020 09:31:10 +0000  KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure     False   Sun, 29 Nov 2020 09:36:31 +0000  Sun, 29 Nov 2020 09:31:10 +0000  KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure      False   Sun, 29 Nov 2020 09:36:31 +0000  Sun, 29 Nov 2020 09:31:10 +0000  KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready            True    Sun, 29 Nov 2020 09:36:31 +0000  Sun, 29 Nov 2020 09:31:30 +0000  KubeletReady                kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                4
  ephemeral-storage:  82510724Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8144724Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  82510724Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8144724Ki
  pods:               110
System Info:
  Machine ID:                 dc0441139eae465ead0805eb541bcb4e
  System UUID:                6312edaf-2a2f-4b32-80d7-375aec2b4544
  Boot ID:                    9963988f-9421-4695-839b-f196bd1bacf6
  Kernel Version:             5.8.18-200.fc32.x86_64
  OS Image:                   Ubuntu 20.04.1 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.13
  Kubelet Version:            v1.19.4
  Kube-Proxy Version:         v1.19.4
Non-terminated Pods:          (7 in total)
  Namespace    Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------    ----                              ------------  ----------  ---------------  -------------  ---
  kube-system  coredns-f9fd979d6-rw8s8           100m (2%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m37s
  kube-system  etcd-minikube                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
  kube-system  kube-apiserver-minikube           250m (6%)     0 (0%)      0 (0%)           0 (0%)         5m42s
  kube-system  kube-controller-manager-minikube  200m (5%)     0 (0%)      0 (0%)           0 (0%)         5m42s
  kube-system  kube-proxy-zp8wj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
  kube-system  kube-scheduler-minikube           100m (2%)     0 (0%)      0 (0%)           0 (0%)         5m42s
  kube-system  storage-provisioner               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (16%)  0 (0%)
  memory             70Mi (0%)   170Mi (2%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                    From        Message
  ----    ------                   ----                   ----        -------
  Normal  NodeHasSufficientMemory  5m54s (x4 over 5m54s)  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    5m54s (x5 over 5m54s)  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     5m54s (x4 over 5m54s)  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 5m43s                  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  5m42s                  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    5m42s                  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     5m42s                  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeNotReady             5m42s                  kubelet     Node minikube status is now: NodeNotReady
  Normal  NodeAllocatableEnforced  5m42s                  kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 5m36s                  kube-proxy  Starting kube-proxy.
  Normal  NodeReady                5m32s                  kubelet     Node minikube status is now: NodeReady

==> dmesg <==
[Nov29 05:12] #2
[ +0.000233] #3
[ +0.178024] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +4.826931] kauditd_printk_skb: 32 callbacks suppressed
[ +0.811058] systemd-journald[458]: File /var/log/journal/37b77124313f41d6af88b51d4456a3ad/system.journal corrupted or uncleanly shut down, renaming and replacing.
[Nov29 05:15] process 'docker/tmp/qemu-check567254077/check' started with executable stack
[ +3.284960] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Nov29 06:51] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.094786] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.004259] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.001534] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.033649] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.003482] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.029681] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000002] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.097332] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000012] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.

==> etcd [ec995506eee9] <==
raft2020/11/29 09:31:09 INFO: aec36adc501070cc became follower at term 0
raft2020/11/29 09:31:09 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/11/29 09:31:09 INFO: aec36adc501070cc became follower at term 1
raft2020/11/29 09:31:09 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
2020-11-29 09:31:10.279028 W | auth: simple token is not cryptographically signed
2020-11-29 09:31:10.358500 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
2020-11-29 09:31:10.365819 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-11-29 09:31:10.368367 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-11-29 09:31:10.368720 I | embed: listening for metrics on http://127.0.0.1:2381
raft2020/11/29 09:31:10 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
2020-11-29 09:31:10.368838 I | embed: listening for peers on 192.168.49.2:2380
2020-11-29 09:31:10.369264 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
raft2020/11/29 09:31:11 INFO: aec36adc501070cc is starting a new election at term 1
raft2020/11/29 09:31:11 INFO: aec36adc501070cc became candidate at term 2
raft2020/11/29 09:31:11 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
raft2020/11/29 09:31:11 INFO: aec36adc501070cc became leader at term 2
raft2020/11/29 09:31:11 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
2020-11-29 09:31:11.187516 I | etcdserver: setting up the initial cluster version to 3.4
2020-11-29 09:31:11.187945 N | etcdserver/membership: set the initial cluster version to 3.4
2020-11-29 09:31:11.188229 I | etcdserver/api: enabled capabilities for version 3.4
2020-11-29 09:31:11.188318 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
2020-11-29 09:31:11.188386 I | embed: ready to serve client requests
2020-11-29 09:31:11.191695 I | embed: serving client requests on 127.0.0.1:2379
2020-11-29 09:31:11.192827 I | embed: ready to serve client requests
2020-11-29 09:31:11.194273 I | embed: serving client requests on 192.168.49.2:2379
2020-11-29 09:31:26.281810 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:31:30.789535 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:31:40.789475 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:31:50.789580 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:32:00.789651 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:32:10.789583 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:32:20.789624 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:32:30.789578 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:32:40.789653 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:32:50.789532 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:33:00.793033 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:33:10.789532 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:33:20.789522 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:33:30.790161 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:33:40.789495 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:33:50.789486 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:34:00.789480 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:34:10.789559 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:34:20.789491 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:34:30.789495 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:34:40.789446 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:34:50.789578 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:35:00.789731 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:35:10.789502 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:35:20.789541 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:35:30.789491 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:35:40.789582 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:35:50.789472 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:36:00.789501 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:36:10.789428 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:36:20.789493 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:36:30.789577 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:36:40.789725 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:36:50.789452 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:37:00.789452 I | etcdserver/api/etcdhttp: /health OK (status code 200)

==> kernel <==
09:37:02 up 4:24, 0 users, load average: 0.94, 1.04, 1.08
Linux minikube 5.8.18-200.fc32.x86_64 #1 SMP Mon Nov 2 19:49:11 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.1 LTS"

==> kube-apiserver [92c80200590e] <==
E1129 09:31:16.261585 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg:
I1129 09:31:16.298019 1 controller.go:86] Starting OpenAPI controller
I1129 09:31:16.350781 1 cache.go:39] Caches are synced for autoregister controller
I1129 09:31:16.351287 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1129 09:31:16.357626 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1129 09:31:16.359091 1 shared_informer.go:247] Caches are synced for crd-autoregister
I1129 09:31:16.360952 1 naming_controller.go:291] Starting NamingConditionController
I1129 09:31:16.361161 1 establishing_controller.go:76] Starting EstablishingController
I1129 09:31:16.361269 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I1129 09:31:16.361451 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1129 09:31:16.361558 1 crd_finalizer.go:266] Starting CRDFinalizer
I1129 09:31:16.373891 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I1129 09:31:16.393178 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1129 09:31:16.395096 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I1129 09:31:17.249723 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1129 09:31:17.249920 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1129 09:31:17.254899 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I1129 09:31:17.258275 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I1129 09:31:17.258291 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I1129 09:31:17.666700 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1129 09:31:17.692469 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1129 09:31:17.866532 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I1129 09:31:17.867288 1 controller.go:606] quota admission added evaluator for: endpoints
I1129 09:31:17.870648 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1129 09:31:18.814159 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1129 09:31:19.259225 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1129 09:31:19.449693 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1129 09:31:19.970082 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1129 09:31:25.869040 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1129 09:31:25.882826 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1129 09:31:42.385189 1 client.go:360] parsed scheme: "passthrough"
I1129 09:31:42.385323 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:31:42.385392 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:32:21.216485 1 client.go:360] parsed scheme: "passthrough"
I1129 09:32:21.216528 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:32:21.216536 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:32:58.164659 1 client.go:360] parsed scheme: "passthrough"
I1129 09:32:58.164737 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:32:58.164749 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:33:31.884535 1 client.go:360] parsed scheme: "passthrough"
I1129 09:33:31.884575 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:33:31.884586 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:34:14.105786 1 client.go:360] parsed scheme: "passthrough"
I1129 09:34:14.105869 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:34:14.105886 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:34:48.110866 1 client.go:360] parsed scheme: "passthrough"
I1129 09:34:48.110913 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:34:48.110921 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:35:19.199526 1 client.go:360] parsed scheme: "passthrough"
I1129 09:35:19.199573 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:35:19.199582 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:35:54.577610 1 client.go:360] parsed scheme: "passthrough"
I1129 09:35:54.577650 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:35:54.577658 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:36:25.173891 1 client.go:360] parsed scheme: "passthrough"
I1129 09:36:25.173999 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:36:25.174020 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:37:02.466572 1 client.go:360] parsed scheme: "passthrough"
I1129 09:37:02.466638 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:37:02.466649 1 clientconn.go:948] ClientConn switching balancer to "pick_first"

==> kube-controller-manager [3c477bc86e4c] <==
I1129 09:31:25.010749 1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
I1129 09:31:25.260992 1 controllermanager.go:549] Started "endpoint"
I1129 09:31:25.261045 1 endpoints_controller.go:184] Starting endpoint controller
I1129 09:31:25.261050 1 shared_informer.go:240] Waiting for caches to sync for endpoint
I1129 09:31:25.511387 1 controllermanager.go:549] Started "serviceaccount"
I1129 09:31:25.511435 1 serviceaccounts_controller.go:117] Starting service account controller
I1129 09:31:25.511441 1 shared_informer.go:240] Waiting for caches to sync for service account
I1129 09:31:25.610868 1 request.go:645] Throttling request took 1.048847873s, request: GET:https://192.168.49.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
I1129 09:31:25.760962 1 controllermanager.go:549] Started "statefulset"
I1129 09:31:25.760988 1 stateful_set.go:146] Starting stateful set controller
I1129 09:31:25.761267 1 shared_informer.go:240] Waiting for caches to sync for stateful set
W1129 09:31:25.777553 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1129 09:31:25.810888 1 shared_informer.go:247] Caches are synced for GC
I1129 09:31:25.811220 1 shared_informer.go:247] Caches are synced for PVC protection
I1129 09:31:25.811556 1 shared_informer.go:247] Caches are synced for service account
I1129 09:31:25.811714 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I1129 09:31:25.812181 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I1129 09:31:25.812184 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I1129 09:31:25.812353 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I1129 09:31:25.816705 1 shared_informer.go:247] Caches are synced for namespace
I1129 09:31:25.817568 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I1129 09:31:25.838231 1 shared_informer.go:247] Caches are synced for job
I1129 09:31:25.849247 1 shared_informer.go:247] Caches are synced for HPA
I1129 09:31:25.850270 1 shared_informer.go:247] Caches are synced for ReplicaSet
I1129 09:31:25.860991 1 shared_informer.go:247] Caches are synced for taint
I1129 09:31:25.861307 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
W1129 09:31:25.861797 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1129 09:31:25.861519 1 shared_informer.go:247] Caches are synced for TTL
I1129 09:31:25.861204 1 shared_informer.go:247] Caches are synced for ReplicationController
I1129 09:31:25.861137 1 shared_informer.go:247] Caches are synced for endpoint
I1129 09:31:25.861522 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I1129 09:31:25.861531 1 taint_manager.go:187] Starting NoExecuteTaintManager
I1129 09:31:25.861539 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I1129 09:31:25.861451 1 shared_informer.go:247] Caches are synced for stateful set
I1129 09:31:25.863244 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I1129 09:31:25.864401 1 shared_informer.go:247] Caches are synced for deployment
I1129 09:31:25.873404 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 1"
I1129 09:31:25.879554 1 shared_informer.go:247] Caches are synced for daemon sets
I1129 09:31:25.885679 1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-rw8s8"
I1129 09:31:25.908141 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zp8wj"
I1129 09:31:25.962215 1 shared_informer.go:247] Caches are synced for disruption
I1129 09:31:25.962234 1 disruption.go:339] Sending events to api server.
E1129 09:31:25.963997 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"598fe9a5-38fd-45a2-a979-90b0f50610af", ResourceVersion:"217", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63742239079, loc:(*time.Location)(0x6a61c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001941dc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001941de0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001941e00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0010815c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001941e20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001941e40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.4", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001941e80)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001958960), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000a29cf8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0005c4cb0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000176ce0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000a29d78)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I1129 09:31:25.964391 1 shared_informer.go:247] Caches are synced for resource quota
I1129 09:31:25.974046 1 shared_informer.go:247] Caches are synced for expand
I1129 09:31:25.983385 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I1129 09:31:26.011371 1 shared_informer.go:247] Caches are synced for endpoint_slice
I1129 09:31:26.050264 1 shared_informer.go:247] Caches are synced for PV protection
I1129 09:31:26.061260 1 shared_informer.go:247] Caches are synced for persistent volume
I1129 09:31:26.061408 1 shared_informer.go:247] Caches are synced for attach detach
I1129 09:31:26.110955 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I1129 09:31:26.114544 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
E1129 09:31:26.153899 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
E1129 09:31:26.154480 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I1129 09:31:26.317094 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I1129 09:31:26.317120 1 shared_informer.go:247] Caches are synced for resource quota
I1129 09:31:26.411030 1 shared_informer.go:247] Caches are synced for garbage collector
I1129 09:31:26.411087 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1129 09:31:26.415603 1 shared_informer.go:247] Caches are synced for garbage collector
I1129 09:31:30.891697 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.

==> kube-proxy [fc55aa1fcd37] <==
I1129 09:31:26.751639 1 node.go:136] Successfully retrieved node IP: 192.168.49.2
I1129 09:31:26.751870 1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
W1129 09:31:26.785840 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
I1129 09:31:26.785961 1 server_others.go:186] Using iptables Proxier.
W1129 09:31:26.785970 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I1129 09:31:26.785974 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I1129 09:31:26.786167 1 server.go:650] Version: v1.19.4
I1129 09:31:26.786454 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1129 09:31:26.787185 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1129 09:31:26.787234 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1129 09:31:26.787394 1 config.go:315] Starting service config controller
I1129 09:31:26.787407 1 shared_informer.go:240] Waiting for caches to sync for service config
I1129 09:31:26.787425 1 config.go:224] Starting endpoint slice config controller
I1129 09:31:26.787428 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1129 09:31:26.887511 1 shared_informer.go:247] Caches are synced for endpoint slice config
I1129 09:31:26.887719 1 shared_informer.go:247] Caches are synced for service config

==> kube-scheduler [115eec88b680] <==
I1129 09:31:09.866107 1 registry.go:173] Registering SelectorSpread plugin
I1129 09:31:09.869803 1 registry.go:173] Registering SelectorSpread plugin
I1129 09:31:11.489373 1 serving.go:331] Generated self-signed cert in-memory
W1129 09:31:16.455300 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1129 09:31:16.455343 1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1129 09:31:16.455354 1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
W1129 09:31:16.455361 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1129 09:31:16.483727 1 registry.go:173] Registering SelectorSpread plugin
I1129 09:31:16.483781 1 registry.go:173] Registering SelectorSpread plugin
I1129 09:31:16.486764 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1129 09:31:16.486981 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E1129 09:31:16.488022 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I1129 09:31:16.488375 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1129 09:31:16.488472 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E1129 09:31:16.492050 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1129 09:31:16.492273 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1129 09:31:16.492526 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1129 09:31:16.492628 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1129 09:31:16.492714 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1129 09:31:16.492855 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1129 09:31:16.492963 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1129 09:31:16.493039 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1129 09:31:16.493124 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1129 09:31:16.493224 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1129 09:31:16.493287 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1129 09:31:16.493354 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1129 09:31:17.398500 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1129 09:31:17.448636 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1129 09:31:17.448893 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I1129 09:31:18.087180 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file

==> kubelet <==
-- Logs begin at Sun 2020-11-29 09:30:46 UTC, end at Sun 2020-11-29 09:37:02 UTC. --
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.932680 2131 kuberuntime_manager.go:214] Container runtime docker initialized, version: 19.03.13, apiVersion: 1.40.0
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.933050 2131 server.go:1147] Started kubelet
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.933188 2131 server.go:152] Starting to listen on 0.0.0.0:10250
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.933992 2131 server.go:424] Adding debug handlers to kubelet server.
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.934878 2131 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.936123 2131 volume_manager.go:265] Starting Kubelet Volume Manager
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.950488 2131 desired_state_of_world_populator.go:139] Desired state populator starts to run
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.988264 2131 status_manager.go:158] Starting to sync pod status with apiserver
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.988324 2131 kubelet.go:1741] Starting kubelet main sync loop.
Nov 29 09:31:19 minikube kubelet[2131]: E1129 09:31:19.988390 2131 kubelet.go:1765] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.994118 2131 client.go:87] parsed scheme: "unix"
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.994265 2131 client.go:87] scheme "unix" not registered, fallback to default scheme
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.994376 2131 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.994439 2131 clientconn.go:948] ClientConn switching balancer to "pick_first"
Nov 29 09:31:20 minikube kubelet[2131]: E1129 09:31:20.088734 2131 kubelet.go:1765] skipping pod synchronization - container runtime status check may not have completed yet
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.149431 2131 kubelet_node_status.go:70] Attempting to register node minikube
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.159837 2131 kubelet_node_status.go:108] Node minikube was previously registered
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.159918 2131 kubelet_node_status.go:73] Successfully registered node minikube
Nov 29 09:31:20 minikube kubelet[2131]: E1129 09:31:20.289134 2131 kubelet.go:1765] skipping pod synchronization - container runtime status check may not have completed yet
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.392263 2131 setters.go:555] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-11-29 09:31:20.392233412 +0000 UTC m=+1.191927022 LastTransitionTime:2020-11-29 09:31:20.392233412 +0000 UTC m=+1.191927022 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.659069 2131 cpu_manager.go:184] [cpumanager] starting with none policy
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.659088 2131 cpu_manager.go:185] [cpumanager] reconciling every 10s
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.659107 2131 state_mem.go:36] [cpumanager] initializing new in-memory state store
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.659235 2131 state_mem.go:88] [cpumanager] updated default cpuset: ""
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.659248 2131 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.659263 2131 policy_none.go:43] [cpumanager] none policy: Start
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.660728 2131 plugin_manager.go:114] Starting Kubelet Plugin Manager
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.690839 2131 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.692636 2131 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.694358 2131 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.695508 2131 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.756844 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/e30eb1a2f7c2dbcda239c972918b3eb4-ca-certs") pod "kube-apiserver-minikube" (UID: "e30eb1a2f7c2dbcda239c972918b3eb4")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.757026 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/e30eb1a2f7c2dbcda239c972918b3eb4-etc-ca-certificates") pod "kube-apiserver-minikube" (UID: "e30eb1a2f7c2dbcda239c972918b3eb4")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.757157 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/e30eb1a2f7c2dbcda239c972918b3eb4-k8s-certs") pod "kube-apiserver-minikube" (UID: "e30eb1a2f7c2dbcda239c972918b3eb4")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.757270 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/e30eb1a2f7c2dbcda239c972918b3eb4-usr-local-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "e30eb1a2f7c2dbcda239c972918b3eb4")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.757410 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-etc-ca-certificates") pod "kube-controller-manager-minikube" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.757558 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-k8s-certs") pod "kube-controller-manager-minikube" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.757694 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-kubeconfig") pod "kube-controller-manager-minikube" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.757806 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-usr-local-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.757916 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/e30eb1a2f7c2dbcda239c972918b3eb4-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "e30eb1a2f7c2dbcda239c972918b3eb4")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.758049 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.758151 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/d186e6390814d4dd7e770f47c08e98a2-etcd-data") pod "etcd-minikube" (UID: "d186e6390814d4dd7e770f47c08e98a2")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.758239 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-ca-certs") pod "kube-controller-manager-minikube" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.758332 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.758418 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/38744c90661b22e9ae232b0452c54538-kubeconfig") pod "kube-scheduler-minikube" (UID: "38744c90661b22e9ae232b0452c54538")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.758539 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/d186e6390814d4dd7e770f47c08e98a2-etcd-certs") pod "etcd-minikube" (UID: "d186e6390814d4dd7e770f47c08e98a2")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.758632 2131 reconciler.go:157] Reconciler: start to sync state
Nov 29 09:31:25 minikube kubelet[2131]: I1129 09:31:25.961121 2131 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 29 09:31:25 minikube kubelet[2131]: I1129 09:31:25.972379 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/b0006185-2ba4-4684-9499-1e40147c1724-lib-modules") pod "kube-proxy-zp8wj" (UID: "b0006185-2ba4-4684-9499-1e40147c1724")
Nov 29 09:31:25 minikube kubelet[2131]: I1129 09:31:25.972585 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/b0006185-2ba4-4684-9499-1e40147c1724-xtables-lock") pod "kube-proxy-zp8wj" (UID: "b0006185-2ba4-4684-9499-1e40147c1724")
Nov 29 09:31:25 minikube kubelet[2131]: I1129 09:31:25.972717 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-9n29q" (UniqueName: "kubernetes.io/secret/b0006185-2ba4-4684-9499-1e40147c1724-kube-proxy-token-9n29q") pod "kube-proxy-zp8wj" (UID: "b0006185-2ba4-4684-9499-1e40147c1724")
Nov 29 09:31:25 minikube kubelet[2131]: I1129 09:31:25.973230 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b0006185-2ba4-4684-9499-1e40147c1724-kube-proxy") pod "kube-proxy-zp8wj" (UID: "b0006185-2ba4-4684-9499-1e40147c1724")
Nov 29 09:31:34 minikube kubelet[2131]: I1129 09:31:34.997455 2131 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 29 09:31:35 minikube kubelet[2131]: I1129 09:31:35.084643 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-9j9gc" (UniqueName: "kubernetes.io/secret/a46f4729-2f38-4d31-bd0f-edf82bd612bd-coredns-token-9j9gc") pod "coredns-f9fd979d6-rw8s8" (UID: "a46f4729-2f38-4d31-bd0f-edf82bd612bd")
Nov 29 09:31:35 minikube kubelet[2131]: I1129 09:31:35.084678 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName:
"kubernetes.io/configmap/a46f4729-2f38-4d31-bd0f-edf82bd612bd-config-volume") pod "coredns-f9fd979d6-rw8s8" (UID: "a46f4729-2f38-4d31-bd0f-edf82bd612bd") Nov 29 09:31:35 minikube kubelet[2131]: W1129 09:31:35.703376 2131 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-rw8s8 through plugin: invalid network status for Nov 29 09:31:36 minikube kubelet[2131]: W1129 09:31:36.131610 2131 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-rw8s8 through plugin: invalid network status for Nov 29 09:31:38 minikube kubelet[2131]: I1129 09:31:38.996396 2131 topology_manager.go:233] [topologymanager] Topology Admit Handler Nov 29 09:31:39 minikube kubelet[2131]: I1129 09:31:39.088898 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-8whpk" (UniqueName: "kubernetes.io/secret/8308b99d-c9b9-40c9-852c-374039edeff5-storage-provisioner-token-8whpk") pod "storage-provisioner" (UID: "8308b99d-c9b9-40c9-852c-374039edeff5") Nov 29 09:31:39 minikube kubelet[2131]: I1129 09:31:39.088942 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/8308b99d-c9b9-40c9-852c-374039edeff5-tmp") pod "storage-provisioner" (UID: "8308b99d-c9b9-40c9-852c-374039edeff5") ==> storage-provisioner [c814c6f73d68] <== I1129 09:31:39.686820 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I1129 09:31:39.692380 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath I1129 09:31:39.692803 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_62ea6c00-3a6b-404d-ac19-eed305caaa58! I1129 09:31:39.692936 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"58150db4-cbae-4805-8da3-11bd133ea991", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_62ea6c00-3a6b-404d-ac19-eed305caaa58 became leader I1129 09:31:39.793442 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_62ea6c00-3a6b-404d-ac19-eed305caaa58!
esbc-disciple commented 7 months ago

@josvazg's comment was indispensable. It got me going on the right track. A true legend.

In my case, I was using Debian 12.2 with minikube 1.32.0, Kubernetes 1.28.3, Docker 24.0.7.

If you get this error, it almost always means some part of your network configuration is broken. In my case, I'm using an AWS instance and don't have it set to pull a public IPv4 automatically (I assign an Elastic IP as needed). I had forgotten to assign the Elastic IP, so my Django EC2 instance had no way to reach the IPv4 internet. Re-attaching the address fixed it immediately, as in the sketch below.
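For anyone hitting the same thing, re-associating the Elastic IP is a one-liner with the AWS CLI. The instance and allocation IDs below are placeholders, not from my setup, so substitute your own:

# Hypothetical IDs, shown only for illustration.
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0
# Then confirm the instance can actually reach the registry again:
curl -I https://k8s.gcr.io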

If you get this error, think about how you set up your instance/server. Did you change DNS settings (as in the case of @josvazg)? Did you verify that you have IP-level network connectivity? Run the basic Linux connectivity checks: ping, ip addr show, and so on; see the sketch below.
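Concretely, a quick sequence like the following (a sketch; adapt addresses and hostnames to your environment) usually tells you whether the problem is routing, DNS, or the registry itself:

# Basic IP connectivity from the host?
ping -c 3 8.8.8.8
# Does the host have the address and interfaces you expect?
ip addr show
# Does DNS resolve the registry?
nslookup k8s.gcr.io
# Can the minikube node itself reach the registry?
minikube ssh "curl -sI https://k8s.gcr.io"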

peterhoneder commented 6 months ago

For people using minikube with hyperkit: check your macOS firewall settings and make sure "Block all incoming connections" in the Details dialog is not enabled; it also blocks all traffic coming from the bridge.
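If you prefer the terminal, you can check (and, if needed, clear) that setting with macOS's socketfilterfw tool; this is just a sketch of the checks, assuming a stock macOS application firewall:

# Is the application firewall on at all?
/usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate
# Is "block all incoming connections" enabled? This is the setting that breaks the bridge.
/usr/libexec/ApplicationFirewall/socketfilterfw --getblockall
# If it reports enabled, turn it off (requires sudo):
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setblockall off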

OkayJosh commented 3 months ago

I'm checking back here after a while, and it seems this issue still persists? @medyagh

@medyagh yes, I was able to fix the issue back then.