kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Cannot pull any image in minikube #9580

Closed: dongyu closed this issue 3 years ago

dongyu commented 3 years ago

Steps to reproduce the issue:

  1. minikube start --driver=docker
  2. minikube ssh
  3. docker search alpine

Full output of failed command:

    $ docker search alpine
    Error response from daemon: Get https://index.docker.io/v1/search?q=alpine&n=25: dial tcp: lookup index.docker.io on 192.168.49.1:53: read udp 192.168.49.2:58279->192.168.49.1:53: i/o timeout
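The failure is a DNS timeout: the Docker daemon inside the minikube node forwards lookups to 192.168.49.1:53 (the host side of the docker driver's bridge network) and never gets an answer. A quick way to narrow this down from inside the node (a diagnostic sketch, assuming the docker driver; your resolv.conf contents may differ):

```shell
# Run these inside the node, after `minikube ssh`.

# 1. See which resolver the node is configured to use; with the docker
#    driver this is typically the bridge gateway, 192.168.49.1.
cat /etc/resolv.conf

# 2. Query that resolver directly; with this issue it times out.
nslookup index.docker.io

# 3. Query a public resolver instead; if this succeeds, the problem is
#    the host-side hop at 192.168.49.1:53, not outbound connectivity.
nslookup index.docker.io 8.8.8.8
```

If step 3 works while step 2 times out, something on the host (commonly a firewall) is dropping the DNS traffic between the minikube container and the host.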

Full output of minikube start command used, if not already included:

😄 minikube v1.14.2 on Centos 8.2.2004
✨ Using the docker driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.19.2 preload ...
    preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4: 486.33 MiB
🔥 Creating docker container (CPUs=2, Memory=2200MB) ...
❗ This container is having trouble accessing https://k8s.gcr.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube" by default
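The "trouble accessing https://k8s.gcr.io" warning matches the DNS timeouts in the logs below. A frequently reported cause of this exact symptom on CentOS/RHEL 8 hosts is firewalld dropping DNS queries sent from the Docker bridge network to the host. A hedged workaround sketch (the interface name is an assumption; check `ip addr` and `firewall-cmd --get-active-zones` on your host first):

```shell
# Run on the CentOS 8 host, not inside minikube.
# Marking the Docker bridge interface as trusted lets DNS queries from
# containers (including the minikube node) reach the host resolver.
sudo firewall-cmd --permanent --zone=trusted --change-interface=docker0
sudo firewall-cmd --reload

# Recreate the cluster so the node starts with working DNS.
minikube delete
minikube start --driver=docker
```

This is host firewall configuration, so whether it applies depends on your setup; if you use a proxy instead, see the minikube proxy docs linked in the warning above.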

Optional: Full output of minikube logs command:

==> Docker <==
-- Logs begin at Thu 2020-10-29 02:05:19 UTC, end at Thu 2020-10-29 02:11:22 UTC. --
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.003678544Z" level=info msg="Starting up"
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.005573950Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.005621370Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.005648090Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.005658512Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.107112581Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.107164052Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.107210141Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.107220532Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.119891193Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.140767186Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.140808437Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.140976492Z" level=info msg="Loading containers: start."
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.202415656Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.250274718Z" level=info msg="Loading containers: done."
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.270220011Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.270476057Z" level=info msg="Daemon has completed initialization"
Oct 29 02:05:20 minikube dockerd[160]: time="2020-10-29T02:05:20.296639886Z" level=info msg="API listen on /run/docker.sock"
Oct 29 02:05:20 minikube systemd[1]: Started Docker Application Container Engine.
Oct 29 02:05:22 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Oct 29 02:05:22 minikube systemd[1]: Stopping Docker Application Container Engine...
Oct 29 02:05:22 minikube dockerd[160]: time="2020-10-29T02:05:22.215707680Z" level=info msg="Processing signal 'terminated'"
Oct 29 02:05:22 minikube dockerd[160]: time="2020-10-29T02:05:22.216836265Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Oct 29 02:05:22 minikube dockerd[160]: time="2020-10-29T02:05:22.217423749Z" level=info msg="Daemon shutdown complete"
Oct 29 02:05:22 minikube systemd[1]: docker.service: Succeeded.
Oct 29 02:05:22 minikube systemd[1]: Stopped Docker Application Container Engine.
Oct 29 02:05:22 minikube systemd[1]: Starting Docker Application Container Engine...
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.291125624Z" level=info msg="Starting up"
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.294349973Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.294397473Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.294430690Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.294445766Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.296767473Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.296796735Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.296822929Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.296836053Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.304208324Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.309913235Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.309952152Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.310108276Z" level=info msg="Loading containers: start."
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.422246821Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.471469829Z" level=info msg="Loading containers: done."
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.488086373Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.488182812Z" level=info msg="Daemon has completed initialization"
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.506388250Z" level=info msg="API listen on /var/run/docker.sock"
Oct 29 02:05:22 minikube dockerd[376]: time="2020-10-29T02:05:22.506494542Z" level=info msg="API listen on [::]:2376"
Oct 29 02:05:22 minikube systemd[1]: Started Docker Application Container Engine.
Oct 29 02:07:47 minikube dockerd[376]: time="2020-10-29T02:07:47.992899787Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:48049->192.168.49.1:53: i/o timeout"
Oct 29 02:07:47 minikube dockerd[376]: time="2020-10-29T02:07:47.993004596Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:48049->192.168.49.1:53: i/o timeout"
Oct 29 02:07:47 minikube dockerd[376]: time="2020-10-29T02:07:47.993122106Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:48049->192.168.49.1:53: i/o timeout"
Oct 29 02:08:41 minikube dockerd[376]: time="2020-10-29T02:08:41.821097366Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:56942->192.168.49.1:53: i/o timeout"
Oct 29 02:08:41 minikube dockerd[376]: time="2020-10-29T02:08:41.821137346Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:56942->192.168.49.1:53: i/o timeout"
Oct 29 02:08:41 minikube dockerd[376]: time="2020-10-29T02:08:41.821168204Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:56942->192.168.49.1:53: i/o timeout"
Oct 29 02:09:51 minikube dockerd[376]: time="2020-10-29T02:09:51.827265521Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:47348->192.168.49.1:53: i/o timeout"
Oct 29 02:09:51 minikube dockerd[376]: time="2020-10-29T02:09:51.827325142Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:47348->192.168.49.1:53: i/o timeout"
Oct 29 02:09:51 minikube dockerd[376]: time="2020-10-29T02:09:51.827370295Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:47348->192.168.49.1:53: i/o timeout"
Oct 29 02:09:51 minikube dockerd[376]: time="2020-10-29T02:09:51.835399333Z" level=error msg="Handler for GET /v1.40/images/search returned error: Get https://index.docker.io/v1/search?q=alpine&n=25: dial tcp: lookup index.docker.io on 192.168.49.1:53: read udp 192.168.49.2:58279->192.168.49.1:53: i/o timeout"
Oct 29 02:11:15 minikube dockerd[376]: time="2020-10-29T02:11:15.827263680Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:59041->192.168.49.1:53: i/o timeout"
Oct 29 02:11:15 minikube dockerd[376]: time="2020-10-29T02:11:15.827312014Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:59041->192.168.49.1:53: i/o timeout"
Oct 29 02:11:15 minikube dockerd[376]: time="2020-10-29T02:11:15.827344878Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:59041->192.168.49.1:53: i/o timeout"

==> container status <==
CONTAINER       IMAGE           CREATED         STATE    NAME                      ATTEMPT  POD ID
18ccdb1ff6c5c   bfe3a36ebd252   5 minutes ago   Running  coredns                   0        4f2517c41e0de
e20801ccb3b46   bad58561c4be7   5 minutes ago   Running  storage-provisioner       0        fd75e87ec9340
1abebe3785bf4   d373dd5a8593a   5 minutes ago   Running  kube-proxy                0        a34454e8eafe7
dc3d7539e7fb1   607331163122e   5 minutes ago   Running  kube-apiserver            0        d12889422d7a5
c69842619ed0a   2f32d66b884f8   5 minutes ago   Running  kube-scheduler            0        005bbdda7d4bc
37c7a218d3db8   8603821e1a7a5   5 minutes ago   Running  kube-controller-manager   0        bb623aeca1ad2
f1cb8870b8cd3   0369cf4303ffd   5 minutes ago   Running  etcd                      0        2b6aa0a2b6854

==> coredns [18ccdb1ff6c5] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[ERROR] plugin/errors: 2 4249402989909864157.6410841100896656417. HINFO: read udp 172.17.0.2:57398->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4249402989909864157.6410841100896656417. HINFO: read udp 172.17.0.2:43023->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4249402989909864157.6410841100896656417. HINFO: read udp 172.17.0.2:45817->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4249402989909864157.6410841100896656417. HINFO: read udp 172.17.0.2:43952->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4249402989909864157.6410841100896656417. HINFO: read udp 172.17.0.2:43743->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4249402989909864157.6410841100896656417. HINFO: read udp 172.17.0.2:33806->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4249402989909864157.6410841100896656417. HINFO: read udp 172.17.0.2:47951->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4249402989909864157.6410841100896656417. HINFO: read udp 172.17.0.2:41025->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4249402989909864157.6410841100896656417. HINFO: read udp 172.17.0.2:43978->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 4249402989909864157.6410841100896656417. HINFO: read udp 172.17.0.2:55998->192.168.49.1:53: i/o timeout

==> describe nodes <==
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=2c82918e2347188e21c4e44c8056fc80408bce10
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_10_29T02_05_55_0700
                    minikube.k8s.io/version=v1.14.2
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 29 Oct 2020 02:05:52 +0000
Taints:
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:
  RenewTime:       Thu, 29 Oct 2020 02:11:22 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 29 Oct 2020 02:11:13 +0000   Thu, 29 Oct 2020 02:05:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 29 Oct 2020 02:11:13 +0000   Thu, 29 Oct 2020 02:05:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 29 Oct 2020 02:11:13 +0000   Thu, 29 Oct 2020 02:05:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 29 Oct 2020 02:11:13 +0000   Thu, 29 Oct 2020 02:06:12 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                2
  ephemeral-storage:  78570884Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3868860Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  78570884Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3868860Ki
  pods:               110
System Info:
  Machine ID:                 9dca74cbd9b44bd9a9a4e60480f87cde
  System UUID:                60c2ac70-0b2e-4e4a-9c15-306163fc02dd
  Boot ID:                    3c98e28f-52fe-42b8-ba34-f56bd8624af8
  Kernel Version:             4.18.0-193.19.1.el8_2.x86_64
  OS Image:                   Ubuntu 20.04 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.8
  Kubelet Version:            v1.19.2
  Kube-Proxy Version:         v1.19.2
Non-terminated Pods:          (8 in total)
  Namespace    Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  default      hello-minikube-6ddfcc9757-t7pmx   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
  kube-system  coredns-f9fd979d6-6ddmm           100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m20s
  kube-system  etcd-minikube                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
  kube-system  kube-apiserver-minikube           250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m20s
  kube-system  kube-controller-manager-minikube  200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m20s
  kube-system  kube-proxy-mft5w                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
  kube-system  kube-scheduler-minikube           100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m20s
  kube-system  storage-provisioner               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (32%)  0 (0%)
  memory             70Mi (1%)   170Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason    Age    From     Message
  ----    ------    ---    ----     -------
  Normal  Starting  5m21s  kubelet  Starting kubelet.
  Normal   NodeHasSufficientMemory  5m21s  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    5m21s  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     5m21s  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  5m20s  kubelet     Updated Node Allocatable limit across pods
  Warning  readOnlySysFS            5m19s  kube-proxy  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
  Normal   Starting                 5m19s  kube-proxy  Starting kube-proxy.
  Normal   NodeReady                5m10s  kubelet     Node minikube status is now: NodeReady

==> dmesg <==
[Oct29 01:56] Spectre V2 : Using retpoline on Skylake-generation processors may not fully mitigate the vulnerability.
[  +0.000001] Spectre V2 : Add the "spectre_v2=ibrs" kernel boot flag to enable IBRS on Skylake systems that need full mitigation.
[  +0.010886] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[  +4.625202] printk: systemd: 16 output lines suppressed due to ratelimiting

==> etcd [f1cb8870b8cd] <==
2020-10-29 02:05:47.258865 I | etcdserver: starting member aec36adc501070cc in cluster fa54960ea34d58be
raft2020/10/29 02:05:47 INFO: aec36adc501070cc switched to configuration voters=()
raft2020/10/29 02:05:47 INFO: aec36adc501070cc became follower at term 0
raft2020/10/29 02:05:47 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/10/29 02:05:47 INFO: aec36adc501070cc became follower at term 1
raft2020/10/29 02:05:47 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
2020-10-29 02:05:47.678812 W | auth: simple token is not cryptographically signed
2020-10-29 02:05:47.906704 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
2020-10-29 02:05:48.311276 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2020/10/29 02:05:48 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
2020-10-29 02:05:48.311976 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
2020-10-29 02:05:48.313362 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-10-29 02:05:48.314006 I | embed: listening for metrics on http://127.0.0.1:2381
2020-10-29 02:05:48.314154 I | embed: listening for peers on 192.168.49.2:2380
raft2020/10/29 02:05:48 INFO: aec36adc501070cc is starting a new election at term 1
raft2020/10/29 02:05:48 INFO: aec36adc501070cc became candidate at term 2
raft2020/10/29 02:05:48 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
raft2020/10/29 02:05:48 INFO: aec36adc501070cc became leader at term 2
raft2020/10/29 02:05:48 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
2020-10-29 02:05:48.315153 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
2020-10-29 02:05:48.315479 I | etcdserver: setting up the initial cluster version to 3.4
2020-10-29 02:05:48.315538 I | embed: ready to serve client requests
2020-10-29 02:05:48.317725 I | embed: serving client requests on 192.168.49.2:2379
2020-10-29 02:05:48.317794 I | embed: ready to serve client requests
2020-10-29 02:05:48.319520 I | embed: serving client requests on 127.0.0.1:2379
2020-10-29 02:05:48.345058 N | etcdserver/membership: set the initial cluster version to 3.4
2020-10-29 02:05:48.345133 I | etcdserver/api: enabled capabilities for version 3.4
2020-10-29 02:06:09.513960 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:06:10.228871 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:06:20.229100 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:06:30.229357 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:06:40.229324 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:06:50.228969 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:07:00.229247 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:07:10.228951 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:07:20.229081 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:07:30.229010 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:07:40.229242 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:07:50.229307 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:08:00.229140 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:08:10.229136 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:08:20.229144 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:08:30.229209 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:08:40.229323 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:08:50.229118 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:09:00.229131 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:09:10.229212 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:09:20.229200 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:09:30.229093 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:09:40.228921 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:09:50.228904 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:10:00.228941 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:10:10.229285 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:10:20.229159 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:10:30.229044 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:10:40.228964 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:10:50.229147 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:11:00.229310 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:11:10.228993 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-29 02:11:20.229488 I | etcdserver/api/etcdhttp: /health OK (status code 200)

==> kernel <==
02:11:22 up 14 min, 0 users, load average: 0.21, 0.32, 0.21
Linux minikube 4.18.0-193.19.1.el8_2.x86_64 #1 SMP Mon Sep 14 14:37:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04 LTS"

==> kube-apiserver [dc3d7539e7fb] <==
I1029 02:05:52.370576       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I1029 02:05:52.370590       1 controller.go:86] Starting OpenAPI controller
I1029 02:05:52.370601       1 naming_controller.go:291] Starting NamingConditionController
I1029 02:05:52.370636       1 establishing_controller.go:76] Starting EstablishingController
I1029 02:05:52.370648       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I1029 02:05:52.370657       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1029 02:05:52.370721       1 crd_finalizer.go:266] Starting CRDFinalizer
I1029 02:05:52.372495       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1029 02:05:52.372515       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I1029 02:05:52.373197       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
I1029 02:05:52.373231       1 available_controller.go:404] Starting AvailableConditionController
I1029 02:05:52.373237       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1029 02:05:52.407488       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1029 02:05:52.407567       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
E1029 02:05:52.501617       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg:
I1029 02:05:52.568415       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1029 02:05:52.570627       1 shared_informer.go:247] Caches are synced for crd-autoregister
I1029 02:05:52.570643       1 cache.go:39] Caches are synced for autoregister controller
I1029 02:05:52.572628       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I1029 02:05:52.573573       1 cache.go:39] Caches are synced for AvailableConditionController controller
I1029 02:05:53.367271       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1029 02:05:53.367329       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1029 02:05:53.378634       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I1029 02:05:53.381685       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I1029 02:05:53.381713       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I1029 02:05:53.773551       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1029 02:05:53.809229       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1029 02:05:53.942022       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I1029 02:05:53.943263       1 controller.go:606] quota admission added evaluator for: endpoints
I1029 02:05:53.947532       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1029 02:05:54.838368       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1029 02:05:55.275920       1 controller.go:606] quota admission added evaluator for: deployments.apps
I1029 02:05:55.345503       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1029 02:06:01.848238       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1029 02:06:01.912134       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1029 02:06:02.039977       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1029 02:06:32.830956       1 client.go:360] parsed scheme: "passthrough"
I1029 02:06:32.831023       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1029 02:06:32.831032       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1029 02:07:16.602681       1 client.go:360] parsed scheme: "passthrough"
I1029 02:07:16.602738       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1029 02:07:16.602748       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1029 02:07:48.645526       1 client.go:360] parsed scheme: "passthrough"
I1029 02:07:48.645668       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1029 02:07:48.645684       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1029 02:08:28.101748       1 client.go:360] parsed scheme: "passthrough"
I1029 02:08:28.101819       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1029 02:08:28.101829       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1029 02:09:11.765493       1 client.go:360] parsed scheme: "passthrough"
I1029 02:09:11.765556       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1029 02:09:11.765565       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1029 02:09:47.415953       1 client.go:360] parsed scheme: "passthrough"
I1029 02:09:47.416024       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1029 02:09:47.416033       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1029 02:10:22.726974       1 client.go:360] parsed scheme: "passthrough"
I1029 02:10:22.727051       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1029 02:10:22.727066       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1029 02:11:05.370032       1 client.go:360] parsed scheme: "passthrough"
I1029 02:11:05.370089       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1029 02:11:05.370097       1 clientconn.go:948] ClientConn switching balancer to "pick_first"

==> kube-controller-manager [37c7a218d3db] <==
I1029 02:06:01.285354       1 node_lifecycle_controller.go:380] Sending events to api server.
I1029 02:06:01.285691       1 taint_manager.go:163] Sending events to api server.
I1029 02:06:01.285787       1 node_lifecycle_controller.go:508] Controller will reconcile labels.
I1029 02:06:01.285832       1 controllermanager.go:549] Started "nodelifecycle"
I1029 02:06:01.285921       1 node_lifecycle_controller.go:542] Starting node controller
I1029 02:06:01.285934       1 shared_informer.go:240] Waiting for caches to sync for taint
I1029 02:06:01.535401       1 controllermanager.go:549] Started "persistentvolume-expander"
I1029 02:06:01.535493       1 expand_controller.go:319] Starting expand controller
I1029 02:06:01.535507       1 shared_informer.go:240] Waiting for caches to sync for expand
I1029 02:06:01.787241       1 controllermanager.go:549] Started "pvc-protection"
I1029 02:06:01.787489       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I1029 02:06:01.787536       1 pvc_protection_controller.go:110] Starting PVC protection controller
I1029 02:06:01.787539       1 shared_informer.go:240] Waiting for caches to sync for PVC protection
W1029 02:06:01.807791       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1029 02:06:01.841038       1 shared_informer.go:247] Caches are synced for TTL
I1029 02:06:01.842694       1 shared_informer.go:247] Caches are synced for PV protection
I1029 02:06:01.842725       1 shared_informer.go:247] Caches are synced for expand
I1029 02:06:01.848715       1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I1029 02:06:01.853419       1 shared_informer.go:247] Caches are synced for endpoint_slice
I1029 02:06:01.870026       1 shared_informer.go:247] Caches are synced for namespace
I1029 02:06:01.884004       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I1029 02:06:01.885205       1 shared_informer.go:247] Caches are synced for GC
I1029 02:06:01.885289       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I1029 02:06:01.885370       1 shared_informer.go:247] Caches are synced for daemon sets
I1029 02:06:01.885387       1 shared_informer.go:247] Caches are synced for stateful set
I1029 02:06:01.885403       1 shared_informer.go:247] Caches are synced for endpoint
I1029 02:06:01.888543       1 shared_informer.go:247] Caches are synced for ReplicationController
I1029 02:06:01.888576       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I1029 02:06:01.900580       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I1029 02:06:01.901794       1 shared_informer.go:247] Caches are synced for PVC protection
I1029 02:06:01.902283       1 shared_informer.go:247] Caches are synced for taint
I1029 02:06:01.902342       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
W1029 02:06:01.902404       1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1029 02:06:01.902443       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I1029 02:06:01.902661       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I1029 02:06:01.902836       1 taint_manager.go:187] Starting NoExecuteTaintManager
I1029 02:06:01.902993       1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I1029 02:06:01.903115       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I1029 02:06:01.903474       1 shared_informer.go:247] Caches are synced for persistent volume
I1029 02:06:01.942841       1 shared_informer.go:247] Caches are synced for HPA
I1029 02:06:01.942908       1 shared_informer.go:247] Caches are synced for job
I1029 02:06:01.942942       1 shared_informer.go:247] Caches are synced for service account
I1029 02:06:01.943998       1 shared_informer.go:247] Caches are synced for attach detach
I1029 02:06:01.944745       1 shared_informer.go:247] Caches are synced for bootstrap_signer
I1029 02:06:01.964279       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mft5w"
I1029 02:06:02.036115       1 shared_informer.go:247] Caches are synced for ReplicaSet
I1029 02:06:02.036843       1 shared_informer.go:247] Caches are synced for disruption
I1029 02:06:02.036853       1 disruption.go:339] Sending events to api server.
I1029 02:06:02.036923       1 shared_informer.go:247] Caches are synced for deployment
I1029 02:06:02.059052       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 1"
I1029 02:06:02.071792       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-6ddmm"
I1029 02:06:02.087985       1 shared_informer.go:247] Caches are synced for resource quota
I1029 02:06:02.138342       1 shared_informer.go:247] Caches are synced for resource quota
I1029 02:06:02.155198       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1029 02:06:02.435703       1 shared_informer.go:247] Caches are synced for garbage collector
I1029 02:06:02.435741       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1029 02:06:02.455431       1 shared_informer.go:247] Caches are synced for garbage collector
I1029 02:06:16.903354       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I1029 02:07:07.328690 1 event.go:291] "Event occurred" object="default/hello-minikube" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-minikube-6ddfcc9757 to 1" I1029 02:07:07.337860 1 event.go:291] "Event occurred" object="default/hello-minikube-6ddfcc9757" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-minikube-6ddfcc9757-t7pmx" ==> kube-proxy [1abebe3785bf] <== I1029 02:06:03.489947 1 node.go:136] Successfully retrieved node IP: 192.168.49.2 I1029 02:06:03.490067 1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation W1029 02:06:03.570289 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy I1029 02:06:03.570395 1 server_others.go:186] Using iptables Proxier. W1029 02:06:03.570406 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined I1029 02:06:03.570410 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local I1029 02:06:03.570999 1 server.go:650] Version: v1.19.2 I1029 02:06:03.571376 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 I1029 02:06:03.571398 1 conntrack.go:52] Setting nf_conntrack_max to 131072 E1029 02:06:03.571908 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro seclabel nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro seclabel nosuid nodev noexec relatime]) I1029 02:06:03.572015 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I1029 02:06:03.572071 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I1029 02:06:03.574131 1 config.go:315] Starting service config controller I1029 02:06:03.574153 1 shared_informer.go:240] Waiting for caches to sync for service config I1029 02:06:03.574176 1 config.go:224] Starting endpoint slice 
config controller I1029 02:06:03.574179 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I1029 02:06:03.674338 1 shared_informer.go:247] Caches are synced for endpoint slice config I1029 02:06:03.674351 1 shared_informer.go:247] Caches are synced for service config ==> kube-scheduler [c69842619ed0] <== I1029 02:05:47.703267 1 registry.go:173] Registering SelectorSpread plugin I1029 02:05:47.703338 1 registry.go:173] Registering SelectorSpread plugin I1029 02:05:48.142683 1 serving.go:331] Generated self-signed cert in-memory W1029 02:05:52.454432 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W1029 02:05:52.459193 1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W1029 02:05:52.459239 1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous. 
W1029 02:05:52.459250 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I1029 02:05:52.507965 1 registry.go:173] Registering SelectorSpread plugin I1029 02:05:52.508004 1 registry.go:173] Registering SelectorSpread plugin I1029 02:05:52.512407 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 I1029 02:05:52.512569 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1029 02:05:52.512590 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1029 02:05:52.512845 1 tlsconfig.go:240] Starting DynamicServingCertificateController E1029 02:05:52.516386 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1029 02:05:52.516470 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E1029 02:05:52.516539 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E1029 02:05:52.516594 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E1029 02:05:52.516667 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch 
*v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E1029 02:05:52.516729 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E1029 02:05:52.516910 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1029 02:05:52.516970 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E1029 02:05:52.517790 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E1029 02:05:52.517925 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E1029 02:05:52.518004 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource 
"storageclasses" in API group "storage.k8s.io" at the cluster scope E1029 02:05:52.518062 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E1029 02:05:52.518107 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E1029 02:05:53.426871 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope I1029 02:05:54.112808 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file ==> kubelet <== -- Logs begin at Thu 2020-10-29 02:05:19 UTC, end at Thu 2020-10-29 02:11:23 UTC. 
-- Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.253006 1717 policy_none.go:43] [cpumanager] none policy: Start Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.269193 1717 plugin_manager.go:114] Starting Kubelet Plugin Manager Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.514256 1717 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.515822 1717 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.521024 1717 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.527870 1717 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.532946 1717 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.591585 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-ca-certs") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641") Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.591713 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/ff7d12f9e4f14e202a85a7c5534a3129-kubeconfig") pod "kube-scheduler-minikube" (UID: "ff7d12f9e4f14e202a85a7c5534a3129") Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.591894 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/94a17e1b-fc40-404d-a170-aebc9b39d628-lib-modules") pod "kube-proxy-mft5w" (UID: "94a17e1b-fc40-404d-a170-aebc9b39d628") Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.591966 1717 reconciler.go:224] 
operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/f7c3d51df5e2ce4e433b64661ac4503c-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "f7c3d51df5e2ce4e433b64661ac4503c") Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.592021 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641") Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.592088 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/94a17e1b-fc40-404d-a170-aebc9b39d628-kube-proxy") pod "kube-proxy-mft5w" (UID: "94a17e1b-fc40-404d-a170-aebc9b39d628") Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.592130 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/f7c3d51df5e2ce4e433b64661ac4503c-k8s-certs") pod "kube-apiserver-minikube" (UID: "f7c3d51df5e2ce4e433b64661ac4503c") Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.592193 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/f7c3d51df5e2ce4e433b64661ac4503c-usr-local-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "f7c3d51df5e2ce4e433b64661ac4503c") Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.592229 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-k8s-certs") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641") Oct 29 02:06:02 minikube kubelet[1717]: I1029 
02:06:02.592253 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-kubeconfig") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641") Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.592278 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641") Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.592317 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/94a17e1b-fc40-404d-a170-aebc9b39d628-xtables-lock") pod "kube-proxy-mft5w" (UID: "94a17e1b-fc40-404d-a170-aebc9b39d628") Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.592351 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-zpljj" (UniqueName: "kubernetes.io/secret/94a17e1b-fc40-404d-a170-aebc9b39d628-kube-proxy-token-zpljj") pod "kube-proxy-mft5w" (UID: "94a17e1b-fc40-404d-a170-aebc9b39d628") Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.592379 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/d186e6390814d4dd7e770f47c08e98a2-etcd-certs") pod "etcd-minikube" (UID: "d186e6390814d4dd7e770f47c08e98a2") Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.592410 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/d186e6390814d4dd7e770f47c08e98a2-etcd-data") pod "etcd-minikube" (UID: "d186e6390814d4dd7e770f47c08e98a2") Oct 29 02:06:02 minikube kubelet[1717]: I1029 
02:06:02.592433 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/f7c3d51df5e2ce4e433b64661ac4503c-ca-certs") pod "kube-apiserver-minikube" (UID: "f7c3d51df5e2ce4e433b64661ac4503c") Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.592460 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/f7c3d51df5e2ce4e433b64661ac4503c-etc-ca-certificates") pod "kube-apiserver-minikube" (UID: "f7c3d51df5e2ce4e433b64661ac4503c") Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.592482 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-etc-ca-certificates") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641") Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.592515 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-usr-local-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641") Oct 29 02:06:02 minikube kubelet[1717]: I1029 02:06:02.592554 1717 reconciler.go:157] Reconciler: start to sync state Oct 29 02:06:12 minikube kubelet[1717]: I1029 02:06:12.231684 1717 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 29 02:06:12 minikube kubelet[1717]: I1029 02:06:12.321567 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-q5dbg" (UniqueName: "kubernetes.io/secret/e2b952a9-f930-458b-be5b-fbc4511f45ea-storage-provisioner-token-q5dbg") pod "storage-provisioner" (UID: "e2b952a9-f930-458b-be5b-fbc4511f45ea") Oct 29 02:06:12 minikube kubelet[1717]: 
I1029 02:06:12.321647 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/e2b952a9-f930-458b-be5b-fbc4511f45ea-tmp") pod "storage-provisioner" (UID: "e2b952a9-f930-458b-be5b-fbc4511f45ea") Oct 29 02:06:16 minikube kubelet[1717]: I1029 02:06:16.627977 1717 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 29 02:06:16 minikube kubelet[1717]: I1029 02:06:16.735137 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7592bd9b-b7bb-408e-8242-1b4e2832c444-config-volume") pod "coredns-f9fd979d6-6ddmm" (UID: "7592bd9b-b7bb-408e-8242-1b4e2832c444") Oct 29 02:06:16 minikube kubelet[1717]: I1029 02:06:16.735189 1717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-v5xf6" (UniqueName: "kubernetes.io/secret/7592bd9b-b7bb-408e-8242-1b4e2832c444-coredns-token-v5xf6") pod "coredns-f9fd979d6-6ddmm" (UID: "7592bd9b-b7bb-408e-8242-1b4e2832c444") Oct 29 02:06:17 minikube kubelet[1717]: W1029 02:06:17.290389 1717 pod_container_deletor.go:79] Container "4f2517c41e0de65a0162b327ac47e1aa9eb47b8a81c585a984b8dac1e0e8e9c0" not found in pod's containers Oct 29 02:06:17 minikube kubelet[1717]: W1029 02:06:17.292723 1717 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-6ddmm through plugin: invalid network status for Oct 29 02:06:18 minikube kubelet[1717]: W1029 02:06:18.296731 1717 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-6ddmm through plugin: invalid network status for Oct 29 02:07:07 minikube kubelet[1717]: I1029 02:07:07.349578 1717 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 29 02:07:07 minikube kubelet[1717]: I1029 02:07:07.459019 1717 reconciler.go:224] 
operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-45k5d" (UniqueName: "kubernetes.io/secret/8ea8c461-b6a8-4c16-9b57-71ba32ae8cba-default-token-45k5d") pod "hello-minikube-6ddfcc9757-t7pmx" (UID: "8ea8c461-b6a8-4c16-9b57-71ba32ae8cba") Oct 29 02:07:07 minikube kubelet[1717]: W1029 02:07:07.984192 1717 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-minikube-6ddfcc9757-t7pmx through plugin: invalid network status for Oct 29 02:07:08 minikube kubelet[1717]: W1029 02:07:08.541987 1717 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-minikube-6ddfcc9757-t7pmx through plugin: invalid network status for Oct 29 02:07:47 minikube kubelet[1717]: E1029 02:07:47.993588 1717 remote_image.go:113] PullImage "k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:48049->192.168.49.1:53: i/o timeout Oct 29 02:07:47 minikube kubelet[1717]: E1029 02:07:47.993663 1717 kuberuntime_image.go:51] Pull image "k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:48049->192.168.49.1:53: i/o timeout Oct 29 02:07:47 minikube kubelet[1717]: E1029 02:07:47.993767 1717 kuberuntime_manager.go:804] container 
&Container{Name:echoserver,Image:k8s.gcr.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45k5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod hello-minikube-6ddfcc9757-t7pmx_default(8ea8c461-b6a8-4c16-9b57-71ba32ae8cba): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:48049->192.168.49.1:53: i/o timeout Oct 29 02:07:47 minikube kubelet[1717]: E1029 02:07:47.993798 1717 pod_workers.go:191] Error syncing pod 8ea8c461-b6a8-4c16-9b57-71ba32ae8cba ("hello-minikube-6ddfcc9757-t7pmx_default(8ea8c461-b6a8-4c16-9b57-71ba32ae8cba)"), skipping: failed to "StartContainer" for "echoserver" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:48049->192.168.49.1:53: i/o timeout" Oct 29 02:07:48 minikube kubelet[1717]: E1029 02:07:48.743994 1717 pod_workers.go:191] Error syncing pod 8ea8c461-b6a8-4c16-9b57-71ba32ae8cba ("hello-minikube-6ddfcc9757-t7pmx_default(8ea8c461-b6a8-4c16-9b57-71ba32ae8cba)"), skipping: failed to "StartContainer" for "echoserver" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/echoserver:1.4\"" Oct 29 02:08:41 minikube kubelet[1717]: E1029 02:08:41.821624 1717 remote_image.go:113] PullImage "k8s.gcr.io/echoserver:1.4" from image service failed: rpc 
error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:56942->192.168.49.1:53: i/o timeout Oct 29 02:08:41 minikube kubelet[1717]: E1029 02:08:41.821666 1717 kuberuntime_image.go:51] Pull image "k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:56942->192.168.49.1:53: i/o timeout Oct 29 02:08:41 minikube kubelet[1717]: E1029 02:08:41.821751 1717 kuberuntime_manager.go:804] container &Container{Name:echoserver,Image:k8s.gcr.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45k5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod hello-minikube-6ddfcc9757-t7pmx_default(8ea8c461-b6a8-4c16-9b57-71ba32ae8cba): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:56942->192.168.49.1:53: i/o timeout Oct 29 02:08:41 minikube kubelet[1717]: E1029 02:08:41.821785 1717 pod_workers.go:191] Error syncing pod 8ea8c461-b6a8-4c16-9b57-71ba32ae8cba ("hello-minikube-6ddfcc9757-t7pmx_default(8ea8c461-b6a8-4c16-9b57-71ba32ae8cba)"), skipping: failed to "StartContainer" for "echoserver" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get 
https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:56942->192.168.49.1:53: i/o timeout" Oct 29 02:08:56 minikube kubelet[1717]: E1029 02:08:56.813236 1717 pod_workers.go:191] Error syncing pod 8ea8c461-b6a8-4c16-9b57-71ba32ae8cba ("hello-minikube-6ddfcc9757-t7pmx_default(8ea8c461-b6a8-4c16-9b57-71ba32ae8cba)"), skipping: failed to "StartContainer" for "echoserver" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/echoserver:1.4\"" Oct 29 02:09:51 minikube kubelet[1717]: E1029 02:09:51.827700 1717 remote_image.go:113] PullImage "k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:47348->192.168.49.1:53: i/o timeout Oct 29 02:09:51 minikube kubelet[1717]: E1029 02:09:51.827739 1717 kuberuntime_image.go:51] Pull image "k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:47348->192.168.49.1:53: i/o timeout Oct 29 02:09:51 minikube kubelet[1717]: E1029 02:09:51.827842 1717 kuberuntime_manager.go:804] container &Container{Name:echoserver,Image:k8s.gcr.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45k5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod 
hello-minikube-6ddfcc9757-t7pmx_default(8ea8c461-b6a8-4c16-9b57-71ba32ae8cba): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:47348->192.168.49.1:53: i/o timeout
Oct 29 02:09:51 minikube kubelet[1717]: E1029 02:09:51.827873 1717 pod_workers.go:191] Error syncing pod 8ea8c461-b6a8-4c16-9b57-71ba32ae8cba ("hello-minikube-6ddfcc9757-t7pmx_default(8ea8c461-b6a8-4c16-9b57-71ba32ae8cba)"), skipping: failed to "StartContainer" for "echoserver" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:47348->192.168.49.1:53: i/o timeout"
Oct 29 02:10:06 minikube kubelet[1717]: E1029 02:10:06.813086 1717 pod_workers.go:191] Error syncing pod 8ea8c461-b6a8-4c16-9b57-71ba32ae8cba ("hello-minikube-6ddfcc9757-t7pmx_default(8ea8c461-b6a8-4c16-9b57-71ba32ae8cba)"), skipping: failed to "StartContainer" for "echoserver" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/echoserver:1.4\""
Oct 29 02:10:20 minikube kubelet[1717]: E1029 02:10:20.813451 1717 pod_workers.go:191] Error syncing pod 8ea8c461-b6a8-4c16-9b57-71ba32ae8cba ("hello-minikube-6ddfcc9757-t7pmx_default(8ea8c461-b6a8-4c16-9b57-71ba32ae8cba)"), skipping: failed to "StartContainer" for "echoserver" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/echoserver:1.4\""
Oct 29 02:11:15 minikube kubelet[1717]: E1029 02:11:15.827741 1717 remote_image.go:113] PullImage "k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:59041->192.168.49.1:53: i/o timeout
Oct 29 02:11:15 minikube kubelet[1717]: E1029 02:11:15.827780 1717 kuberuntime_image.go:51] Pull image "k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:59041->192.168.49.1:53: i/o timeout
Oct 29 02:11:15 minikube kubelet[1717]: E1029 02:11:15.827866 1717 kuberuntime_manager.go:804] container &Container{Name:echoserver,Image:k8s.gcr.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45k5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod hello-minikube-6ddfcc9757-t7pmx_default(8ea8c461-b6a8-4c16-9b57-71ba32ae8cba): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:59041->192.168.49.1:53: i/o timeout
Oct 29 02:11:15 minikube kubelet[1717]: E1029 02:11:15.827892 1717 pod_workers.go:191] Error syncing pod 8ea8c461-b6a8-4c16-9b57-71ba32ae8cba ("hello-minikube-6ddfcc9757-t7pmx_default(8ea8c461-b6a8-4c16-9b57-71ba32ae8cba)"), skipping: failed to "StartContainer" for "echoserver" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.49.1:53: read udp 192.168.49.2:59041->192.168.49.1:53: i/o timeout"

==> storage-provisioner [e20801ccb3b4] <==
I1029 02:06:12.849971 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1029 02:06:12.858545 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1029 02:06:12.859131 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"abe7d036-6e0e-4fb7-bfad-9ea2a288e658", APIVersion:"v1", ResourceVersion:"383", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_cc1f75bf-d379-488b-a41e-2b8b6fd68915 became leader
I1029 02:06:12.859172 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_cc1f75bf-d379-488b-a41e-2b8b6fd68915!
I1029 02:06:12.959460 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_cc1f75bf-d379-488b-a41e-2b8b6fd68915!
RA489 commented 3 years ago

/kind support

RA489 commented 3 years ago

@dongyu If you don't mind, can you please try this WIP PR https://github.com/kubernetes/minikube/pull/8555 as a workaround?

dongyu commented 3 years ago

@dongyu If you don't mind, can you please try this WIP PR #8555 as a workaround?

Thank you. I think the issue is not the GFW in China; it's a DNS problem inside the minikube docker container. After using minikube ssh to log in to the minikube container, running curl google.com outputs: curl: (6) Could not resolve host: google.com
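For anyone trying to reproduce this, the node-level DNS failure can be confirmed with a couple of commands before blaming any particular registry. This is a sketch of a troubleshooting session, not output captured from this issue:

```shell
# Enter the minikube node container
minikube ssh

# Which nameserver is the node using? With the docker driver this is
# usually the docker network gateway (192.168.49.1 in the logs above).
cat /etc/resolv.conf

# If this times out, the gateway's DNS forwarder is not answering, so
# every registry (index.docker.io, k8s.gcr.io, ...) fails the same way.
nslookup index.docker.io
```

If nslookup against a public resolver (e.g. nslookup index.docker.io 8.8.8.8) succeeds where the default one fails, the problem is the gateway resolver, not general connectivity.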

bitcoffeeiux commented 3 years ago

@dongyu If you don't mind, can you please try this WIP PR #8555 as a workaround?

Thank you. I think the issue is not the GFW in China; it's a DNS problem inside the minikube docker container. After using minikube ssh to log in to the minikube container, running curl google.com outputs: curl: (6) Could not resolve host: google.com

I have encountered the same problem. Is there any solution?

dongyu commented 3 years ago

@dongyu If you don't mind, can you please try this WIP PR #8555 as a workaround?

Thank you. I think the issue is not the GFW in China; it's a DNS problem inside the minikube docker container. After using minikube ssh to log in to the minikube container, running curl google.com outputs: curl: (6) Could not resolve host: google.com

I have encountered the same problem. Is there any solution?

update DNS
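"update DNS" presumably means giving the host's Docker daemon explicit upstream nameservers, so the minikube node container no longer depends on a broken gateway resolver. A minimal sketch of /etc/docker/daemon.json, assuming Google/Cloudflare DNS is reachable from the host:

```json
{
  "dns": ["8.8.8.8", "1.1.1.1"]
}
```

After saving the file, the daemon has to be restarted (systemctl restart docker) and the cluster recreated (minikube delete && minikube start --driver=docker) so the node container picks up the new resolvers.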

ulan-yisaev commented 1 year ago

The same issue here. What does "update DNS" mean? Do you mean adding 8.8.8.8 to /etc/resolv.conf?

mjallow2021 commented 1 year ago

Encountered a similar issue a few days ago. I was able to fix it after reading this article from the minikube documentation.
Mine was a certificate issue, so allowing minikube to pull from an insecure registry fixed it. I also installed the metrics-server addon to make metrics work. Commands below:

Documentation: https://minikube.sigs.k8s.io/docs/handbook/registry/
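The command list itself did not survive in the comment above, but based on the linked registry handbook, the fix described was presumably along these lines (the registry address is a placeholder, not from this issue):

```shell
# Recreate the cluster, telling it to trust a private registry
# even though its TLS certificate cannot be verified
minikube start --insecure-registry="registry.example.local:5000"

# Enable the metrics-server addon so cluster metrics work
minikube addons enable metrics-server
```

Note that --insecure-registry only takes effect on a freshly created cluster, so an existing one has to be deleted first.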

bitcoffeeiux commented 1 year ago

่ฟ™ๆ˜ฏๆฅ่‡ชQQ้‚ฎ็ฎฑ็š„ๅ‡ๆœŸ่‡ชๅŠจๅ›žๅค้‚ฎไปถใ€‚ ย  ไฝ ๅฅฝ๏ผŒไฝ ็š„้‚ฎไปถๆˆ‘ๅทฒ็ปๆ”ถๅˆฐ๏ผŒ่ฐข่ฐข~