kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

NodeName of `minikube status` is incorrect #7452

Closed: govargo closed this issue 4 years ago

govargo commented 4 years ago

Steps to reproduce the issue:

$ out/minikube version
minikube version: v1.9.2
commit: 8c8ceabdf413afda2f9f850c279c4e91f8467b33
  1. Start a minikube cluster:

    $ out/minikube start --driver docker
  2. Check the node name. It is `minikube`:

    $ kubectl get nodes
    NAME       STATUS   ROLES    AGE   VERSION
    minikube   Ready    master   29s   v1.18.0
  3. Check the node name reported by `minikube status`. It shows `m01`; it should be `minikube`, so the reported name is wrong:

    $ minikube status
    m01
    host: Running
    kubelet: Running
    apiserver: Running
    kubeconfig: Configured

Optional: Full output of `minikube logs` command:

```
==> Docker <==
-- Logs begin at Mon 2020-04-06 14:47:41 UTC, end at Mon 2020-04-06 14:49:05 UTC. --
Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.418025863Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.418065887Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.418149310Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.418332162Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.418797991Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.418868708Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.418940748Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.419000279Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.419041197Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.419122163Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..."
type=io.containerd.grpc.v1 Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.419178645Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1 Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.419230730Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1 Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.419285357Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1 Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.419324488Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1 Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.419367198Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1 Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.419428282Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1 Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.419470040Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1 Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.419508779Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1 Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.419546849Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1 Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.419683679Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock" Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.419750353Z" level=info msg=serving... 
address="/var/run/docker/containerd/containerd.sock" Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.419792665Z" level=info msg="containerd successfully booted in 0.005751s" Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.431432627Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.431540840Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.431597732Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.431643131Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.432467813Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.432543833Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.432643865Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.432695758Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.747770050Z" level=warning msg="Your kernel does not support cgroup blkio weight" Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.747815982Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.747824615Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" Apr 06 14:48:03 
minikube dockerd[2068]: time="2020-04-06T14:48:03.747829342Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device" Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.747834116Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.747838511Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.748017810Z" level=info msg="Loading containers: start." Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.841669666Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.884002470Z" level=info msg="Loading containers: done." Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.925096690Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8 Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.925241458Z" level=info msg="Daemon has completed initialization" Apr 06 14:48:03 minikube systemd[1]: Started Docker Application Container Engine. 
Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.947603260Z" level=info msg="API listen on /var/run/docker.sock" Apr 06 14:48:03 minikube dockerd[2068]: time="2020-04-06T14:48:03.948291937Z" level=info msg="API listen on [::]:2376" Apr 06 14:48:15 minikube dockerd[2068]: time="2020-04-06T14:48:15.610405318Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3f5c38c9e44c237f3374cdc27e92cc33ec7568ca77f55c63a364e6d11eb2ed26/shim.sock" debug=false pid=2993 Apr 06 14:48:15 minikube dockerd[2068]: time="2020-04-06T14:48:15.611506049Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3bff00a623f4e11b3def5f7b24fb75359dffe908aca01a7a21134c81d8fda2e8/shim.sock" debug=false pid=2995 Apr 06 14:48:15 minikube dockerd[2068]: time="2020-04-06T14:48:15.791695226Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/47bab6b00991d20d8b6129db750e79490133d9e32f999b122f2476c7a97423b7/shim.sock" debug=false pid=3068 Apr 06 14:48:15 minikube dockerd[2068]: time="2020-04-06T14:48:15.809144157Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f9411a67b5c4891aebc8a15865c0bc6815ab187d4d344e5ca52fb06eaa38b944/shim.sock" debug=false pid=3083 Apr 06 14:48:15 minikube dockerd[2068]: time="2020-04-06T14:48:15.881923829Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/eeca3f798035e07bb89cec10218c05c46e6b1cd730aa6e77b8b7c37b1c0b8a25/shim.sock" debug=false pid=3117 Apr 06 14:48:15 minikube dockerd[2068]: time="2020-04-06T14:48:15.953665566Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/edf4bd6f4eb579145cefebcf55321cc6ddd286dfc42eac8228edeb8c4e997cb2/shim.sock" debug=false pid=3147 Apr 06 14:48:16 minikube dockerd[2068]: time="2020-04-06T14:48:16.156209993Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/90b108586bda5eb659a4bd7028610853d81c9949d35e4b4abd9da3ed20332b2d/shim.sock" 
debug=false pid=3218 Apr 06 14:48:16 minikube dockerd[2068]: time="2020-04-06T14:48:16.326678550Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/03fdccd13b42c78989b4731f699e12fe3c8b3e09200233b7fe3f88fcf1635384/shim.sock" debug=false pid=3263 Apr 06 14:48:32 minikube dockerd[2068]: time="2020-04-06T14:48:32.241318254Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7cdd1620837160edd0c0f4a4574bf3f67d1e4f587950e18bcc28ef8373091722/shim.sock" debug=false pid=3879 Apr 06 14:48:32 minikube dockerd[2068]: time="2020-04-06T14:48:32.470471839Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a777c84f420a0a8d7034f32418ea31c47e974dcd939cc45abd6b36fa2892f482/shim.sock" debug=false pid=3922 Apr 06 14:48:32 minikube dockerd[2068]: time="2020-04-06T14:48:32.678152342Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/27b865d7bf9c8a387e2ebb828e95cdeafc4d1319edb7a4ff9e6b762c2b859ff9/shim.sock" debug=false pid=3958 Apr 06 14:48:32 minikube dockerd[2068]: time="2020-04-06T14:48:32.900520266Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8e35a0c118a51dc5a2e4b38237aa5b5d5deca2e41cfc609e73133433186e043a/shim.sock" debug=false pid=4023 Apr 06 14:48:33 minikube dockerd[2068]: time="2020-04-06T14:48:33.923012527Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5163d1f73442f1e61b61727d228adb57554a50ed74bf9403e6b48a7c916ae4ff/shim.sock" debug=false pid=4105 Apr 06 14:48:34 minikube dockerd[2068]: time="2020-04-06T14:48:34.180523843Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/63988d89aaa70914b07a953b45a971ae173a5c345f74cf27fdc988d312ad538f/shim.sock" debug=false pid=4157 Apr 06 14:48:35 minikube dockerd[2068]: time="2020-04-06T14:48:35.051811901Z" level=info msg="shim containerd-shim started" 
address="/containerd-shim/moby/3fbbe60066921f73c5420a3cee8c2b25e587cfb29d492aed484cc15dce8f64fa/shim.sock" debug=false pid=4208 Apr 06 14:48:35 minikube dockerd[2068]: time="2020-04-06T14:48:35.376314925Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/828a637c7550c3dc29a66d6ba0595db43011dd0a47f8a7dc483a8e328aeabdb9/shim.sock" debug=false pid=4262 ==> container status <== CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 828a637c7550c 67da37a9a360e 30 seconds ago Running coredns 0 3fbbe60066921 63988d89aaa70 67da37a9a360e 31 seconds ago Running coredns 0 5163d1f73442f 8e35a0c118a51 4689081edb103 33 seconds ago Running storage-provisioner 0 27b865d7bf9c8 a777c84f420a0 43940c34f24f3 33 seconds ago Running kube-proxy 0 7cdd162083716 03fdccd13b42c 303ce5db0e90d 49 seconds ago Running etcd 0 f9411a67b5c48 90b108586bda5 a31f78c7c8ce1 49 seconds ago Running kube-scheduler 0 47bab6b00991d edf4bd6f4eb57 74060cea7f704 50 seconds ago Running kube-apiserver 0 3f5c38c9e44c2 eeca3f798035e d3e55153f52fb 50 seconds ago Running kube-controller-manager 0 3bff00a623f4e ==> coredns [63988d89aaa7] <== .:53 [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 CoreDNS-1.6.7 linux/amd64, go1.13.6, da7f65b ==> coredns [828a637c7550] <== .:53 [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 CoreDNS-1.6.7 linux/amd64, go1.13.6, da7f65b ==> describe nodes <== Name: minikube Roles: master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=8c8ceabdf413afda2f9f850c279c4e91f8467b33 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_04_06T23_48_24_0700 minikube.k8s.io/version=v1.9.2 node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: 
true CreationTimestamp: Mon, 06 Apr 2020 14:48:21 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Mon, 06 Apr 2020 14:49:05 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Mon, 06 Apr 2020 14:48:25 +0000 Mon, 06 Apr 2020 14:48:16 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Mon, 06 Apr 2020 14:48:25 +0000 Mon, 06 Apr 2020 14:48:16 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Mon, 06 Apr 2020 14:48:25 +0000 Mon, 06 Apr 2020 14:48:16 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Mon, 06 Apr 2020 14:48:25 +0000 Mon, 06 Apr 2020 14:48:25 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.64.40 Hostname: minikube Capacity: cpu: 2 ephemeral-storage: 16954224Ki hugepages-2Mi: 0 memory: 3936948Ki pods: 110 Allocatable: cpu: 2 ephemeral-storage: 16954224Ki hugepages-2Mi: 0 memory: 3936948Ki pods: 110 System Info: Machine ID: 53bacb1073f849e9ac4e6d2ac8e2d50d System UUID: 8b5111ea-0000-0000-9810-f018980a2bd5 Boot ID: 0506ba79-9a90-4c17-b779-688b5b3e0c8b Kernel Version: 4.19.107 OS Image: Buildroot 2019.02.10 Operating System: linux Architecture: amd64 Container Runtime Version: docker://19.3.8 Kubelet Version: v1.18.0 Kube-Proxy Version: v1.18.0 Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- kube-system coredns-66bff467f8-8dpr6 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 34s kube-system coredns-66bff467f8-t58jx 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 34s kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40s kube-system kube-apiserver-minikube 250m (12%) 0 (0%) 0 (0%) 0 (0%) 40s kube-system kube-controller-manager-minikube 200m (10%) 0 (0%) 0 (0%) 0 
(0%) 40s kube-system kube-proxy-cf2g2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34s kube-system kube-scheduler-minikube 100m (5%) 0 (0%) 0 (0%) 0 (0%) 40s kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 750m (37%) 0 (0%) memory 140Mi (3%) 340Mi (8%) ephemeral-storage 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 51s kubelet, minikube Starting kubelet. Normal NodeAllocatableEnforced 51s kubelet, minikube Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 50s (x3 over 51s) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 50s (x3 over 51s) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 50s (x3 over 51s) kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeHasSufficientMemory 40s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal Starting 40s kubelet, minikube Starting kubelet. Normal NodeHasNoDiskPressure 40s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 40s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeNotReady 40s kubelet, minikube Node minikube status is now: NodeNotReady Normal NodeAllocatableEnforced 40s kubelet, minikube Updated Node Allocatable limit across pods Normal NodeReady 40s kubelet, minikube Node minikube status is now: NodeReady Normal Starting 33s kube-proxy, minikube Starting kube-proxy. ==> dmesg <== [Apr 6 14:47] ERROR: earlyprintk= earlyser already used [ +0.000000] You have booted with nomodeset. 
This means your GPU drivers are DISABLED [ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly [ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it [ +0.201876] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xC0, should be 0x1D (20180810/tbprint-177) [ +4.674230] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184) [ +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620) [ +0.014847] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 [ +2.178095] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument [ +0.007424] systemd-fstab-generator[1099]: Ignoring "noauto" for root device [ +0.003265] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling. [ +0.000003] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.) [ +0.937073] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack. [ +0.608325] vboxguest: loading out-of-tree module taints kernel. [ +0.002862] vboxguest: PCI device not found, probably running on physical hardware. 
[ +3.496275] systemd-fstab-generator[1879]: Ignoring "noauto" for root device [Apr 6 14:48] kauditd_printk_skb: 65 callbacks suppressed [ +0.645074] systemd-fstab-generator[2274]: Ignoring "noauto" for root device [ +1.223697] systemd-fstab-generator[2493]: Ignoring "noauto" for root device [ +9.023255] kauditd_printk_skb: 107 callbacks suppressed [ +9.744780] systemd-fstab-generator[3592]: Ignoring "noauto" for root device [ +8.377920] kauditd_printk_skb: 32 callbacks suppressed [ +9.050651] kauditd_printk_skb: 50 callbacks suppressed ==> etcd [03fdccd13b42] <== [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2020-04-06 14:48:17.792968 I | etcdmain: etcd Version: 3.4.3 2020-04-06 14:48:17.794469 I | etcdmain: Git SHA: 3cf2f69b5 2020-04-06 14:48:17.794498 I | etcdmain: Go Version: go1.12.12 2020-04-06 14:48:17.794548 I | etcdmain: Go OS/Arch: linux/amd64 2020-04-06 14:48:17.794553 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2 [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2020-04-06 14:48:17.794786 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-04-06 14:48:17.796519 I | embed: name = minikube 2020-04-06 14:48:17.796548 I | embed: data dir = /var/lib/minikube/etcd 2020-04-06 14:48:17.796553 I | embed: member dir = /var/lib/minikube/etcd/member 2020-04-06 14:48:17.796556 I | embed: heartbeat = 100ms 2020-04-06 14:48:17.796558 I | embed: election = 1000ms 2020-04-06 14:48:17.796561 I | embed: snapshot count = 10000 2020-04-06 14:48:17.796567 I | embed: advertise client URLs = https://192.168.64.40:2379 2020-04-06 14:48:17.805345 I | etcdserver: starting member 8700b5f30fd8925d in cluster 39244c9d1c1d508b raft2020/04/06 14:48:17 INFO: 8700b5f30fd8925d switched to configuration voters=() raft2020/04/06 
14:48:17 INFO: 8700b5f30fd8925d became follower at term 0 raft2020/04/06 14:48:17 INFO: newRaft 8700b5f30fd8925d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] raft2020/04/06 14:48:17 INFO: 8700b5f30fd8925d became follower at term 1 raft2020/04/06 14:48:17 INFO: 8700b5f30fd8925d switched to configuration voters=(9727975250667803229) 2020-04-06 14:48:17.811207 W | auth: simple token is not cryptographically signed 2020-04-06 14:48:17.814794 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided] raft2020/04/06 14:48:17 INFO: 8700b5f30fd8925d switched to configuration voters=(9727975250667803229) 2020-04-06 14:48:17.821252 I | etcdserver/membership: added member 8700b5f30fd8925d [https://192.168.64.40:2380] to cluster 39244c9d1c1d508b 2020-04-06 14:48:17.821451 I | etcdserver: 8700b5f30fd8925d as single-node; fast-forwarding 9 ticks (election ticks 10) 2020-04-06 14:48:17.823414 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-04-06 14:48:17.823779 I | embed: listening for peers on 192.168.64.40:2380 2020-04-06 14:48:17.824065 I | embed: listening for metrics on http://127.0.0.1:2381 raft2020/04/06 14:48:18 INFO: 8700b5f30fd8925d is starting a new election at term 1 raft2020/04/06 14:48:18 INFO: 8700b5f30fd8925d became candidate at term 2 raft2020/04/06 14:48:18 INFO: 8700b5f30fd8925d received MsgVoteResp from 8700b5f30fd8925d at term 2 raft2020/04/06 14:48:18 INFO: 8700b5f30fd8925d became leader at term 2 raft2020/04/06 14:48:18 INFO: raft.node: 8700b5f30fd8925d elected leader 8700b5f30fd8925d at term 2 2020-04-06 14:48:18.409274 I | etcdserver: setting up the initial cluster version to 3.4 2020-04-06 14:48:18.409522 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.64.40:2379]} to cluster 39244c9d1c1d508b 2020-04-06 14:48:18.409640 I | 
embed: ready to serve client requests 2020-04-06 14:48:18.410958 I | embed: ready to serve client requests 2020-04-06 14:48:18.412821 I | embed: serving client requests on 127.0.0.1:2379 2020-04-06 14:48:18.414373 N | etcdserver/membership: set the initial cluster version to 3.4 2020-04-06 14:48:18.415286 I | etcdserver/api: enabled capabilities for version 3.4 2020-04-06 14:48:18.417998 I | embed: serving client requests on 192.168.64.40:2379 ==> kernel <== 14:49:05 up 1 min, 0 users, load average: 0.63, 0.25, 0.09 Linux minikube 4.19.107 #1 SMP Thu Mar 26 11:33:10 PDT 2020 x86_64 GNU/Linux PRETTY_NAME="Buildroot 2019.02.10" ==> kube-apiserver [edf4bd6f4eb5] <== W0406 14:48:19.335210 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. W0406 14:48:19.364981 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources. W0406 14:48:19.378999 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W0406 14:48:19.381658 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W0406 14:48:19.393422 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W0406 14:48:19.408327 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources. W0406 14:48:19.408369 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources. I0406 14:48:19.415690 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. 
I0406 14:48:19.415708 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I0406 14:48:19.417243 1 client.go:361] parsed scheme: "endpoint" I0406 14:48:19.417282 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I0406 14:48:19.439509 1 client.go:361] parsed scheme: "endpoint" I0406 14:48:19.439904 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I0406 14:48:21.261904 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt I0406 14:48:21.262328 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt I0406 14:48:21.262722 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key I0406 14:48:21.263494 1 secure_serving.go:178] Serving securely on [::]:8443 I0406 14:48:21.263546 1 controller.go:81] Starting OpenAPI AggregationController I0406 14:48:21.263625 1 tlsconfig.go:240] Starting DynamicServingCertificateController I0406 14:48:21.265006 1 apiservice_controller.go:94] Starting APIServiceRegistrationController I0406 14:48:21.265078 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0406 14:48:21.266701 1 crd_finalizer.go:266] Starting CRDFinalizer I0406 14:48:21.266862 1 available_controller.go:387] Starting AvailableConditionController I0406 14:48:21.266891 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0406 14:48:21.267160 1 autoregister_controller.go:141] Starting autoregister controller I0406 14:48:21.267215 1 cache.go:32] Waiting for caches to sync for autoregister controller I0406 14:48:21.267557 1 
cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0406 14:48:21.267603 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller I0406 14:48:21.333110 1 controller.go:86] Starting OpenAPI controller I0406 14:48:21.333228 1 customresource_discovery_controller.go:209] Starting DiscoveryController I0406 14:48:21.333333 1 naming_controller.go:291] Starting NamingConditionController I0406 14:48:21.333362 1 establishing_controller.go:76] Starting EstablishingController I0406 14:48:21.333442 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController I0406 14:48:21.333533 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0406 14:48:21.333571 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0406 14:48:21.333644 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister I0406 14:48:21.333720 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt I0406 14:48:21.333785 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt E0406 14:48:21.337634 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.64.40, ResourceVersion: 0, AdditionalErrorMsg: I0406 14:48:21.373119 1 cache.go:39] Caches are synced for AvailableConditionController controller I0406 14:48:21.373186 1 cache.go:39] Caches are synced for autoregister controller I0406 14:48:21.376864 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0406 14:48:21.371206 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller I0406 14:48:21.438510 1 shared_informer.go:230] Caches are synced for crd-autoregister I0406 14:48:22.264631 1 controller.go:130] OpenAPI 
AggregationController: action for item : Nothing (removed from the queue). I0406 14:48:22.265810 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0406 14:48:22.276133 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000 I0406 14:48:22.285301 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000 I0406 14:48:22.285334 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist. I0406 14:48:22.738227 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0406 14:48:22.791397 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io W0406 14:48:22.904834 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.64.40] I0406 14:48:22.906681 1 controller.go:606] quota admission added evaluator for: endpoints I0406 14:48:22.914577 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io I0406 14:48:24.493963 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io I0406 14:48:24.570216 1 controller.go:606] quota admission added evaluator for: serviceaccounts I0406 14:48:24.586740 1 controller.go:606] quota admission added evaluator for: deployments.apps I0406 14:48:24.687832 1 controller.go:606] quota admission added evaluator for: daemonsets.apps I0406 14:48:31.780212 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps I0406 14:48:31.783498 1 controller.go:606] quota admission added evaluator for: replicasets.apps ==> kube-controller-manager [eeca3f798035] <== I0406 14:48:30.651586 1 disruption.go:331] Starting disruption controller I0406 14:48:30.651613 1 shared_informer.go:223] Waiting for caches to sync for disruption I0406 14:48:30.767935 1 
controllermanager.go:533] Started "ttl" I0406 14:48:30.769161 1 ttl_controller.go:118] Starting TTL controller I0406 14:48:30.769328 1 shared_informer.go:223] Waiting for caches to sync for TTL I0406 14:48:31.016111 1 controllermanager.go:533] Started "attachdetach" I0406 14:48:31.016215 1 attach_detach_controller.go:338] Starting attach detach controller I0406 14:48:31.016358 1 shared_informer.go:223] Waiting for caches to sync for attach detach I0406 14:48:31.266895 1 controllermanager.go:533] Started "endpoint" I0406 14:48:31.267350 1 endpoints_controller.go:182] Starting endpoint controller I0406 14:48:31.267390 1 shared_informer.go:223] Waiting for caches to sync for endpoint I0406 14:48:31.516578 1 controllermanager.go:533] Started "podgc" I0406 14:48:31.516768 1 gc_controller.go:89] Starting GC controller I0406 14:48:31.517061 1 shared_informer.go:223] Waiting for caches to sync for GC I0406 14:48:31.667295 1 controllermanager.go:533] Started "csrcleaner" I0406 14:48:31.668220 1 cleaner.go:82] Starting CSR cleaner controller I0406 14:48:31.670363 1 shared_informer.go:223] Waiting for caches to sync for resource quota W0406 14:48:31.693527 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I0406 14:48:31.716051 1 shared_informer.go:230] Caches are synced for PV protection I0406 14:48:31.717499 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator I0406 14:48:31.719331 1 shared_informer.go:230] Caches are synced for endpoint_slice I0406 14:48:31.722212 1 shared_informer.go:230] Caches are synced for ReplicationController I0406 14:48:31.722806 1 shared_informer.go:230] Caches are synced for GC I0406 14:48:31.733562 1 shared_informer.go:230] Caches are synced for HPA E0406 14:48:31.734897 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on 
clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again E0406 14:48:31.738269 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again I0406 14:48:31.745390 1 shared_informer.go:230] Caches are synced for job I0406 14:48:31.766504 1 shared_informer.go:230] Caches are synced for bootstrap_signer I0406 14:48:31.767691 1 shared_informer.go:230] Caches are synced for daemon sets I0406 14:48:31.769965 1 shared_informer.go:230] Caches are synced for endpoint I0406 14:48:31.773312 1 shared_informer.go:230] Caches are synced for deployment I0406 14:48:31.778575 1 shared_informer.go:230] Caches are synced for TTL I0406 14:48:31.792567 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"4b9d8846-9eb0-4f05-ab5e-c1b4585938d9", APIVersion:"apps/v1", ResourceVersion:"196", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2 I0406 14:48:31.806680 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"e7917662-d829-49db-ab54-5bbddae30642", APIVersion:"apps/v1", ResourceVersion:"201", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-cf2g2 I0406 14:48:31.818980 1 shared_informer.go:230] Caches are synced for ReplicaSet I0406 14:48:31.823593 1 shared_informer.go:230] Caches are synced for certificate-csrsigning I0406 14:48:31.824630 1 shared_informer.go:230] Caches are synced for certificate-csrapproving I0406 14:48:31.887782 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"5bb31257-a1b1-4c76-a40b-4af70d6a72f7", APIVersion:"apps/v1", ResourceVersion:"349", 
FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-8dpr6 I0406 14:48:31.938447 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"5bb31257-a1b1-4c76-a40b-4af70d6a72f7", APIVersion:"apps/v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-t58jx I0406 14:48:31.969670 1 shared_informer.go:230] Caches are synced for stateful set I0406 14:48:31.970131 1 shared_informer.go:230] Caches are synced for persistent volume I0406 14:48:31.970380 1 shared_informer.go:230] Caches are synced for PVC protection I0406 14:48:31.989248 1 shared_informer.go:230] Caches are synced for expand I0406 14:48:31.994003 1 shared_informer.go:230] Caches are synced for namespace I0406 14:48:32.016597 1 shared_informer.go:230] Caches are synced for attach detach I0406 14:48:32.018286 1 shared_informer.go:230] Caches are synced for service account I0406 14:48:32.124832 1 shared_informer.go:223] Waiting for caches to sync for garbage collector I0406 14:48:32.169207 1 shared_informer.go:230] Caches are synced for taint I0406 14:48:32.169444 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: W0406 14:48:32.169508 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp. I0406 14:48:32.169573 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal. 
I0406 14:48:32.169989 1 taint_manager.go:187] Starting NoExecuteTaintManager I0406 14:48:32.170317 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"2452e59c-4c42-401f-823b-f3b18f1548d0", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller I0406 14:48:32.251857 1 shared_informer.go:230] Caches are synced for disruption I0406 14:48:32.251943 1 disruption.go:339] Sending events to api server. I0406 14:48:32.270533 1 shared_informer.go:230] Caches are synced for resource quota I0406 14:48:32.325248 1 shared_informer.go:230] Caches are synced for garbage collector I0406 14:48:32.325308 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0406 14:48:32.326346 1 shared_informer.go:230] Caches are synced for garbage collector I0406 14:48:32.373931 1 shared_informer.go:230] Caches are synced for resource quota ==> kube-proxy [a777c84f420a] <== W0406 14:48:32.759663 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy I0406 14:48:32.769129 1 node.go:136] Successfully retrieved node IP: 192.168.64.40 I0406 14:48:32.769151 1 server_others.go:186] Using iptables Proxier. 
W0406 14:48:32.769158 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined I0406 14:48:32.769161 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local I0406 14:48:32.769559 1 server.go:583] Version: v1.18.0 I0406 14:48:32.769980 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 I0406 14:48:32.769999 1 conntrack.go:52] Setting nf_conntrack_max to 131072 I0406 14:48:32.770656 1 conntrack.go:83] Setting conntrack hashsize to 32768 I0406 14:48:32.775571 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I0406 14:48:32.778175 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I0406 14:48:32.778995 1 config.go:133] Starting endpoints config controller I0406 14:48:32.779010 1 shared_informer.go:223] Waiting for caches to sync for endpoints config I0406 14:48:32.779046 1 config.go:315] Starting service config controller I0406 14:48:32.779051 1 shared_informer.go:223] Waiting for caches to sync for service config I0406 14:48:32.879377 1 shared_informer.go:230] Caches are synced for endpoints config I0406 14:48:32.879422 1 shared_informer.go:230] Caches are synced for service config ==> kube-scheduler [90b108586bda] <== I0406 14:48:16.748643 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0406 14:48:16.748747 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0406 14:48:17.465725 1 serving.go:313] Generated self-signed cert in-memory W0406 14:48:21.352122 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. 
Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0406 14:48:21.352156 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0406 14:48:21.352202 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous. W0406 14:48:21.352373 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0406 14:48:21.369004 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0406 14:48:21.369050 1 registry.go:150] Registering EvenPodsSpread predicate and priority function W0406 14:48:21.370936 1 authorization.go:47] Authorization is disabled W0406 14:48:21.370976 1 authentication.go:40] Authentication is disabled I0406 14:48:21.370988 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 I0406 14:48:21.382596 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0406 14:48:21.382633 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0406 14:48:21.384207 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 I0406 14:48:21.384503 1 tlsconfig.go:240] Starting DynamicServingCertificateController E0406 14:48:21.386797 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0406 14:48:21.387454 1 
reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0406 14:48:21.387934 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0406 14:48:21.388202 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0406 14:48:21.388365 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0406 14:48:21.388682 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0406 14:48:21.388840 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0406 14:48:21.389026 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0406 14:48:21.389249 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" 
cannot list resource "persistentvolumes" in API group "" at the cluster scope E0406 14:48:21.389591 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0406 14:48:21.389595 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0406 14:48:21.390781 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0406 14:48:21.392262 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0406 14:48:21.393490 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0406 14:48:21.394737 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0406 14:48:21.396061 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0406 14:48:21.396996 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list 
*v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0406 14:48:21.399742 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope I0406 14:48:23.283136 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0406 14:48:24.485199 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler... I0406 14:48:24.498700 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler E0406 14:48:27.087399 1 factory.go:503] pod: kube-system/storage-provisioner is already present in the active queue E0406 14:48:31.998411 1 factory.go:503] pod: kube-system/coredns-66bff467f8-8dpr6 is already present in the active queue ==> kubelet <== -- Logs begin at Mon 2020-04-06 14:47:41 UTC, end at Mon 2020-04-06 14:49:05 UTC. -- Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.051728 3601 kubelet.go:317] Watching apiserver Apr 06 14:48:25 minikube kubelet[3601]: E0406 14:48:25.382235 3601 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated. 
Apr 06 14:48:25 minikube kubelet[3601]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.401790 3601 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.8, apiVersion: 1.40.0 Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.402297 3601 server.go:1125] Started kubelet Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.403475 3601 server.go:145] Starting to listen on 0.0.0.0:10250 Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.404017 3601 server.go:393] Adding debug handlers to kubelet server. Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.408385 3601 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.409153 3601 volume_manager.go:265] Starting Kubelet Volume Manager Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.412722 3601 desired_state_of_world_populator.go:139] Desired state populator starts to run Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.480804 3601 status_manager.go:158] Starting to sync pod status with apiserver Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.481119 3601 kubelet.go:1821] Starting kubelet main sync loop. 
Apr 06 14:48:25 minikube kubelet[3601]: E0406 14:48:25.481273 3601 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful] Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.508969 3601 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.570098 3601 kubelet_node_status.go:70] Attempting to register node minikube Apr 06 14:48:25 minikube kubelet[3601]: E0406 14:48:25.584826 3601 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.585102 3601 kubelet_node_status.go:112] Node minikube was previously registered Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.585172 3601 kubelet_node_status.go:73] Successfully registered node minikube Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.714694 3601 setters.go:559] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-04-06 14:48:25.714671228 +0000 UTC m=+1.175937615 LastTransitionTime:2020-04-06 14:48:25.714671228 +0000 UTC m=+1.175937615 Reason:KubeletNotReady Message:container runtime status check may not have completed yet} Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.749950 3601 cpu_manager.go:184] [cpumanager] starting with none policy Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.749983 3601 cpu_manager.go:185] [cpumanager] reconciling every 10s Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.750000 3601 state_mem.go:36] [cpumanager] initializing new in-memory state store Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.750156 3601 state_mem.go:88] [cpumanager] updated default cpuset: "" Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.750184 3601 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]" Apr 06 14:48:25 minikube 
kubelet[3601]: I0406 14:48:25.750193 3601 policy_none.go:43] [cpumanager] none policy: Start Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.751295 3601 plugin_manager.go:114] Starting Kubelet Plugin Manager Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.785114 3601 topology_manager.go:233] [topologymanager] Topology Admit Handler Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.787161 3601 topology_manager.go:233] [topologymanager] Topology Admit Handler Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.788614 3601 topology_manager.go:233] [topologymanager] Topology Admit Handler Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.790077 3601 topology_manager.go:233] [topologymanager] Topology Admit Handler Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.920361 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/3016593d20758bbfe68aba26604a8e3d-k8s-certs") pod "kube-controller-manager-minikube" (UID: "3016593d20758bbfe68aba26604a8e3d") Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.920458 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/3016593d20758bbfe68aba26604a8e3d-kubeconfig") pod "kube-controller-manager-minikube" (UID: "3016593d20758bbfe68aba26604a8e3d") Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.920482 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/5795d0c442cb997ff93c49feeb9f6386-kubeconfig") pod "kube-scheduler-minikube" (UID: "5795d0c442cb997ff93c49feeb9f6386") Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.920528 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/303e84ea42578b4e465cd6fdd6e231c0-etcd-certs") pod 
"etcd-minikube" (UID: "303e84ea42578b4e465cd6fdd6e231c0") Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.920577 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/1c0955b1c245f671b5567e13df552338-ca-certs") pod "kube-apiserver-minikube" (UID: "1c0955b1c245f671b5567e13df552338") Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.920601 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/1c0955b1c245f671b5567e13df552338-k8s-certs") pod "kube-apiserver-minikube" (UID: "1c0955b1c245f671b5567e13df552338") Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.920623 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/3016593d20758bbfe68aba26604a8e3d-ca-certs") pod "kube-controller-manager-minikube" (UID: "3016593d20758bbfe68aba26604a8e3d") Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.920658 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/3016593d20758bbfe68aba26604a8e3d-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "3016593d20758bbfe68aba26604a8e3d") Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.920686 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/303e84ea42578b4e465cd6fdd6e231c0-etcd-data") pod "etcd-minikube" (UID: "303e84ea42578b4e465cd6fdd6e231c0") Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.920708 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/1c0955b1c245f671b5567e13df552338-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: 
"1c0955b1c245f671b5567e13df552338") Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.920732 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/3016593d20758bbfe68aba26604a8e3d-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "3016593d20758bbfe68aba26604a8e3d") Apr 06 14:48:25 minikube kubelet[3601]: I0406 14:48:25.920751 3601 reconciler.go:157] Reconciler: start to sync state Apr 06 14:48:31 minikube kubelet[3601]: I0406 14:48:31.819569 3601 topology_manager.go:233] [topologymanager] Topology Admit Handler Apr 06 14:48:31 minikube kubelet[3601]: I0406 14:48:31.866493 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/85aa6cfa-0a6b-4eea-bac7-b3219c1eb4bc-xtables-lock") pod "kube-proxy-cf2g2" (UID: "85aa6cfa-0a6b-4eea-bac7-b3219c1eb4bc") Apr 06 14:48:31 minikube kubelet[3601]: I0406 14:48:31.866707 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/85aa6cfa-0a6b-4eea-bac7-b3219c1eb4bc-lib-modules") pod "kube-proxy-cf2g2" (UID: "85aa6cfa-0a6b-4eea-bac7-b3219c1eb4bc") Apr 06 14:48:31 minikube kubelet[3601]: I0406 14:48:31.866799 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/85aa6cfa-0a6b-4eea-bac7-b3219c1eb4bc-kube-proxy") pod "kube-proxy-cf2g2" (UID: "85aa6cfa-0a6b-4eea-bac7-b3219c1eb4bc") Apr 06 14:48:31 minikube kubelet[3601]: I0406 14:48:31.866878 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-pkh45" (UniqueName: "kubernetes.io/secret/85aa6cfa-0a6b-4eea-bac7-b3219c1eb4bc-kube-proxy-token-pkh45") pod "kube-proxy-cf2g2" (UID: "85aa6cfa-0a6b-4eea-bac7-b3219c1eb4bc") Apr 06 14:48:32 minikube 
kubelet[3601]: I0406 14:48:32.192598 3601 topology_manager.go:233] [topologymanager] Topology Admit Handler Apr 06 14:48:32 minikube kubelet[3601]: I0406 14:48:32.268553 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/4e452596-519b-4c24-ae81-39d5725464cc-tmp") pod "storage-provisioner" (UID: "4e452596-519b-4c24-ae81-39d5725464cc") Apr 06 14:48:32 minikube kubelet[3601]: I0406 14:48:32.268614 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-d9xgc" (UniqueName: "kubernetes.io/secret/4e452596-519b-4c24-ae81-39d5725464cc-storage-provisioner-token-d9xgc") pod "storage-provisioner" (UID: "4e452596-519b-4c24-ae81-39d5725464cc") Apr 06 14:48:33 minikube kubelet[3601]: I0406 14:48:33.514557 3601 topology_manager.go:233] [topologymanager] Topology Admit Handler Apr 06 14:48:33 minikube kubelet[3601]: I0406 14:48:33.574878 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d11ba4fd-afc6-41da-bf25-046be538c20b-config-volume") pod "coredns-66bff467f8-8dpr6" (UID: "d11ba4fd-afc6-41da-bf25-046be538c20b") Apr 06 14:48:33 minikube kubelet[3601]: I0406 14:48:33.575237 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-4t4f4" (UniqueName: "kubernetes.io/secret/d11ba4fd-afc6-41da-bf25-046be538c20b-coredns-token-4t4f4") pod "coredns-66bff467f8-8dpr6" (UID: "d11ba4fd-afc6-41da-bf25-046be538c20b") Apr 06 14:48:34 minikube kubelet[3601]: W0406 14:48:34.113971 3601 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-8dpr6 through plugin: invalid network status for Apr 06 14:48:34 minikube kubelet[3601]: I0406 14:48:34.517883 3601 topology_manager.go:233] [topologymanager] Topology Admit Handler Apr 06 14:48:34 
minikube kubelet[3601]: W0406 14:48:34.644807 3601 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-8dpr6 through plugin: invalid network status for Apr 06 14:48:34 minikube kubelet[3601]: I0406 14:48:34.679522 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/404b731a-65b9-4dd3-86d8-ffb8c989774e-config-volume") pod "coredns-66bff467f8-t58jx" (UID: "404b731a-65b9-4dd3-86d8-ffb8c989774e") Apr 06 14:48:34 minikube kubelet[3601]: I0406 14:48:34.679577 3601 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-4t4f4" (UniqueName: "kubernetes.io/secret/404b731a-65b9-4dd3-86d8-ffb8c989774e-coredns-token-4t4f4") pod "coredns-66bff467f8-t58jx" (UID: "404b731a-65b9-4dd3-86d8-ffb8c989774e") Apr 06 14:48:35 minikube kubelet[3601]: W0406 14:48:35.281697 3601 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-t58jx through plugin: invalid network status for Apr 06 14:48:35 minikube kubelet[3601]: W0406 14:48:35.662001 3601 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-t58jx through plugin: invalid network status for ==> storage-provisioner [8e35a0c118a5] <== ```
govargo commented 4 years ago

/kind bug

/assign

govargo commented 4 years ago

I'm not sure whether `minikube` or `m01` is the correct name.

When `minikube start` runs, the node name shown in the output is `m01`. Is this expected?

```
👍  Starting control plane node m01 in cluster minikube
🔥  Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
```
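The mismatch above can be sketched in a few lines of Go. This is a hypothetical illustration, not minikube's actual code: `displayNodeName`, its parameters, and the naming rule are assumptions made for the example. The idea is that `minikube status` prints a machine-config default like `m01`, while for a single-node cluster the kubelet registers the node under the profile name (`minikube`), so the status display could resolve the name from the profile instead:

```go
package main

import "fmt"

// displayNodeName is a hypothetical helper (not minikube's real API).
// For a single-node cluster, the Kubernetes node is registered under the
// profile name (e.g. "minikube"), so prefer it over the machine default
// ("m01"). Multi-node clusters would need a distinguishing suffix.
func displayNodeName(profile, machineName string, nodeCount int) string {
	if nodeCount == 1 {
		return profile
	}
	return fmt.Sprintf("%s-%s", profile, machineName)
}

func main() {
	// Single-node case from this issue: status should show "minikube".
	fmt.Println(displayNodeName("minikube", "m01", 1))
	// A second node in the same cluster would still need a unique name.
	fmt.Println(displayNodeName("minikube", "m02", 2))
}
```

Either name could be made canonical; the point is that `kubectl get nodes` and `minikube status` should agree on it.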