kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

On v1.17.1 `minikube kubectl config view` fails for a v1.15.12 k8s cluster with `error: converting to : type names don't match (Unknown, RawExtension)` #10663

Open attilapiros opened 3 years ago

attilapiros commented 3 years ago

Steps to reproduce the issue:

  1. Install v1.17.1: `wget -O minikube-darwin-amd64 https://github.com/kubernetes/minikube/releases/download/v1.17.1/minikube-darwin-amd64` and `sudo install minikube-darwin-amd64 /usr/local/bin/minikube`
  2. Start the cluster with Kubernetes v1.15.12: `minikube start --memory 8192 --cpus 8 --kubernetes-version=v1.15.12`
  3. Check the config with `minikube kubectl -- config view`, which fails with the error below.

Meanwhile, v1.15.12 should be a supported version, given the oldest and newest Kubernetes versions defined in constants.go: https://github.com/kubernetes/minikube/blob/043bdca07e54ab6e4fc0457e3064048f34133d7e/pkg/minikube/constants/constants.go#L30-L36
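For context, the supported-range check those constants imply is a closed-interval comparison on the requested version. A minimal, self-contained sketch (not minikube's actual code; the bounds below are placeholders, the real values are in the linked constants.go), using the blang/semver library minikube also depends on:

```go
package main

import (
	"fmt"

	"github.com/blang/semver/v4" // assumption: any semver library works for this sketch
)

// supported reports whether the requested Kubernetes version lies within
// [oldest, newest]. Placeholder bounds stand in for OldestKubernetesVersion
// and NewestKubernetesVersion from the linked constants.go.
func supported(requested, oldest, newest string) bool {
	r, err := semver.ParseTolerant(requested) // ParseTolerant accepts the leading "v"
	if err != nil {
		return false
	}
	lo, _ := semver.ParseTolerant(oldest)
	hi, _ := semver.ParseTolerant(newest)
	return r.GTE(lo) && r.LTE(hi)
}

func main() {
	// Placeholder bounds for illustration only.
	fmt.Println(supported("v1.15.12", "v1.13.0", "v1.20.2")) // true: the version is in range
}
```

By that check, v1.15.12 is inside the advertised range, so the failure below is not a rejected version.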

Full output of failed command:

$ minikube kubectl -- config view
error: converting  to : type names don't match (Unknown, RawExtension), and no conversion 'func (runtime.Unknown, runtime.RawExtension) error' registered.

$ minikube kubectl --alsologtostderr -- config view
I0301 11:35:26.274569   56034 out.go:229] Setting OutFile to fd 1 ...
I0301 11:35:26.276466   56034 out.go:281] isatty.IsTerminal(1) = true
I0301 11:35:26.276473   56034 out.go:242] Setting ErrFile to fd 2...
I0301 11:35:26.276478   56034 out.go:281] isatty.IsTerminal(2) = true
I0301 11:35:26.276572   56034 root.go:291] Updating PATH: /Users/attilazsoltpiros/.minikube/bin
W0301 11:35:26.276824   56034 root.go:266] Error reading config file at /Users/attilazsoltpiros/.minikube/config/config.json: open /Users/attilazsoltpiros/.minikube/config/config.json: no such file or directory
I0301 11:35:26.278659   56034 mustload.go:66] Loading cluster: minikube
I0301 11:35:26.281772   56034 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0301 11:35:26.607270   56034 host.go:66] Checking if "minikube" exists ...
I0301 11:35:26.607891   56034 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0301 11:35:26.770441   56034 api_server.go:146] Checking apiserver status ...
I0301 11:35:26.771628   56034 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0301 11:35:26.771782   56034 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0301 11:35:26.935724   56034 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:55092 SSHKeyPath:/Users/attilazsoltpiros/.minikube/machines/minikube/id_rsa Username:docker}
I0301 11:35:27.056966   56034 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2689/cgroup
I0301 11:35:27.066430   56034 api_server.go:162] apiserver freezer: "7:freezer:/docker/b78264842caf2090a0face49e1e2ba1ce02c616f67e718c3bce3cb0348f7db8e/kubepods/burstable/pod666ce9bec0789b4a8b1de9bb993a1588/e717e4e3716cf47cd6a86c96e4bad2660ad1bb1e6796f58f2c442732e395c364"
I0301 11:35:27.066592   56034 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/b78264842caf2090a0face49e1e2ba1ce02c616f67e718c3bce3cb0348f7db8e/kubepods/burstable/pod666ce9bec0789b4a8b1de9bb993a1588/e717e4e3716cf47cd6a86c96e4bad2660ad1bb1e6796f58f2c442732e395c364/freezer.state
I0301 11:35:27.080089   56034 api_server.go:184] freezer state: "THAWED"
I0301 11:35:27.080141   56034 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55089/healthz ...
I0301 11:35:27.094989   56034 api_server.go:241] https://127.0.0.1:55089/healthz returned 200:
ok
I0301 11:35:27.095968   56034 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.15.12/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.15.12/bin/darwin/amd64/kubectl.sha1
I0301 11:35:27.096233   56034 kubectl.go:51] Running /Users/attilazsoltpiros/.minikube/cache/darwin/v1.15.12/kubectl [config view]
error: converting  to : type names don't match (Unknown, RawExtension), and no conversion 'func (runtime.Unknown, runtime.RawExtension) error' registered.
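The error names two k8s.io/apimachinery types. For orientation only — the sketch below does not reproduce the failure, it just shows what the two types hold (assuming k8s.io/apimachinery as a dependency):

```go
// Illustrative only: the failing conversion happens inside the downloaded
// v1.15 kubectl binary, not in this program.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
)

func main() {
	// runtime.Unknown carries raw bytes whose schema the decoder could not
	// resolve; runtime.RawExtension is the generic holder that free-form
	// kubeconfig "extensions" entries deserialize into. The reported error
	// is apimachinery saying the kubectl binary has no conversion function
	// registered between these two types.
	u := runtime.Unknown{Raw: []byte(`{"some":"extension data"}`)}
	x := runtime.RawExtension{Raw: u.Raw}
	fmt.Printf("Unknown: %d bytes; RawExtension: %d bytes\n", len(u.Raw), len(x.Raw))
}
```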

Full output of minikube start command used, if not already included:

$ minikube  start --memory 8192 --cpus 8 --kubernetes-version=v1.15.12
😄  minikube v1.17.1 on Darwin 10.15.7
✨  Automatically selected the docker driver. Other choices: hyperkit, virtualbox, ssh
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=8, Memory=8192MB) ...
    > kubeadm.sha1: 41 B / 41 B [----------------------------] 100.00% ? p/s 0s
    > kubectl.sha1: 41 B / 41 B [----------------------------] 100.00% ? p/s 0s
    > kubelet.sha1: 41 B / 41 B [----------------------------] 100.00% ? p/s 0s
    > kubeadm: 38.34 MiB / 38.34 MiB [----------------] 100.00% 6.34 MiB p/s 7s
    > kubectl: 41.06 MiB / 41.06 MiB [----------------] 100.00% 6.49 MiB p/s 7s
    > kubelet: 114.22 MiB / 114.22 MiB [-------------] 100.00% 7.73 MiB p/s 15s

    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
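The hint on the line above is the wrapper that fails here: per the `--alsologtostderr` output earlier (binary.go:56 and kubectl.go:51), `minikube kubectl` fetches a kubectl binary matching the cluster's Kubernetes version into `~/.minikube/cache` and runs it with the user's arguments. A hypothetical re-implementation of just that dispatch step, with the cache path and arguments taken from the log lines above (not minikube's actual code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"runtime"
)

// kubectlPath mirrors the cache layout seen in the log above,
// e.g. ~/.minikube/cache/darwin/v1.15.12/kubectl (kubectl.go:51).
func kubectlPath(home, version string) string {
	return filepath.Join(home, ".minikube", "cache", runtime.GOOS, version, "kubectl")
}

func main() {
	home, _ := os.UserHomeDir()
	bin := kubectlPath(home, "v1.15.12") // version taken from the repro; minikube derives it from the cluster
	cmd := exec.Command(bin, os.Args[1:]...) // e.g. ["config", "view"]
	cmd.Stdout, cmd.Stderr, cmd.Stdin = os.Stdout, os.Stderr, os.Stdin
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

So the `config view` error above is produced by the cached v1.15.12 kubectl itself, operating on the kubeconfig that minikube v1.17.1 wrote.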

Optional: Full output of minikube logs command:

==> Docker <== -- Logs begin at Mon 2021-03-01 10:27:53 UTC, end at Mon 2021-03-01 10:33:15 UTC. -- Mar 01 10:27:53 minikube systemd[1]: Starting Docker Application Container Engine... Mar 01 10:27:53 minikube dockerd[172]: time="2021-03-01T10:27:53.559296279Z" level=info msg="Starting up" Mar 01 10:27:53 minikube dockerd[172]: time="2021-03-01T10:27:53.561679756Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 01 10:27:53 minikube dockerd[172]: time="2021-03-01T10:27:53.561729638Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 01 10:27:53 minikube dockerd[172]: time="2021-03-01T10:27:53.561752447Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Mar 01 10:27:53 minikube dockerd[172]: time="2021-03-01T10:27:53.561774567Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 01 10:27:53 minikube dockerd[172]: time="2021-03-01T10:27:53.568877945Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 01 10:27:53 minikube dockerd[172]: time="2021-03-01T10:27:53.568940713Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 01 10:27:53 minikube dockerd[172]: time="2021-03-01T10:27:53.568975744Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Mar 01 10:27:53 minikube dockerd[172]: time="2021-03-01T10:27:53.568990908Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 01 10:27:53 minikube dockerd[172]: time="2021-03-01T10:27:53.599550020Z" level=info msg="Loading containers: start." Mar 01 10:27:53 minikube dockerd[172]: time="2021-03-01T10:27:53.676447987Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 01 10:27:53 minikube dockerd[172]: time="2021-03-01T10:27:53.721181546Z" level=info msg="Loading containers: done." Mar 01 10:27:53 minikube dockerd[172]: time="2021-03-01T10:27:53.742696962Z" level=info msg="Docker daemon" commit=8891c58 graphdriver(s)=overlay2 version=20.10.2 Mar 01 10:27:53 minikube dockerd[172]: time="2021-03-01T10:27:53.742841220Z" level=info msg="Daemon has completed initialization" Mar 01 10:27:53 minikube systemd[1]: Started Docker Application Container Engine. Mar 01 10:27:53 minikube dockerd[172]: time="2021-03-01T10:27:53.771374614Z" level=info msg="API listen on /run/docker.sock" Mar 01 10:27:56 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring. Mar 01 10:27:56 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed. Mar 01 10:27:56 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring. Mar 01 10:27:56 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring. Mar 01 10:27:56 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring. Mar 01 10:27:56 minikube systemd[1]: Stopping Docker Application Container Engine... 
Mar 01 10:27:56 minikube dockerd[172]: time="2021-03-01T10:27:56.774417740Z" level=info msg="Processing signal 'terminated'" Mar 01 10:27:56 minikube dockerd[172]: time="2021-03-01T10:27:56.776542423Z" level=info msg="Daemon shutdown complete" Mar 01 10:27:56 minikube systemd[1]: docker.service: Succeeded. Mar 01 10:27:56 minikube systemd[1]: Stopped Docker Application Container Engine. Mar 01 10:27:56 minikube systemd[1]: Starting Docker Application Container Engine... Mar 01 10:27:56 minikube dockerd[435]: time="2021-03-01T10:27:56.837208511Z" level=info msg="Starting up" Mar 01 10:27:56 minikube dockerd[435]: time="2021-03-01T10:27:56.839480213Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 01 10:27:56 minikube dockerd[435]: time="2021-03-01T10:27:56.839524081Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 01 10:27:56 minikube dockerd[435]: time="2021-03-01T10:27:56.839547026Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Mar 01 10:27:56 minikube dockerd[435]: time="2021-03-01T10:27:56.839557786Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 01 10:27:56 minikube dockerd[435]: time="2021-03-01T10:27:56.840728539Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 01 10:27:56 minikube dockerd[435]: time="2021-03-01T10:27:56.840763471Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 01 10:27:56 minikube dockerd[435]: time="2021-03-01T10:27:56.840778029Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Mar 01 10:27:56 minikube dockerd[435]: time="2021-03-01T10:27:56.840788225Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 01 10:27:56 minikube dockerd[435]: time="2021-03-01T10:27:56.852837888Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Mar 01 10:27:56 minikube dockerd[435]: time="2021-03-01T10:27:56.855675884Z" level=info msg="Loading containers: start." Mar 01 10:27:56 minikube dockerd[435]: time="2021-03-01T10:27:56.955021718Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 01 10:27:57 minikube dockerd[435]: time="2021-03-01T10:27:57.001234837Z" level=info msg="Loading containers: done." Mar 01 10:27:57 minikube dockerd[435]: time="2021-03-01T10:27:57.028621875Z" level=info msg="Docker daemon" commit=8891c58 graphdriver(s)=overlay2 version=20.10.2 Mar 01 10:27:57 minikube dockerd[435]: time="2021-03-01T10:27:57.028732575Z" level=info msg="Daemon has completed initialization" Mar 01 10:27:57 minikube systemd[1]: Started Docker Application Container Engine. Mar 01 10:27:57 minikube dockerd[435]: time="2021-03-01T10:27:57.055825228Z" level=info msg="API listen on [::]:2376" Mar 01 10:27:57 minikube dockerd[435]: time="2021-03-01T10:27:57.060138273Z" level=info msg="API listen on /var/run/docker.sock" Mar 01 10:27:59 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring. Mar 01 10:28:42 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring. 
==> container status <== CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 8fe21cd4a0d73 eb516548c180f 4 minutes ago Running coredns 0 f263d5b9a9a89 110abf0b6b615 85069258b98ac 4 minutes ago Running storage-provisioner 0 82478921936fe 2ffbf0fd9eb76 00206e1127f2a 4 minutes ago Running kube-proxy 0 ec50938dabb63 f5566d6f05dd6 2c4adeb21b4ff 4 minutes ago Running etcd 0 a271a0a2bec8a e064c1cb5f06b 7b4d4985877a5 4 minutes ago Running kube-controller-manager 0 0127842872ff1 5c515d0794d20 196d53938faab 4 minutes ago Running kube-scheduler 0 b299a67f5afc1 e717e4e3716cf c81971987f04a 4 minutes ago Running kube-apiserver 0 8e8d8cc2fe39f ==> coredns [8fe21cd4a0d7] <== .:53 2021-03-01T10:29:17.489Z [INFO] CoreDNS-1.3.1 2021-03-01T10:29:17.489Z [INFO] linux/amd64, go1.11.4, 6b56a9c CoreDNS-1.3.1 linux/amd64, go1.11.4, 6b56a9c 2021-03-01T10:29:17.490Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843 ==> describe nodes <== Name: minikube Roles: master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=043bdca07e54ab6e4fc0457e3064048f34133d7e minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_03_01T11_28_59_0700 minikube.k8s.io/version=v1.17.1 node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Mon, 01 Mar 2021 10:28:56 +0000 Taints: Unschedulable: false Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Mon, 01 Mar 2021 10:32:57 +0000 Mon, 01 Mar 2021 10:28:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Mon, 01 Mar 2021 10:32:57 +0000 Mon, 01 Mar 2021 10:28:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Mon, 01 Mar 2021 10:32:57 +0000 Mon, 01 Mar 2021 10:28:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Mon, 01 Mar 2021 10:32:57 +0000 Mon, 01 Mar 2021 10:28:52 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.49.2 Hostname: minikube Capacity: cpu: 8 ephemeral-storage: 61255492Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 9433304Ki pods: 110 Allocatable: cpu: 8 ephemeral-storage: 61255492Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 9433304Ki pods: 110 System Info: Machine ID: f76b025c8b0549ef8df4937ad5eab263 System UUID: fa867ee7-c635-4206-88f1-a86664370a79 Boot ID: fac65739-eaa5-4062-9758-54570fe92bfc Kernel Version: 4.19.121-linuxkit OS Image: Ubuntu 20.04.1 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.2 Kubelet Version: v1.15.12 Kube-Proxy Version: v1.15.12 PodCIDR: 10.244.0.0/24 Non-terminated Pods: (7 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- kube-system coredns-5d4dd4b4db-9l72m 100m (1%) 0 (0%) 70Mi (0%) 170Mi (1%) 4m3s kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m55s kube-system kube-apiserver-minikube 250m (3%) 0 (0%) 0 (0%) 0 (0%) 3m3s kube-system kube-controller-manager-minikube 200m (2%) 0 (0%) 0 (0%) 0 (0%) 2m55s kube-system kube-proxy-mlrh7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m2s kube-system kube-scheduler-minikube 100m (1%) 0 (0%) 0 (0%) 0 (0%) 3m22s 
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m17s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 650m (8%) 0 (0%) memory 70Mi (0%) 170Mi (1%) ephemeral-storage 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 4m27s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 4m26s (x8 over 4m26s) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 4m26s (x8 over 4m26s) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 4m26s (x7 over 4m26s) kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 4m26s kubelet, minikube Updated Node Allocatable limit across pods Normal Starting 4m2s kube-proxy, minikube Starting kube-proxy. ==> dmesg <== [Mar 1 03:47] ERROR: earlyprintk= earlyser already used [ +0.000000] ERROR: earlyprintk= earlyser already used [ +0.000000] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0x7E, should be 0xDB (20180810/tbprint-173) [ +2.600775] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds). [ +0.029367] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-182) [ +0.001705] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-618) [ +7.050994] grpcfuse: loading out-of-tree module taints kernel. [Mar 1 05:27] hrtimer: interrupt took 7481658 ns [Mar 1 10:09] tee (184303): /proc/183496/oom_adj is deprecated, please use /proc/183496/oom_score_adj instead. ==> etcd [f5566d6f05dd] <== 2021-03-01 10:28:52.342225 I | etcdmain: etcd Version: 3.3.10 2021-03-01 10:28:52.342328 I | etcdmain: Git SHA: 27fc7e2 2021-03-01 10:28:52.342334 I | etcdmain: Go Version: go1.10.4 2021-03-01 10:28:52.342337 I | etcdmain: Go OS/Arch: linux/amd64 2021-03-01 10:28:52.342339 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8 2021-03-01 10:28:52.342450 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2021-03-01 10:28:52.343063 I | embed: listening for peers on https://192.168.49.2:2380 2021-03-01 10:28:52.343162 I | embed: listening for client requests on 127.0.0.1:2379 2021-03-01 10:28:52.343185 I | embed: listening for client requests on 192.168.49.2:2379 2021-03-01 10:28:52.346011 I | etcdserver: name = minikube 2021-03-01 10:28:52.346049 I | etcdserver: data dir = /var/lib/minikube/etcd 2021-03-01 10:28:52.346055 I | etcdserver: member dir = /var/lib/minikube/etcd/member 2021-03-01 10:28:52.346058 I | etcdserver: heartbeat = 100ms 2021-03-01 10:28:52.346060 I | etcdserver: election = 1000ms 2021-03-01 10:28:52.346062 I | etcdserver: snapshot count = 10000 2021-03-01 10:28:52.346068 I | etcdserver: advertise client URLs = https://192.168.49.2:2379 2021-03-01 10:28:52.346071 I | etcdserver: initial advertise peer URLs = https://192.168.49.2:2380 2021-03-01 10:28:52.346076 I | etcdserver: initial cluster = minikube=https://192.168.49.2:2380 2021-03-01 10:28:52.349510 I | etcdserver: starting member aec36adc501070cc in cluster fa54960ea34d58be 2021-03-01 10:28:52.349652 I | raft: aec36adc501070cc became follower at term 0 2021-03-01 10:28:52.349668 I | raft: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, 
lastindex: 0, lastterm: 0] 2021-03-01 10:28:52.349673 I | raft: aec36adc501070cc became follower at term 1 2021-03-01 10:28:52.355876 W | auth: simple token is not cryptographically signed 2021-03-01 10:28:52.358943 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided] 2021-03-01 10:28:52.361595 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2021-03-01 10:28:52.361710 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10) 2021-03-01 10:28:52.361760 I | embed: listening for metrics on http://192.168.49.2:2381 2021-03-01 10:28:52.362115 I | embed: listening for metrics on http://127.0.0.1:2381 2021-03-01 10:28:52.362485 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be 2021-03-01 10:28:52.550635 I | raft: aec36adc501070cc is starting a new election at term 1 2021-03-01 10:28:52.550682 I | raft: aec36adc501070cc became candidate at term 2 2021-03-01 10:28:52.550701 I | raft: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2 2021-03-01 10:28:52.550708 I | raft: aec36adc501070cc became leader at term 2 2021-03-01 10:28:52.550712 I | raft: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2 2021-03-01 10:28:52.550991 I | etcdserver: setting up the initial cluster version to 3.3 2021-03-01 10:28:52.551539 N | etcdserver/membership: set the initial cluster version to 3.3 2021-03-01 10:28:52.551603 I | etcdserver/api: enabled capabilities for version 3.3 2021-03-01 10:28:52.551626 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be 2021-03-01 10:28:52.551798 I | embed: ready to serve client requests 2021-03-01 10:28:52.551999 I | embed: ready to serve client requests 2021-03-01 10:28:52.553171 I | embed: serving client requests on 127.0.0.1:2379 2021-03-01 10:28:52.554352 I | embed: serving client requests on 192.168.49.2:2379 proto: no coders for int proto: no encoder for ValueSize int [GetProperties] ==> kernel <== 10:33:22 up 6:46, 0 users, load average: 0.62, 0.66, 0.63 Linux minikube 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.1 LTS" ==> kube-apiserver [e717e4e3716c] <== E0301 10:28:54.459323 1 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted E0301 10:28:54.459581 1 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted E0301 10:28:54.459743 1 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted E0301 10:28:54.459874 1 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted E0301 10:28:54.460058 1 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted E0301 10:28:54.460229 1 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted E0301 10:28:54.460436 1 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted 
E0301 10:28:54.460654 1 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted E0301 10:28:54.460764 1 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted E0301 10:28:54.460797 1 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted E0301 10:28:54.460922 1 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted I0301 10:28:54.461117 1 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook. I0301 10:28:54.461203 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota. I0301 10:28:54.462905 1 client.go:354] parsed scheme: "" I0301 10:28:54.463061 1 client.go:354] scheme "" not registered, fallback to default scheme I0301 10:28:54.463160 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 }] I0301 10:28:54.463449 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }] I0301 10:28:54.472352 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }] I0301 10:28:54.473152 1 client.go:354] parsed scheme: "" I0301 10:28:54.473266 1 client.go:354] scheme "" not registered, fallback to default scheme I0301 10:28:54.473312 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 }] I0301 10:28:54.473544 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }] I0301 10:28:54.481945 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }] I0301 10:28:55.903922 1 secure_serving.go:116] Serving securely on [::]:8443 I0301 10:28:55.903987 1 apiservice_controller.go:94] Starting APIServiceRegistrationController I0301 10:28:55.903998 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0301 10:28:55.904657 1 autoregister_controller.go:140] Starting autoregister controller I0301 10:28:55.904699 1 cache.go:32] Waiting for caches to sync for autoregister controller I0301 10:28:55.904849 1 crd_finalizer.go:255] Starting CRDFinalizer I0301 10:28:55.904997 1 crdregistration_controller.go:112] Starting crd-autoregister controller I0301 10:28:55.905081 1 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller I0301 10:28:55.905127 1 controller.go:83] Starting OpenAPI controller I0301 10:28:55.905187 1 customresource_discovery_controller.go:208] Starting DiscoveryController I0301 10:28:55.905232 1 naming_controller.go:288] Starting NamingConditionController I0301 10:28:55.905300 1 establishing_controller.go:73] Starting EstablishingController I0301 10:28:55.905331 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController I0301 10:28:55.907217 1 available_controller.go:376] Starting AvailableConditionController I0301 10:28:55.907296 1 cache.go:32] Waiting for caches to sync for AvailableConditionController 
controller I0301 10:28:55.907489 1 controller.go:81] Starting OpenAPI AggregationController E0301 10:28:55.908194 1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: I0301 10:28:55.992350 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io I0301 10:28:56.004658 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0301 10:28:56.004969 1 cache.go:39] Caches are synced for autoregister controller I0301 10:28:56.005257 1 controller_utils.go:1036] Caches are synced for crd-autoregister controller I0301 10:28:56.022243 1 cache.go:39] Caches are synced for AvailableConditionController controller I0301 10:28:56.903339 1 controller.go:107] OpenAPI AggregationController: Processing item I0301 10:28:56.903698 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0301 10:28:56.903735 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0301 10:28:56.912770 1 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000 I0301 10:28:56.916723 1 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000 I0301 10:28:56.916763 1 storage_scheduling.go:128] all system priority classes are created successfully or already exist. I0301 10:28:57.172156 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0301 10:28:57.205683 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io W0301 10:28:57.267204 1 lease.go:223] Resetting endpoints for master service "kubernetes" to [192.168.49.2] I0301 10:28:57.267781 1 controller.go:606] quota admission added evaluator for: endpoints I0301 10:28:58.153122 1 controller.go:606] quota admission added evaluator for: serviceaccounts I0301 10:28:58.740416 1 controller.go:606] quota admission added evaluator for: deployments.apps I0301 10:28:59.081248 1 controller.go:606] quota admission added evaluator for: daemonsets.apps I0301 10:29:14.792989 1 controller.go:606] quota admission added evaluator for: replicasets.apps I0301 10:29:15.046645 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps ==> kube-controller-manager [e064c1cb5f06] <== I0301 10:29:14.090547 1 controller_utils.go:1029] Waiting for caches to sync for TTL controller I0301 10:29:14.238693 1 node_lifecycle_controller.go:393] Sending events to api server. I0301 10:29:14.239205 1 node_lifecycle_controller.go:429] Controller is using taint based evictions. I0301 10:29:14.239445 1 taint_manager.go:158] Sending events to api server. I0301 10:29:14.239873 1 node_lifecycle_controller.go:526] Controller will reconcile labels. I0301 10:29:14.239967 1 node_lifecycle_controller.go:545] Controller will taint node by condition. 
I0301 10:29:14.240033 1 controllermanager.go:532] Started "nodelifecycle" I0301 10:29:14.240134 1 node_lifecycle_controller.go:569] Starting node controller I0301 10:29:14.240181 1 controller_utils.go:1029] Waiting for caches to sync for taint controller I0301 10:29:14.388326 1 node_lifecycle_controller.go:77] Sending events to api server E0301 10:29:14.388549 1 core.go:160] failed to start cloud node lifecycle controller: no cloud provider provided W0301 10:29:14.388569 1 controllermanager.go:524] Skipping "cloud-node-lifecycle" I0301 10:29:14.389816 1 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller I0301 10:29:14.391873 1 controller_utils.go:1029] Waiting for caches to sync for resource quota controller W0301 10:29:14.392963 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I0301 10:29:14.417589 1 controller_utils.go:1036] Caches are synced for certificate controller I0301 10:29:14.437315 1 controller_utils.go:1036] Caches are synced for node controller I0301 10:29:14.437378 1 range_allocator.go:157] Starting range CIDR allocator I0301 10:29:14.437458 1 controller_utils.go:1029] Waiting for caches to sync for cidrallocator controller I0301 10:29:14.440584 1 controller_utils.go:1036] Caches are synced for bootstrap_signer controller I0301 10:29:14.441333 1 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller I0301 10:29:14.442135 1 controller_utils.go:1036] Caches are synced for certificate controller I0301 10:29:14.446157 1 controller_utils.go:1036] Caches are synced for namespace controller E0301 10:29:14.459956 1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again I0301 10:29:14.460243 1 log.go:172] [INFO] signed certificate with serial number 392965831567322858961424691659307660538279978023 I0301 10:29:14.462543 1 controller_utils.go:1036] Caches are synced for PVC protection controller I0301 10:29:14.485480 1 controller_utils.go:1036] Caches are synced for job controller I0301 10:29:14.488489 1 controller_utils.go:1036] Caches are synced for HPA controller I0301 10:29:14.490350 1 controller_utils.go:1036] Caches are synced for GC controller I0301 10:29:14.490495 1 controller_utils.go:1036] Caches are synced for service account controller I0301 10:29:14.490965 1 controller_utils.go:1036] Caches are synced for TTL controller I0301 10:29:14.537732 1 controller_utils.go:1036] Caches are synced for cidrallocator controller I0301 10:29:14.539653 1 controller_utils.go:1036] Caches are synced for ReplicationController controller I0301 10:29:14.539996 1 controller_utils.go:1036] Caches are synced for stateful set controller I0301 10:29:14.540945 1 range_allocator.go:310] Set node minikube PodCIDR to 10.244.0.0/24 I0301 10:29:14.791124 1 controller_utils.go:1036] Caches are synced for deployment controller I0301 10:29:14.796721 1 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"61b6da84-6c0b-4f14-b053-1c5d083deb74", APIVersion:"apps/v1", ResourceVersion:"197", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5d4dd4b4db to 1 I0301 10:29:14.802498 1 controller_utils.go:1036] Caches are synced for ReplicaSet controller 
I0301 10:29:14.806926 1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5d4dd4b4db", UID:"5714b2a7-37a6-433e-afd7-a5defb2efdc6", APIVersion:"apps/v1", ResourceVersion:"342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5d4dd4b4db-9l72m I0301 10:29:15.043515 1 controller_utils.go:1036] Caches are synced for daemon sets controller I0301 10:29:15.051792 1 event.go:258] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"338b6f83-6e0d-4d8d-b251-ce68b9d10370", APIVersion:"apps/v1", ResourceVersion:"191", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-mlrh7 I0301 10:29:15.060540 1 controller_utils.go:1036] Caches are synced for endpoint controller E0301 10:29:15.061447 1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"338b6f83-6e0d-4d8d-b251-ce68b9d10370", ResourceVersion:"191", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63750191339, loc:(*time.Location)(0x7340ba0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0020b0340), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc002040d40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), 
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0020b0360), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0020b0380), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.15.12", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0020b03c0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001fe4eb0), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f65fa8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001fe9020), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000edb0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001f65fe8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again I0301 10:29:15.091574 1 controller_utils.go:1036] Caches are synced for attach detach controller I0301 10:29:15.130786 1 controller_utils.go:1036] Caches are synced for PV protection controller I0301 10:29:15.140177 1 controller_utils.go:1036] Caches are synced for persistent volume controller I0301 10:29:15.189877 1 controller_utils.go:1036] Caches are synced for expand controller I0301 10:29:15.192165 1 controller_utils.go:1036] Caches are synced for resource quota controller I0301 10:29:15.216380 1 controller_utils.go:1036] Caches are synced for resource quota controller I0301 10:29:15.240533 1 controller_utils.go:1036] Caches are synced for taint controller I0301 10:29:15.240885 1 taint_manager.go:182] Starting NoExecuteTaintManager I0301 10:29:15.241348 1 node_lifecycle_controller.go:1424] Initializing eviction metric for zone: I0301 10:29:15.241680 1 event.go:258] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"eaf257aa-a998-4326-aa50-730dd48d4f7a", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node 
minikube in Controller W0301 10:29:15.241743 1 node_lifecycle_controller.go:1036] Missing timestamp for Node minikube. Assuming now as a timestamp. I0301 10:29:15.241893 1 node_lifecycle_controller.go:1240] Controller detected that zone is now in state Normal. I0301 10:29:15.289085 1 controller_utils.go:1036] Caches are synced for garbage collector controller I0301 10:29:15.289209 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0301 10:29:15.289468 1 controller_utils.go:1036] Caches are synced for disruption controller I0301 10:29:15.289486 1 disruption.go:338] Sending events to api server. I0301 10:29:15.290315 1 controller_utils.go:1036] Caches are synced for garbage collector controller ==> kube-proxy [2ffbf0fd9eb7] <== W0301 10:29:15.737947 1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy I0301 10:29:15.744779 1 server_others.go:143] Using iptables Proxier. I0301 10:29:15.745029 1 server.go:534] Version: v1.15.12 I0301 10:29:15.752021 1 conntrack.go:52] Setting nf_conntrack_max to 262144 I0301 10:29:15.752219 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I0301 10:29:15.752555 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I0301 10:29:15.752700 1 config.go:187] Starting service config controller I0301 10:29:15.752729 1 controller_utils.go:1029] Waiting for caches to sync for service config controller I0301 10:29:15.753043 1 config.go:96] Starting endpoints config controller I0301 10:29:15.753078 1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller I0301 10:29:15.853011 1 controller_utils.go:1036] Caches are synced for service config controller I0301 10:29:15.854147 1 controller_utils.go:1036] Caches are synced for endpoints config controller ==> kube-scheduler [5c515d0794d2] <== I0301 10:28:52.923331 1 serving.go:319] Generated self-signed cert in-memory W0301 10:28:53.523755 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work. W0301 10:28:53.523788 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work. W0301 10:28:53.523804 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work. 
I0301 10:28:53.525357 1 server.go:142] Version: v1.15.12 I0301 10:28:53.525416 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory W0301 10:28:53.528077 1 authorization.go:47] Authorization is disabled W0301 10:28:53.528090 1 authentication.go:55] Authentication is disabled I0301 10:28:53.528115 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 I0301 10:28:53.528746 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259 E0301 10:28:55.935508 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0301 10:28:55.935848 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0301 10:28:55.940545 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0301 10:28:55.940637 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0301 10:28:55.941105 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0301 10:28:55.941873 1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0301 10:28:55.942293 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0301 10:28:55.943456 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0301 10:28:55.944141 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0301 10:28:55.946152 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0301 10:28:56.936738 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0301 10:28:56.939202 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list 
resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0301 10:28:56.941565 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0301 10:28:56.943332 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0301 10:28:56.943994 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0301 10:28:56.945285 1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0301 10:28:56.946456 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0301 10:28:56.948355 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0301 10:28:56.948936 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0301 10:28:56.950063 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0301 10:29:00.311624 1 factory.go:702] pod is already present in the activeQ ==> kubelet <== -- Logs begin at Mon 2021-03-01 10:27:53 UTC, end at Mon 2021-03-01 10:33:28 UTC. 
Mar 01 10:28:53 minikube kubelet[1854]: E0301 10:28:53.572646 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:53 minikube kubelet[1854]: E0301 10:28:53.672943 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:53 minikube kubelet[1854]: E0301 10:28:53.773348 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:53 minikube kubelet[1854]: E0301 10:28:53.873610 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:53 minikube kubelet[1854]: E0301 10:28:53.973991 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:54 minikube kubelet[1854]: E0301 10:28:54.074255 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:54 minikube kubelet[1854]: I0301 10:28:54.091900 1854 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Mar 01 10:28:54 minikube kubelet[1854]: I0301 10:28:54.092196 1854 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Mar 01 10:28:54 minikube kubelet[1854]: I0301 10:28:54.092442 1854 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Mar 01 10:28:54 minikube kubelet[1854]: I0301 10:28:54.092738 1854 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Mar 01 10:28:54 minikube kubelet[1854]: E0301 10:28:54.175719 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:54 minikube kubelet[1854]: E0301 10:28:54.275978 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:54 minikube kubelet[1854]: E0301 10:28:54.376337 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:54 minikube kubelet[1854]: E0301 10:28:54.476733 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:54 minikube kubelet[1854]: E0301 10:28:54.577053 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:54 minikube kubelet[1854]: E0301 10:28:54.677615 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:54 minikube kubelet[1854]: E0301 10:28:54.777889 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:54 minikube kubelet[1854]: E0301 10:28:54.878202 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:54 minikube kubelet[1854]: E0301 10:28:54.978665 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:55 minikube kubelet[1854]: E0301 10:28:55.078823 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:55 minikube kubelet[1854]: E0301 10:28:55.179277 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:55 minikube kubelet[1854]: E0301 10:28:55.280225 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:55 minikube kubelet[1854]: E0301 10:28:55.380477 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:55 minikube kubelet[1854]: E0301 10:28:55.481461 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:55 minikube kubelet[1854]: E0301 10:28:55.581694 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:55 minikube kubelet[1854]: E0301 10:28:55.682043 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:55 minikube kubelet[1854]: E0301 10:28:55.782325 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:55 minikube kubelet[1854]: E0301 10:28:55.882789 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:55 minikube kubelet[1854]: E0301 10:28:55.940600 1854 controller.go:204] failed to get node "minikube" when trying to set owner ref to the node lease: nodes "minikube" not found
Mar 01 10:28:55 minikube kubelet[1854]: I0301 10:28:55.983122 1854 reconciler.go:150] Reconciler: start to sync state
Mar 01 10:28:55 minikube kubelet[1854]: E0301 10:28:55.983174 1854 kubelet.go:2252] node "minikube" not found
Mar 01 10:28:56 minikube kubelet[1854]: I0301 10:28:56.035481 1854 kubelet_node_status.go:75] Successfully registered node minikube
Mar 01 10:29:01 minikube kubelet[1854]: E0301 10:29:01.125579 1854 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Mar 01 10:29:01 minikube kubelet[1854]: E0301 10:29:01.125636 1854 helpers.go:712] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Mar 01 10:29:11 minikube kubelet[1854]: E0301 10:29:11.136703 1854 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Mar 01 10:29:11 minikube kubelet[1854]: E0301 10:29:11.136790 1854 helpers.go:712] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Mar 01 10:29:14 minikube kubelet[1854]: I0301 10:29:14.588888 1854 kuberuntime_manager.go:928] updating runtime config through cri with podcidr 10.244.0.0/24
Mar 01 10:29:14 minikube kubelet[1854]: I0301 10:29:14.589135 1854 docker_service.go:353] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
Mar 01 10:29:14 minikube kubelet[1854]: I0301 10:29:14.589362 1854 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.0.0/24
Mar 01 10:29:15 minikube kubelet[1854]: I0301 10:29:15.191157 1854 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-nfzx6" (UniqueName: "kubernetes.io/secret/5450a297-2e49-4f55-ab49-44cb86a0b87f-kube-proxy-token-nfzx6") pod "kube-proxy-mlrh7" (UID: "5450a297-2e49-4f55-ab49-44cb86a0b87f")
Mar 01 10:29:15 minikube kubelet[1854]: I0301 10:29:15.191300 1854 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/5450a297-2e49-4f55-ab49-44cb86a0b87f-kube-proxy") pod "kube-proxy-mlrh7" (UID: "5450a297-2e49-4f55-ab49-44cb86a0b87f")
Mar 01 10:29:15 minikube kubelet[1854]: I0301 10:29:15.191357 1854 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/5450a297-2e49-4f55-ab49-44cb86a0b87f-xtables-lock") pod "kube-proxy-mlrh7" (UID: "5450a297-2e49-4f55-ab49-44cb86a0b87f")
Mar 01 10:29:15 minikube kubelet[1854]: I0301 10:29:15.191486 1854 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/5450a297-2e49-4f55-ab49-44cb86a0b87f-lib-modules") pod "kube-proxy-mlrh7" (UID: "5450a297-2e49-4f55-ab49-44cb86a0b87f")
Mar 01 10:29:15 minikube kubelet[1854]: I0301 10:29:15.291931 1854 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-gphxs" (UniqueName: "kubernetes.io/secret/bbce9e9d-f6f2-4713-a46b-fb62b4dc44ff-storage-provisioner-token-gphxs") pod "storage-provisioner" (UID: "bbce9e9d-f6f2-4713-a46b-fb62b4dc44ff")
Mar 01 10:29:15 minikube kubelet[1854]: I0301 10:29:15.292270 1854 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/bbce9e9d-f6f2-4713-a46b-fb62b4dc44ff-tmp") pod "storage-provisioner" (UID: "bbce9e9d-f6f2-4713-a46b-fb62b4dc44ff")
Mar 01 10:29:15 minikube kubelet[1854]: I0301 10:29:15.432957 1854 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials
Mar 01 10:29:15 minikube kubelet[1854]: W0301 10:29:15.433334 1854 reflector.go:304] object-"kube-system"/"kube-proxy-token-nfzx6": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"kube-proxy-token-nfzx6": Unexpected watch close - watch lasted less than a second and no items received
Mar 01 10:29:15 minikube kubelet[1854]: W0301 10:29:15.433520 1854 reflector.go:304] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"kube-system"/"kube-proxy": Unexpected watch close - watch lasted less than a second and no items received
Mar 01 10:29:15 minikube kubelet[1854]: W0301 10:29:15.433544 1854 reflector.go:304] object-"kube-system"/"storage-provisioner-token-gphxs": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"storage-provisioner-token-gphxs": Unexpected watch close - watch lasted less than a second and no items received
Mar 01 10:29:16 minikube kubelet[1854]: I0301 10:29:16.598169 1854 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/empty-dir/8d157406-e774-4595-b0c0-f19793c4d5eb-tmp") pod "coredns-5d4dd4b4db-9l72m" (UID: "8d157406-e774-4595-b0c0-f19793c4d5eb")
Mar 01 10:29:16 minikube kubelet[1854]: I0301 10:29:16.598270 1854 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-fk9s8" (UniqueName: "kubernetes.io/secret/8d157406-e774-4595-b0c0-f19793c4d5eb-coredns-token-fk9s8") pod "coredns-5d4dd4b4db-9l72m" (UID: "8d157406-e774-4595-b0c0-f19793c4d5eb")
Mar 01 10:29:16 minikube kubelet[1854]: I0301 10:29:16.598476 1854 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8d157406-e774-4595-b0c0-f19793c4d5eb-config-volume") pod "coredns-5d4dd4b4db-9l72m" (UID: "8d157406-e774-4595-b0c0-f19793c4d5eb")
Mar 01 10:29:17 minikube kubelet[1854]: E0301 10:29:17.220552 1854 remote_runtime.go:295] ContainerStatus "8fe21cd4a0d737942c2f643fbc16413e6f13958588a6a58a29c267fad0f7b147" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 8fe21cd4a0d737942c2f643fbc16413e6f13958588a6a58a29c267fad0f7b147
Mar 01 10:29:17 minikube kubelet[1854]: E0301 10:29:17.220610 1854 kuberuntime_manager.go:902] getPodContainerStatuses for pod "coredns-5d4dd4b4db-9l72m_kube-system(8d157406-e774-4595-b0c0-f19793c4d5eb)" failed: rpc error: code = Unknown desc = Error: No such container: 8fe21cd4a0d737942c2f643fbc16413e6f13958588a6a58a29c267fad0f7b147
Mar 01 10:29:21 minikube kubelet[1854]: E0301 10:29:21.147115 1854 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Mar 01 10:29:21 minikube kubelet[1854]: E0301 10:29:21.147219 1854 helpers.go:712] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Mar 01 10:29:31 minikube kubelet[1854]: E0301 10:29:31.157904 1854 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Mar 01 10:29:31 minikube kubelet[1854]: E0301 10:29:31.157997 1854 helpers.go:712] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Mar 01 10:29:41 minikube kubelet[1854]: E0301 10:29:41.169006 1854 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Mar 01 10:29:41 minikube kubelet[1854]: E0301 10:29:41.169066 1854 helpers.go:712] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics

==> storage-provisioner [110abf0b6b61] <==
I0301 10:29:15.901468 1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
I0301 10:29:15.909784 1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
I0301 10:29:15.909857 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0301 10:29:15.915613 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0301 10:29:15.915712 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_1a5f2786-ec6d-4b9a-89aa-46d67be50db1!
I0301 10:29:15.916091 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7b02ed27-72a2-4ac8-b662-84e7631772f7", APIVersion:"v1", ResourceVersion:"374", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_1a5f2786-ec6d-4b9a-89aa-46d67be50db1 became leader
I0301 10:29:16.015827 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_1a5f2786-ec6d-4b9a-89aa-46d67be50db1!
attilapiros commented 3 years ago

It was closed by accident; the problem is still there.

sharifelgamal commented 3 years ago

This is clearly a problem with kubectl itself and with how minikube writes its kubeconfig for different versions of Kubernetes. Doing some more digging, it seems to be because of the extensions we added to our kubeconfig for auditability a couple of minikube versions ago. These work perfectly in k8s 1.17+ but break kubectl config view in k8s 1.16 and earlier.

The solution here is to only write the extensions into the kubeconfig when an appropriate version of Kubernetes is present.
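A minimal sketch of what that version gate could look like in Go (a sketch only: the package, helper name, and cutoff are illustrative rather than minikube's actual code; github.com/blang/semver is used for the comparison):

package kubeconfig

import "github.com/blang/semver/v4"

// Hypothetical cutoff: kubectl older than 1.17 cannot parse the
// extensions block, so only write it for clusters at or above 1.17.
var minExtensionsVersion = semver.MustParse("1.17.0")

// shouldWriteExtensions reports whether the auditability extensions can
// safely be written into the kubeconfig for the given cluster version,
// e.g. "v1.15.12" or "v1.20.2".
func shouldWriteExtensions(kubernetesVersion string) bool {
	v, err := semver.ParseTolerant(kubernetesVersion)
	if err != nil {
		// Unknown version: be conservative and skip the extensions.
		return false
	}
	return v.GTE(minExtensionsVersion)
}

Called with "v1.15.12" (the cluster version from this report) it would return false, so the cluster_info/context_info extensions would simply be omitted.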

It's worth noting that I can reproduce this both with minikube kubectl and with just the kubectl binary in my path.

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

nyurik commented 2 years ago

I just hit this issue with a production cluster and would never have guessed the minikube installation caused it if my coworker hadn't pointed out this issue. Any updates on this? I am currently forced to rename ~/.kube/config to something else in order to use kubectl in production.
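(A lighter workaround than renaming the file: kubectl honors the standard KUBECONFIG environment variable, so a production config free of minikube's extensions can be kept in a separate file and selected per command. The path below is illustrative:

KUBECONFIG=$HOME/.kube/prod-config kubectl config view

This leaves ~/.kube/config in place for minikube while the old kubectl reads only the clean file.)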

Andy2003 commented 2 years ago

For me the problem appeared after I installed minikube.

Previously, kubectl client version 1.16 was installed (from gcloud). Installing minikube, which ships a more recent version of kubectl, added the minikube cluster to ~/.kube/config as follows:

apiVersion: v1
clusters:
  - cluster:
      certificate-authority: /home/user/.minikube/ca.crt
      extensions:
        - extension:
            last-update: Wed, 09 Feb 2022 11:07:55 CET
            provider: minikube.sigs.k8s.io
            version: v1.25.1
          name: cluster_info
      server: https://192.168.49.2:8443
    name: minikube
contexts:
  - context:
      cluster: minikube
      extensions:
        - extension:
            last-update: Wed, 09 Feb 2022 11:07:55 CET
            provider: minikube.sigs.k8s.io
            version: v1.25.1
          name: context_info
      namespace: default
      user: minikube
    name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
  - name: minikube
    user:
      client-certificate: /home/user/.minikube/profiles/minikube/client.crt
      client-key: /home/user/.minikube/profiles/minikube/client.key

The extensions section in the YAML could not be parsed correctly by the old kubectl. This also causes the kubectx and kubens commands to fail with:

error: converting  to : type names don't match (Unknown, RawExtension), and no conversion 'func (runtime.Unknown, runtime.RawExtension) error' registered.
error getting current context

I solved the issue by updating all my gcloud components via:

gcloud components update

which also updated the kubectl binary to version 1.21.
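(For anyone who cannot upgrade kubectl, another way out is to delete the extensions blocks that the old client chokes on. A minimal sketch, assuming mikefarah's yq v4 is installed; note that minikube may write the extensions back on its next start:

yq eval -i 'del(.clusters[].cluster.extensions) | del(.contexts[].context.extensions)' ~/.kube/config

After this, the old kubectl, kubectx, and kubens parse the file again.)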

sharifelgamal commented 2 years ago

Upgrading to k8s 1.17 or above will always be a workaround here, but since minikube still supports k8s 1.16 as of v1.25.2, we should fix the bug itself. Help wanted!