kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Minikube doesn't start with custom kubelet options #6430

Closed: irizzant closed this issue 4 years ago

irizzant commented 4 years ago

The exact command to reproduce the issue: minikube start --memory 12000 --cpus 8 --disk-size=80g --extra-config=apiserver.authorization-mode=RBAC --insecure-registry=maven-repo.sdb.it:18081 --insecure-registry=maven-repo.sdb.it:18080 --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0 --extra-config=apiserver.authorization-mode=RBAC --network-plugin=cni --enable-default-cni
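
The --extra-config values end up in the generated kubeadm config and in the kubelet files inside the VM, so they can be inspected directly if that helps narrow this down. A minimal sketch (untested; the paths are the ones reported by kubeadm in the output below):

```sh
# Check what minikube actually handed to kubeadm and the kubelet.
minikube ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml       # generated kubeadm config
minikube ssh -- sudo cat /var/lib/kubelet/kubeadm-flags.env   # kubelet flags written by kubeadm
minikube ssh -- sudo cat /var/lib/kubelet/config.yaml         # kubelet configuration
```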

The full output of the command that failed:

minikube start --memory 12000 --cpus 8 --disk-size=80g --extra-config=apiserver.authorization-mode=RBAC --insecure-registry=maven-repo.sdb.it:18081 --insecure-registry=maven-repo.sdb.it:18080 --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0 --extra-config=apiserver.authorization-mode=RBAC --network-plugin=cni --enable-default-cni
😄 minikube v1.6.2 on Ubuntu 18.04
✨ Automatically selected the 'virtualbox' driver (alternates: [none])
🔥 Creating virtualbox VM (CPUs=8, Memory=12000MB, Disk=80000MB) ...
🐳 Preparing Kubernetes v1.17.0 on Docker '19.03.5' ...
▪ apiserver.authorization-mode=RBAC
▪ kubelet.authentication-token-webhook=true
▪ kubelet.authorization-mode=Webhook
▪ scheduler.address=0.0.0.0
▪ controller-manager.address=0.0.0.0
▪ apiserver.authorization-mode=RBAC
🚜 Pulling images ...
🚀 Launching Kubernetes ...

💣 Error starting cluster: init failed. cmd: "/bin/bash -c \"sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap\"": /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.17.0 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.99.100 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.99.100 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". 
This can take up to 4m0s [apiclient] All control plane components are healthy after 10.009526 seconds [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster [kubelet-check] Initial timeout of 40s passed.

stderr: W0130 13:43:29.540552 3914 common.go:77] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta1". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version. W0130 13:43:29.541157 3914 common.go:77] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta1". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version. W0130 13:43:29.542999 3914 validation.go:28] Cannot validate kube-proxy config - no validator is available W0130 13:43:29.543026 3914 validation.go:28] Cannot validate kubelet config - no validator is available [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service' [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' W0130 13:43:32.976813 3914 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "RBAC" W0130 13:43:32.982493 3914 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "RBAC" W0130 13:43:32.983438 3914 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "RBAC" error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition To see the stack trace of this error execute with --v=5 or higher

😿 minikube is exiting due to an error. If the above message is not useful, open an issue: 👉 https://github.com/kubernetes/minikube/issues/new/choose

The output of the minikube logs command:

==> Docker <== -- Logs begin at Thu 2020-01-30 13:42:42 UTC, end at Thu 2020-01-30 13:47:28 UTC. -- Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.293829841Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.293838703Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.293845587Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.293853261Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.293860747Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.293867821Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.293874333Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.293886267Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.293976666Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294045165Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294328551Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294363630Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294395137Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294403978Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294411303Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294417996Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294424741Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294431663Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294438315Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." 
type=io.containerd.grpc.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294444795Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294451151Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294484435Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294493098Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294499943Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294507811Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294613749Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock" Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294662261Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock" Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.294690913Z" level=info msg="containerd successfully booted in 0.007455s" Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.305235925Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.305276106Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.305306825Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.305330405Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.306686628Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.306711444Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.306732942Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.306750049Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.320356218Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.320383932Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.320389874Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.320394227Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device" Jan 30 13:42:58 minikube dockerd[2656]: 
time="2020-01-30T13:42:58.320398537Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.320402851Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.320542409Z" level=info msg="Loading containers: start." Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.368644421Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.399923240Z" level=info msg="Loading containers: done." Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.419646190Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5 Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.419801723Z" level=info msg="Daemon has completed initialization" Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.435070217Z" level=info msg="API listen on [::]:2376" Jan 30 13:42:58 minikube dockerd[2656]: time="2020-01-30T13:42:58.435141087Z" level=info msg="API listen on /var/run/docker.sock" Jan 30 13:42:58 minikube systemd[1]: Started Docker Application Container Engine. Jan 30 13:43:37 minikube dockerd[2656]: time="2020-01-30T13:43:37.604497573Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/285d2ccab3a1cb84b13adb767c38fa202b4a68b616bf0beadf1eba96c596f4b7/shim.sock" debug=false pid=4710 Jan 30 13:43:37 minikube dockerd[2656]: time="2020-01-30T13:43:37.605377032Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7974baa95083ef16ec92e36de8be1cd03e241414d2ba6e0a9f9998ec448dd467/shim.sock" debug=false pid=4711 Jan 30 13:43:37 minikube dockerd[2656]: time="2020-01-30T13:43:37.608326777Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8a18dd231a0fdac89a9cb035bfa2b62a308d3752f10fa8c41b7a478735f8b083/shim.sock" debug=false pid=4722 Jan 30 13:43:37 minikube dockerd[2656]: time="2020-01-30T13:43:37.611866821Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f1b909d144508f08836bd074ba89686f878481aa9879a61efdec034c2531dfcd/shim.sock" debug=false pid=4730 Jan 30 13:43:37 minikube dockerd[2656]: time="2020-01-30T13:43:37.612700126Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/87e047419132010ce856ca98700615b43bd9cb72b83477f7b92a1da738a46e29/shim.sock" debug=false pid=4742 Jan 30 13:43:37 minikube dockerd[2656]: time="2020-01-30T13:43:37.889068785Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5e45922e1fb623123cba633f8bda028261beb63918d800ef8f6d36479e6a0812/shim.sock" debug=false pid=4934 Jan 30 13:43:37 minikube dockerd[2656]: time="2020-01-30T13:43:37.915028991Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e74cbdac469ffa4a187159b4f3d267da267d4939c02870dfcf0f981b07987255/shim.sock" debug=false pid=4951 Jan 30 13:43:37 minikube dockerd[2656]: time="2020-01-30T13:43:37.940478321Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/daa07d622f7be7b2208e59885faba8267723bdb1320ea58002212ed434270d3b/shim.sock" debug=false pid=4975 Jan 30 13:43:37 minikube dockerd[2656]: time="2020-01-30T13:43:37.943669836Z" level=info msg="shim containerd-shim started" 
address="/containerd-shim/moby/c7b53aab1e202607d51526b1158e69752eb071a7ae56398ac55cd921b2959e8d/shim.sock" debug=false pid=4981 Jan 30 13:43:37 minikube dockerd[2656]: time="2020-01-30T13:43:37.968437181Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/291791a2736af7dd0926a4fa8ba814a8225157c44e08153ff3d55703668a8959/shim.sock" debug=false pid=5012

==> container status <==
CONTAINER       IMAGE           CREATED         STATE     NAME                      ATTEMPT   POD ID
291791a2736af   5eb3b74868724   3 minutes ago   Running   kube-controller-manager   0         8a18dd231a0fd
daa07d622f7be   0cae8d5cc64c7   3 minutes ago   Running   kube-apiserver            0         f1b909d144508
c7b53aab1e202   bd12a212f9dcb   3 minutes ago   Running   kube-addon-manager        0         87e0474191320
e74cbdac469ff   78c190f736b11   3 minutes ago   Running   kube-scheduler            0         7974baa95083e
5e45922e1fb62   303ce5db0e90d   3 minutes ago   Running   etcd                      0         285d2ccab3a1c

==> dmesg <== [ +0.325103] hpet1: lost 5 rtc interrupts [ +0.099120] hpet1: lost 2 rtc interrupts [ +1.328182] hpet1: lost 81 rtc interrupts [ +0.098042] hpet1: lost 3 rtc interrupts [ +0.083759] hpet1: lost 1 rtc interrupts [ +0.427311] hpet1: lost 26 rtc interrupts [ +0.097877] hpet1: lost 3 rtc interrupts [ +1.344997] hpet1: lost 83 rtc interrupts [ +0.177327] hpet1: lost 3 rtc interrupts [ +0.043339] hpet1: lost 2 rtc interrupts [ +1.345743] hpet_rtc_timer_reinit: 1 callbacks suppressed [ +0.000033] hpet1: lost 10 rtc interrupts [ +0.446475] hpet1: lost 22 rtc interrupts [ +0.074224] hpet1: lost 1 rtc interrupts [ +0.138192] hpet1: lost 5 rtc interrupts [ +0.383544] hpet1: lost 24 rtc interrupts [ +0.083496] hpet1: lost 1 rtc interrupts [Jan30 13:47] hpet1: lost 82 rtc interrupts [ +0.231029] hpet1: lost 4 rtc interrupts [ +0.374452] hpet1: lost 23 rtc interrupts [ +0.230038] hpet1: lost 7 rtc interrupts [ +1.787772] hpet1: lost 9 rtc interrupts [ +0.383884] hpet1: lost 14 rtc interrupts [ +1.092253] hpet1: lost 68 rtc interrupts [ +0.140067] hpet1: lost 1 rtc interrupts [ +0.096431] hpet1: lost 5 rtc interrupts [ +0.318954] hpet1: lost 19 rtc interrupts [ +0.068655] hpet1: lost 1 rtc interrupts [ +0.873207] hpet1: lost 52 rtc interrupts [ +1.752231] hpet1: lost 32 rtc interrupts [ +0.193354] hpet1: lost 10 rtc interrupts [ +0.600979] hpet1: lost 33 rtc interrupts [ +0.280040] hpet1: lost 9 rtc interrupts [ +0.289519] hpet1: lost 18 rtc interrupts [ +1.440696] hpet1: lost 84 rtc interrupts [ +0.318001] hpet1: lost 10 rtc interrupts [ +0.285728] hpet1: lost 17 rtc interrupts [ +0.471772] hpet1: lost 21 rtc interrupts [ +1.290196] hpet1: lost 3 rtc interrupts [ +0.128114] hpet1: lost 7 rtc interrupts [ +0.185553] hpet1: lost 11 rtc interrupts [ +1.415747] hpet_rtc_timer_reinit: 2 callbacks suppressed [ +0.000038] hpet1: lost 77 rtc interrupts [ +0.083571] hpet1: lost 2 rtc interrupts [ +0.235790] hpet1: lost 11 rtc interrupts [ +0.304367] hpet1: lost 18 rtc interrupts [ +0.069862] hpet1: lost 1 rtc interrupts [ +0.995868] hpet1: lost 59 rtc interrupts [ +1.413526] hpet1: lost 13 rtc interrupts [ +0.208126] hpet1: lost 11 rtc interrupts [ +0.738161] hpet1: lost 41 rtc interrupts [ +0.365685] hpet1: lost 14 rtc interrupts [ +1.675439] hpet_rtc_timer_reinit: 2 callbacks suppressed [ +0.000000] hpet1: lost 82 rtc interrupts [ +0.280472] hpet1: lost 9 rtc interrupts [ +1.148344] hpet1: lost 4 rtc interrupts [ +4.501640] hpet1: lost 41 rtc interrupts [ +1.309435] hpet1: lost 5 rtc interrupts [ +0.101425] hpet1: lost 5 rtc interrupts [ +0.327316] hpet1: lost 17 rtc interrupts

==> kernel <==
13:47:28 up 5 min, 0 users, load average: 0.68, 0.88, 0.46
Linux minikube 4.19.81 #1 SMP Tue Dec 10 16:09:50 PST 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.7"

==> kube-addon-manager ["c7b53aab1e20"] <== error: no objects passed to apply error: no objects passed to apply error: no objects passed to apply error: no objects passed to apply replicationcontroller/registry unchanged service/registry unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2020-01-30T13:47:07+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2020-01-30T13:47:11+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label == deployment.apps/nginx-ingress-controller unchanged serviceaccount/nginx-ingress unchanged clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged daemonset.apps/registry-proxy unchanged replicationcontroller/registry unchanged service/registry unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2020-01-30T13:47:13+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2020-01-30T13:47:16+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label == deployment.apps/nginx-ingress-controller unchanged serviceaccount/nginx-ingress unchanged clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged daemonset.apps/registry-proxy unchanged replicationcontroller/registry unchanged service/registry unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2020-01-30T13:47:18+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2020-01-30T13:47:21+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label == deployment.apps/nginx-ingress-controller unchanged serviceaccount/nginx-ingress unchanged clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged daemonset.apps/registry-proxy unchanged replicationcontroller/registry unchanged service/registry unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2020-01-30T13:47:22+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2020-01-30T13:47:27+00:00 == INFO: == Reconciling with deprecated label == INFO: == Reconciling with addon-manager label == deployment.apps/nginx-ingress-controller unchanged serviceaccount/nginx-ingress unchanged clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged daemonset.apps/registry-proxy unchanged replicationcontroller/registry unchanged service/registry unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2020-01-30T13:47:28+00:00 ==

==> kube-apiserver ["daa07d622f7b"] <== W0130 13:43:39.567050 1 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. W0130 13:43:39.574525 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources. W0130 13:43:39.589826 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W0130 13:43:39.593266 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W0130 13:43:39.605809 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W0130 13:43:39.623948 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources. W0130 13:43:39.624047 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources. I0130 13:43:39.635234 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass. I0130 13:43:39.635331 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota. I0130 13:43:39.636652 1 client.go:361] parsed scheme: "endpoint" I0130 13:43:39.636716 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I0130 13:43:39.643735 1 client.go:361] parsed scheme: "endpoint" I0130 13:43:39.643774 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I0130 13:43:41.201812 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt I0130 13:43:41.201875 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt I0130 13:43:41.201906 1 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key I0130 13:43:41.201995 1 secure_serving.go:178] Serving securely on [::]:8443 I0130 13:43:41.202068 1 controller.go:81] Starting OpenAPI AggregationController I0130 13:43:41.202085 1 crd_finalizer.go:263] Starting CRDFinalizer I0130 13:43:41.202094 1 tlsconfig.go:219] Starting DynamicServingCertificateController I0130 13:43:41.202099 1 apiservice_controller.go:94] Starting APIServiceRegistrationController I0130 13:43:41.202111 1 controller.go:85] Starting OpenAPI controller I0130 13:43:41.202111 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0130 13:43:41.202126 1 customresource_discovery_controller.go:208] Starting DiscoveryController I0130 13:43:41.202135 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController I0130 13:43:41.202149 1 naming_controller.go:288] Starting NamingConditionController I0130 13:43:41.202161 1 establishing_controller.go:73] Starting EstablishingController I0130 13:43:41.202138 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController I0130 13:43:41.202071 1 autoregister_controller.go:140] Starting autoregister controller I0130 13:43:41.202179 1 cache.go:32] Waiting for caches to sync for autoregister controller I0130 13:43:41.202277 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0130 13:43:41.202283 1 
shared_informer.go:197] Waiting for caches to sync for crd-autoregister I0130 13:43:41.202659 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0130 13:43:41.202809 1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller I0130 13:43:41.202871 1 available_controller.go:386] Starting AvailableConditionController I0130 13:43:41.202876 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0130 13:43:41.202929 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt I0130 13:43:41.202950 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt E0130 13:43:41.226531 1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.100, ResourceVersion: 0, AdditionalErrorMsg: I0130 13:43:41.302361 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0130 13:43:41.302389 1 cache.go:39] Caches are synced for autoregister controller I0130 13:43:41.302427 1 shared_informer.go:204] Caches are synced for crd-autoregister I0130 13:43:41.303472 1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller I0130 13:43:41.303547 1 cache.go:39] Caches are synced for AvailableConditionController controller I0130 13:43:42.201972 1 controller.go:107] OpenAPI AggregationController: Processing item I0130 13:43:42.202105 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0130 13:43:42.202141 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0130 13:43:42.240437 1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000 I0130 13:43:42.316943 1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000 I0130 13:43:42.320875 1 storage_scheduling.go:142] all system priority classes are created successfully or already exist. I0130 13:43:42.622669 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0130 13:43:42.651551 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io W0130 13:43:42.825035 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.99.100] I0130 13:43:42.828722 1 controller.go:606] quota admission added evaluator for: endpoints I0130 13:43:43.467397 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io I0130 13:43:44.309765 1 controller.go:606] quota admission added evaluator for: serviceaccounts I0130 13:43:51.774155 1 controller.go:606] quota admission added evaluator for: deployments.apps I0130 13:43:51.777368 1 controller.go:606] quota admission added evaluator for: replicasets.apps I0130 13:43:51.812037 1 controller.go:606] quota admission added evaluator for: daemonsets.apps I0130 13:43:51.815799 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps

==> kube-controller-manager ["291791a2736a"] <== I0130 13:43:48.700822 1 controllermanager.go:533] Started "serviceaccount" I0130 13:43:48.700947 1 serviceaccounts_controller.go:116] Starting service account controller I0130 13:43:48.700960 1 shared_informer.go:197] Waiting for caches to sync for service account I0130 13:43:49.740568 1 controllermanager.go:533] Started "garbagecollector" I0130 13:43:49.742119 1 garbagecollector.go:129] Starting garbage collector controller I0130 13:43:49.742155 1 shared_informer.go:197] Waiting for caches to sync for garbage collector I0130 13:43:49.742271 1 graph_builder.go:282] GraphBuilder running I0130 13:43:49.859042 1 controllermanager.go:533] Started "daemonset" I0130 13:43:49.859244 1 daemon_controller.go:255] Starting daemon sets controller I0130 13:43:49.859261 1 shared_informer.go:197] Waiting for caches to sync for daemon sets I0130 13:43:49.876977 1 controllermanager.go:533] Started "bootstrapsigner" I0130 13:43:49.876999 1 shared_informer.go:197] Waiting for caches to sync for bootstrap_signer I0130 13:43:49.890505 1 controllermanager.go:533] Started "pvc-protection" I0130 13:43:49.890561 1 pvc_protection_controller.go:100] Starting PVC protection controller I0130 13:43:49.890568 1 shared_informer.go:197] Waiting for caches to sync for PVC protection I0130 13:43:50.245202 1 controllermanager.go:533] Started "disruption" I0130 13:43:50.245253 1 disruption.go:330] Starting disruption controller I0130 13:43:50.245537 1 shared_informer.go:197] Waiting for caches to sync for disruption I0130 13:43:50.519971 1 controllermanager.go:533] Started "statefulset" I0130 13:43:50.520050 1 stateful_set.go:145] Starting stateful set controller I0130 13:43:50.520056 1 shared_informer.go:197] Waiting for caches to sync for stateful set I0130 13:43:50.745763 1 controllermanager.go:533] Started "cronjob" I0130 13:43:50.746006 1 shared_informer.go:197] Waiting for caches to sync for resource quota I0130 13:43:50.747391 1 cronjob_controller.go:97] Starting CronJob Manager I0130 13:43:50.776019 1 shared_informer.go:204] Caches are synced for taint I0130 13:43:50.776152 1 taint_manager.go:186] Starting NoExecuteTaintManager I0130 13:43:50.790795 1 shared_informer.go:204] Caches are synced for PVC protection I0130 13:43:50.796931 1 shared_informer.go:204] Caches are synced for HPA I0130 13:43:50.801158 1 shared_informer.go:204] Caches are synced for service account I0130 13:43:50.806571 1 shared_informer.go:204] Caches are synced for deployment I0130 13:43:50.834383 1 shared_informer.go:204] Caches are synced for GC I0130 13:43:50.847112 1 shared_informer.go:204] Caches are synced for job I0130 13:43:50.847522 1 shared_informer.go:204] Caches are synced for PV protection I0130 13:43:50.849055 1 shared_informer.go:204] Caches are synced for TTL I0130 13:43:50.849409 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator I0130 13:43:50.851273 1 shared_informer.go:204] Caches are synced for ReplicaSet I0130 13:43:50.860465 1 shared_informer.go:204] Caches are synced for namespace I0130 13:43:50.865472 1 shared_informer.go:204] Caches are synced for certificate-csrapproving I0130 13:43:50.896829 1 shared_informer.go:204] Caches are synced for certificate-csrsigning I0130 13:43:50.960651 1 shared_informer.go:204] Caches are synced for daemon sets I0130 13:43:51.021133 1 shared_informer.go:204] Caches are synced for stateful set I0130 13:43:51.046607 1 shared_informer.go:204] Caches are synced for persistent volume I0130 13:43:51.080180 1 
shared_informer.go:204] Caches are synced for expand I0130 13:43:51.114248 1 shared_informer.go:197] Waiting for caches to sync for garbage collector I0130 13:43:51.209777 1 shared_informer.go:204] Caches are synced for attach detach I0130 13:43:51.246926 1 shared_informer.go:204] Caches are synced for endpoint I0130 13:43:51.277224 1 shared_informer.go:204] Caches are synced for bootstrap_signer I0130 13:43:51.299762 1 shared_informer.go:204] Caches are synced for ReplicationController I0130 13:43:51.315008 1 shared_informer.go:204] Caches are synced for garbage collector I0130 13:43:51.343237 1 shared_informer.go:204] Caches are synced for garbage collector I0130 13:43:51.343264 1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0130 13:43:51.345904 1 shared_informer.go:204] Caches are synced for disruption I0130 13:43:51.345932 1 disruption.go:338] Sending events to api server. I0130 13:43:51.346273 1 shared_informer.go:204] Caches are synced for resource quota I0130 13:43:51.366887 1 shared_informer.go:204] Caches are synced for resource quota I0130 13:43:51.779053 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"nginx-ingress-controller", UID:"e68b7a66-2b16-4421-9a68-0b5c618c609d", APIVersion:"apps/v1", ResourceVersion:"284", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-ingress-controller-6fc5bcc8c9 to 1 I0130 13:43:51.785105 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"nginx-ingress-controller-6fc5bcc8c9", UID:"dbe6ecae-373d-4d10-b46c-79dec945b58e", APIVersion:"apps/v1", ResourceVersion:"285", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "nginx-ingress-controller-6fc5bcc8c9-" is forbidden: error looking up service account kube-system/nginx-ingress: serviceaccount "nginx-ingress" not found E0130 13:43:51.789278 1 replica_set.go:534] sync "kube-system/nginx-ingress-controller-6fc5bcc8c9" failed with pods "nginx-ingress-controller-6fc5bcc8c9-" is forbidden: error looking up service account kube-system/nginx-ingress: serviceaccount "nginx-ingress" not found I0130 13:43:51.826321 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"kube-system", Name:"registry", UID:"41542017-1f05-498d-ba5f-f2d86a8b709d", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: registry-g6dfr I0130 13:43:52.838316 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"nginx-ingress-controller-6fc5bcc8c9", UID:"dbe6ecae-373d-4d10-b46c-79dec945b58e", APIVersion:"apps/v1", ResourceVersion:"290", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-ingress-controller-6fc5bcc8c9-2cwk6

==> kube-scheduler ["e74cbdac469f"] <== I0130 13:43:38.603248 1 serving.go:312] Generated self-signed cert in-memory W0130 13:43:38.846851 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found W0130 13:43:38.846916 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found W0130 13:43:41.246295 1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0130 13:43:41.246374 1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0130 13:43:41.246382 1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous. W0130 13:43:41.246388 1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false W0130 13:43:41.259784 1 authorization.go:47] Authorization is disabled W0130 13:43:41.259808 1 authentication.go:92] Authentication is disabled I0130 13:43:41.259815 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 I0130 13:43:41.260827 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0130 13:43:41.260849 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0130 13:43:41.261269 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 I0130 13:43:41.261319 1 tlsconfig.go:219] Starting DynamicServingCertificateController E0130 13:43:41.262122 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0130 13:43:41.262316 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0130 13:43:41.262644 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0130 13:43:41.263906 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0130 13:43:41.264078 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0130 13:43:41.264135 1 reflector.go:156] 
k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0130 13:43:41.264474 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0130 13:43:41.264942 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0130 13:43:41.265140 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0130 13:43:41.265260 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0130 13:43:41.265282 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0130 13:43:41.266020 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0130 13:43:42.318043 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0130 13:43:42.359062 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0130 13:43:42.359065 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0130 13:43:42.359183 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0130 13:43:42.359368 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0130 13:43:42.359368 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0130 13:43:42.359370 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list 
v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0130 13:43:42.359473 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0130 13:43:42.359492 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0130 13:43:42.359118 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0130 13:43:42.359672 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0130 13:43:42.360088 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope I0130 13:43:43.360945 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0130 13:43:43.461554 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler... I0130 13:43:43.469056 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler E0130 13:43:51.830719 1 scheduler.go:638] error selecting node for pod: no nodes available to schedule pods E0130 13:43:51.852405 1 scheduler.go:638] error selecting node for pod: no nodes available to schedule pods E0130 13:43:51.854887 1 scheduler.go:638] error selecting node for pod: no nodes available to schedule pods E0130 13:43:51.857972 1 factory.go:494] pod is already present in unschedulableQ E0130 13:43:52.842960 1 scheduler.go:638] error selecting node for pod: no nodes available to schedule pods E0130 13:43:52.843262 1 scheduler.go:638] error selecting node for pod: no nodes available to schedule pods E0130 13:43:52.845988 1 factory.go:494] pod is already present in unschedulableQ E0130 13:43:53.284865 1 scheduler.go:638] error selecting node for pod: no nodes available to schedule pods E0130 13:45:11.265147 1 scheduler.go:440] Error updating the condition of the pod kube-system/storage-provisioner: Operation cannot be fulfilled on pods "storage-provisioner": the object has been modified; please apply your changes to the latest version and try again E0130 13:45:11.265176 1 scheduler.go:638] error selecting node for pod: no nodes available to schedule pods E0130 13:45:11.266502 1 scheduler.go:638] error selecting node for pod: no nodes available to schedule pods E0130 13:45:11.267833 1 scheduler.go:638] error selecting node for pod: no nodes available to schedule pods E0130 13:46:41.262214 1 scheduler.go:638] error selecting node for pod: no nodes available to schedule pods E0130 13:46:41.265285 1 scheduler.go:638] error selecting node for pod: no nodes available to schedule pods E0130 13:46:41.270571 1 scheduler.go:638] error selecting node for pod: no nodes available to schedule pods

==> kubelet <== -- Logs begin at Thu 2020-01-30 13:42:42 UTC, end at Thu 2020-01-30 13:47:29 UTC. -- Jan 30 13:47:25 minikube kubelet[11555]: E0130 13:47:25.313878 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:25 minikube kubelet[11555]: E0130 13:47:25.387549 11555 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list v1.Pod: pods is forbidden: User "system:node:minikube" cannot list resource "pods" in API group "" at the cluster scope Jan 30 13:47:25 minikube kubelet[11555]: E0130 13:47:25.414631 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:25 minikube kubelet[11555]: E0130 13:47:25.515249 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:25 minikube kubelet[11555]: E0130 13:47:25.586543 11555 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list v1.Service: services is forbidden: User "system:node:minikube" cannot list resource "services" in API group "" at the cluster scope Jan 30 13:47:25 minikube kubelet[11555]: E0130 13:47:25.615956 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:25 minikube kubelet[11555]: E0130 13:47:25.716484 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:25 minikube kubelet[11555]: E0130 13:47:25.785622 11555 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list v1.Node: nodes "minikube" is forbidden: User "system:node:minikube" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 13:47:25 minikube kubelet[11555]: E0130 13:47:25.819489 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:25 minikube kubelet[11555]: E0130 13:47:25.919959 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:25 minikube kubelet[11555]: E0130 13:47:25.991994 11555 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:node:minikube" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 30 13:47:26 minikube kubelet[11555]: E0130 13:47:26.021196 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:26 minikube kubelet[11555]: E0130 13:47:26.121636 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:26 minikube kubelet[11555]: I0130 13:47:26.166711 11555 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach Jan 30 13:47:26 minikube kubelet[11555]: E0130 13:47:26.195173 11555 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:node:minikube" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jan 30 13:47:26 minikube kubelet[11555]: E0130 13:47:26.222050 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:26 minikube kubelet[11555]: I0130 13:47:26.226832 11555 kubelet_node_status.go:70] Attempting to register node minikube Jan 30 13:47:26 minikube kubelet[11555]: E0130 13:47:26.322867 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:26 minikube kubelet[11555]: E0130 13:47:26.366336 11555 kubelet_node_status.go:92] Unable to register node "minikube" with API server: nodes is forbidden: User "system:node:minikube" cannot create resource "nodes" in API group "" at the cluster scope Jan 30 13:47:26 minikube kubelet[11555]: E0130 13:47:26.424054 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:26 minikube kubelet[11555]: E0130 13:47:26.524346 11555 
kubelet.go:2263] node "minikube" not found Jan 30 13:47:26 minikube kubelet[11555]: E0130 13:47:26.565647 11555 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list v1.Pod: pods is forbidden: User "system:node:minikube" cannot list resource "pods" in API group "" at the cluster scope Jan 30 13:47:26 minikube kubelet[11555]: E0130 13:47:26.625011 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:26 minikube kubelet[11555]: E0130 13:47:26.725419 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:26 minikube kubelet[11555]: E0130 13:47:26.766421 11555 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list v1.Service: services is forbidden: User "system:node:minikube" cannot list resource "services" in API group "" at the cluster scope Jan 30 13:47:26 minikube kubelet[11555]: E0130 13:47:26.826193 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:26 minikube kubelet[11555]: E0130 13:47:26.926881 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:26 minikube kubelet[11555]: E0130 13:47:26.964576 11555 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list v1.Node: nodes "minikube" is forbidden: User "system:node:minikube" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 13:47:27 minikube kubelet[11555]: E0130 13:47:27.027253 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:27 minikube kubelet[11555]: E0130 13:47:27.127425 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:27 minikube kubelet[11555]: E0130 13:47:27.164452 11555 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:node:minikube" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 30 13:47:27 minikube kubelet[11555]: E0130 13:47:27.227578 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:27 minikube kubelet[11555]: E0130 13:47:27.327805 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:27 minikube kubelet[11555]: E0130 13:47:27.365449 11555 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:node:minikube" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jan 30 13:47:27 minikube kubelet[11555]: E0130 13:47:27.428216 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:27 minikube kubelet[11555]: E0130 13:47:27.538867 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:27 minikube kubelet[11555]: E0130 13:47:27.569687 11555 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list v1.Pod: pods is forbidden: User "system:node:minikube" cannot list resource "pods" in API group "" at the cluster scope Jan 30 13:47:27 minikube kubelet[11555]: E0130 13:47:27.639467 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:27 minikube kubelet[11555]: E0130 13:47:27.740118 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:27 minikube kubelet[11555]: E0130 13:47:27.770483 11555 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list v1.Service: services is forbidden: User "system:node:minikube" cannot list resource "services" in API group "" at the cluster scope Jan 30 13:47:27 minikube kubelet[11555]: E0130 13:47:27.841028 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:27 
minikube kubelet[11555]: E0130 13:47:27.941794 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:27 minikube kubelet[11555]: E0130 13:47:27.966820 11555 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list v1.Node: nodes "minikube" is forbidden: User "system:node:minikube" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 13:47:28 minikube kubelet[11555]: E0130 13:47:28.042175 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:28 minikube kubelet[11555]: E0130 13:47:28.142330 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:28 minikube kubelet[11555]: E0130 13:47:28.165828 11555 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:node:minikube" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 30 13:47:28 minikube kubelet[11555]: E0130 13:47:28.242930 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:28 minikube kubelet[11555]: E0130 13:47:28.344060 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:28 minikube kubelet[11555]: E0130 13:47:28.370269 11555 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:node:minikube" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jan 30 13:47:28 minikube kubelet[11555]: E0130 13:47:28.444992 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:28 minikube kubelet[11555]: E0130 13:47:28.486199 11555 controller.go:135] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "minikube" is forbidden: User "system:node:minikube" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Jan 30 13:47:28 minikube kubelet[11555]: E0130 13:47:28.545675 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:28 minikube kubelet[11555]: E0130 13:47:28.572202 11555 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list v1.Pod: pods is forbidden: User "system:node:minikube" cannot list resource "pods" in API group "" at the cluster scope Jan 30 13:47:28 minikube kubelet[11555]: E0130 13:47:28.646271 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:28 minikube kubelet[11555]: E0130 13:47:28.746466 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:28 minikube kubelet[11555]: E0130 13:47:28.771812 11555 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list v1.Service: services is forbidden: User "system:node:minikube" cannot list resource "services" in API group "" at the cluster scope Jan 30 13:47:28 minikube kubelet[11555]: E0130 13:47:28.847294 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:28 minikube kubelet[11555]: E0130 13:47:28.947449 11555 kubelet.go:2263] node "minikube" not found Jan 30 13:47:28 minikube kubelet[11555]: E0130 13:47:28.967863 11555 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list v1.Node: nodes "minikube" is forbidden: User "system:node:minikube" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 13:47:29 minikube kubelet[11555]: E0130 13:47:29.047795 11555 kubelet.go:2263] node "minikube" not found

The operating system version: Ubuntu

uname -a
Linux pclnxdev22 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

srntqn commented 4 years ago

I get the same errors when I try to start minikube with the --extra-config=apiserver.authorization-mode=RBAC option.

minikube version

minikube version: v1.7.3
commit: 436667c819c324e35d7e839f8116b968a2d0a3ff

minikube start --v=7 --memory=4Gb --cpus=6 --extra-config=apiserver.authorization-mode=RBAC --vm-driver=virtualbox
Logs

```sh Error starting cluster: init failed. output: "-- stdout --\n[init] Using Kubernetes version: v1.17.3\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.99.113 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.99.113 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s\n[apiclient] All control plane components are healthy after 16.511920 seconds\n[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\n[kubelet] Creating a ConfigMap \"kubelet-config-1.17\" in namespace kube-system with the configuration for the kubelets in the cluster\n[kubelet-check] Initial timeout of 40s passed.\n\n-- /stdout --\n** stderr ** \nW0223 20:08:01.852930 3496 validation.go:28] Cannot validate kube-proxy config - no validator is available\nW0223 20:08:01.853143 3496 validation.go:28] Cannot validate kubelet config - no validator is available\n\t[WARNING IsDockerSystemdCheck]: detected \"cgroupfs\" as the Docker cgroup driver. The recommended driver is \"systemd\". 
Please follow the guide at https://kubernetes.io/docs/setup/cri/\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nW0223 20:08:05.630875 3496 manifests.go:214] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"RBAC\"\nW0223 20:08:05.639273 3496 manifests.go:214] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"RBAC\"\nW0223 20:08:05.640371 3496 manifests.go:214] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"RBAC\"\nerror execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition\nTo see the stack trace of this error execute with --v=5 or higher\n\n** /stderr **": /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.17.3 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.99.113 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.99.113 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from 
directory "/etc/kubernetes/manifests". This can take up to 4m0s [apiclient] All control plane components are healthy after 16.511920 seconds [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster [kubelet-check] Initial timeout of 40s passed. stderr: W0223 20:08:01.852930 3496 validation.go:28] Cannot validate kube-proxy config - no validator is available W0223 20:08:01.853143 3496 validation.go:28] Cannot validate kubelet config - no validator is available [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' W0223 20:08:05.630875 3496 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "RBAC" W0223 20:08:05.639273 3496 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "RBAC" W0223 20:08:05.640371 3496 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "RBAC" error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition To see the stack trace of this error execute with --v=5 or higher 😿 minikube is exiting due to an error. If the above message is not useful, open an issue: 👉 https://github.com/kubernetes/minikube/issues/new/choose ```

irizzant commented 4 years ago

I think the error is caused by the flag --extra-config=apiserver.authorization-mode=RBAC: changing it to --extra-config=apiserver.authorization-mode=Node,RBAC makes minikube start without problems. @srntqn please confirm that this works for you too.
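For reference, this is roughly what the working variant of the command posted above looks like (same flags, only the authorization mode changed):

```sh
# Keep the Node authorizer alongside RBAC so the kubelet retains its baseline node permissions
minikube start --v=7 --memory=4Gb --cpus=6 \
  --extra-config=apiserver.authorization-mode=Node,RBAC \
  --vm-driver=virtualbox
```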

srntqn commented 4 years ago

@irizzant yep, it works. Thank you.

If I understand it correctly, when we use --extra-config=apiserver.authorization-mode=RBAC we enable only the RBAC authorizer, so the kubelet can't start because it has no authorization against the API server. If that's the case, I guess everything is fine and there is no unexpected behaviour.
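If anyone wants to double-check which authorizers the API server actually came up with, something like this should work once the cluster is running (assuming the default static pod name kube-apiserver-minikube):

```sh
# Print the kube-apiserver pod spec and look for the authorization-mode flag
kubectl -n kube-system get pod kube-apiserver-minikube -o yaml | grep authorization-mode
```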

irizzant commented 4 years ago

OK @srntqn, then I suppose I can close this, since it's working for both of us.

Yes, the kubelet needs permission to perform basic operations against the API server, which is why the apiserver authorization mode should always include both the Node and RBAC values.
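For anyone hitting this later: a rough way to check what minikube actually passed down is to look at the generated kubeadm config inside the VM (the path is taken from the kubeadm init invocation shown in the logs above; the exact file layout may differ between versions):

```sh
# Show the extraArgs sections of the kubeadm config minikube generated,
# which should contain the authorization-mode value in effect
minikube ssh -- grep -A3 extraArgs /var/tmp/minikube/kubeadm.yaml
```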