Lavie526 opened this issue 4 years ago
Anything wrong? How should I configure things so that I can run minikube without sudo? It installed everything to /root/.kube and /root/.minikube, not to /scratch/$USER/.minikube and /scratch/$USER/.kube. After I manually move them, they are not usable. I remember the previous minikube version didn't have this issue; is it new in the latest version?
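For reference, a minimal sketch of pointing both minikube and kubectl at a non-root location before starting (this assumes /scratch/$USER is writable; MINIKUBE_HOME and KUBECONFIG are the standard environment variables, not settings confirmed anywhere in this issue):
export MINIKUBE_HOME=/scratch/$USER
export KUBECONFIG=/scratch/$USER/.kube/config
sudo -E minikube start --vm-driver=none    # -E so the exported variables survive sudo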
Some updates: while using sudo -E minikube start --vm-driver=none --docker-opt="default-ulimit=core=-1" --alsologtostderr --extra-config=kubelet.cgroups-per-qos=false --extra-config=kubelet.enforce-node-allocatable=""
it is able to start under /scratch/$USER/, and it reports that minikube started successfully.
However, when I try to run kubectl get node, with or without sudo, it shows: Unable to connect to the server: net/http: TLS handshake timeout
What's the problem? Why can't I use kubectl after starting minikube?
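A few checks that may help narrow down where the TLS handshake timeout comes from (a sketch only; paths assume the /scratch/$USER layout above):
minikube status
kubectl --kubeconfig=/scratch/$USER/.kube/config cluster-info
sudo ss -tlnp | grep 8443    # is the apiserver actually listening on 8443?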
-- /stdout -- I0708 04:51:02.063069 23833 docker.go:384] kubernetesui/dashboard:v2.0.0 wasn't preloaded I0708 04:51:02.063135 23833 exec_runner.go:49] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json I0708 04:51:02.173554 23833 store.go:62] repositories.json doesn't exist: sudo cat /var/lib/docker/image/overlay2/repositories.json: exit status 1 stdout:
stderr: cat: /var/lib/docker/image/overlay2/repositories.json: No such file or directory I0708 04:51:02.174013 23833 exec_runner.go:49] Run: which lz4 I0708 04:51:02.175049 23833 vm_assets.go:95] NewFileAsset: /scratch/jiekong/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4 I0708 04:51:02.175169 23833 kubeadm.go:719] prelaoding failed, will try to load cached images: getting file asset: open: open /scratch/jiekong/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4: no such file or directory I0708 04:51:02.175335 23833 kubeadm.go:124] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:10.88.105.73 APIServerPort:8443 KubernetesVersion:v1.18.3 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:den03fyu DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.88.105.73"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:10.88.105.73 ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0708 04:51:02.175623 23833 kubeadm.go:128] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta2 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 10.88.105.73 bindPort: 8443 bootstrapTokens:
apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "" metricsBindAddress: 10.88.105.73:10249
I0708 04:51:02.176712 23833 exec_runner.go:49] Run: docker info --format {{.CgroupDriver}} I0708 04:51:02.306243 23833 kubeadm.go:755] kubelet [Unit] Wants=docker.socket
[Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=den03fyu --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.88.105.73 --pod-manifest-path=/etc/kubernetes/manifests
[Install] config: {KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} I0708 04:51:02.306964 23833 exec_runner.go:49] Run: sudo ls /var/lib/minikube/binaries/v1.18.3 I0708 04:51:02.420730 23833 binaries.go:46] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.18.3: exit status 2 stdout:
stderr: ls: cannot access /var/lib/minikube/binaries/v1.18.3: No such file or directory
Initiating transfer...
I0708 04:51:02.421292 23833 exec_runner.go:49] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.18.3
I0708 04:51:02.529645 23833 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubectl.sha256
I0708 04:51:02.529868 23833 vm_assets.go:95] NewFileAsset: /scratch/jiekong/.minikube/cache/linux/v1.18.3/kubectl -> /var/lib/minikube/binaries/v1.18.3/kubectl
I0708 04:51:02.530034 23833 exec_runner.go:98] cp: /scratch/jiekong/.minikube/cache/linux/v1.18.3/kubectl --> /var/lib/minikube/binaries/v1.18.3/kubectl (44032000 bytes)
I0708 04:51:02.529702 23833 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubelet.sha256
I0708 04:51:02.530323 23833 exec_runner.go:49] Run: sudo systemctl is-active --quiet service kubelet
I0708 04:51:02.529724 23833 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubeadm.sha256
I0708 04:51:02.530660 23833 vm_assets.go:95] NewFileAsset: /scratch/jiekong/.minikube/cache/linux/v1.18.3/kubeadm -> /var/lib/minikube/binaries/v1.18.3/kubeadm
I0708 04:51:02.530727 23833 exec_runner.go:98] cp: /scratch/jiekong/.minikube/cache/linux/v1.18.3/kubeadm --> /var/lib/minikube/binaries/v1.18.3/kubeadm (39813120 bytes)
I0708 04:51:02.657218 23833 vm_assets.go:95] NewFileAsset: /scratch/jiekong/.minikube/cache/linux/v1.18.3/kubelet -> /var/lib/minikube/binaries/v1.18.3/kubelet
I0708 04:51:02.657524 23833 exec_runner.go:98] cp: /scratch/jiekong/.minikube/cache/linux/v1.18.3/kubelet --> /var/lib/minikube/binaries/v1.18.3/kubelet (113283800 bytes)
I0708 04:51:02.918152 23833 exec_runner.go:49] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0708 04:51:03.031935 23833 exec_runner.go:91] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0708 04:51:03.032202 23833 exec_runner.go:98] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (534 bytes)
I0708 04:51:03.032429 23833 exec_runner.go:91] found /lib/systemd/system/kubelet.service, removing ...
I0708 04:51:03.032605 23833 exec_runner.go:98] cp: memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0708 04:51:03.032782 23833 exec_runner.go:98] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (1440 bytes)
I0708 04:51:03.032963 23833 start.go:268] checking
I0708 04:51:03.033126 23833 exec_runner.go:49] Run: grep 10.88.105.73 control-plane.minikube.internal$ /etc/hosts
I0708 04:51:03.034879 23833 exec_runner.go:49] Run: sudo systemctl daemon-reload
I0708 04:51:03.208210 23833 exec_runner.go:49] Run: sudo systemctl start kubelet
I0708 04:51:03.368188 23833 certs.go:52] Setting up /scratch/jiekong/.minikube/profiles/minikube for IP: 10.88.105.73
I0708 04:51:03.368411 23833 certs.go:169] skipping minikubeCA CA generation: /scratch/jiekong/.minikube/ca.key
I0708 04:51:03.368509 23833 certs.go:169] skipping proxyClientCA CA generation: /scratch/jiekong/.minikube/proxy-client-ca.key
I0708 04:51:03.368709 23833 certs.go:273] generating minikube-user signed cert: /scratch/jiekong/.minikube/profiles/minikube/client.key
I0708 04:51:03.368794 23833 crypto.go:69] Generating cert /scratch/jiekong/.minikube/profiles/minikube/client.crt with IP's: []
I0708 04:51:03.568353 23833 crypto.go:157] Writing cert to /scratch/jiekong/.minikube/profiles/minikube/client.crt ...
I0708 04:51:03.568520 23833 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/client.crt: {Name:mk102f7d86706185740d9bc9a57fc1d55716aadc Clock:{} Delay:500ms Timeout:1m0s Cancel:
stderr:
ls: cannot access /etc/kubernetes/admin.conf: No such file or directory
ls: cannot access /etc/kubernetes/kubelet.conf: No such file or directory
ls: cannot access /etc/kubernetes/controller-manager.conf: No such file or directory
ls: cannot access /etc/kubernetes/scheduler.conf: No such file or directory
I0708 04:51:04.906999 23833 exec_runner.go:49] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
I0708 04:51:24.990980 23833 exec_runner.go:78] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": (20.083826345s)
I0708 04:51:24.991236 23833 exec_runner.go:49] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0708 04:51:24.991362 23833 exec_runner.go:49] Run: sudo /var/lib/minikube/binaries/v1.18.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0708 04:51:24.991467 23833 exec_runner.go:49] Run: sudo /var/lib/minikube/binaries/v1.18.3/kubectl label nodes minikube.k8s.io/version=v1.11.0 minikube.k8s.io/commit=57e2f55f47effe9ce396cea42a1e0eb4f611ebbd minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_07_08T04_51_24_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0708 04:51:25.009728 23833 ops.go:35] apiserver oom_adj: -16
I0708 04:51:25.305621 23833 kubeadm.go:890] duration metric: took 314.347286ms to wait for elevateKubeSystemPrivileges.
I0708 04:51:25.308559 23833 kubeadm.go:295] StartCluster complete in 20.863855543s
I0708 04:51:25.308636 23833 settings.go:123] acquiring lock: {Name:mk6f220c874ab31ad6cc0cf9a6c90f7ab17dd518 Clock:{} Delay:500ms Timeout:1m0s Cancel:
The 'none' driver is designed for experts who need to integrate with an existing VM. Most users should use the newer 'docker' driver instead, which does not require root! For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
I0708 04:51:25.311027 23833 addons.go:320] enableAddons start: toEnable=map[], additional=[]
I0708 04:51:25.311925 23833 addons.go:50] Setting storage-provisioner=true in profile "minikube"
I0708 04:51:25.311964 23833 addons.go:126] Setting addon storage-provisioner=true in "minikube"
W0708 04:51:25.311982 23833 addons.go:135] addon storage-provisioner should already be in state true
I0708 04:51:25.312003 23833 host.go:65] Checking if "minikube" exists ...
Verifying Kubernetes components...
I0708 04:51:25.312675 23833 kubeconfig.go:93] found "minikube" server: "https://10.88.105.73:8443"
I0708 04:51:25.313868 23833 api_server.go:145] Checking apiserver status ...
I0708 04:51:25.313937 23833 exec_runner.go:49] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0708 04:51:25.312693 23833 addons.go:50] Setting default-storageclass=true in profile "minikube"
I0708 04:51:25.314281 23833 kapi.go:58] client config for minikube: &rest.Config{Host:"https://10.88.105.73:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:
kubectl version Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"} Unable to connect to the server: net/http: TLS handshake timeout
Doesn't --driver=none require root? In which case you would have to transfer the config and certs to the normal user.
I know this is terrible but I do the following....
tar -cf client.tar .kube/config .minikube/profiles/minikube .minikube/ca.* .minikube/cert*
scp root@host:/root/client.tar .
tar -xf client.tar
OR configure client via kubectl config
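A rough sketch of that kubectl config route, in case it helps; the server address and cert paths below are taken from the root-owned layout seen in this thread and may need adjusting:
kubectl config set-cluster minikube --server=https://10.88.105.73:8443 --certificate-authority=/root/.minikube/ca.crt
kubectl config set-credentials minikube --client-certificate=/root/.minikube/profiles/minikube/client.crt --client-key=/root/.minikube/profiles/minikube/client.key
kubectl config set-context minikube --cluster=minikube --user=minikube
kubectl config use-context minikube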
@mazzystr --driver=none requires root, so I use sudo before starting minikube. The current issue is that with "sudo -E minikube start" it is able to install to the /scratch/$USER/ folder, and it seems minikube started successfully from the log I have pasted above. However, when I use kubectl get nodes, it shows me:
Unable to connect to the server: net/http: TLS handshake timeout
I guess maybe the start did not really succeed; there is some information in the output log:
๐ Enabled addons: default-storageclass, storage-provisioner I0708 04:51:26.376795 23833 addons.go:322] enableAddons completed in 1.065764106s I0708 04:51:26.475663 23833 system_pods.go:61] 1 kube-system pods found I0708 04:51:26.475871 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0708 04:51:26.975162 23833 system_pods.go:61] 1 kube-system pods found I0708 04:51:26.975384 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0708 04:51:27.475106 23833 system_pods.go:61] 1 kube-system pods found I0708 04:51:27.475161 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0708 04:51:27.975099 23833 system_pods.go:61] 1 kube-system pods found I0708 04:51:27.975138 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0708 04:51:28.475038 23833 system_pods.go:61] 1 kube-system pods found I0708 04:51:28.475086 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0708 04:51:28.974897 23833 system_pods.go:61] 1 kube-system pods found I0708 04:51:28.975125 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0708 04:51:29.474969 23833 system_pods.go:61] 1 kube-system pods found I0708 04:51:29.475023 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0708 04:51:29.976265 23833 system_pods.go:61] 1 kube-system pods found I0708 04:51:29.976329 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0708 04:51:30.478012 23833 system_pods.go:61] 1 kube-system pods found I0708 04:51:30.478248 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0708 04:51:30.975085 23833 system_pods.go:61] 1 kube-system pods found I0708 04:51:30.975134 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) 
I0708 04:51:31.475057 23833 system_pods.go:61] 1 kube-system pods found I0708 04:51:31.475119 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0708 04:51:31.976585 23833 system_pods.go:61] 4 kube-system pods found I0708 04:51:31.976640 23833 system_pods.go:63] "coredns-66bff467f8-7tfkv" [19c5ca58-63f0-4726-8c22-66e5b3beb41c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0708 04:51:31.976654 23833 system_pods.go:63] "coredns-66bff467f8-8pxx8" [6bb588d6-6149-416c-9bdf-40a3506efd17] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0708 04:51:31.976664 23833 system_pods.go:63] "kube-proxy-lzp2j" [c7def367-ba25-4bcd-9f97-a509b89110a5] Pending I0708 04:51:31.976674 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0708 04:51:31.976685 23833 system_pods.go:74] duration metric: took 6.514094256s to wait for pod list to return data ... I0708 04:51:31.976697 23833 kubeadm.go:449] duration metric: took 6.664173023s to wait for : map[apiserver:true system_pods:true] ...
Not sure if that is why I can't use kubectl commands after starting?
I tried to do the transfer; after the transfer, it only works with sudo, like "sudo kubectl get nodes". Without sudo, "kubectl get nodes" reports: Unable to connect to the server: net/http: TLS handshake timeout
...and that makes sense. Everything is going to be owned by root... all the config, all the certs, all the images, everything in ~/.minikube. There's a lot of junk packed in that directory. If you try to run kubectl as a normal user it will fail.
Try running minikube start directly as root. See if that works any better.
Or sudo chown -R user ~/.minikube
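Another option minikube itself advertises for the none driver (see the start output quoted further down) is to let it fix ownership automatically; a sketch, assuming the config should end up owned by the invoking user:
export CHANGE_MINIKUBE_NONE_USER=true
sudo -E minikube start --driver=none
kubectl get nodes    # should now work without sudo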
Yup, just as suspected. Ensure crio is installed and running. Ensure kubelet is installed and running.
Then run minikube start directly as root... (I add a couple extra parameters to make my env usable)
# minikube start --driver=none --container-runtime=cri-o --disk-size=50g --memory=8096m --apiserver-ips=10.88.0.1,10.88.0.2,10.88.0.3,10.88.0.4,10.88.0.5,10.88.0.6,10.88.0.7,10.88.0.8 --apiserver-name=k8s.octacube.co --apiserver-names=k8s.octacube.co
minikube v1.11.0 on Fedora 32
Using the none driver based on user configuration
The 'none' driver does not respect the --memory flag
Using the 'cri-o' runtime with the 'none' driver is an untested configuration!
Starting control plane node minikube in cluster minikube
Running on localhost (CPUs=4, Memory=15886MB, Disk=102350MB) ...
OS release is Fedora 32 (Thirty Two)
Preparing Kubernetes v1.18.3 on CRI-O ...
> kubelet.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
> kubectl.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
> kubeadm.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
> kubectl: 41.99 MiB / 41.99 MiB [---------------] 100.00% 50.52 MiB p/s 1s
> kubelet: 108.04 MiB / 108.04 MiB [-------------] 100.00% 65.79 MiB p/s 2s
> kubeadm: 37.97 MiB / 37.97 MiB [---------------] 100.00% 22.37 MiB p/s 2s
Configuring local host environment ...
The 'none' driver is designed for experts who need to integrate with an existing VM
Most users should use the newer 'docker' driver instead, which does not require root!
For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
kubectl and minikube configuration will be stored in /root
To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
sudo mv /root/.kube /root/.minikube $HOME
sudo chown -R $USER $HOME/.kube $HOME/.minikube
This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
Verifying Kubernetes components...
Enabled addons: default-storageclass, storage-provisioner
Done! kubectl is now configured to use "minikube"
[root@cube0 ~]# kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 28s
[root@cube0 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
cube0 Ready master 51s v1.18.3
@mazzystr However, my expectation is to run kubectl as a normal user, not as root. How should I make that work?
Documentation is perfectly clear on the root requirement for --driver=none. Link is here.
@Lavie526 the none driver is only supported with root; however, I recommend using our newest driver, the Docker driver.
I recommend deleting the other one with sudo minikube delete --all, then switch to the normal user and do
minikube start --driver=docker
@Lavie526 does that solve your problem?
Meanwhile, we do have an issue to implement the none driver as non-root, but it is not a priority for us since the docker driver is our preferred new driver.
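For completeness, a rough sketch of the switch to the docker driver as a normal user; the usermod step is the usual prerequisite for running docker without sudo and is an assumption here, not something stated above:
sudo minikube delete --all
sudo usermod -aG docker $USER && newgrp docker    # let the normal user talk to the docker daemon
minikube start --driver=docker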
@medyagh I tried to use the docker driver ("minikube start --vm-driver=docker --docker-opt="default-ulimit=core=-1" --alsologtostderr --extra-config=kubelet.cgroups-per-qos=false --extra-config=kubelet.enforce-node-allocatable="" --extra-config=kubelet.cgroup-driver=systemd") as you suggested above; however, there are still issues during startup:
๐ Found network options: โช NO_PROXY=localhost,127.0.0.1,172.17.0.3,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24 โช http_proxy=http://www-proxy-brmdc.us.*.com:80/ โช https_proxy=http://www-proxy-brmdc.us.*.com:80/ โช no_proxy=10.88.105.73,localhost,127.0.0.1,172.17.0.3 I0719 19:10:15.272657 82881 ssh_runner.go:148] Run: systemctl --version I0719 19:10:15.272719 82881 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/ I0719 19:10:15.273027 82881 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0719 19:10:15.272746 82881 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0719 19:10:15.334467 82881 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:9031 SSHKeyPath:/scratch/jiekong/.minikube/machines/minikube/id_rsa Username:docker} I0719 19:10:15.338242 82881 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:9031 SSHKeyPath:/scratch/jiekong/.minikube/machines/minikube/id_rsa Username:docker} I0719 19:10:15.417613 82881 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd I0719 19:10:16.630977 82881 ssh_runner.go:188] Completed: sudo systemctl is-active --quiet service containerd: (1.21305259s) I0719 19:10:16.631374 82881 ssh_runner.go:148] Run: sudo systemctl cat docker.service I0719 19:10:16.631196 82881 ssh_runner.go:188] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.358313967s) W0719 19:10:16.631736 82881 start.go:504] [curl -sS -m 2 https://k8s.gcr.io/] failed: curl -sS -m 2 https://k8s.gcr.io/: Process exited with status 7 stdout:
stderr: curl: (7) Failed to connect to k8s.gcr.io port 443: Connection timed out โ This container is having trouble accessing https://k8s.gcr.io ๐ก To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/ I0719 19:10:16.645337 82881 cruntime.go:192] skipping containerd shutdown because we are bound to it I0719 19:10:16.645418 82881 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio I0719 19:10:16.659966 82881 ssh_runner.go:148] Run: sudo systemctl cat docker.service I0719 19:10:16.672358 82881 ssh_runner.go:148] Run: sudo systemctl daemon-reload I0719 19:10:16.732206 82881 ssh_runner.go:148] Run: sudo systemctl start docker I0719 19:10:16.744449 82881 ssh_runner.go:148] Run: docker version --format {{.Server.Version}} ๐ณ Preparing Kubernetes v1.18.3 on Docker 19.03.2 ... โช opt default-ulimit=core=-1 โช env NO_PROXY=localhost,127.0.0.1,172.17.0.3,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24 โช env HTTP_PROXY=http://www-proxy-brmdc.us.*.com:80/ โช env HTTPS_PROXY=http://www-proxy-brmdc.us.*.com:80/ โช env NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3 I0719 19:10:16.816869 82881 cli_runner.go:109] Run: docker network ls --filter name=bridge --format {{.ID}} I0719 19:10:16.868461 82881 cli_runner.go:109] Run: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" f58908572170 b5c831a6ae16 E0719 19:10:16.920665 82881 start.go:96] Unable to get host IP: inspect IP bridge network "f58908572170\nb5c831a6ae16".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" f58908572170 b5c831a6ae16: exit status 1 stdout:
stderr: Error: No such network: f58908572170 b5c831a6ae16 I0719 19:10:16.921025 82881 exit.go:58] WithError(failed to start node)=startup failed: Failed to setup kubeconfig: inspect IP bridge network "f58908572170\nb5c831a6ae16".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" f58908572170 b5c831a6ae16: exit status 1 stdout:
stderr: Error: No such network: f58908572170 b5c831a6ae16 called from: goroutine 1 [running]: runtime/debug.Stack(0x0, 0x0, 0xc0002c0480) /usr/local/go/src/runtime/debug/stack.go:24 +0x9d k8s.io/minikube/pkg/minikube/exit.WithError(0x1baebdd, 0x14, 0x1ea7cc0, 0xc000fc1de0) /app/pkg/minikube/exit/exit.go:58 +0x34 k8s.io/minikube/cmd/minikube/cmd.runStart(0x2c85020, 0xc000836300, 0x0, 0x6) /app/cmd/minikube/cmd/start.go:198 +0x40f github.com/spf13/cobra.(Command).execute(0x2c85020, 0xc0008362a0, 0x6, 0x6, 0x2c85020, 0xc0008362a0) /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846 +0x2aa github.com/spf13/cobra.(Command).ExecuteC(0x2c84060, 0x0, 0x1, 0xc0006a5d20) /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x349 github.com/spf13/cobra.(*Command).Execute(...) /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887 k8s.io/minikube/cmd/minikube/cmd.Execute() /app/cmd/minikube/cmd/root.go:106 +0x747 main.main() /app/cmd/minikube/main.go:71 +0x143 W0719 19:10:16.921363 82881 out.go:201] failed to start node: startup failed: Failed to setup kubeconfig: inspect IP bridge network "f58908572170\nb5c831a6ae16".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" f58908572170 b5c831a6ae16: exit status 1 stdout:
stderr: Error: No such network: f58908572170 b5c831a6ae16
failed to start node: startup failed: Failed to setup kubeconfig: inspect IP bridge network "f58908572170\nb5c831a6ae16".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" f58908572170 b5c831a6ae16: exit status 1 stdout:
stderr: Error: No such network: f58908572170 b5c831a6ae16
minikube is exiting due to an error. If the above message is not useful, open an issue: https://github.com/kubernetes/minikube/issues/new/choose
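One thing the output above flags is the proxy: the container could not reach https://k8s.gcr.io. A sketch of the proxy environment along the lines of the minikube proxy docs linked in that output (the NO_PROXY subnets are assumptions based on the addresses seen in these logs):
export HTTP_PROXY=http://www-proxy-brmdc.us.*.com:80/
export HTTPS_PROXY=http://www-proxy-brmdc.us.*.com:80/
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,172.17.0.0/16,192.168.99.0/24,192.168.39.0/24
minikube start --driver=docker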
If i use "minikube start --vm-driver=docker", [jiekong@den03fyu ~]$ minikube start --vm-driver=docker ๐ minikube v1.12.0 on Oracle 7.4 (xen/amd64) โช KUBECONFIG=/scratch/jiekong/.kube/config โช MINIKUBE_HOME=/scratch/jiekong โจ Using the docker driver based on user configuration ๐ Starting control plane node minikube in cluster minikube ๐ฅ Creating docker container (CPUs=2, Memory=14600MB) ... ๐ Found network options: โช NO_PROXY=localhost,127.0.0.1,172.17.0.3,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24 โช http_proxy=http://www-proxy-brmdc.us.*.com:80/ โช https_proxy=http://www-proxy-brmdc.us.*.com:80/ โช no_proxy=10.88.105.73,localhost,127.0.0.1,172.17.0.3 โ This container is having trouble accessing https://k8s.gcr.io ๐ก To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/ ๐ณ Preparing Kubernetes v1.18.3 on Docker 19.03.2 ... โช env NO_PROXY=localhost,127.0.0.1,172.17.0.3,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24 โช env HTTP_PROXY=http://www-proxy-brmdc.us.*.com:80/ โช env HTTPS_PROXY=http://www-proxy-brmdc.us.*.com:80/ โช env NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3 ๐ Verifying Kubernetes components... ๐ Enabled addons: default-storageclass, storage-provisioner ๐ Done! kubectl is now configured to use "minikube" ///////////////////////////////////////////////////////////////////////////////////////// It seems status successfully, however while I use kubectl get nodes, its status is "NotReady": kubectl get nodes NAME STATUS ROLES AGE VERSION minikube NotReady master 2m35s v1.18.3
Any suggestions?
@medyagh Do you have any suggestions about the failure using the docker driver above? Also, are there any prerequisites for using this driver? I have already installed the latest docker and configured the cgroup driver to systemd. Please also refer to the logs below.
[jiekong@den03fyu tmp]$ minikube logs
==> Docker <==
-- Logs begin at Tue 2020-07-21 09:10:55 UTC, end at Tue 2020-07-21 09:14:34 UTC. --
Jul 21 09:11:02 minikube dockerd[80]: time="2020-07-21T09:11:02.765700925Z" level=info msg="Daemon shutdown complete"
Jul 21 09:11:02 minikube systemd[1]: docker.service: Succeeded.
Jul 21 09:11:02 minikube systemd[1]: Stopped Docker Application Container Engine.
Jul 21 09:11:02 minikube systemd[1]: Starting Docker Application Container Engine...
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.846036110Z" level=info msg="Starting up"
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.848246031Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.848284744Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.848309277Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0
==> container status <== CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID b765c7383ac2e 74060cea7f704 3 minutes ago Running kube-apiserver 0 d964074fa72ba 0d753c127dc63 303ce5db0e90d 3 minutes ago Running etcd 0 c98ab429dcd05 924b96c9a517c a31f78c7c8ce1 3 minutes ago Running kube-scheduler 0 0b78bfadca933 bc1da187d9749 d3e55153f52fb 3 minutes ago Running kube-controller-manager 0 dfe824c5264d0
==> describe nodes <==
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=d8747aec7ebf8332ddae276d5f8fb42d3152b5a1
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_07_21T02_11_36_0700
minikube.k8s.io/version=v1.9.1
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 21 Jul 2020 09:11:32 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime:
MemoryPressure False Tue, 21 Jul 2020 09:14:34 +0000 Tue, 21 Jul 2020 09:11:27 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Tue, 21 Jul 2020 09:14:34 +0000 Tue, 21 Jul 2020 09:11:27 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Tue, 21 Jul 2020 09:14:34 +0000 Tue, 21 Jul 2020 09:11:27 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready False Tue, 21 Jul 2020 09:14:34 +0000 Tue, 21 Jul 2020 09:11:27 +0000 KubeletNotReady container runtime status check may not have completed yet Addresses: InternalIP: 172.17.0.2 Hostname: minikube Capacity: cpu: 16 ephemeral-storage: 804139352Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 60111844Ki pods: 110 Allocatable: cpu: 16 ephemeral-storage: 804139352Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 60111844Ki pods: 110 System Info: Machine ID: e83acec14442432b86b3e77b6bbcfe03 System UUID: c4d95ffe-70c0-4660-806f-a43891c87d6b Boot ID: 55a28076-973d-4fd3-9b32-b25e77bad388 Kernel Version: 4.1.12-124.39.5.1.el7uek.x86_64 OS Image: Ubuntu 19.10 Operating System: linux Architecture: amd64 Container Runtime Version: docker://19.3.2 Kubelet Version: v1.18.0 Kube-Proxy Version: v1.18.0 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (2 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
kube-system kindnet-cv7kb 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 2m45s kube-system kube-proxy-vjnqz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m45s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits
cpu 100m (0%) 100m (0%) memory 50Mi (0%) 50Mi (0%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message
Normal NodeHasSufficientMemory 3m10s (x5 over 3m10s) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 3m10s (x5 over 3m10s) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 3m10s (x4 over 3m10s) kubelet, minikube Node minikube status is now: NodeHasSufficientPID Warning FailedNodeAllocatableEnforcement 3m10s kubelet, minikube Failed to update Node Allocatable Limits ["kubepods"]: failed to set supported cgroup subsystems for cgroup [kubepods]: failed to find subsystem mount for required subsystem: pids Normal Starting 2m54s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 2m54s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 2m54s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 2m54s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal Starting 2m46s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 2m46s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 2m46s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 2m46s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeHasSufficientPID 2m39s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeHasSufficientMemory 2m39s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 2m39s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal Starting 2m39s kubelet, minikube Starting kubelet. Normal Starting 2m32s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 2m32s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 2m32s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 2m32s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal Starting 2m25s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 2m25s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 2m25s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 2m25s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeHasSufficientPID 2m17s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal Starting 2m17s kubelet, minikube Starting kubelet. Normal NodeHasNoDiskPressure 2m17s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientMemory 2m17s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal Starting 2m10s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 2m10s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 2m10s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 2m10s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal Starting 2m2s kubelet, minikube Starting kubelet. 
Normal NodeHasSufficientMemory 2m2s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 2m2s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 2m2s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal Starting 115s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 115s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasSufficientPID 115s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeHasNoDiskPressure 115s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal Starting 107s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 107s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 107s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 107s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal Starting 100s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 100s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 100s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 100s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal Starting 92s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 92s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasSufficientPID 92s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeHasNoDiskPressure 92s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal Starting 85s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 85s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 85s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 85s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal Starting 77s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 77s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 77s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 77s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal Starting 70s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 70s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 70s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 70s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeHasSufficientPID 62s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeHasSufficientMemory 62s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 62s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal Starting 62s kubelet, minikube Starting kubelet. Normal Starting 55s kubelet, minikube Starting kubelet. 
Normal NodeHasSufficientMemory 55s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 55s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 55s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal Starting 47s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 47s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 47s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 47s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeHasSufficientPID 40s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeHasSufficientMemory 40s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 40s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal Starting 40s kubelet, minikube Starting kubelet. Normal Starting 32s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 32s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 32s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 32s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal Starting 25s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 25s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 25s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 25s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal Starting 17s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 17s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasSufficientPID 17s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeHasNoDiskPressure 17s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientMemory 10s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal Starting 10s kubelet, minikube Starting kubelet. Normal NodeHasNoDiskPressure 10s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 10s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal Starting 2s kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 2s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 2s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 2s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
==> dmesg <== [Jul18 05:37] systemd-fstab-generator[52440]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul18 11:37] systemd-fstab-generator[50154]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul18 17:37] systemd-fstab-generator[44990]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul18 23:37] systemd-fstab-generator[40881]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul19 05:37] systemd-fstab-generator[36344]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul19 11:37] systemd-fstab-generator[34144]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul19 17:37] systemd-fstab-generator[28939]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul19 23:37] systemd-fstab-generator[24785]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul20 01:35] systemd-fstab-generator[65022]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul20 05:22] systemd-fstab-generator[110191]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul20 05:23] systemd-fstab-generator[110364]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul20 05:24] systemd-fstab-generator[110483]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [ +2.796808] systemd-fstab-generator[110870]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [ +18.140954] systemd-fstab-generator[112205]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul20 05:26] systemd-fstab-generator[117323]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul20 05:37] systemd-fstab-generator[123141]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul20 11:37] systemd-fstab-generator[4023]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul20 17:37] systemd-fstab-generator[63123]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul20 23:37] systemd-fstab-generator[31942]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 05:37] systemd-fstab-generator[5139]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. 
Duplicate entry in /etc/fstab? [Jul21 07:28] systemd-fstab-generator[84894]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 07:30] systemd-fstab-generator[85161]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [ +13.594292] systemd-fstab-generator[85297]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [ +6.794363] systemd-fstab-generator[85367]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [ +1.429949] systemd-fstab-generator[85572]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [ +2.556154] systemd-fstab-generator[85950]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 07:32] systemd-fstab-generator[89986]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 07:35] systemd-fstab-generator[95248]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [ +7.413234] systemd-fstab-generator[95589]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [ +1.795417] systemd-fstab-generator[95786]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [ +2.981647] systemd-fstab-generator[96146]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 07:37] systemd-fstab-generator[100059]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 07:41] systemd-fstab-generator[106619]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [ +27.338319] systemd-fstab-generator[107758]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [ +15.088659] systemd-fstab-generator[108197]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 07:42] systemd-fstab-generator[108448]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 07:44] systemd-fstab-generator[111004]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 07:46] systemd-fstab-generator[112673]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 07:48] systemd-fstab-generator[115320]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 07:56] systemd-fstab-generator[122877]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. 
Duplicate entry in /etc/fstab? [Jul21 07:57] systemd-fstab-generator[123507]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 07:58] systemd-fstab-generator[127690]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 08:07] systemd-fstab-generator[30225]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 08:09] systemd-fstab-generator[30698]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [ +4.108791] systemd-fstab-generator[31109]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [ +19.365822] systemd-fstab-generator[31768]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 08:16] systemd-fstab-generator[38093]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 08:19] systemd-fstab-generator[39833]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [ +3.431086] systemd-fstab-generator[40246]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 08:21] systemd-fstab-generator[42489]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 08:25] systemd-fstab-generator[46138]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 08:27] systemd-fstab-generator[48231]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 08:49] systemd-fstab-generator[67085]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 08:50] systemd-fstab-generator[73846]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 08:51] systemd-fstab-generator[75593]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [ +54.567049] systemd-fstab-generator[81482]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 08:52] systemd-fstab-generator[81819]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [ +3.618974] systemd-fstab-generator[82220]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 08:54] systemd-fstab-generator[86445]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab? [Jul21 08:57] systemd-fstab-generator[92167]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. 
Duplicate entry in /etc/fstab?
==> etcd [0d753c127dc6] <== [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2020-07-21 09:11:27.518533 I | etcdmain: etcd Version: 3.4.3 2020-07-21 09:11:27.518604 I | etcdmain: Git SHA: 3cf2f69b5 2020-07-21 09:11:27.518611 I | etcdmain: Go Version: go1.12.12 2020-07-21 09:11:27.518634 I | etcdmain: Go OS/Arch: linux/amd64 2020-07-21 09:11:27.518641 I | etcdmain: setting maximum number of CPUs to 16, total number of available CPUs is 16 [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2020-07-21 09:11:27.518751 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-07-21 09:11:27.520080 I | embed: name = minikube 2020-07-21 09:11:27.520099 I | embed: data dir = /var/lib/minikube/etcd 2020-07-21 09:11:27.520106 I | embed: member dir = /var/lib/minikube/etcd/member 2020-07-21 09:11:27.520112 I | embed: heartbeat = 100ms 2020-07-21 09:11:27.520117 I | embed: election = 1000ms 2020-07-21 09:11:27.520123 I | embed: snapshot count = 10000 2020-07-21 09:11:27.520133 I | embed: advertise client URLs = https://172.17.0.2:2379 2020-07-21 09:11:27.587400 I | etcdserver: starting member b8e14bda2255bc24 in cluster 38b0e74a458e7a1f raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 switched to configuration voters=() raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 became follower at term 0 raft2020/07/21 09:11:27 INFO: newRaft b8e14bda2255bc24 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 became follower at term 1 raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620) 2020-07-21 09:11:27.591343 W | auth: simple token is not cryptographically signed 2020-07-21 09:11:27.593987 I | etcdserver: starting server... 
[version: 3.4.3, cluster version: to_be_decided] 2020-07-21 09:11:27.594135 I | etcdserver: b8e14bda2255bc24 as single-node; fast-forwarding 9 ticks (election ticks 10) raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620) 2020-07-21 09:11:27.594800 I | etcdserver/membership: added member b8e14bda2255bc24 [https://172.17.0.2:2380] to cluster 38b0e74a458e7a1f 2020-07-21 09:11:27.596285 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-07-21 09:11:27.596379 I | embed: listening for peers on 172.17.0.2:2380 2020-07-21 09:11:27.596674 I | embed: listening for metrics on http://127.0.0.1:2381 raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 is starting a new election at term 1 raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 became candidate at term 2 raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 received MsgVoteResp from b8e14bda2255bc24 at term 2 raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 became leader at term 2 raft2020/07/21 09:11:27 INFO: raft.node: b8e14bda2255bc24 elected leader b8e14bda2255bc24 at term 2 2020-07-21 09:11:27.788950 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.2:2379]} to cluster 38b0e74a458e7a1f 2020-07-21 09:11:27.788966 I | embed: ready to serve client requests 2020-07-21 09:11:27.790561 I | embed: serving client requests on 127.0.0.1:2379 2020-07-21 09:11:27.790663 I | etcdserver: setting up the initial cluster version to 3.4 2020-07-21 09:11:27.797692 N | etcdserver/membership: set the initial cluster version to 3.4 2020-07-21 09:11:27.797784 I | etcdserver/api: enabled capabilities for version 3.4 2020-07-21 09:11:27.797809 I | embed: ready to serve client requests 2020-07-21 09:11:27.799240 I | embed: serving client requests on 172.17.0.2:2379
==> kernel <==
09:14:37 up 12 days, 1:08, 0 users, load average: 0.11, 0.22, 0.23
Linux minikube 4.1.12-124.39.5.1.el7uek.x86_64 #2 SMP Tue Jun 9 20:03:37 PDT 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"
==> kube-apiserver [b765c7383ac2] <==
W0721 09:11:30.299644 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0721 09:11:30.310507 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0721 09:11:30.327597 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0721 09:11:30.331227 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0721 09:11:30.348568 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0721 09:11:30.371735 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0721 09:11:30.371757 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0721 09:11:30.383362 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0721 09:11:30.383383 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0721 09:11:30.385380 1 client.go:361] parsed scheme: "endpoint"
I0721 09:11:30.385443 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379
==> kube-controller-manager [bc1da187d974] <== I0721 09:11:50.725530 1 shared_informer.go:223] Waiting for caches to sync for deployment I0721 09:11:50.744247 1 controllermanager.go:533] Started "cronjob" I0721 09:11:50.744379 1 cronjob_controller.go:97] Starting CronJob Manager E0721 09:11:50.765983 1 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail W0721 09:11:50.766211 1 controllermanager.go:525] Skipping "service" I0721 09:11:50.784315 1 controllermanager.go:533] Started "endpoint" I0721 09:11:50.785227 1 endpoints_controller.go:182] Starting endpoint controller I0721 09:11:50.785402 1 shared_informer.go:223] Waiting for caches to sync for endpoint I0721 09:11:50.785729 1 shared_informer.go:223] Waiting for caches to sync for garbage collector I0721 09:11:50.809299 1 shared_informer.go:230] Caches are synced for ReplicaSet I0721 09:11:50.809382 1 shared_informer.go:230] Caches are synced for service account W0721 09:11:50.811036 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I0721 09:11:50.811888 1 shared_informer.go:230] Caches are synced for PV protection I0721 09:11:50.812001 1 shared_informer.go:230] Caches are synced for HPA I0721 09:11:50.812533 1 shared_informer.go:230] Caches are synced for node I0721 09:11:50.812557 1 range_allocator.go:172] Starting range CIDR allocator I0721 09:11:50.812564 1 shared_informer.go:223] Waiting for caches to sync for cidrallocator I0721 09:11:50.812572 1 shared_informer.go:230] Caches are synced for cidrallocator I0721 09:11:50.822224 1 shared_informer.go:230] Caches are synced for certificate-csrsigning I0721 09:11:50.825613 1 shared_informer.go:230] Caches are synced for deployment I0721 09:11:50.827843 1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24] I0721 09:11:50.838063 1 shared_informer.go:230] Caches are synced for bootstrap_signer I0721 09:11:50.838155 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator I0721 09:11:50.845151 1 shared_informer.go:230] Caches are synced for ReplicationController I0721 09:11:50.855142 1 shared_informer.go:230] Caches are synced for namespace I0721 09:11:50.860092 1 shared_informer.go:230] Caches are synced for GC I0721 09:11:50.882726 1 shared_informer.go:230] Caches are synced for certificate-csrapproving I0721 09:11:50.882726 1 shared_informer.go:230] Caches are synced for TTL I0721 09:11:50.883030 1 shared_informer.go:230] Caches are synced for endpoint_slice I0721 09:11:50.883389 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"f6782f7d-c5d3-46a3-a878-7f252702ed61", APIVersion:"apps/v1", ResourceVersion:"228", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2 I0721 09:11:50.885735 1 shared_informer.go:230] Caches are synced for endpoint I0721 09:11:50.902659 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"dc13fb0c-fe12-4a69-b66a-2ba00467016d", APIVersion:"apps/v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-ztb7p E0721 09:11:50.905734 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been 
modified; please apply your changes to the latest version and try again I0721 09:11:50.914200 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"dc13fb0c-fe12-4a69-b66a-2ba00467016d", APIVersion:"apps/v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-7fb5p I0721 09:11:50.917628 1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"", Name:"kube-dns", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FailedToCreateEndpoint' Failed to create endpoint for service kube-system/kube-dns: endpoints "kube-dns" already exists I0721 09:11:50.999159 1 shared_informer.go:230] Caches are synced for job I0721 09:11:51.259215 1 shared_informer.go:230] Caches are synced for daemon sets I0721 09:11:51.276935 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"a871297b-34f9-4b0f-931c-c1ed00ecf3e0", APIVersion:"apps/v1", ResourceVersion:"252", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-cv7kb I0721 09:11:51.277343 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"f67beda3-2d3c-4448-bd68-d638fc6b96cd", APIVersion:"apps/v1", ResourceVersion:"238", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-vjnqz I0721 09:11:51.286349 1 shared_informer.go:230] Caches are synced for taint I0721 09:11:51.286438 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: I0721 09:11:51.286589 1 taint_manager.go:187] Starting NoExecuteTaintManager I0721 09:11:51.286881 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"87e6749e-dec9-4a3d-98cd-00aa8b21f727", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller W0721 09:11:51.293415 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp. I0721 09:11:51.293519 1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode. 
E0721 09:11:51.296157 1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"f67beda3-2d3c-4448-bd68-d638fc6b96cd", ResourceVersion:"238", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63730919496, loc:(time.Location)(0x6d021e0)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(v1.Time)(0xc001800a80), FieldsType:"FieldsV1", FieldsV1:(v1.FieldsV1)(0xc001800aa0)}}}, Spec:v1.DaemonSetSpec{Selector:(v1.LabelSelector)(0xc001800ae0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(nil), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), FC:(v1.FCVolumeSource)(nil), AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(0xc001ebdc40), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(0xc001800b40), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), 
Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), FC:(v1.FCVolumeSource)(nil), AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(nil), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(0xc001800b60), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), FC:(v1.FCVolumeSource)(nil), AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(nil), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(v1.EnvVarSource)(0xc001800ba0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(v1.Probe)(nil), ReadinessProbe:(v1.Probe)(nil), StartupProbe:(v1.Probe)(nil), Lifecycle:(v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(v1.SecurityContext)(0xc001857cc0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(int64)(0xc001fa03d8), ActiveDeadlineSeconds:(int64)(nil), 
DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(bool)(nil), SecurityContext:(v1.PodSecurityContext)(0xc0004056c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(int32)(nil), DNSConfig:(v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(string)(nil), EnableServiceLinks:(bool)(nil), PreemptionPolicy:(v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(v1.RollingUpdateDaemonSet)(0xc0013ea980)}, MinReadySeconds:0, RevisionHistoryLimit:(int32)(0xc001fa0428)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again I0721 09:11:51.309097 1 shared_informer.go:230] Caches are synced for disruption I0721 09:11:51.309121 1 disruption.go:339] Sending events to api server. I0721 09:11:51.402259 1 shared_informer.go:230] Caches are synced for persistent volume I0721 09:11:51.408459 1 shared_informer.go:230] Caches are synced for expand I0721 09:11:51.408478 1 shared_informer.go:230] Caches are synced for resource quota I0721 09:11:51.409709 1 shared_informer.go:230] Caches are synced for stateful set I0721 09:11:51.413972 1 shared_informer.go:230] Caches are synced for garbage collector I0721 09:11:51.413992 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0721 09:11:51.424852 1 shared_informer.go:230] Caches are synced for PVC protection I0721 09:11:51.460570 1 shared_informer.go:230] Caches are synced for attach detach I0721 09:11:51.485948 1 shared_informer.go:230] Caches are synced for garbage collector I0721 09:11:51.855834 1 request.go:621] Throttling request took 1.039122077s, request: GET:https://172.17.0.2:8443/apis/authorization.k8s.io/v1?timeout=32s I0721 09:11:52.457014 1 shared_informer.go:223] Waiting for caches to sync for resource quota I0721 09:11:52.457068 1 shared_informer.go:230] Caches are synced for resource quota
==> kube-scheduler [924b96c9a517] <== I0721 09:11:27.719775 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0721 09:11:27.719857 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0721 09:11:28.408220 1 serving.go:313] Generated self-signed cert in-memory W0721 09:11:32.892639 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0721 09:11:32.892670 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0721 09:11:32.892681 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous. W0721 09:11:32.892690 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0721 09:11:32.907030 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I0721 09:11:32.907076 1 registry.go:150] Registering EvenPodsSpread predicate and priority function W0721 09:11:32.908724 1 authorization.go:47] Authorization is disabled W0721 09:11:32.908741 1 authentication.go:40] Authentication is disabled I0721 09:11:32.908757 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 I0721 09:11:32.910447 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0721 09:11:32.910521 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0721 09:11:32.911630 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 I0721 09:11:32.911742 1 tlsconfig.go:240] Starting DynamicServingCertificateController E0721 09:11:32.914113 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0721 09:11:32.914281 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0721 09:11:32.918825 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0721 09:11:32.920267 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0721 09:11:32.920974 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0721 09:11:32.921497 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolume: 
persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0721 09:11:32.921521 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0721 09:11:32.921765 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0721 09:11:32.921802 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0721 09:11:32.921918 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0721 09:11:32.923390 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0721 09:11:32.923711 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0721 09:11:32.924804 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0721 09:11:32.987774 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0721 09:11:32.987887 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0721 09:11:32.988038 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0721 09:11:32.988140 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0721 09:11:32.989048 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope I0721 09:11:35.010771 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0721 09:11:35.812647 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler... 
I0721 09:11:35.822540 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler E0721 09:11:39.347157 1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue E0721 09:13:05.852131 1 scheduler.go:385] Error updating the condition of the pod kube-system/storage-provisioner: Operation cannot be fulfilled on pods "storage-provisioner": the object has been modified; please apply your changes to the latest version and try again
==> kubelet <==
-- Logs begin at Tue 2020-07-21 09:10:55 UTC, end at Tue 2020-07-21 09:14:38 UTC. --
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.203505 9938 docker_service.go:353] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.203695 9938 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.0.0/24
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.238373 9938 kubelet_node_status.go:70] Attempting to register node minikube
Jul 21 09:14:34 minikube kubelet[9938]: E0721 09:14:34.250009 9938 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.256390 9938 kubelet_node_status.go:112] Node minikube was previously registered
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.256474 9938 kubelet_node_status.go:73] Successfully registered node minikube
Jul 21 09:14:34 minikube kubelet[9938]: E0721 09:14:34.450219 9938 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.454795 9938 cpu_manager.go:184] [cpumanager] starting with none policy
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.454830 9938 cpu_manager.go:185] [cpumanager] reconciling every 10s
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.454853 9938 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.455065 9938 state_mem.go:88] [cpumanager] updated default cpuset: ""
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.455079 9938 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.455095 9938 policy_none.go:43] [cpumanager] none policy: Start
Jul 21 09:14:34 minikube kubelet[9938]: F0721 09:14:34.456245 9938 kubelet.go:1383] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: failed to find subsystem mount for required subsystem: pids
Jul 21 09:14:34 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Jul 21 09:14:34 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 21 09:14:35 minikube systemd[1]: kubelet.service: Service RestartSec=600ms expired, scheduling restart.
Jul 21 09:14:35 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 24.
Jul 21 09:14:35 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jul 21 09:14:35 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.207188 10414 server.go:417] Version: v1.18.0
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.207531 10414 plugins.go:100] No cloud provider specified.
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.207601 10414 server.go:837] Client rotation is on, will bootstrap in background
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.210224 10414 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.291879 10414 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.294423 10414 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.294456 10414 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.294527 10414 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.294563 10414 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.294573 10414 container_manager_linux.go:306] Creating device plugin manager: true
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.294648 10414 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.294671 10414 client.go:92] Start docker client with request timeout=2m0s
Jul 21 09:14:35 minikube kubelet[10414]: W0721 09:14:35.302281 10414 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.302325 10414 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Jul 21 09:14:35 minikube kubelet[10414]: W0721 09:14:35.309729 10414 plugins.go:193] can't set sysctl net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.309815 10414 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.317122 10414 docker_service.go:258] Docker Info: &{ID:KX6N:MJFK:QB5C:TQXV:R2SR:HYOP:2TNZ:BOVD:KVVT:L2FL:OVN7:FTK3 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:false IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:69 SystemTime:2020-07-21T09:14:35.310786544Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.1.12-124.39.5.1.el7uek.x86_64 OperatingSystem:Ubuntu 19.10 (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0001f0f50 NCPU:16 MemTotal:61554528256 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy:http://www-proxy-brmdc.us.oracle.com:80/ HTTPSProxy:http://www-proxy-brmdc.us.oracle.com:80/ NoProxy:10.88.105.73,localhost,127.0.0.1,.us.oracle.com,.oraclecorp.com,172.17.0.3 Name:minikube Labels:[provider=docker] ExperimentalBuild:false ServerVersion:19.03.2 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:
Hey @Lavie526 -- could you please upgrade to minikube v1.12.2 and then run the following:
minikube delete
minikube start --driver docker
If that fails, please provide the output of:
This sounds like a duplicate of #3760
Using sudo kubectl is correct (for now).
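If you would rather run kubectl without sudo afterwards, a common workaround is to copy the root-owned kubeconfig to your own user. This is only a sketch; the /root/.kube/config source path is an assumption for a none-driver install:

mkdir -p $HOME/.kube
sudo cp /root/.kube/config $HOME/.kube/config     # kubeconfig written by the none driver (assumed location)
sudo chown $(id -u):$(id -g) $HOME/.kube/config   # hand ownership back to the current user
kubectl get nodes                                 # should now work without sudo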
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Steps to reproduce the issue:
W0708 01:55:39.604315 22957 loader.go:223] Config not found: /scratch/jiekong/.kube/config
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I have already set export CHANGE_MINIKUBE_NONE_USER=true when starting minikube.
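For reference, the kind of environment setup I am aiming for with the none driver looks roughly like this (the MINIKUBE_HOME and KUBECONFIG exports and the /scratch/$USER paths are illustrative assumptions, not my exact values):

export MINIKUBE_HOME=/scratch/$USER             # keep .minikube out of /root (assumed path)
export KUBECONFIG=/scratch/$USER/.kube/config   # kubeconfig location kubectl should use (assumed path)
export CHANGE_MINIKUBE_NONE_USER=true           # chown generated files back to $USER
sudo -E minikube start --vm-driver=none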
Full output of failed command:
Full output of minikube start command used, if not already included:
Optional: Full output of minikube logs command: