[Closed] eliu closed this issue 6 years ago.
@eliu Execute this command on k8s-node01 to check that the firewall is not dropping the request:
curl https://k8s-master:6443 -k
@vinkdong All nodes have the firewalld and iptables services disabled.
Here is the response from curl https://k8s-master:6443 -k
curl https://k8s-master:6443 -k
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
I found that the kube-proxy and kube-flannel pods remain in Pending status on the worker nodes:
kubectl get pod -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
default-http-backend-6dd4d5b7c9-v7f9m       1/1     Running   0          1d
heapster-746d67c7b9-dwnk9                   1/1     Running   0          1d
kube-apiserver-k8s-master                   1/1     Running   0          1d
kube-controller-manager-k8s-master          1/1     Running   0          1d
kube-dns-79d99555df-jhwmh                   3/3     Running   0          1d
kube-flannel-d7vzv                          1/1     Running   0          1d
kube-flannel-ll9sj                          0/1     Pending   0          1d
kube-flannel-zqvkw                          0/1     Pending   0          1d
kube-lego-6f45757db7-65cjb                  1/1     Running   0          1d
kube-proxy-5g7hv                            0/1     Pending   0          1d
kube-proxy-899nh                            0/1     Pending   0          1d
kube-proxy-hhlrc                            1/1     Running   0          1d
kube-scheduler-k8s-master                   1/1     Running   0          1d
kubernetes-dashboard-dc8fcdbc5-mxnx2        1/1     Running   0          1d
nginx-ingress-controller-5d77d4945d-z9hc9   1/1     Running   0          1d
nginx-proxy-k8s-node01                      1/1     Running   0          1d
nginx-proxy-k8s-node02                      1/1     Running   0          1d
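The stuck pods follow a clear pattern: the Pending kube-proxy and kube-flannel entries are DaemonSet replicas assigned to the two NotReady workers, while the master's copies run fine. A small sketch that filters an abridged copy of the listing above for Pending pods:

```shell
# Abridged copy of the kubectl listing above; awk field 3 is the STATUS column.
listing='kube-flannel-d7vzv   1/1   Running   0   1d
kube-flannel-ll9sj   0/1   Pending   0   1d
kube-flannel-zqvkw   0/1   Pending   0   1d
kube-proxy-5g7hv     0/1   Pending   0   1d
kube-proxy-899nh     0/1   Pending   0   1d
kube-proxy-hhlrc     1/1   Running   0   1d'

# Print only the Pending pods: the flannel/proxy replicas for the two workers.
echo "$listing" | awk '$3 == "Pending" {print $1}'
```

On a live cluster, `kubectl get pod -n kube-system --field-selector status.phase=Pending` gives the same filtered view directly.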
This usually happens when the kubelet's garbage collection fails due to low disk or memory. Check the disk and memory on each node (by default the kubelet requires disk > 10G and memory > 512M).
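The thresholds mentioned above can be checked directly on each worker. The following is a hypothetical local check: the 10G/512M numbers mirror the defaults named in the comment, but it is only an illustration, not a reproduction of the kubelet's actual eviction logic:

```shell
# Illustrative thresholds taken from the comment above: 10G disk, 512M memory.
min_disk_kb=$((10 * 1024 * 1024))   # 10G expressed in KiB
min_mem_kb=$((512 * 1024))          # 512M expressed in KiB

# Free space on the root filesystem and available memory, both in KiB.
free_disk_kb=$(df -k / | awk 'NR == 2 {print $4}')
free_mem_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)

if [ "${free_disk_kb:-0}" -lt "$min_disk_kb" ]; then
  echo "disk below threshold"
fi
if [ "${free_mem_kb:-0}" -lt "$min_mem_kb" ]; then
  echo "memory below threshold"
fi
```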
I have used the latest kubeadm-ansible scripts to deploy a 3-node k8s cluster several times, and the other 2 worker nodes always end up in NotReady status.

Node status:
k8s-node01 details:
kubelet status on k8s-node01
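To confirm which nodes are affected, list them and filter by status. A sketch over an illustrative `kubectl get nodes` listing (the node names match the cluster above; the statuses reflect the reported symptom, not actual captured output):

```shell
# Illustrative node listing matching the reported 3-node cluster.
nodes='NAME         STATUS     AGE
k8s-master   Ready      1d
k8s-node01   NotReady   1d
k8s-node02   NotReady   1d'

# Print only the NotReady nodes (NR > 1 skips the header row).
echo "$nodes" | awk 'NR > 1 && $2 == "NotReady" {print $1}'
```

On the live cluster, follow up with `kubectl describe node k8s-node01` and check its Conditions section (Ready, DiskPressure, MemoryPressure) for the reason it reports NotReady.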