Closed aruruka closed 5 years ago
It's weird — check_apiserver.sh only checks whether the apiserver is available or not; it has nothing to do with the Kubernetes version. Maybe you should check your create-config.sh file and make sure it is correct in each of your environments.
Setting the nginx upstream to master-vip.local:6443 is not a good idea; it means nginx does nothing. nginx is a load balancer in front of the 3 masters, so if you set the upstream to master-vip.local:6443, it will always proxy traffic to only one node.
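To illustrate the point above, here is a minimal sketch of an nginx `stream` upstream that balances across the three masters instead of pointing at the VIP. The host names master01–master03 are placeholders, not taken from the original guide:

```nginx
# Hypothetical nginx-lb config sketch: balance TCP 16443 across all
# three apiservers rather than proxying to the single VIP address.
stream {
    upstream kube_apiserver {
        server master01.local:6443;
        server master02.local:6443;
        server master03.local:6443;
    }
    server {
        listen 16443;
        proxy_pass kube_apiserver;
    }
}
```

With this shape, nginx actually spreads connections across the masters; pointing the upstream at master-vip.local:6443 would collapse it back to a single backend.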
Yes, you were right. I just found out that the VIP was not switching to another node because the IP address had been taken by another server in the VPC. It was a simple mistake... As for the nginx LB, I just realized that setting the upstream to the VIP makes using nginx pointless, just like you said. Thanks for your reply.
Hi cookeem, I used your v1.14 and v1.11 guides to deploy 2 clusters in different environments, but I have some questions about keepalived and nginx-lb.
Just some questions around keepalived and nginx-lb; if you happen to have time to answer, I'd really appreciate it.
The exit code of check_apiserver.sh was 1 (apiserver error), but the VIP just didn't move to another master node. Meanwhile in v1.11 I ran the same test and the VIP did move to another master node.
Question: which behavior is the expected one, and why?
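For context, a keepalived check script of this kind typically exits 0 when the local apiserver answers and 1 when it does not, so keepalived can drop the VIP on failure. A minimal sketch, assuming a curl probe against a health URL (the endpoint path and timeout are assumptions, not taken from the repo's check_apiserver.sh):

```shell
#!/bin/sh
# Hypothetical health-check sketch: return 0 if the given apiserver URL
# answers within 3 seconds, return 1 otherwise (keepalived treats a
# non-zero exit status as "apiserver down" and releases the VIP).
check_apiserver() {
    if curl -sk --max-time 3 "$1" >/dev/null 2>&1; then
        return 0
    else
        return 1
    fi
}

# keepalived would invoke something like:
#   check_apiserver "https://127.0.0.1:6443/healthz"
```

The interesting part of the question is not the script itself but whether keepalived reacts to that exit code the same way in both cluster versions.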
curl -k https://master-vip.local:16443
and got a normal response in the v1.14 cluster, but a failed response in the v1.11 cluster. Then I changed the upstream from ... to ... and got a normal response in the v1.11 cluster.
Question: Is it better to set the "upstream" to "master-vip.local:6443" in the nginx-lb conf, and why?