Closed: smenon78 closed this issue 5 years ago
/area HA /sig cluster-lifecycle
Looks like you are using on-site load balancing. Since load balancing setups are very site-specific, it's hard to provide generic help here...
If you look at the "On-Site" section there is a keepalived example: https://v1-9.docs.kubernetes.io/docs/setup/independent/high-availability/#set-up-master-load-balancer (just in case you missed that).
Should this be present on each master node (src ip being the IP of that node and peer the other masters) so it knows all the IPs in the pool?
A keepalived config should be on all master nodes. Here is a good tutorial on keepalived: https://www.digitalocean.com/community/tutorials/how-to-set-up-highly-available-web-servers-with-keepalived-and-floating-ips-on-ubuntu-14-04
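To make that concrete, here is a minimal sketch of the vrrp_instance block for one master node. This is not from the linked docs: the interface name, virtual_router_id, auth password, and the VIP 192.168.0.150 are made-up placeholders, and the node IPs just reuse the ones quoted in the question above.

    vrrp_instance VI_1 {
        state MASTER              # BACKUP on the other master nodes
        interface eth0            # this node's network interface (placeholder)
        virtual_router_id 51      # must match on all nodes in the group
        priority 101              # lower (e.g. 100, 99) on the backup nodes
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass changeme    # placeholder secret, same on all nodes
        }
        unicast_src_ip 192.168.0.147   # this node's own IP
        unicast_peer {
            192.168.0.148              # the other master nodes
            192.168.0.149
        }
        virtual_ipaddress {
            192.168.0.150              # assumed floating VIP the apiserver is reached on
        }
    }

On each master, unicast_src_ip is that node's own address and unicast_peer lists the remaining masters; the state and priority decide which node initially holds the VIP.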
"Add master1 and master2 to load balancer": once kubeadm has provisioned the other masters, you can add them to the load balancer pool.
I think this is about knowing the IPs of the masters once they are provisioned, adding them to the pool, and then restarting the LB processes.
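Concretely (a sketch reusing the placeholder IPs above), once kubeadm has brought up master1 and master2, you would add their IPs to the unicast_peer block in /etc/keepalived/keepalived.conf on each node:

    unicast_peer {
        192.168.0.148    # master1, added after kubeadm provisioned it
        192.168.0.149    # master2, added after kubeadm provisioned it
    }

and then restart the process so it picks up the new peers, e.g. with sudo systemctl restart keepalived (the unit name may differ by distro).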
@fejta-bot: Closing this issue.
Following this documentation: https://v1-9.docs.kubernetes.io/docs/setup/independent/high-availability/
But it's not clear what is meant by "Add master1 and master2 to load balancer". The /etc/keepalived/keepalived.conf in the doc doesn't have a section for adding master nodes to the pool. Is this section perhaps missing? I also checked https://velotio.com/blog/2018/6/15/kubernetes-high-availability-kubeadm but didn't find it there either.
The keepalived.conf from https://icicimov.github.io/blog/kubernetes/Kubernetes-cluster-step-by-step-Part5/ has this in its config:

    unicast_src_ip 192.168.0.147
    unicast_peer {
        192.168.0.148
        192.168.0.149
    }

Should this be present on each master node (src ip being the IP of that node and peer the other masters) so it knows all the IPs in the pool?
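In other words, my guess is that each node carries a mirrored version of this block, something like the following on the second master (the IPs are just the ones from that post):

    # on the master whose IP is 192.168.0.148
    unicast_src_ip 192.168.0.148
    unicast_peer {
        192.168.0.147
        192.168.0.149
    }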