kubernetes / website

Kubernetes website and documentation repo:
https://kubernetes.io

Doc not clear: Adding master1 and master2 to load balancer #9867

Closed · smenon78 closed this issue 5 years ago

smenon78 commented 6 years ago

I am following this documentation: https://v1-9.docs.kubernetes.io/docs/setup/independent/high-availability/

But it's not clear what "Adding master1 and master2 to load balancer" means. The /etc/keepalived/keepalived.conf in the doc doesn't have a section for adding master nodes to the pool. Is this section perhaps missing? I also checked https://velotio.com/blog/2018/6/15/kubernetes-high-availability-kubeadm but couldn't find it there either.

The keepalived.conf from https://icicimov.github.io/blog/kubernetes/Kubernetes-cluster-step-by-step-Part5/ has this in its config: `unicast_src_ip 192.168.0.147` and `unicast_peer { 192.168.0.148 192.168.0.149 }`. Should this be present on each master node (the src IP being that node's own IP and the peers the other masters) so that it knows all the IPs in the pool?
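Laid out per node, I understand that snippet to look roughly like this (the interface name is a placeholder and the rest of the vrrp_instance is abbreviated):

```
# keepalived.conf on the master at 192.168.0.147; each master would use its
# own address as unicast_src_ip and list the other masters under unicast_peer
vrrp_instance VI_1 {
    interface eth0                 # placeholder interface name
    virtual_router_id 51
    unicast_src_ip 192.168.0.147   # this node's own IP
    unicast_peer {
        192.168.0.148              # the other master nodes
        192.168.0.149
    }
    # state, priority, virtual_ipaddress etc. as in the linked post
}
```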

neolit123 commented 6 years ago

/area HA
/sig cluster-lifecycle

neolit123 commented 6 years ago

looks like you are using on-site load balancing. since load-balancing setups are very environment-specific, it's hard to provide generic help here...

if you look at the "On Site" option there is a keepalived example: https://v1-9.docs.kubernetes.io/docs/setup/independent/high-availability/#set-up-master-load-balancer (just in case you missed it)
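roughly, that example boils down to a vrrp_instance holding the virtual IP plus a health-check script tracking the local apiserver. a rough sketch (the script path, interface name and VIP below are placeholders; see the docs page for the exact values):

```
# health check that lowers this node's priority when the local kube-apiserver
# stops answering; the script path is a placeholder, the docs page ships its own
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER                   # BACKUP on the other masters
    interface eth0                 # placeholder interface name
    virtual_router_id 51
    priority 101                   # lower on the other masters
    virtual_ipaddress {
        192.168.0.200              # placeholder virtual IP for the apiserver
    }
    track_script {
        check_apiserver
    }
}
```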

> Should this be present on each master node (the src IP being that node's own IP and the peers the other masters) so that it knows all the IPs in the pool?

a keepalived config should be on all master nodes. here is a good tutorial on keepalived: https://www.digitalocean.com/community/tutorials/how-to-set-up-highly-available-web-servers-with-keepalived-and-floating-ips-on-ubuntu-14-04

> Add master1 and master2 to load balancer
>
> Once kubeadm has provisioned the other masters, you can add them to the load balancer pool.

i think this is about knowing the IPs of the masters once they are provisioned, adding them to the pool, and then restarting the LB processes.
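as an illustration only (the docs don't mandate a particular load balancer): if the on-site LB were keepalived's built-in IPVS virtual_server, adding master1 and master2 would mean appending a real_server entry per new apiserver once its IP is known:

```
# hypothetical IPVS pool in keepalived.conf for the apiserver VIP; a real_server
# entry is added for each master once kubeadm has provisioned it
virtual_server 192.168.0.200 6443 {
    delay_loop 10
    lb_algo rr
    lb_kind NAT
    protocol TCP

    real_server 192.168.0.147 6443 {   # master0, already in the pool
        weight 1
        TCP_CHECK {
            connect_timeout 10
        }
    }
    real_server 192.168.0.148 6443 {   # master1, added once provisioned
        weight 1
        TCP_CHECK {
            connect_timeout 10
        }
    }
    real_server 192.168.0.149 6443 {   # master2, added once provisioned
        weight 1
        TCP_CHECK {
            connect_timeout 10
        }
    }
}
```

keepalived re-reads its config on SIGHUP, so a reload (e.g. `systemctl reload keepalived`) after the edit is enough to pick up the new pool members. that's the "restarting the LB processes" part.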

fejta-bot commented 6 years ago

Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with `/close`.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with `/close`.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

fejta-bot commented 5 years ago

Rotten issues close after 30d of inactivity.
Reopen the issue with `/reopen`.
Mark the issue as fresh with `/remove-lifecycle rotten`.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/close

k8s-ci-robot commented 5 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/website/issues/9867#issuecomment-453779081):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.