I am very happy with the removal of the LB VM resource and its SPOF. The first request is sent to the VM that owns the VIP, but after that, isn't traffic load balanced across each apiserver?
@moonek no, it won't be load balancing because the control plane endpoint -- the URL that clients use to talk to the apiserver -- is the VIP, and the VIP is just another IP that is assigned to one and only one VM at a time, and there is nothing acting as a load balancer. If you want a load balancer, for now you would need to come up with a recipe for deploying one (and make it HA, presumably), and then you'd manually set cluster.spec.controlPlaneEndpoint to the load balancer's IP/DNS name.
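For example, a minimal sketch of pointing the control plane endpoint at an external load balancer (the address 192.168.0.100 and the cluster name are placeholders, and the exact apiVersion depends on your Cluster API release):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
spec:
  # Clients (and kubeadm) will use this address instead of a kube-vip VIP;
  # the load balancer behind it must forward to every apiserver.
  controlPlaneEndpoint:
    host: 192.168.0.100   # IP or DNS name of your own HA load balancer
    port: 6443
```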
Once traffic flows to the VIP, can kube-vip redirect it to the Kubernetes service IP? If so, kube-proxy would spread it across all apiserver instances.
@dhawal55 I imagine you could configure iptables to do that. I don't think kube-vip does that by default. You could file an issue there if you wanted?
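For illustration, a hypothetical rule along those lines (192.168.0.100 stands in for the VIP, and 10.96.0.1:443 for the in-cluster kubernetes service IP; kube-vip does not install anything like this by default):

```sh
# DNAT apiserver traffic arriving on the VIP to the kubernetes service IP,
# so kube-proxy's own rules fan it out across all apiserver endpoints.
iptables -t nat -A PREROUTING -d 192.168.0.100 -p tcp --dport 6443 \
  -j DNAT --to-destination 10.96.0.1:443
```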
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
/kind feature
Describe the solution you'd like
I found it difficult to achieve control plane HA because vSphere has no built-in load balancer. I also found some attempts to solve this:
https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/pull/705 (a separate LB)
https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/pull/722 (NSX-T LB)
However, a separate LB does not guarantee HA for the LB itself (an haproxy VM failure takes it down) and requires additional resources, while the NSX-T LB can only be configured in certain supported environments.
I want a load balancer with no SPOF from LB failure, no additional resources, and no environmental dependencies.
Anything else you would like to add:
This is for reference; it's actually the way I'm using it on-prem. Run haproxy and keepalived on the control plane nodes to form an internal LB. It only requires allocating a vip that is routable from all cluster nodes. The concept architecture is shown below and has been tested against node failure.
In my on-prem setup, I wrote the static IPs manually into haproxy.cfg and keepalived.conf and then deployed the haproxy+keepalived container with those files mounted, run via docker (with --restart=always); a sketch of the configs and the docker invocation follows below. The haproxy+keepalived container internally monitors the process for each module. I install the control planes after setting up the architecture described above.
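As a rough sketch of that setup (all IPs, the interface name, and the image name are placeholders for illustration), haproxy balances across the apiservers while keepalived moves the vip between control plane nodes:

```
# haproxy.cfg -- listens on 8443 so it doesn't collide with the local
# apiserver's 6443, and round-robins across all control plane nodes.
frontend kube-apiserver
    bind *:8443
    mode tcp
    default_backend apiservers

backend apiservers
    mode tcp
    balance roundrobin
    option tcp-check
    server cp-0 192.168.0.11:6443 check
    server cp-1 192.168.0.12:6443 check
    server cp-2 192.168.0.13:6443 check
```

```
# keepalived.conf -- VRRP fails the vip (192.168.0.100) over to a
# surviving control plane node when the current owner goes down.
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.0.100
    }
}
```

```sh
# Run the combined container on each control plane node with host
# networking and the two configs bind-mounted (image name is hypothetical).
docker run -d --restart=always --net=host --cap-add=NET_ADMIN \
  -v /etc/haproxy/haproxy.cfg:/etc/haproxy/haproxy.cfg:ro \
  -v /etc/keepalived/keepalived.conf:/etc/keepalived/keepalived.conf:ro \
  my-haproxy-keepalived:latest
```

With this layout, the control plane endpoint would be vip:8443 rather than vip:6443.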
I don't know capv internals well, but I think this setup could be automated by capv.