Closed. sflxn closed this issue 4 years ago.
An alternative is to set up client-side load balancing of the API server. I have an implementation using Consul, consul-template, and nginx running on every node in the cluster, with the masters registering as services.
/assign @andrewsykim /assign @moshloop
@moshloop I don't think client-side load balancing is an option for CAPV, mainly because we can't assume anything about an external caller to the cluster aside from the fact that it has a kubeconfig we generated. As you mentioned in https://github.com/kubernetes-sigs/cluster-api-bootstrap-provider-kubeadm/issues/125#issuecomment-521377966, a VIP bound to localhost only fixes load balancing for machines we control, and even that I would argue is unnecessary, because we can already load balance to the control plane via the kubernetes Service + kube-proxy. I might be missing context here though, so feel free to question my assumptions.
@andrewsykim kube-proxy isn't an option as it needs a control plane endpoint passed to it in order to start.
For external clients, once the cluster is bootstrapped things like MetalLB and ExternalDNS can be used to provide a stable endpoint.
> @andrewsykim kube-proxy isn't an option as it needs a control plane endpoint passed to it in order to start.
kube-proxy needs an endpoint to start, but once it starts it creates the kubernetes Service, which provides a VIP (via its ClusterIP) for internal clients and directs traffic to all control plane nodes. My point earlier was that the "internal client" case is already solved; binding VIPs to localhost, as you mentioned, wouldn't provide much more value than what kube-proxy is already doing. We mostly need to address external clients, and we can't assume much about their environments, whether they use Consul, VIPs bound to localhost, etc.
> For external clients, once the cluster is bootstrapped things like MetalLB and ExternalDNS can be used to provide a stable endpoint.
I think MetalLB and ExternalDNS are worth digging into, but I'm skeptical because both need a running cluster, and for the external-client case we need an endpoint before the cluster is even created.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
kubeadm init is run. If the above is correct, perhaps a rudimentary Linux-based approach would be sufficient, e.g. heartbeat or pacemaker.
kind: KubeadmConfig | preKubeadmCommands could take the ControlPlaneEndPoint entry as input and generate the above template with it embedded. pacemaker packages are part of most distributions (e.g. CentOS7 Base) and could easily be added to ImageBuilder OVAs. Example install:
# install pacemaker and the pcs CLI
yum install pacemaker pcs resource-agents
# start the pcs daemon and set a password for the cluster user
systemctl start pcsd.service
echo CHANGEME | passwd --stdin hacluster
# authenticate the nodes to each other and form the cluster
pcs cluster auth $server1 $server2 $server3 -u hacluster -p CHANGEME --force
pcs cluster setup --force --name kubernetes $server1 $server2 $server3
pcs cluster start --all
# disable fencing and create a floating virtual IP resource
pcs property set stonith-enabled=false
pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=$virtual_ip cidr_netmask=32 nic=id0 op monitor interval=30s
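One way the above could be wired into a KubeadmConfig is sketched below. This is not a tested config: the apiVersion, object name, server names, and VIP 10.0.0.100 are all hypothetical stand-ins, and the idea is simply that preKubeadmCommands brings the VIP up before kubeadm init contacts the ControlPlaneEndpoint:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
  name: controlplane-0          # hypothetical name
spec:
  # Run the pacemaker setup before kubeadm init so the floating VIP
  # already exists when the control plane endpoint is first contacted.
  preKubeadmCommands:
    - yum install -y pacemaker pcs resource-agents
    - systemctl start pcsd.service
    - echo CHANGEME | passwd --stdin hacluster
    - pcs cluster auth server1 server2 server3 -u hacluster -p CHANGEME --force
    - pcs cluster setup --force --name kubernetes server1 server2 server3
    - pcs cluster start --all
    - pcs property set stonith-enabled=false
    - pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=10.0.0.100 cidr_netmask=32 op monitor interval=30s
  clusterConfiguration:
    controlPlaneEndpoint: "10.0.0.100:6443"   # the pacemaker-managed VIP
```

A controller could template the VIP into both preKubeadmCommands and controlPlaneEndpoint from a single input, which is roughly what the comment above proposes.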
@andrewsykim @akutz has this approach already been considered? Are there any constraints or requirements that we are missing?
/remove-lifecycle stale
Hi @MnrGreg,
I'm closing this issue. Please see this doc for more information and tracking links.
/close
@akutz: Closing this issue.
In environments that have load balancers, we need to add programmatic support to set them up before we create HA clusters. For environments that do not have load balancers, we need to stand up separate load balancers (e.g. running in separate VMs).