Closed kkimdev closed 5 years ago
Hey, thanks for the feedback! I'm not really sure if this issue should be labeled as a bug (in the documentation) or just closed. Let me explain why.
The LBs created in Terraform for cloud providers that don't support LBs natively (like Hetzner) are just a small hack for the users' convenience. A real setup should of course contain at least two LBs, with (for example) DNS pointing to both of them. In any case, it's not KubeOne's business to manage apiserver LBs. KubeOne itself has nothing to do with those "custom-made" load balancers; it will happily use whatever the user provides in the Terraform output `kubeone_api.value.endpoint`
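For illustration, wiring a user-managed LB into that output might look like the sketch below. This is an assumption about your Terraform setup: `hcloud_load_balancer.api` is a hypothetical resource name, and the attribute you reference will vary by provider.

```hcl
# Hypothetical sketch: expose an externally managed load balancer's
# address to KubeOne via the output it reads. The resource name
# "hcloud_load_balancer.api" is an example, not from this issue.
output "kubeone_api" {
  description = "kube-apiserver endpoint that KubeOne will use"
  value = {
    endpoint = hcloud_load_balancer.api.ipv4
  }
}
```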
or directly in `config.yaml` as

```yaml
apiEndpoint:
  host: ...
  port: 6443
```
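For context, a minimal KubeOne manifest using the `apiEndpoint` field might look like the sketch below. The API version, cluster name, Kubernetes version, and host are illustrative assumptions; check the KubeOne documentation matching your release.

```yaml
apiVersion: kubeone.io/v1beta1    # assumed API version; may differ per KubeOne release
kind: KubeOneCluster
name: example-cluster             # hypothetical name
versions:
  kubernetes: "1.18.0"            # illustrative version
apiEndpoint:
  host: lb.example.com            # address of the user-managed load balancer
  port: 6443
```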
The problem here is probably the lack of explicit documentation about user-managed "LBs". I'll create a follow-up issue and close this one a bit later.
P.S. The LBs are here to load-balance only the kube-apiserver endpoints, that's it. It's convenient to have them because you can replace control-plane nodes without losing the "identity" of what the whole cluster sees as the "kube-apiserver endpoint".
OK, #459 is the follow-up bug issue. Closing this one.
What feature would you like to be added? To my understanding, KubeOne dedicates one node to load balancing if an LB is not supported on the target cloud provider, e.g. https://github.com/kubermatic/kubeone/pull/414. This is not necessarily ideal, as the availability of the entire cluster depends on a single LB node.
An alternative approach is installing load balancers on all nodes as a DaemonSet.
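A rough sketch of that idea, assuming an HAProxy image and a hypothetical ConfigMap holding the backend list of control-plane addresses (neither is specified in this issue), could be:

```yaml
# Hypothetical sketch: run a local kube-apiserver proxy on every node.
# The image/tag, and the ConfigMap "apiserver-haproxy" with the haproxy
# configuration (not shown), are assumptions for illustration.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: apiserver-lb
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: apiserver-lb
  template:
    metadata:
      labels:
        app: apiserver-lb
    spec:
      hostNetwork: true             # proxy listens on each node's host network
      containers:
        - name: haproxy
          image: haproxy:2.0        # illustrative image/tag
          volumeMounts:
            - name: config
              mountPath: /usr/local/etc/haproxy
      volumes:
        - name: config
          configMap:
            name: apiserver-haproxy # hypothetical ConfigMap with backend list
```

With this pattern, each node talks to the apiserver via its local proxy, so no single LB node can take down the whole cluster's control-plane access.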
What are the use cases of the feature? When a highly available load balancer is desired for better reliability.