Closed qw1mb0 closed 5 years ago
kubeadm is supposed to use the field ControlPlaneEndpoint in ClusterConfiguration, which should be a load balancer in front of your API servers, so you need to set up the topology correctly first. You then join secondary control-plane nodes to this LB ip:port.
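As an illustrative sketch (the LB address 10.135.71.100:6443 is hypothetical, not taken from this issue), a ClusterConfiguration using controlPlaneEndpoint instead of a hardcoded node address might look like:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
# controlPlaneEndpoint should point at a load balancer in front of
# ALL API servers, never at a single control-plane node's address.
controlPlaneEndpoint: "10.135.71.100:6443"   # hypothetical LB address:port
```

With this in place, each control-plane node advertises its own address while clients and joining nodes talk to the LB.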
By hardcoding a bind address in the ClusterConfiguration you will get
error: failed to create listener: failed to listen on 10.135.71.30:6443: listen tcp 10.135.71.30:6443: bind: cannot assign requested address
on the secondary CP nodes, because they will try to bind to the same address as the already existing control plane node.
And replace IP-address for: ClusterConfiguration.apiServer.extraArgs.advertise-address
This is still not how it should work, but what value do you replace it with before joining the second CP node?
If you want to specify the bind-address of the newly added control-plane node, you can try kubeadm join --config <KUBEADM-CONFIG FILE>
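For illustration, a minimal JoinConfiguration file for the second control-plane node might look like the following sketch; the token is a placeholder, the LB endpoint is hypothetical, and 10.135.131.33 is kube-master-02's private IP from this issue:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "10.135.71.100:6443"   # hypothetical LB endpoint
    token: "abcdef.0123456789abcdef"          # placeholder token
    unsafeSkipCAVerification: true            # for brevity; prefer caCertHashes
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 10.135.131.33   # this node's own private IP (kube-master-02)
    bindPort: 6443
```

Passed via kubeadm join --config, the localAPIEndpoint lets each joining control-plane node advertise its own address without editing the shared kubeadm-config ConfigMap.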
@qw1mb0
Maybe we should ignore the bind-address information in the kubeadm-config ConfigMap for a newly joined control-plane node? @neolit123
Like this :
// fetchInitConfiguration reads the cluster configuration from the kubeadm-config ConfigMap
func fetchInitConfiguration(tlsBootstrapCfg *clientcmdapi.Config) (*kubeadmapi.InitConfiguration, error) {
	// create a client to access the cluster using the bootstrap token identity
	tlsClient, err := kubeconfigutil.ToClientSet(tlsBootstrapCfg)
	if err != nil {
		return nil, errors.Wrap(err, "unable to access the cluster")
	}
	// fetch the init configuration
	initConfiguration, err := configutil.FetchInitConfigurationFromCluster(tlsClient, os.Stdout, "preflight", true)
	if err != nil {
		return nil, errors.Wrap(err, "unable to fetch the kubeadm-config ConfigMap")
	}
	// ignore the bind-address info in the kubeadm-config ConfigMap for the newly joined control-plane node
	delete(initConfiguration.ClusterConfiguration.APIServer.ExtraArgs, "advertise-address")
	delete(initConfiguration.ClusterConfiguration.APIServer.ExtraArgs, "bind-address")
	delete(initConfiguration.ClusterConfiguration.ControllerManager.ExtraArgs, "address")
	delete(initConfiguration.ClusterConfiguration.ControllerManager.ExtraArgs, "bind-address")
	delete(initConfiguration.ClusterConfiguration.Scheduler.ExtraArgs, "address")
	delete(initConfiguration.ClusterConfiguration.Scheduler.ExtraArgs, "bind-address")
	return initConfiguration, nil
}
What do you think about it?
/assign
This is still not how it should work, but what value do you replace it with before joining the second CP node?
I replaced the addresses of the first master with the address of the second master
If you want to specify the bind-address of the newly added control-plane node, you can try kubeadm join --config
@qw1mb0
Do we have an example of a config for a join control plane?
Maybe we should ignore the bind-address information in kubeadm-config ConfigMap for new joined control-plane node ? @neolit123
This is a good idea.
You still have to use a load balancer. Please have a look here: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/
Maybe we should ignore the bind-address information in kubeadm-config ConfigMap for new joined control-plane node ? @neolit123
Perhaps, but I don't think we should delete them. Commented on the PR.
closing as related to user setup and not a kubeadm bug.
Please see the comment I made here for further details on controlPlaneEndpoint: https://github.com/kubernetes/kubeadm/issues/1611#issuecomment-517942714
Is this a request for help?
Yes
If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.
What keywords did you search in kubeadm issues before filing this one?
kubeadm private network, multi master specific network interface
Versions
kubeadm version (use kubeadm version):
Environment:
Kubernetes version (use kubectl version): 1.15.0
Cloud provider or hardware configuration: droplets on DigitalOcean
OS: Ubuntu 18.04.2 LTS (Bionic Beaver)
Kernel (use uname -a): 4.18.0-25-generic
What happened?
I have 3 nodes, each with a default gateway through the public network and a second network interface for the internal network.
I want to install Kubernetes 1.15 so that etcd and the whole control plane communicate with each other over the private network. I tried this. I have 3 masters:
kube-master-01 - 10.135.71.30 (private ip)
kube-master-02 - 10.135.131.33 (private ip)
kube-master-03 - 10.135.169.182 (private ip)
On the first master I create this config:
after which started init:
Init log:
I can see that everything is listening on the correct network interface:
Then I tried to join the second master with this command:
kubeadm join log output:
I can see that everything is listening on the correct network interface:
It all seems to work, but only the first master appears in the endpoint list:
In the apiserver logs on kube-master-02 I see this error:
If, before joining the second master, I edit the configmap:
And replace IP-address for:
ClusterConfiguration.apiServer.extraArgs.advertise-address
ClusterConfiguration.apiServer.extraArgs.bind-address
ClusterConfiguration.controllerManager.extraArgs.address
ClusterConfiguration.controllerManager.extraArgs.bind-address
ClusterConfiguration.scheduler.extraArgs.address
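For clarity, the ConfigMap edit described above amounts to replacing the first master's private IP (10.135.71.30) with the second master's (10.135.131.33) in those extraArgs. A sketch of the edited ClusterConfiguration fragment, reconstructed from the field list in this issue:

```yaml
apiServer:
  extraArgs:
    advertise-address: 10.135.131.33   # kube-master-02's private IP (was 10.135.71.30)
    bind-address: 10.135.131.33
controllerManager:
  extraArgs:
    address: 10.135.131.33
    bind-address: 10.135.131.33
scheduler:
  extraArgs:
    address: 10.135.131.33
```

Note that this shared ConfigMap then holds node-specific values, which is exactly why each subsequent control-plane join would need the same manual edit.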
Then I will join the second master:
Everything starts to work correctly. Is this the right way to do it?
What you expected to happen?