kubernetes-sigs / kubespray

Deploy a Production Ready Kubernetes Cluster
Apache License 2.0

Newly added worker node not showing up #10355

Closed neo3matrix closed 10 months ago

neo3matrix commented 1 year ago

Hi everyone,

I used scale.yml with --limit="new-worker-node-hostname". The ansible command completed without any errors, but when I run kubectl get nodes on the master node, the new worker node is not listed. I even rebooted the worker node, but it made no difference.
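For reference, a sketch of the invocation described above and the follow-up check; the inventory path and node hostname are assumptions, not taken from the report:

```shell
# Add the new worker with scale.yml, limited to that host
# (inventory path and hostname are placeholders).
ansible-playbook -i inventory/mycluster/hosts.yaml scale.yml \
  -b --limit="new-worker-node-hostname"

# Then, on a control plane node, confirm the node registered:
kubectl get nodes
```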

Environment: My setup is on-prem (Bare metal). I have one master and 7 worker nodes in my cluster.

Kubespray version (commit) (git rev-parse --short HEAD): My Kubespray checkout is pinned to commit 18efdc2

Command used to invoke ansible:

On some forums, I read that you should run cluster.yml instead of scale.yml (see the NOTE section of this blog). But according to the official documentation in the Kubespray git repo, cluster.yml is only used for adding control plane nodes, while scale.yml is used for adding worker nodes.
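The alternative those forums suggest can be sketched as follows; unlike scale.yml with --limit, this re-runs the playbook against every host in the inventory (the inventory path is an assumed placeholder):

```shell
# Re-run the full cluster playbook; this converges all nodes,
# including any newly added workers, but touches every host.
ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml -b
```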

So, can someone please guide me on this?

yankay commented 1 year ago

Thanks @neo3matrix

To add a worker node, we can follow the docs: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/nodes.md#addingreplacing-a-worker-node

If there are errors, it may be a bug.

If you are willing, you are welcome to investigate and provide a PR to fix it.

neo3matrix commented 1 year ago

@yankay That's exactly the doc I followed, as mentioned in my problem statement above.

After some discussion with @nicolas-goudry on the Kubespray slack channel, we found what may be the root cause: I am adding a worker node with a dual-stack networking setup (IPv4 & IPv6) to my IPv4-only k8s cluster. Because of that, the kubelet on the new node cannot connect to the api-server: certificate validation fails because the api-server's SANs do not include [::1], since the default config has no IPv6 support.
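A way to confirm this diagnosis is to inspect the SANs in the running api-server certificate. The path below is the one Kubespray typically uses (kube_cert_dir defaults to /etc/kubernetes/ssl); adjust it if your layout differs:

```shell
# Print the Subject Alternative Names of the api-server certificate.
# If no IPv6 addresses (e.g. ::1) appear, a kubelet that dials the
# api-server over IPv6 will fail certificate validation.
openssl x509 -in /etc/kubernetes/ssl/apiserver.crt -noout -text \
  | grep -A1 'Subject Alternative Name'
```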

My questions are:

  1. Is there any way to update the SANs on a running cluster? If so, how?
  2. If that's not possible, then for a fresh cluster setup, will things work if I just set the enable_dual_stack_networks variable to true, or do I need to change other variables too?

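On the second question, a hypothetical group_vars sketch is below. Both variables exist in Kubespray (enable_dual_stack_networks and supplementary_addresses_in_ssl_keys), but whether this combination alone is sufficient for this case is an assumption:

```yaml
# inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml (assumed path)

# Enable dual-stack (IPv4 + IPv6) pod/service networking.
enable_dual_stack_networks: true

# Extra SANs to bake into the api-server certificate; adding the
# relevant IPv6 addresses here should make the cert valid for
# kubelets connecting over IPv6 (addresses are placeholders).
supplementary_addresses_in_ssl_keys:
  - "::1"
```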
neo3matrix commented 1 year ago

I tried setting enable_dual_stack_networks to true on a fresh cluster setup, but I still don't see any IPv6 entries in my cluster's SANs.

Could someone please help? There is not much information out there about dual-stack nodes with Kubespray.

VannTen commented 11 months ago

> I tried setting enable_dual_stack_networks to true on a fresh cluster setup but still not seeing any IPv6 support in my cluster SANs.
>
> Someone please help. There is not enough info or steps out there about dual stack network nodes with Kubespray.

Could you open a new issue outlining the problem in more detail?

Regarding migrating from single- to dual-stack: I don't think that's currently possible (though I haven't tried).

VannTen commented 10 months ago

/close

Please open a new issue regarding the dual stack problem if necessary

k8s-ci-robot commented 10 months ago

@VannTen: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/kubespray/issues/10355#issuecomment-1894026262):

> /close
> Please open a new issue regarding the dual stack problem if necessary

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.