This issue has been marked 'stale' due to lack of recent activity. If there is no further activity, the issue will be closed in another 30 days. Thank you for your contribution!
Please read this blog post to see the reasons why I mark issues as stale.
I seem to have run into this issue as well. Is there a band-aid/temp solution or workaround for this?
I think I've solved my issues for now, as a temp fix. Leaving this info here for the next person.
A bit more context on my situation: I'm trying to get geerlingguy's raspberry-pi-dramble to work, even though it's archived. I've changed the Kubernetes version in main.yml from 1.19.70 to 1.25.1-00.
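For reference, the change was along these lines (a minimal sketch; the variable name is an assumption on my part, not copied from the actual Dramble config):

# main.yml (hypothetical excerpt; variable name assumed)
kubernetes_version: "1.25.1-00"   # was 1.19.70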
I ran sudo kubeadm init on kube1, which gave me a bit of additional troubleshooting output that I couldn't get from running the playbook with -vvvvv.
That told me to fix two settings. I googled both errors and found the following two commands to run:
$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo modprobe br_netfilter
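Note that sysctl -w and modprobe by themselves don't survive a reboot. If you want the settings to persist, something like the following should work on a typical systemd-based OS (the file names are just conventions, pick your own):

$ echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-kubernetes.conf
$ echo 'br_netfilter' | sudo tee /etc/modules-load.d/k8s.conf
$ sudo sysctl --system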
After running those two commands, kubeadm init completed and spat out a join command:
$ kubeadm join [ip address]:6443 --token [token] --discovery-token-ca-cert-hash [sha256]
I could use this on the other kubes (2, 3 and 4). I had to run the same three commands on each of them, which I simplified to:
$ sudo sysctl -w net.ipv4.ip_forward=1 && sudo modprobe br_netfilter
$ sudo kubeadm join [ip address]:6443 --token [token] --discovery-token-ca-cert-hash [sha256]
They all neatly joined kube1.
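If the token from the original kubeadm init output has expired by the time you get to the other nodes (tokens are only valid for 24 hours by default), you can print a fresh join command on kube1 with:

$ sudo kubeadm token create --print-join-command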
To make sure I did not get stuck running the playbook, I went for the quick and dirty option of removing the step from the playbook:
$ nano /home/user/.ansible/roles/geerlingguy.kubernetes/tasks/node-setup.yml
and commented out the 'Join node to Kubernetes control plane.' task.
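For context, the task I commented out looks roughly like this (paraphrased from memory of the role, so treat it as a sketch rather than the exact current code):

- name: Join node to Kubernetes control plane.
  shell: >
    {{ kubernetes_join_command }}
    creates=/etc/kubernetes/kubelet.conf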
Those sysctl commands should run within this playbook. If not, please comment on https://github.com/geerlingguy/ansible-role-kubernetes/issues/146
To run the node-setup successfully, it is necessary to run the control-plane setup AND the node-setup in one run, as the kubernetes-join-command is read from the control plane during that same run.
(It is no problem to run the control-plane setup multiple times, e.g. to add another worker node.)
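A sketch of that pattern (assumed from how such roles usually do it, not the role's exact code): the control plane generates the join command during the play and shares it with every other host via delegated facts, so workers can only see it if they are part of the same run:

- name: Get a join command from the control plane.
  command: kubeadm token create --print-join-command
  changed_when: false
  register: kubernetes_join_command_result

- name: Make the join command available to all hosts in the play.
  set_fact:
    kubernetes_join_command: "{{ kubernetes_join_command_result.stdout }}"
  delegate_to: "{{ item }}"
  delegate_facts: true
  with_items: "{{ groups['all'] }}"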
So I am not sure how to do this in Vagrant, as the node-setup depends on the control-plane setup. This is because it is done with kubeadm commands: the join token is not saved in a file, but is read from the control plane during the playbook run. @iLem0n
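One way to get a single Ansible run across all Vagrant machines (so facts gathered on the control plane are visible to the workers) is Vagrant's documented trick of triggering the Ansible provisioner only from the last machine with ansible.limit = "all". A rough sketch, with the machine names and playbook path made up for illustration:

Vagrant.configure("2") do |config|
  machines = ["master", "node1"]
  machines.each_with_index do |name, i|
    config.vm.define name do |machine|
      machine.vm.hostname = name
      # Run the provisioner only on the last machine, but limit it to 'all'
      # so one Ansible run covers the control plane and the workers together.
      if i == machines.length - 1
        machine.vm.provision :ansible do |ansible|
          ansible.playbook = "main.yml"
          ansible.limit = "all"
        end
      end
    end
  end
end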
This issue has been marked 'stale' due to lack of recent activity. If there is no further activity, the issue will be closed in another 30 days. Thank you for your contribution!
Please read this blog post to see the reasons why I mark issues as stale.
This issue has been closed due to inactivity. If you feel this is in error, please reopen the issue or file a new issue with the relevant details.
Trying to bring up a simple k8s cluster with one master and one worker node.
Just bringing them up using Vagrant leads me to the following problem: it seems that the kubernetes-join-command is only set on the master node, not the worker ones, which results in a failure during worker provisioning.

Versions:
Vagrant file:
master-playbook.yml
join-command setup:
nodes try to use the join-command: