timothystewart6 closed this issue 2 years ago
Maybe add a separate playbook/role to add nodes? It would check whether the nodes are already in the cluster and then do the necessary steps to add them as masters/workers.
I think running the same playbook should work; I just haven't tested it. I don't like the idea of having a separate playbook just to add nodes, especially if this works as is. I like the idea of your targets / desired state living in your hosts.ini.
It does not work out of the box for me, but I have not tested it much. Trying to add 2 master nodes: when I start the playbook, at the `k3s/master : Verify that all nodes actually joined (check k3s-init.service if this fails)` step, I first get:

"Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation"
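For context, the full token in `/var/lib/rancher/k3s/server/node-token` embeds a CA certificate hash between the `K10` prefix and the `::server:` separator, which is what lets a joining node validate the cluster CA; a short token has no hash, hence the error above. A minimal sketch of the layout (the token value here is made up):

```shell
# Illustrative only: a fake token in the documented node-token layout,
# K10<ca-cert-hash>::server:<password>.
full_token='K10deadbeefcafe::server:supersecret'
ca_hash="${full_token#K10}"     # strip the K10 prefix
ca_hash="${ca_hash%%::*}"       # keep everything before '::'
echo "$ca_hash"
```

Joining with the full token from the server's node-token file avoids the manual CA-certificate workaround below.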
Then, on the other master node, I ran:

sudo curl -k https://k3s-master-01.int.geoshapka.xyz:6443/cacerts -o /usr/local/share/ca-certificates/k3s.crt
sudo update-ca-certificates

and got:

"starting kubernetes: preparing server: failed to validate server configuration: https://k3s-master-01.int.geoshapka.xyz:6443/v1-k3s/config: 401 Unauthorized"
I have not tested much more (I am on vacation), but it seems some steps were missed.
I actually just came here to request this as an option. I'm glad to see it being considered.
The first question here is: is the current role already idempotent? IMO that is the first prerequisite before moving forward.
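One way to reason about idempotency here is as a diff between desired and actual state: only hosts that are in the inventory but not yet in the cluster should trigger join steps. A rough sketch with stand-in data (in reality the two lists would come from hosts.ini and `kubectl get nodes`):

```shell
# Stand-in data: hosts declared in the inventory vs. nodes already joined.
printf '%s\n' 192.168.30.38 192.168.30.39 192.168.30.40 | sort > inventory.txt
printf '%s\n' 192.168.30.38 192.168.30.39 | sort > cluster.txt
# Hosts present in the inventory but not yet in the cluster:
comm -23 inventory.txt cluster.txt
```

If that difference is empty, a re-run of the playbook should be a no-op for membership.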
I am going to test the procedure of adding new nodes and masters in the next days, and can report my experience here.
Also, two other questions before starting:
I've successfully added a new node by running the play again.
My cluster also has just 2 masters.
I started with 2 masters, 1 node. Then added another node. Zero hiccups.
This 100% works
default settings in `all.yml`
`hosts.ini` (2 servers, 1 agent):
```ini
[master]
192.168.30.38
192.168.30.39
; 192.168.30.40

[node]
192.168.30.41
; 192.168.30.42

[k3s_cluster:children]
master
node
```
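The `;` lines above are INI comments, so this walkthrough stages the extra nodes by uncommenting them later. A quick way to see which hosts are currently active (sample inventory written inline for illustration):

```shell
# Write the sample inventory, then list only the uncommented host entries.
cat > hosts.ini <<'EOF'
[master]
192.168.30.38
192.168.30.39
; 192.168.30.40
[node]
192.168.30.41
; 192.168.30.42
[k3s_cluster:children]
master
node
EOF
grep -E '^[0-9]' hosts.ini
```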
Ran the playbook:

```
➜ k3s-ansible git:(master) k get nodes
NAME     STATUS   ROLES                       AGE    VERSION
k3s-01   Ready    control-plane,etcd,master   109s   v1.24.6+k3s1
k3s-02   Ready    control-plane,etcd,master   97s    v1.24.6+k3s1
k3s-04   Ready    <none>                      51s    v1.24.6+k3s1
```
Updated `hosts.ini` (3 servers, 2 agents) and re-ran the playbook:

```ini
[master]
192.168.30.38
192.168.30.39
192.168.30.40

[node]
192.168.30.41
192.168.30.42

[k3s_cluster:children]
master
node
```
```
➜ k3s-ansible git:(master) k get nodes
NAME     STATUS   ROLES                       AGE     VERSION
k3s-01   Ready    control-plane,etcd,master   3m48s   v1.24.6+k3s1
k3s-02   Ready    control-plane,etcd,master   3m36s   v1.24.6+k3s1
k3s-03   Ready    control-plane,etcd,master   76s     v1.24.6+k3s1
k3s-04   Ready    <none>                      2m50s   v1.24.6+k3s1
k3s-05   Ready    <none>                      29s     v1.24.6+k3s1
```
Tested on https://github.com/techno-tim/k3s-ansible/releases/tag/v1.24.6%2Bk3s1
If you are experiencing problems, please open an issue and fill out the required fields.
@timothystewart6 thanks for the reply.
I just added a third worker node and all I had to do was edit the hosts.ini file and re-run the playbook. I didn't have to re-scp the kubeconfig file or anything.
How about the k3s token? Did you have to set that variable, or was it populated automatically?
I set the token variable myself.
Then maybe that was the issue I was facing... I don't remember, TBH. Anyway, I hope the fact that one needs to set the token manually has been documented in this project. In my setup I have a procedure that reads the token from the existing cluster anyway.
Thanks.
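For anyone wanting the same shortcut: on an existing server node the join token lives at the standard path `/var/lib/rancher/k3s/server/node-token`, and the shared secret is the part after the last colon. A sketch with a fake token (on a real node you would `ssh` in and `sudo cat` that file):

```shell
# Fake token standing in for the contents of node-token.
full_token='K10abc123::server:mytoken'
secret="${full_token##*:}"      # everything after the last ':'
echo "$secret"                  # candidate value for the token variable
```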
Should we support adding nodes later using this playbook?

Considerations:
- hosts.ini (needs testing)