Closed: filidorwiese closed this 4 months ago
Okay, I found the solution, which I'll post here for posterity.

After running `kubectl delete node` (and before you run `hetzner-k3s create`), make sure to also manually delete the worker node password that is stored as a Kubernetes secret in the `kube-system` namespace, with a name like `your-node-k3s-ccx33-pool-v2-worker2.node-password.k3s`. It will be re-created when running `hetzner-k3s create`.
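For reference, the full replacement sequence then looks something like the sketch below. The node name and config path are the ones used in this thread; substitute your own.

```sh
# 1. Remove the failing node from the cluster.
kubectl delete node your-node-k3s-ccx33-pool-v2-worker2

# 2. Delete the stale node password secret, so the replacement node
#    (which reuses the same hostname) can register again.
kubectl -n kube-system delete secret \
  your-node-k3s-ccx33-pool-v2-worker2.node-password.k3s

# 3. Delete the worker VM in the Hetzner console, then recreate it.
hetzner-k3s create --config ./hetzner-k3s.yml
```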
This might be a good addition to the README. Keep up the good work!
Hi! I'm running a k3s cluster (version v1.29.3+k3s1) created with hetzner-k3s and had to replace a failing node. As described in the README of this project, I did that with the `kubectl delete node` command, followed by deleting the worker VM in the Hetzner console and then running `hetzner-k3s create --config ./hetzner-k3s.yml`. However, the process hangs when trying to start the `k3s-agent` on the node, which never becomes available as a new worker in k3s, and I have to kill the hetzner-k3s script manually.

The `systemctl status k3s-agent` output on the partially installed worker node gives a clue: the master nodes (I'm running in HA) probably still have the old worker password, while a new one has been generated on the worker node itself (which has the exact same hostname, since I'm replacing the node). I probably need to clear the old config from the master nodes, but I'm not sure how to achieve that and don't want to mess up my cluster.
Any help would be appreciated.