This seems to be a Debian-specific issue. It works fine with Ubuntu:
```
➜ kubectl get nodes
NAME     STATUS   ROLES                       AGE    VERSION
k3s-01   Ready    control-plane,etcd,master   113s   v1.24.3+k3s1
k3s-02   Ready    control-plane,etcd,master   95s    v1.24.3+k3s1
k3s-03   Ready    control-plane,etcd,master   73s    v1.24.3+k3s1
k3s-04   Ready    <none>                      28s    v1.24.3+k3s1
k3s-05   Ready    <none>                      29s    v1.24.3+k3s1
```
Hm, weird. I'll try with Ubuntu in a minute or two.
Tested with no modifications other than what is documented above, and I still experience the same thing on Ubuntu 22.10. Also tested with 20.04, so unfortunately I can't blame it on the OS.
Ended up with this error, which I've seen on Debian as well:
```
fatal: [k3s-master-01]: FAILED! => {"attempts": 20, "changed": false, "cmd": ["k3s", "kubectl", "get", "nodes", "-l", "node-role.kubernetes.io/master=true", "-o=jsonpath={.items[*].metadata.name}"], "delta": "0:00:00.104545", "end": "2022-08-20 22:50:14.638833", "msg": "", "rc": 0, "start": "2022-08-20 22:50:14.534288", "stderr": "", "stderr_lines": [], "stdout": "ubuntu", "stdout_lines": ["ubuntu"]}
```
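For context, the check failing here is the playbook's master-verification task. A minimal sketch of what it presumably looks like, reconstructed from the error output above (the task name and the `until` condition are assumptions; the command and retry count match the log):

```yaml
# Hypothetical reconstruction of the verification task -- not copied from the repo.
- name: Verify that all masters joined the cluster
  command: >-
    k3s kubectl get nodes -l node-role.kubernetes.io/master=true
    -o=jsonpath={.items[*].metadata.name}
  register: nodes
  # Assumed success condition: one registered node name per master in the inventory
  until: nodes.rc == 0 and (nodes.stdout.split() | length) == (groups['master'] | length)
  retries: 20
  delay: 10
  changed_when: false
```

Note that `rc` is 0 but `stdout` is just `ubuntu`: only one node name came back, registered under the machine's hostname rather than `k3s-master-01`, so a condition like the one above can never be satisfied. That pattern often points at cloned VMs that all share the same hostname.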
Not sure; I can run it multiple times in my lab (I just ran it at least 20 times while fixing a bug). Are you sure you're using the correct Ethernet adapter name?
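(For reference, the adapter name is what the `flannel_iface` variable in `group_vars/all.yml` must match, assuming the standard k3s-ansible variable set; run `ip a` on each node to see the real interface name. A sketch:)

```yaml
# inventory/k3s-cluster/group_vars/all.yml -- sketch only; the variable name
# assumes the standard k3s-ansible defaults. Check each node with `ip a` first.
flannel_iface: "eth0"   # Debian/Ubuntu VMs frequently use ens18, enp1s0, etc.
```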
Expected Behavior
I run `ansible-playbook site.yml -i inventory/k3s-cluster/hosts.ini` and get a k3s cluster deployed.
Current Behavior
The playbook runs through until the verification step, which keeps retrying and ultimately fails.
Steps to Reproduce
1. Set `ansible_ssh_private_key_file: ~/.ssh/ansible` (see the inventory sketch below).
2. Run `ansible-playbook site.yml -i inventory/k3s-cluster/hosts.ini`.
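The actual inventory wasn't captured here; purely as an illustration, a typical layout for this playbook looks like the following (group names follow the standard k3s-ansible structure; the IPs are placeholders):

```ini
# inventory/k3s-cluster/hosts.ini -- hypothetical example, not the reporter's file
[master]
192.168.1.11
192.168.1.12
192.168.1.13

[node]
192.168.1.14
192.168.1.15

[k3s_cluster:children]
master
node
```

Connection variables such as `ansible_ssh_private_key_file` and `ansible_user` usually sit in `group_vars/all.yml` alongside the cluster settings.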
Context (variables)
Operating system: Debian 11
Hardware: (CPU/RAM/Disk type)
Variables Used:
all.yml
Hosts
hosts.ini
Possible Solution