Closed MilenkoMarkovic closed 1 year ago
How to check if the role k3s/master is installed?
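A quick sketch of ways to check whether k3s actually landed on a node (run on the target host; the binary and unit names below are the installer's defaults, not something confirmed in this thread):

```shell
# Is the k3s binary on the PATH?
if command -v k3s >/dev/null 2>&1; then
    echo "k3s binary: $(command -v k3s)"
    k3s --version
else
    echo "k3s binary not found on PATH"
fi
# Is the service running? (servers get a "k3s" unit, agents "k3s-agent")
systemctl is-active k3s 2>/dev/null || echo "k3s service not active"
```

If the binary is missing entirely, the install role never completed on that host.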
Can you please post everything in the issue template?
I will try to reproduce it here.
Operating system:
Ubuntu 22.04
Hardware:
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 5700U with Radeon Graphics
CPU family: 23
Model: 104
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU max MHz: 4369,9209
CPU min MHz: 1400,0000
BogoMIPS: 3593.21
all.yml
---
k3s_version: v1.24.12+k3s1
ansible_user: vagrant
systemd_dir: /etc/systemd/system
system_timezone: "Europe/Berlin"
flannel_iface: "eth0"
apiserver_endpoint: "192.168.1.22"
k3s_token: "mikdsomechamipush"
k3s_node_ip: '{{ ansible_facts[flannel_iface]["ipv4"]["address"] }}'
k3s_master_taint: "{{ true if groups['node'] | default([]) | length >= 1 else false }}"
extra_args: >-
  --flannel-iface={{ flannel_iface }}
  --node-ip={{ k3s_node_ip }}
extra_server_args: >-
  {{ extra_args }}
  {{ '--node-taint node-role.kubernetes.io/master=true:NoSchedule' if k3s_master_taint else '' }}
  --tls-san {{ apiserver_endpoint }}
  --disable servicelb
  --disable traefik
extra_agent_args: >-
  {{ extra_args }}
kube_vip_tag_version: "v0.5.11"
metal_lb_type: "native"
metal_lb_mode: "layer2"
metal_lb_frr_tag_version: "v7.5.1"
metal_lb_speaker_tag_version: "v0.13.9"
metal_lb_controller_tag_version: "v0.13.9"
metal_lb_ip_range: "192.168.1.80-192.168.1.90"
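One thing worth verifying with this config: k3s_node_ip is derived from ansible_facts[flannel_iface], so if the VM's NIC is not actually named eth0 the lookup fails. Ubuntu 22.04 guests often use predictable names like enp0s3 or ens33 instead. A quick check on a node (iproute2 assumed present):

```shell
# Confirm the interface named in flannel_iface ("eth0" above) exists on the VM.
ls /sys/class/net
# And see which IPv4 address Ansible would pick up for it:
ip -4 -o addr show eth0 2>/dev/null || echo "no interface eth0"
```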
host.ini
[master]
192.168.1.23
192.168.1.24
192.168.1.25
[node]
192.168.1.26
192.168.1.28
host.ini
[master]
IP.ADDRESS.ONE
IP.ADDRESS.TWO
IP.ADDRESS.THREE
[node]
IP.ADDRESS.FOUR
IP.ADDRESS.FIVE
[k3s_cluster:children]
master
node
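Note that the first host.ini above has no [k3s_cluster:children] section, unlike the template. If the playbook targets the k3s_cluster group, that group would be empty with the first inventory. A way to see how Ansible actually parses the file (assuming ansible is installed on the control machine and the path matches your layout):

```shell
ansible-inventory -i inventory/my-cluster/hosts.ini --graph
```

The expected shape is an @k3s_cluster group containing @master and @node.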
With -vvv, this is how it failed:
fatal: [192.168.1.23]: FAILED! => {
    "changed": true,
    "cmd": [
        "systemd-run",
        "-p",
        "RestartSec=2",
        "-p",
        "Restart=on-failure",
        "--unit=k3s-init",
        "k3s",
        "server",
        "--cluster-init",
        "--token",
        "mikdsomechamipush"
    ],
    "delta": "0:00:00.003196",
    "end": "2023-04-28 10:09:58.272333",
    "invocation": {
        "module_args": {
            "_raw_params": "systemd-run -p RestartSec=2 -p Restart=on-failure --unit=k3s-init k3s server --cluster-init\n --token mikdsomechamipush\n ",
            "_uses_shell": false,
            "argv": null,
            "chdir": null,
            "creates": "/etc/systemd/system/k3s.service",
            "executable": null,
            "removes": null,
            "stdin": null,
            "stdin_add_newline": true,
            "strip_empty_ends": true
        }
    },
    "msg": "non-zero return code",
    "rc": 1,
    "start": "2023-04-28 10:09:58.269137",
    "stderr": "Failed to find executable k3s: No such file or directory",
    "stderr_lines": [
        "Failed to find executable k3s: No such file or directory"
    ],
    "stdout": "",
    "stdout_lines": []
}
fatal: [192.168.1.24]: FAILED! => {
    "changed": true,
    "cmd": [
        "systemd-run",
        "-p",
        "RestartSec=2",
        "-p",
        "Restart=on-failure",
        "--unit=k3s-init",
        "k3s",
        "server",
        "--cluster-init",
        "--token",
        "mikdsomechamipush"
    ],
    "delta": "0:00:00.002963",
    "end": "2023-04-28 10:09:57.602753",
    "invocation": {
        "module_args": {
            "_raw_params": "systemd-run -p RestartSec=2 -p Restart=on-failure --unit=k3s-init k3s server --cluster-init\n --token mikdsomechamipush\n ",
            "_uses_shell": false,
            "argv": null,
            "chdir": null,
            "creates": "/etc/systemd/system/k3s.service",
            "executable": null,
            "removes": null,
            "stdin": null,
            "stdin_add_newline": true,
            "strip_empty_ends": true
        }
    },
    "msg": "non-zero return code",
    "rc": 1,
    "start": "2023-04-28 10:09:57.599790",
    "stderr": "Failed to find executable k3s: No such file or directory",
    "stderr_lines": [
        "Failed to find executable k3s: No such file or directory"
    ],
    "stdout": "",
    "stdout_lines": []
}
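The stderr above means systemd-run could not find a k3s executable at all, i.e. the download/copy step never completed on these hosts. A quick check on an affected node (/usr/local/bin is the usual k3s install location, assumed here):

```shell
ls -l /usr/local/bin/k3s 2>/dev/null || echo "no k3s binary in /usr/local/bin"
```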
This is similar to #178. Have you tried running the reset task from reset.yml? This is important if something fails initially.
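A sketch of that reset-then-retry cycle, assuming reset.yml sits next to site.yml in the repo checkout and the inventory path matches yours:

```shell
# Wipe any partial k3s state from all hosts, then re-run the install
ansible-playbook reset.yml -i inventory/my-cluster/hosts.ini
ansible-playbook site.yml -i inventory/my-cluster/hosts.ini
```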
I cloned the repo. I created VMs on my laptop using this Vagrantfile.
I can ssh to all machines.
I changed the ini file:

[master]
192.168.1.11
192.168.1.12
192.168.1.13

[node]
192.168.1.16
192.168.1.18

[k3s_cluster:children]
master
node
Context (variables)
Ubuntu 22.04
When I run
ansible-playbook site.yml -i inventory/my-cluster/hosts.ini
I get "k3s is installed". Why?