techno-tim / k3s-ansible

The easiest way to bootstrap a self-hosted High Availability Kubernetes cluster. A fully automated HA k3s etcd install with kube-vip, MetalLB, and more. Build. Destroy. Repeat.
https://technotim.live/posts/k3s-etcd-ansible/
Apache License 2.0
2.41k stars, 1.05k forks

Error installing at k3s/master due to what looks to be a split error - correction listed below #317

Closed SeanRiggs closed 1 year ago

SeanRiggs commented 1 year ago

Expected Behavior

When running the playbook, we should get to this section and see a changed or ok like this:

TASK [k3s/master : Init cluster inside the transient k3s-init service] *****
changed: [192.168.128.213]
changed: [192.168.128.209]
changed: [192.168.128.214]

Current Behavior

I get an error when getting to this section:

TASK [k3s/master : Init cluster inside the transient k3s-init service] *****
fatal: [192.168.128.214]: FAILED! => {"msg": "An unhandled exception occurred while templating '{% if groups['master'] | length > 1 %}\n {% if ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname'] %}\n --cluster-init\n {% else %}\n --server https://{{ hostvars[groups['master'][0]].k3s_node_ip | split(\",\") | first | ansible.utils.ipwrap }}:6443\n {% endif %}\n --token {{ k3s_token }}\n{% endif %} {{ extra_server_args | default('') }}'. Error was a <class 'jinja2.exceptions.TemplateRuntimeError'>, original message: No filter named 'split' found."}
fatal: [192.168.128.209]: FAILED! => {"msg": "An unhandled exception occurred while templating '{% if groups['master'] | length > 1 %}\n {% if ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname'] %}\n --cluster-init\n {% else %}\n --server https://{{ hostvars[groups['master'][0]].k3s_node_ip | split(\",\") | first | ansible.utils.ipwrap }}:6443\n {% endif %}\n --token {{ k3s_token }}\n{% endif %} {{ extra_server_args | default('') }}'. Error was a <class 'jinja2.exceptions.TemplateRuntimeError'>, original message: No filter named 'split' found."}
changed: [192.168.128.213]
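The failure can be reproduced outside Ansible. A minimal sketch, assuming plain Jinja2 (installed via pip): stock Jinja2 has no `split` filter, while the Python `str.split()` method is always available on the string object itself, which is exactly the difference between the failing and working templates.

```python
# Minimal reproduction in plain Jinja2 (an assumption: this runs outside
# Ansible, so the ansible.utils.ipwrap filter is omitted).
import jinja2

env = jinja2.Environment()
node_ip = "192.168.128.213,fd00::213"

# Method form: plain Python attribute access, works on any Jinja2/Ansible.
print(env.from_string('{{ ip.split(",") | first }}').render(ip=node_ip))
# 192.168.128.213

# Filter form: stock Jinja2 has no `split` filter, so this fails with the
# same "No filter named 'split'" message seen in the playbook output.
try:
    env.from_string('{{ ip | split(",") | first }}').render(ip=node_ip)
except jinja2.exceptions.TemplateError as err:
    print(type(err).__name__, err)
```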

Steps to Reproduce

  1. Run the Ansible playbook to install (running on a Raspberry Pi 4B)

Context (variables)

Operating system: Debian Bullseye arm64

Hardware: Raspberry Pi

Variables Used

all.yml

k3s_version: " v1.25.9+k3s1"
ansible_user: NA
systemd_dir: "/etc/systemd/system"

flannel_iface: "eth0"

apiserver_endpoint: "192.168.128.255"

k3s_token: "NA"

extra_server_args: "default in git"
extra_agent_args: "default in git"

kube_vip_tag_version: "v0.5.12"

metal_lb_speaker_tag_version: "v0.13.9"
metal_lb_controller_tag_version: "v0.13.9"

metal_lb_ip_range: "192.168.128.149-192.168.128.159"

Hosts

host.ini

[master]
192.168.128.213
192.168.128.214
192.168.128.209

[node]
192.168.128.207
192.168.128.193

[k3s_cluster:children]
master
node

Possible Solution

The original main.yml file found under roles/k3s/master, changed to this:


---
# If you want to explicitly define an interface that ALL control nodes
# should use to propagate the VIP, define it here. Otherwise, kube-vip
# will determine the right interface automatically at runtime.
kube_vip_iface: null

server_init_args: >-
  {% if groups['master'] | length > 1 %}
    {% if ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname'] %}
      --cluster-init
    {% else %}
      --server https://{{ hostvars[groups['master'][0]].k3s_node_ip.split(",") | first | ansible.utils.ipwrap }}:6443
    {% endif %}
    --token {{ k3s_token }}
  {% endif %}
  {{ extra_server_args | default('') }}

This corrected the issue for me.
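To see what the corrected expression actually renders to, here is a sketch in plain Jinja2. Assumptions: the variable names (masters, ips, me, token) are simplified stand-ins for groups/hostvars, and ansible.utils.ipwrap is stubbed because it is an Ansible collection filter, not part of Jinja2.

```python
# Render the corrected logic for a secondary master (a sketch, not the
# playbook's actual execution path).
import jinja2

env = jinja2.Environment()
# Stub for ansible.utils.ipwrap: brackets IPv6 addresses, passes IPv4 through.
env.filters["ipwrap"] = lambda a: f"[{a}]" if ":" in str(a) else str(a)

template = env.from_string(
    '{% if masters | length > 1 %}'
    '{% if me == masters[0] %}--cluster-init'
    '{% else %}--server https://{{ ips[masters[0]].split(",") | first | ipwrap }}:6443'
    '{% endif %} --token {{ token }}'
    '{% endif %}'
)
args = template.render(
    masters=["192.168.128.213", "192.168.128.214", "192.168.128.209"],
    me="192.168.128.214",
    ips={"192.168.128.213": "192.168.128.213"},
    token="NA",
)
print(args)  # --server https://192.168.128.213:6443 --token NA
```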

The old server_init_args looked like this and did not work for me:

server_init_args: >-
  {% if groups['master'] | length > 1 %}
    {% if ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname'] %}
      --cluster-init
    {% else %}
      --server https://{{ hostvars[groups['master'][0]].k3s_node_ip | split(",") | first | ansible.utils.ipwrap }}:6443
    {% endif %}
    --token {{ k3s_token }}
  {% endif %}
  {{ extra_server_args | default('') }}

Notice the difference in the "split": the working version calls the Python string method .split(","), while the failing version pipes through the split Jinja2 filter, which Ansible only provides from ansible-core 2.11 onward.

SeanRiggs commented 1 year ago

Closing. I read the change request in the pull request. I WAS running Ansible 2.10.8, which is not supported.

The troubleshooting guide does spell this out: a minimum of Ansible v2.11. I upgraded on Ubuntu 22.04 by installing ansible-core, which takes you to v2.12 ... sorry for the spam!!!!!!

smdion commented 1 year ago

Had the same issue. Thanks for posting this! Maybe add a catch/fail in the playbooks for older versions of Ansible?
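One way to do that would be a pre-flight check that fails fast on unsupported versions. A hypothetical task (a sketch, not part of the repo), using Ansible's built-in ansible_version variable and the version test:

```yaml
# Hypothetical pre-flight play (an assumption, not from this repo): abort
# early when the control node runs an Ansible older than the required 2.11.
- name: Verify minimum Ansible version
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Fail on Ansible < 2.11
      ansible.builtin.assert:
        that:
          - ansible_version.full is version('2.11', '>=')
        fail_msg: >-
          This playbook needs Ansible >= 2.11 (the split Jinja2 filter
          is unavailable in {{ ansible_version.full }}).
        success_msg: "Ansible {{ ansible_version.full }} is supported."
```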

SeanRiggs commented 1 year ago

THAT'S a GREAT Idea!


LeducH commented 1 year ago

great post in any case