techno-tim / k3s-ansible

The easiest way to bootstrap a self-hosted High Availability Kubernetes cluster. A fully automated HA k3s etcd install with kube-vip, MetalLB, and more. Build. Destroy. Repeat.
https://technotim.live/posts/k3s-etcd-ansible/
Apache License 2.0

Invalid data passed to 'loop', it requires a list #310

Closed: alexanderjacuna closed this issue 1 year ago

alexanderjacuna commented 1 year ago

Expected Behavior

When executing the command ansible-playbook site.yml -i inventory/my-cluster/hosts.ini, I expected the playbook to complete as shown in the video tutorials. Instead, it fails during the AppArmor task.

Current Behavior

Steps to Reproduce

  1. Following steps detailed in https://www.youtube.com/watch?v=CbkEWcUZ7zM
  2. Made adjustments to enable Proxmox LXC usage.
  3. Ran the command: ansible-playbook site.yml -i inventory/my-cluster/hosts.ini
  4. Seeing the following output...

PLAY [proxmox] ***

TASK [Gathering Facts] ***
ok: [10.13.38.12]
ok: [10.13.38.13]
ok: [10.13.38.11]

TASK [proxmox_lxc : check for container files that exist on this host] ***
ok: [10.13.38.12] => (item=3831)
ok: [10.13.38.13] => (item=3831)
ok: [10.13.38.11] => (item=3831)
ok: [10.13.38.12] => (item=3832)
ok: [10.13.38.13] => (item=3832)
ok: [10.13.38.11] => (item=3832)
ok: [10.13.38.12] => (item=3833)
ok: [10.13.38.13] => (item=3833)
ok: [10.13.38.12] => (item=3834)
ok: [10.13.38.11] => (item=3833)
ok: [10.13.38.13] => (item=3834)
ok: [10.13.38.12] => (item=3835)
ok: [10.13.38.13] => (item=3835)
ok: [10.13.38.11] => (item=3834)
ok: [10.13.38.12] => (item=3836)
ok: [10.13.38.13] => (item=3836)
ok: [10.13.38.11] => (item=3835)
ok: [10.13.38.11] => (item=3836)

TASK [proxmox_lxc : filter out files that do not exist] **
ok: [10.13.38.11]
ok: [10.13.38.12]
ok: [10.13.38.13]

TASK [proxmox_lxc : get container ids from filtered files] ***
ok: [10.13.38.11]
ok: [10.13.38.12]
ok: [10.13.38.13]

TASK [proxmox_lxc : Ensure lxc config has the right apparmor profile] ****
fatal: [10.13.38.11]: FAILED! => {"msg": "Invalid data passed to 'loop', it requires a list, got this instead: <generator object do_map at 0x7fa72e505660>. Hint: If you passed a list/dict of just one element, try adding wantlist=True to your lookup invocation or use q/query instead of lookup."}
fatal: [10.13.38.12]: FAILED! => {"msg": "Invalid data passed to 'loop', it requires a list, got this instead: <generator object do_map at 0x7fa72e502430>. Hint: If you passed a list/dict of just one element, try adding wantlist=True to your lookup invocation or use q/query instead of lookup."}
fatal: [10.13.38.13]: FAILED! => {"msg": "Invalid data passed to 'loop', it requires a list, got this instead: <generator object do_map at 0x7fa72e517430>. Hint: If you passed a list/dict of just one element, try adding wantlist=True to your lookup invocation or use q/query instead of lookup."}

PLAY RECAP ***
10.13.38.11 : ok=4 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
10.13.38.12 : ok=4 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
10.13.38.13 : ok=4 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
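The failure message means the expression handed to loop rendered as a Jinja2 generator (do_map is the internal name of Jinja2's map filter) rather than a list; on the Jinja2 version in use here, a map(...) result passed directly to loop stays a generator and Ansible rejects it. Below is a minimal, self-contained sketch of the behaviour and the usual fix, not the role's actual task; it only reuses the proxmox_lxc_ct_ids variable from the group_vars further down for illustration. The same "| list" would go on whatever map(...) expression the "Ensure lxc config has the right apparmor profile" task feeds into its loop.

---
# repro_loop_fix.yml (hypothetical file name): appending '| list' turns the
# map() result into a real list before Ansible validates the loop input.
- hosts: localhost
  gather_facts: false
  vars:
    proxmox_lxc_ct_ids: [3831, 3832, 3833]
  tasks:
    - name: Loop over container ids
      ansible.builtin.debug:
        msg: "{{ item }}"
      # loop: "{{ proxmox_lxc_ct_ids | map('string') }}"       # fails: <generator object do_map ...>
      loop: "{{ proxmox_lxc_ct_ids | map('string') | list }}"   # works: generator materialised into a list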

Context (variables)

Operating system(s):

root@pveXX:~# pveversion
pve-manager/7.4-3/9002ab8a (running kernel: 5.15.102-1-pve)

srvadmin@k3sXX:~/k3s-ansible$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.6 LTS
Release:        20.04
Codename:       focal

Hardware: 3 x amd64 servers as Proxmox VE nodes with a combined 88 CPUs and 160 GB of memory; 6 x amd64 LXC containers, each with 2 cores and 4 GB of memory.

Variables Used

inventory/my-cluster/group_vars/all.yml

---
k3s_version: v1.25.9+k3s1
ansible_user: srvadmin
systemd_dir: /etc/systemd/system
system_timezone: "America/Chicago"
flannel_iface: "eth0"
apiserver_endpoint: "10.13.38.30"

k3s_token: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
k3s_node_ip: '{{ ansible_facts[flannel_iface]["ipv4"]["address"] }}'
k3s_master_taint: "{{ true if groups['node'] | default([]) | length >= 1 else false }}"

extra_args: >-
  --flannel-iface={{ flannel_iface }}
  --node-ip={{ k3s_node_ip }}

extra_server_args: >-
  {{ extra_args }}
  {{ '--node-taint node-role.kubernetes.io/master=true:NoSchedule' if k3s_master_taint else '' }}
  --tls-san {{ apiserver_endpoint }}
  --disable servicelb
  --disable traefik

extra_agent_args: >-
  {{ extra_args }}

kube_vip_tag_version: "v0.5.12"

metal_lb_type: "native"
metal_lb_mode: "layer2"
metal_lb_frr_tag_version: "v7.5.1"
metal_lb_speaker_tag_version: "v0.13.9"
metal_lb_controller_tag_version: "v0.13.9"
metal_lb_ip_range: "10.13.38.130-10.13.38.139"

proxmox_lxc_configure: true
proxmox_lxc_ssh_user: srvadmin
proxmox_lxc_ct_ids:
  - 3831
  - 3832
  - 3833
  - 3834
  - 3835
  - 3836

srvadmin@k3s28wk:~/k3s-ansible$ cat ansible.cfg

[defaults]
inventory = inventory/my-cluster/hosts.ini
private_key_file = /home/srvadmin/.ssh/id_ed25519_ansible

Hosts

inventory/my-cluster/hosts.ini

[master]
10.13.38.31
10.13.38.32
10.13.38.33

[node]
10.13.38.34
10.13.38.35
10.13.38.36

[proxmox]
10.13.38.11
10.13.38.12
10.13.38.13

[k3s_cluster:children]
master
node

Possible Solution

alexanderjacuna commented 1 year ago

Added the inventory/my-cluster/group_vars/proxmox.yml file:

---
ansible_user: '{{ proxmox_lxc_ssh_user }}'
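A group_vars/proxmox.yml file applies only to hosts in the [proxmox] inventory group, so this override changes the SSH user for the Proxmox nodes while the k3s nodes keep the top-level ansible_user from all.yml. As a hedged sketch (not part of this repo), here is a connection-free way to confirm which user each host resolves; ansible.builtin.debug runs entirely on the controller, so it works even if a host is unreachable:

# check_users.yml (hypothetical helper playbook, not part of k3s-ansible)
- hosts: all
  gather_facts: false
  tasks:
    - name: Show the user Ansible would connect as for each host
      ansible.builtin.debug:
        msg: "{{ inventory_hostname }} connects as {{ ansible_user }}"

Run it with: ansible-playbook check_users.yml -i inventory/my-cluster/hosts.ini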

alexanderjacuna commented 1 year ago

Updated to the latest Ansible version:

srvadmin@k3s28wk:~/k3s-ansible$ ansible --version
ansible [core 2.12.10]
  config file = /home/srvadmin/k3s-ansible/ansible.cfg
  configured module search path = ['/home/srvadmin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3/dist-packages/ansible
  ansible collection location = /home/srvadmin/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible
  python version = 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0]
  jinja version = 2.10.1
  libyaml = True

alexanderjacuna commented 1 year ago

Archive this issue. I need to double-check my work and confirm a couple more things first; I'll open another issue if needed.