containers / ansible-podman-collections

Repository for Ansible content that can include playbooks, roles, modules, and plugins for use with the Podman tool
GNU General Public License v3.0

Quadlet filename set for podman_container but not podman_containers #803

Open Shadow53 opened 2 months ago

Shadow53 commented 2 months ago

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

I am attempting to define multiple quadlet containers at once using the podman_containers module. This fails with the error:

fatal: [HOST]: FAILED! => {"changed": false, "msg": "Filename for container is required for creating a quadlet file."}

If I pass the exact same configuration to podman_container as part of a loop, it works fine.
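Stripped to the essentials, the two invocations differ only in the wrapper module (full playbook in the reproduction steps):

```yaml
# Works: each definition is passed to podman_container individually.
- name: Create podman container
  containers.podman.podman_container: "{{ item }}"
  loop: "{{ containers_processed }}"

# Fails with "Filename for container is required for creating a quadlet file."
- name: Create podman containers
  containers.podman.podman_containers:
    containers: "{{ containers_processed }}"
```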

Steps to reproduce the issue:

  1. Set up a host with Podman and Quadlet capabilities (I'm using Fedora CoreOS) and add it to an inventory file under the group caddy.

  2. Copy the following YAML to a playbook file:

Playbook

```yaml
---
- name: Deploy Caddy using Podman
  hosts: caddy
  become: true
  handlers:
    - name: Restart Caddy
      ansible.builtin.systemd_service:
        daemon_reload: true
        enabled: true
        state: restarted
        name: caddy.service
    - name: Validate Caddyfiles
      notify: Reload Caddy
      containers.podman.podman_container_exec:
        name: systemd-caddy
        command: caddy validate
        workdir: /etc/caddy
    - name: Reload Caddy
      containers.podman.podman_container_exec:
        name: systemd-caddy
        command: caddy reload --force
        workdir: /etc/caddy
  vars:
    containers:
      - name: "{{ service_ident }}"
        image: docker.io/library/caddy:alpine
        cap_add:
          - NET_ADMIN
        mount:
          - type=bind,source=/etc/caddy,target=/etc/caddy,ro=true,Z=true
        network: "{{ service_ident }}"
        volume:
          - "{{ service_ident }}-config:/config:Z"
          - "{{ service_ident }}-data:/data:Z"
    containers_processed: "{{ containers | map('combine', dict(state='quadlet', pull='always', pod=service_ident, quadlet_options=['AutoUpdate=registry', 'Pull=true']), list_merge='append_rp', recursive=true) }}"
    service_ident: caddy
    pod_ports:
      - "80:80"
      - "443:443"
    volumes:
      - name: config
      - name: data
  tasks:
    - name: Create Caddy network
      containers.podman.podman_network:
        name: "{{ service_ident }}"
        state: quadlet
        internal: false
    - name: Create service podman pod
      containers.podman.podman_pod:
        name: "{{ service_ident }}"
        state: quadlet
        network: "{{ service_ident }}"
        publish: "{{ pod_ports | default() }}"
        restart_policy: "always"
    - name: Create service podman volume(s)
      containers.podman.podman_volume:
        name: "{{ service_ident }}-{{ item.name }}"
        state: quadlet
      loop: "{{ volumes }}"
    - name: Create a Caddy directories if not exist
      ansible.builtin.file:
        owner: root
        group: root
        path: /etc/caddy/sites
        state: directory
        mode: '0755'
    - name: Upload default Caddyfile
      notify: Validate Caddyfiles
      ansible.builtin.copy:
        owner: root
        group: root
        mode: "0400"
        dest: "/etc/caddy/Caddyfile"
        content: >
          {
            http_port 80
          }
          import /etc/caddy/sites/*
    - name: Debug containers definition
      ansible.builtin.debug:
        var: to_debug
      vars:
        to_debug: "{{ containers_processed }}"
    - name: Create podman container
      notify: Restart Caddy
      containers.podman.podman_container: "{{ item }}"
      loop: "{{ containers_processed }}"
    - name: Create podman containers
      notify: Restart Caddy
      containers.podman.podman_containers:
        containers: "{{ containers_processed }}"
```
  3. Run the playbook. Notice how Create podman container succeeds while Create podman containers fails with the aforementioned error.

Describe the results you received:

fatal: [HOST]: FAILED! => {"changed": false, "msg": "Filename for container is required for creating a quadlet file."}

Describe the results you expected:

Success similar to using podman_container with loop.

Additional information you deem important (e.g. issue happens only occasionally):

The context for this is a homelab setup where I have a single role that deploys containers in a consistent way and various other roles that pass variables to the former. This is why the example playbook is formatted the way it is with a templated variable instead of defining the container inline.
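As a possible workaround until this is fixed, the filename could be merged into each definition explicitly. This is an untested sketch: it assumes podman_containers forwards podman_container's quadlet_filename option to each container, and it relies on the fact that in this playbook the single container is named service_ident.

```yaml
# Hypothetical workaround: set quadlet_filename explicitly on each definition,
# mirroring podman_container's default of deriving the filename from the name.
# Using service_ident only works here because the one container shares that name.
containers_processed_with_filename: >-
  {{ containers_processed
     | map('combine', dict(quadlet_filename=service_ident))
     | list }}
```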

Version of the containers.podman collection:
Either git commit if installed from git: `git show --summary`
Or version from ansible-galaxy if installed from galaxy: `ansible-galaxy collection list | grep containers.podman`

containers.podman                        1.15.2

Output of ansible --version:

ansible [core 2.16.8]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/shadow53/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.12/site-packages/ansible
  ansible collection location = /home/shadow53/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible
  python version = 3.12.4 (main, Jun  7 2024, 00:00:00) [GCC 14.1.1 20240607 (Red Hat 14.1.1-5)] (/usr/bin/python3)
  jinja version = 3.1.4
  libyaml = True

Output of podman version:

# Assuming the host in the inventory file
Client:       Podman Engine
Version:      5.1.0
API Version:  5.1.0
Go Version:   go1.22.3
Built:        Wed May 29 00:00:00 2024
OS/Arch:      linux/amd64

Output of podman info --debug:

Output

```yaml
host:
  arch: amd64
  buildahVersion: 1.36.0
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.10-1.fc40.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: '
  cpuUtilization:
    idlePercent: 97.05
    systemPercent: 0.84
    userPercent: 2.11
  cpus: 1
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: coreos
    version: "40"
  eventLogger: journald
  freeLocks: 2046
  hostname: id.mnbryant.com
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
  kernel: 6.8.11-300.fc40.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 251396096
  memTotal: 2049576960
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.11.0-1.fc40.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.11.0
    package: netavark-1.11.0-1.fc40.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.11.0
  ociRuntime:
    name: crun
    package: crun-1.15-1.fc40.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.15
      commit: e6eacaf4034e84185fd8780ac9262bbf57082278
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20240510.g7288448-1.fc40.x86_64
    version: |
      pasta 0^20240510.g7288448-1.fc40.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: false
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.2-2.fc40.x86_64
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 745795584
  swapTotal: 1024454656
  uptime: 171h 56m 19.00s (Approximately 7.12 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphRootAllocated: 69188169728
  graphRootUsed: 5229629440
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 5.1.0
  Built: 1716940800
  BuiltTime: Wed May 29 00:00:00 2024
  GitCommit: ""
  GoVersion: go1.22.3
  Os: linux
  OsArch: linux/amd64
  Version: 5.1.0
```

Package info (e.g. output of rpm -q podman or apt list podman):

podman-5.1.0-1.fc40.x86_64

Playbook you run with ansible (e.g. content of playbook.yaml):

See above.

Command line and output of ansible run with high verbosity

Please NOTE: if you submit a bug about idempotency, run the playbook with --diff option, like:

ansible-playbook -i inventory --diff -vv playbook.yml

Output

```
ansible-playbook [core 2.16.8]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/shadow53/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.12/site-packages/ansible
  ansible collection location = /home/shadow53/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible-playbook
  python version = 3.12.4 (main, Jun 7 2024, 00:00:00) [GCC 14.1.1 20240607 (Red Hat 14.1.1-5)] (/usr/bin/python3)
  jinja version = 3.1.4
  libyaml = True
Using /etc/ansible/ansible.cfg as config file
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: caddy_standalone.yml *************************************************
1 plays in caddy_standalone.yml

PLAY [Deploy Caddy using Podman] ***********************************************

TASK [Gathering Facts] *********************************************************
task path: /home/shadow53/Development/coreos/playbooks/caddy_standalone.yml:2
ok: [HOST]

TASK [Create Caddy network] ****************************************************
task path: /home/shadow53/Development/coreos/playbooks/caddy_standalone.yml:44
ok: [HOST] => {"actions": [], "changed": false, "network": {}}

TASK [Create service podman pod] ***********************************************
task path: /home/shadow53/Development/coreos/playbooks/caddy_standalone.yml:50
ok: [HOST] => {"actions": [], "changed": false, "pod": {}}

TASK [Create service podman volume(s)] *****************************************
task path: /home/shadow53/Development/coreos/playbooks/caddy_standalone.yml:58
ok: [HOST] => (item={'name': 'config'}) => {"actions": [], "ansible_loop_var": "item", "changed": false, "item": {"name": "config"}, "volume": {}}
ok: [HOST] => (item={'name': 'data'}) => {"actions": [], "ansible_loop_var": "item", "changed": false, "item": {"name": "data"}, "volume": {}}

TASK [Create a Caddy directories if not exist] *********************************
task path: /home/shadow53/Development/coreos/playbooks/caddy_standalone.yml:64
ok: [HOST] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/caddy/sites", "secontext": "system_u:object_r:container_file_t:s0:c821,c974", "size": 33, "state": "directory", "uid": 0}

TASK [Upload default Caddyfile] ************************************************
task path: /home/shadow53/Development/coreos/playbooks/caddy_standalone.yml:72
ok: [HOST] => {"changed": false, "checksum": "ff35b4525c1034362eb9249c174d27b2eb8f0692", "dest": "/etc/caddy/Caddyfile", "gid": 0, "group": "root", "mode": "0400", "owner": "root", "path": "/etc/caddy/Caddyfile", "secontext": "system_u:object_r:container_file_t:s0:c821,c974", "size": 49, "state": "file", "uid": 0}

TASK [Debug containers definition] *********************************************
task path: /home/shadow53/Development/coreos/playbooks/caddy_standalone.yml:86
ok: [HOST] => {
    "to_debug": [
        {
            "cap_add": [
                "NET_ADMIN"
            ],
            "image": "docker.io/library/caddy:alpine",
            "mount": [
                "type=bind,source=/etc/caddy,target=/etc/caddy,ro=true,Z=true"
            ],
            "name": "caddy",
            "network": "caddy",
            "pod": "caddy",
            "pull": "always",
            "quadlet_options": [
                "AutoUpdate=registry",
                "Pull=true"
            ],
            "state": "quadlet",
            "volume": [
                "caddy-config:/config:Z",
                "caddy-data:/data:Z"
            ]
        }
    ]
}

TASK [Create podman container] *************************************************
task path: /home/shadow53/Development/coreos/playbooks/caddy_standalone.yml:92
[WARNING]: Using a variable for a task's 'args' is unsafe in some situations (see
https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)
ok: [HOST] => (item={'name': 'caddy', 'image': 'docker.io/library/caddy:alpine', 'cap_add': ['NET_ADMIN'], 'mount': ['type=bind,source=/etc/caddy,target=/etc/caddy,ro=true,Z=true'], 'network': 'caddy', 'volume': ['caddy-config:/config:Z', 'caddy-data:/data:Z'], 'state': 'quadlet', 'pull': 'always', 'pod': 'caddy', 'quadlet_options': ['AutoUpdate=registry', 'Pull=true']}) => {"actions": [], "ansible_loop_var": "item", "changed": false, "container": {}, "item": {"cap_add": ["NET_ADMIN"], "image": "docker.io/library/caddy:alpine", "mount": ["type=bind,source=/etc/caddy,target=/etc/caddy,ro=true,Z=true"], "name": "caddy", "network": "caddy", "pod": "caddy", "pull": "always", "quadlet_options": ["AutoUpdate=registry", "Pull=true"], "state": "quadlet", "volume": ["caddy-config:/config:Z", "caddy-data:/data:Z"]}}

TASK [Create podman containers] ************************************************
task path: /home/shadow53/Development/coreos/playbooks/caddy_standalone.yml:97
fatal: [HOST]: FAILED! => {"changed": false, "msg": "Filename for container is required for creating a quadlet file."}

PLAY RECAP *********************************************************************
HOST : ok=8 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```

Additional environment details (AWS, VirtualBox, physical, etc.):

This is running against Fedora CoreOS on a VPS, but the issue does not appear to be specific to the host system.

sshnaidm commented 2 months ago

Thanks for catching this, good to see someone using this module :smile: