containers / ansible-podman-collections

Repository for Ansible content that can include playbooks, roles, modules, and plugins for use with the Podman tool
GNU General Public License v3.0

podman_image does not pull latest image unless force is specified #153

Closed wkulhanek closed 3 years ago

wkulhanek commented 3 years ago

/kind bug

Description

Wrote a playbook to update local images automatically. Found that when an image is already present on the target machine, it does not get pulled, even if a newer image for that tag is available in the repository. This is a problem when using "latest" or major-version tags that always point to the newest image of a major version.

Setting force: true does force the image download, but that should not be required.
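For completeness, the task that does update the image today is the one with force set; a minimal sketch (module options taken from the invocation output below, image name from the reproduction steps):

- name: Update the image unconditionally (workaround)
  containers.podman.podman_image:
    name: docker.io/wkulhanek/logtofile
    tag: latest
    force: true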

Steps to reproduce the issue:

  1. Pull old image to the machine: podman pull docker.io/wkulhanek/logtofile:0.1
  2. Tag as latest: podman tag docker.io/wkulhanek/logtofile:0.1 docker.io/wkulhanek/logtofile:latest
  3. Run the image update playbook (below). Nothing happens.

Describe the results you received: No image was updated

Describe the results you expected: Expected the image to be updated. latest points to tag 0.2

Output of ansible --version:

ansible 2.10.4
  config file = None
  configured module search path = ['/home/kulhanek/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/kulhanek/virtualenvs/ansible/lib/python3.9/site-packages/ansible
  executable location = /home/kulhanek/virtualenvs/ansible/bin/ansible
  python version = 3.9.0 (default, Oct  6 2020, 00:00:00) [GCC 10.2.1 20200826 (Red Hat 10.2.1-3)]

Output of podman version:

Version:      2.2.1
API Version:  2.1.0
Go Version:   go1.15.5
Built:        Tue Dec  8 09:37:50 2020
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.18.0
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.21-3.fc33.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.21, commit: 0f53fb68333bdead5fe4dc5175703e22cf9882ab'
  cpus: 4
  distribution:
    distribution: fedora
    version: "33"
  eventLogger: journald
  hostname: homeserver.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.9.13-200.fc33.x86_64
  linkmode: dynamic
  memFree: 9303658496
  memTotal: 16674136064
  ociRuntime:
    name: crun
    package: crun-0.16-1.fc33.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.16
      commit: eb0145e5ad4d8207e84a327248af76663d4e50dd
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.fc33.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.0
  swapFree: 4193251328
  swapTotal: 4294963200
  uptime: 36h 47m 29.35s (Approximately 1.50 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/kulhanek/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.3.0-1.fc33.x86_64
      Version: |-
        fusermount3 version: 3.9.3
        fuse-overlayfs: version 1.3
        FUSE library version 3.9.3
        using FUSE kernel interface version 7.31
  graphRoot: /home/kulhanek/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 0
  runRoot: /run/user/1000/containers
  volumePath: /home/kulhanek/.local/share/containers/storage/volumes
version:
  APIVersion: 2.1.0
  Built: 1607438270
  BuiltTime: Tue Dec  8 09:37:50 2020
  GitCommit: ""
  GoVersion: go1.15.5
  OsArch: linux/amd64
  Version: 2.2.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman-2.2.1-1.fc33.x86_64

Playbook you run with ansible (e.g. content of playbook.yaml):

---
- name: Podwatch
  connection: local
  hosts: localhost
  gather_facts: false
  become: true
  tasks:
  - name: Get all container images
    containers.podman.podman_image_info:
      name: logtofile
    register: r_images
  - name: Debug
    debug:
      msg: "Found {{ r_images.images | length }} images"
#  - name: Debug
#    debug:
#      msg: "{{ r_images.images[0] }}"
  - name: Updating image {{ r_images.images[0].RepoTags[0] }} with SHA {{ r_images.images[0].Id }}
    # Using the command module works...
    # command: "podman pull {{ r_images.images[0].RepoTags[0] }}"
    containers.podman.podman_image:
      name: "{{ r_images.images[0].RepoTags[0] }}"
      pull: true
      state: present
    register: r_image
  - name: Debug image
    debug:
      msg: "{{ r_image }}"

Command line and output of ansible run with high verbosity:

ansible-playbook 2.10.4
  config file = None
  configured module search path = ['/home/kulhanek/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/kulhanek/virtualenvs/ansible/lib/python3.9/site-packages/ansible
  executable location = /home/kulhanek/virtualenvs/ansible/bin/ansible-playbook
  python version = 3.9.0 (default, Oct  6 2020, 00:00:00) [GCC 10.2.1 20200826 (Red Hat 10.2.1-3)]
No config file found; using defaults
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: podwatch.yaml ****************************************************************************************************************************************************************************************************************
1 plays in podwatch.yaml

PLAY [Podwatch] ************************************************************************************************************************************************************************************************************************
META: ran handlers

TASK [Get all container images] ********************************************************************************************************************************************************************************************************
task path: /home/kulhanek/podwatch/podwatch.yaml:8
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: kulhanek
<127.0.0.1> EXEC /bin/sh -c 'echo ~kulhanek && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/kulhanek/.ansible/tmp `"&& mkdir "` echo /home/kulhanek/.ansible/tmp/ansible-tmp-1608134346.252182-312648-250675596871340 `" && echo ansible-tmp-1608134346.252182-312648-250675596871340="` echo /home/kulhanek/.ansible/tmp/ansible-tmp-1608134346.252182-312648-250675596871340 `" ) && sleep 0'
Using module file /home/kulhanek/.ansible/collections/ansible_collections/containers/podman/plugins/modules/podman_image_info.py
<127.0.0.1> PUT /home/kulhanek/.ansible/tmp/ansible-local-312635kvgx2qwm/tmp5szszt17 TO /home/kulhanek/.ansible/tmp/ansible-tmp-1608134346.252182-312648-250675596871340/AnsiballZ_podman_image_info.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/kulhanek/.ansible/tmp/ansible-tmp-1608134346.252182-312648-250675596871340/ /home/kulhanek/.ansible/tmp/ansible-tmp-1608134346.252182-312648-250675596871340/AnsiballZ_podman_image_info.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n  -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-fyhqixdvggrcceesexmarmrjafrdvvpy ; /home/kulhanek/virtualenvs/ansible/bin/python /home/kulhanek/.ansible/tmp/ansible-tmp-1608134346.252182-312648-250675596871340/AnsiballZ_podman_image_info.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/kulhanek/.ansible/tmp/ansible-tmp-1608134346.252182-312648-250675596871340/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "changed": false,
    "images": [
        {
            "Annotations": {},
            "Architecture": "amd64",
            "Author": "",
            "Comment": "",
            "Config": {
                "Entrypoint": [
                    "/usr/bin/writelog"
                ],
                "Env": [
                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
                ],
                "Labels": {
                    "org.label-schema.build-date": "20190305",
                    "org.label-schema.license": "GPLv2",
                    "org.label-schema.name": "CentOS Base Image",
                    "org.label-schema.schema-version": "1.0",
                    "org.label-schema.vendor": "CentOS"
                }
            },
            "Created": "2019-03-15T09:21:33.4263741Z",
            "Digest": "sha256:bf1dfd99bfc4840132068c8b4551482792554f84d8b178a4fa1a72b1f31c7d4c",
            "GraphDriver": {
                "Data": {
                    "LowerDir": "/var/lib/containers/storage/overlay/8ff0a26ff94454bda059960a4c5cf4aa04f6391bd2e4274905ab6a46e140dd7c/diff:/var/lib/containers/storage/overlay/d69483a6face4499acb974449d1303591fcbb5cdce5420f36f8a6607bda11854/diff",
                    "UpperDir": "/var/lib/containers/storage/overlay/4c00aa4d1a10dc8e298766600553c223741d9a00d167c3984836824b4e97c15f/diff",
                    "WorkDir": "/var/lib/containers/storage/overlay/4c00aa4d1a10dc8e298766600553c223741d9a00d167c3984836824b4e97c15f/work"
                },
                "Name": "overlay"
            },
            "History": [
                {
                    "created": "2019-03-14T21:19:52.66982152Z",
                    "created_by": "/bin/sh -c #(nop) ADD file:074f2c974463ab38cf3532134e8ba2c91c9e346457713f2e8b8e2ac0ee9fd83d in / "
                },
                {
                    "created": "2019-03-14T21:19:53.099141434Z",
                    "created_by": "/bin/sh -c #(nop)  LABEL org.label-schema.schema-version=1.0 org.label-schema.name=CentOS Base Image org.label-schema.vendor=CentOS org.label-schema.license=GPLv2 org.label-schema.build-date=20190305",
                    "empty_layer": true
                },
                {
                    "created": "2019-03-14T21:19:53.361167852Z",
                    "created_by": "/bin/sh -c #(nop)  CMD [\"/bin/bash\"]",
                    "empty_layer": true
                },
                {
                    "created": "2019-03-15T09:21:20.8555192Z",
                    "created_by": "/bin/sh -c #(nop) COPY dir:4b8fafb8494addde7092d87866b09e065899f159c76cf85eba8bd506c87f358d in / "
                },
                {
                    "created": "2019-03-15T09:21:32.9715861Z",
                    "created_by": "/bin/sh -c yum -y update && yum -y upgrade && yum -y clean all && rm -rf /var/cache/yum"
                },
                {
                    "created": "2019-03-15T09:21:33.4263741Z",
                    "created_by": "/bin/sh -c #(nop)  ENTRYPOINT [\"/usr/bin/writelog\"]",
                    "empty_layer": true
                }
            ],
            "Id": "bf4569d5abc61c9b9e5337722050c301ddc7597d332858d0fe28915b61aea787",
            "Labels": {
                "org.label-schema.build-date": "20190305",
                "org.label-schema.license": "GPLv2",
                "org.label-schema.name": "CentOS Base Image",
                "org.label-schema.schema-version": "1.0",
                "org.label-schema.vendor": "CentOS"
            },
            "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
            "NamesHistory": [
                "quay.io/gpte-devops-automation/logtofile:latest"
            ],
            "Os": "linux",
            "Parent": "",
            "RepoDigests": [
                "docker.io/wkulhanek/logtofile@sha256:bf1dfd99bfc4840132068c8b4551482792554f84d8b178a4fa1a72b1f31c7d4c",
                "docker.io/wkulhanek/logtofile@sha256:ef013bc12e3d6baa56ff1b8b9323bdecfec69d2898ce7b4a276089e55fdebe0c"
            ],
            "RepoTags": [
                "docker.io/wkulhanek/logtofile:latest"
            ],
            "RootFS": {
                "Layers": [
                    "sha256:d69483a6face4499acb974449d1303591fcbb5cdce5420f36f8a6607bda11854",
                    "sha256:b8ad5ae1f681a5a042e2b33e863136f07cde5f50f692524f9b840050feab7f41",
                    "sha256:b4dfd14cdd50bed841112c7d06d171b688bb1f9d32e809e6c8ac8d43c17f627d"
                ],
                "Type": "layers"
            },
            "Size": 232124760,
            "User": "",
            "Version": "18.09.2",
            "VirtualSize": 232124760
        }
    ],
    "invocation": {
        "module_args": {
            "executable": "podman",
            "name": [
                "logtofile"
            ]
        }
    }
}

TASK [Updating image docker.io/wkulhanek/logtofile:latest with SHA bf4569d5abc61c9b9e5337722050c301ddc7597d332858d0fe28915b61aea787] ***************************************************************************************************
task path: /home/kulhanek/podwatch/podwatch.yaml:18
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: kulhanek
<127.0.0.1> EXEC /bin/sh -c 'echo ~kulhanek && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/kulhanek/.ansible/tmp `"&& mkdir "` echo /home/kulhanek/.ansible/tmp/ansible-tmp-1608134347.1402886-312739-99533363958532 `" && echo ansible-tmp-1608134347.1402886-312739-99533363958532="` echo /home/kulhanek/.ansible/tmp/ansible-tmp-1608134347.1402886-312739-99533363958532 `" ) && sleep 0'
Using module file /home/kulhanek/.ansible/collections/ansible_collections/containers/podman/plugins/modules/podman_image.py
<127.0.0.1> PUT /home/kulhanek/.ansible/tmp/ansible-local-312635kvgx2qwm/tmp36m1173l TO /home/kulhanek/.ansible/tmp/ansible-tmp-1608134347.1402886-312739-99533363958532/AnsiballZ_podman_image.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/kulhanek/.ansible/tmp/ansible-tmp-1608134347.1402886-312739-99533363958532/ /home/kulhanek/.ansible/tmp/ansible-tmp-1608134347.1402886-312739-99533363958532/AnsiballZ_podman_image.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n  -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-bsppopevqwykcjhxbzcsluoklncyvyrj ; /home/kulhanek/virtualenvs/ansible/bin/python /home/kulhanek/.ansible/tmp/ansible-tmp-1608134347.1402886-312739-99533363958532/AnsiballZ_podman_image.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/kulhanek/.ansible/tmp/ansible-tmp-1608134347.1402886-312739-99533363958532/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "actions": [],
    "changed": false,
    "image": {},
    "invocation": {
        "module_args": {
            "auth_file": null,
            "build": {
                "annotation": null,
                "cache": true,
                "extra_args": null,
                "force_rm": null,
                "format": "oci",
                "rm": true,
                "volume": null
            },
            "ca_cert_dir": null,
            "executable": "podman",
            "force": false,
            "name": "docker.io/wkulhanek/logtofile:latest",
            "password": null,
            "path": null,
            "pull": true,
            "push": false,
            "push_args": {
                "compress": null,
                "dest": null,
                "format": null,
                "remove_signatures": null,
                "sign_by": null,
                "transport": null
            },
            "state": "present",
            "tag": "latest",
            "username": null,
            "validate_certs": true
        }
    }
}

TASK [Debug image] *********************************************************************************************************************************************************************************************************************
task path: /home/kulhanek/podwatch/podwatch.yaml:26
ok: [localhost] => {
    "msg": {
        "actions": [],
        "changed": false,
        "failed": false,
        "image": {}
    }
}

TASK [Abort] ***************************************************************************************************************************************************************************************************************************
task path: /home/kulhanek/podwatch/podwatch.yaml:34
fatal: [localhost]: FAILED! => {
    "changed": false,
    "msg": "Abort, abort"
}

PLAY RECAP *****************************************************************************************************************************************************************************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Additional environment details (AWS, VirtualBox, physical, etc.): Bare Metal machine running Fedora Server 33 (with latest updates)

sshnaidm commented 3 years ago

@wkulhanek tbh I don't understand the reproduction steps. The playbook you run is supposed to pull docker.io/wkulhanek/logtofile:0.1 ({{ r_images.images[0].RepoTags[0] }} in your task), and it pulls it. You don't try to pull latest.

But if I put latest in the task, then yes, it runs podman image ls docker.io/wkulhanek/logtofile:latest and finds the image, and if force is not set, nothing is done. It's by design: the podman pull command just force-pulls by default. I'm not sure that should be the default for the module, though.

wkulhanek commented 3 years ago

@sshnaidm the image had changed. The tag was still 0.1, but the SHA was different.

sshnaidm commented 3 years ago

> @sshnaidm the image had changed. The tag was still 0.1, but the SHA was different.

Anyway, if the image already exists according to podman image ls <image_name>:<image_tag>, the module won't pull it unless force is set, and force is false by default.
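Until the module compares digests itself, one workaround sketch, as hinted by the commented-out task in the reporter's playbook, is to bypass the module with the command module (note this always reports changed, even when nothing new was pulled):

- name: Always pull the image (bypasses the module's existence check)
  command: "podman pull docker.io/wkulhanek/logtofile:latest"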

CyberFox001 commented 1 year ago

With the option force: yes, if the newly downloaded image has a new digest, the task status is changed.

I had to read the podman_image module source code to understand this; the documentation was not enough.
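Building on that observation, the changed status can then gate follow-up work; a minimal sketch (the second task is a hypothetical placeholder, image name reused from this issue):

- name: Update image, reporting changed only on a new digest
  containers.podman.podman_image:
    name: docker.io/wkulhanek/logtofile
    tag: latest
    force: true
  register: r_image

- name: React only when a new digest was actually pulled
  debug:
    msg: "New image pulled, restart dependent containers here"
  when: r_image is changed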