ansible-collections / kubernetes.core

The collection includes a variety of Ansible content to help automate the management of applications in Kubernetes and OpenShift clusters, as well as the provisioning and maintenance of clusters themselves.

AnsibleModule object has no attribute 'env_update' #631

Closed: plegg-rh closed this issue 1 year ago

plegg-rh commented 1 year ago
SUMMARY

When using kubernetes.core.helm to release a chart on the cluster, the task fails with

'AnsibleModule' object has no attribute 'env_update'

ISSUE TYPE

Bug Report

COMPONENT NAME

kubernetes.core.helm

ANSIBLE VERSION
ansible [core 2.13.3]
  config file = ~/oke-auto-deploy/ipi-cluster-install/ansible/ansible.cfg
  configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  ansible collection location = ~/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible
  python version = 3.9.13 (main, Nov  9 2022, 13:16:24) [GCC 8.5.0 20210514 (Red Hat 8.5.0-15)]
  jinja version = 3.1.2
  libyaml = True
COLLECTION VERSION
Collection      Version
--------------- -------
kubernetes.core 2.4.0
CONFIGURATION
DEFAULT_STDOUT_CALLBACK(~/oke-auto-deploy/ipi-cluster-install/ansible/ansible.cfg) = yaml
HOST_KEY_CHECKING(~/oke-auto-deploy/ipi-cluster-install/ansible/ansible.cfg) = False
OS / ENVIRONMENT

RHEL 8.7

STEPS TO REPRODUCE

Call the kubernetes.core.helm module with the following params:

invocation:
  api_key:
  chart_ref: jetstack/cert-manager
  chart_version: v1.8.2
  create_namespace: true
  host:
  module_args:
    api_key:
    chart_ref: jetstack/cert-manager
    chart_version: v1.8.2
    create_namespace: true
    host:
    release_name: cert-manager
    release_namespace: cert-manager
    release_state: present
    validate_certs: false
    values_files: ~/oke-auto-deploy/ipi-cluster-install/ansible/helm-values/cert-manager-values.yaml
  release_name: cert-manager
  release_namespace: cert-manager
  release_state: present
  validate_certs: false
  values_files: ~/oke-auto-deploy/ipi-cluster-install/ansible/helm-values/cert-manager-values.yaml
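
For reference, here is a minimal single-task equivalent of that invocation (a sketch; api_key and host are left out because they are redacted above):

- name: Deploy cert-manager (minimal reproduction sketch)
  kubernetes.core.helm:
    chart_ref: jetstack/cert-manager
    chart_version: v1.8.2
    create_namespace: true
    release_name: cert-manager
    release_namespace: cert-manager
    release_state: present
    validate_certs: false
    values_files:
      - ~/oke-auto-deploy/ipi-cluster-install/ansible/helm-values/cert-manager-values.yaml

The playbook that was run: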

---
- name: Get host info for base plays
  hosts: localhost
  gather_facts: false

  vars_files:
    - charts.yaml

  tasks:

    - name: Login to cluster
      ansible.builtin.import_role:
        name: login_to_cluster

    - name: Deploy Helm Charts
      ansible.builtin.include_role:
        name: helm
      vars:
        helm:
          release_name: "{{ item.release_name }}"
          release_namespace: "{{ item.release_namespace }}"
          chart_repo_url: "{{ item.chart_repo_url }}"
          chart_version: "{{ item.chart_version }}"
          chart_values: "{{ item.chart_values }}"
          repo_name: "{{ item.repo_name }}"
          chart_ref: "{{ item.chart_ref }}"
      loop: "{{ charts }}"[
EXPECTED RESULTS

helm chart jetstack/cert-manager-v1.8.2 would be deployed to host with the values file.

ACTUAL RESULTS

Somewhere an attribute is being referenced, but it doesn't exist in helm.py.

<localhost> ESTABLISH LOCAL CONNECTION FOR USER: <user>
<localhost> EXEC /bin/sh -c 'echo ~<user> && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ~/.ansible/tmp `"&& mkdir "` echo ~/.ansible/tmp/ansible-tmp-1686753573.2095919-1839343-122825258319869 `" && echo ansible-tmp-1686753573.2095919-1839343-122825258319869="` echo ~/.ansible/tmp/ansible-tmp-1686753573.2095919-1839343-122825258319869 `" ) && sleep 0'
Using module file ~/.ansible/collections/ansible_collections/kubernetes/core/plugins/modules/helm.py
<localhost> PUT ~/.ansible/tmp/ansible-local-1839278bce84cwd/tmpl_hnrp9f TO ~/.ansible/tmp/ansible-tmp-1686753573.2095919-1839343-122825258319869/AnsiballZ_helm.py
<localhost> EXEC /bin/sh -c 'chmod u+x ~/.ansible/tmp/ansible-tmp-1686753573.2095919-1839343-122825258319869/ ~/.ansible/tmp/ansible-tmp-1686753573.2095919-1839343-122825258319869/AnsiballZ_helm.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python ~/.ansible/tmp/ansible-tmp-1686753573.2095919-1839343-122825258319869/AnsiballZ_helm.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r ~/.ansible/tmp/ansible-tmp-1686753573.2095919-1839343-122825258319869/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "~/.ansible/tmp/ansible-tmp-1686753573.2095919-1839343-122825258319869/AnsiballZ_helm.py", line 107, in <module>
    _ansiballz_main()
  File "~/.ansible/tmp/ansible-tmp-1686753573.2095919-1839343-122825258319869/AnsiballZ_helm.py", line 99, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "~/.ansible/tmp/ansible-tmp-1686753573.2095919-1839343-122825258319869/AnsiballZ_helm.py", line 48, in invoke_module
    run_name='__main__', alter_sys=True)
  File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_kubernetes.core.helm_payload_vvlcwp49/ansible_kubernetes.core.helm_payload.zip/ansible_collections/kubernetes/core/plugins/modules/helm.py", line 924, in <module>
  File "/tmp/ansible_kubernetes.core.helm_payload_vvlcwp49/ansible_kubernetes.core.helm_payload.zip/ansible_collections/kubernetes/core/plugins/modules/helm.py", line 740, in main
  File "/tmp/ansible_kubernetes.core.helm_payload_vvlcwp49/ansible_kubernetes.core.helm_payload.zip/ansible_collections/kubernetes/core/plugins/modules/helm.py", line 418, in get_release_status
  File "/tmp/ansible_kubernetes.core.helm_payload_vvlcwp49/ansible_kubernetes.core.helm_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/helm.py", line 169, in run_helm_command
  File "/tmp/ansible_kubernetes.core.helm_payload_vvlcwp49/ansible_kubernetes.core.helm_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/helm.py", line 97, in __getattr__
AttributeError: 'AnsibleModule' object has no attribute 'env_update'
fatal: [localhost]: FAILED! => changed=false
plegg-rh commented 1 year ago

helm version

version.BuildInfo{Version:"v3.7.1+7.el8", GitCommit:"8f33223fe17957f11ba7a88b016bc860f034c4e6", GitTreeState:"clean", GoVersion:"go1.16.7"}

PyYAML version

Name: PyYAML
Version: 6.0
Summary: YAML parser and emitter for Python
Home-page: https://pyyaml.org/
Author: Kirill Simonov
Author-email: xi@resolvent.net
License: MIT
Location: /usr/local/lib64/python3.6/site-packages
Requires:

gravesm commented 1 year ago

@plegg-rh I'm unable to reproduce this. Could you provide the role that's being included?

plegg-rh commented 1 year ago

@gravesm, thanks for the quick reply.

helm > tasks > main.yaml


---

- name: "Ensure repo ({{ helm.repo_name }}) is added to Helm for {{ helm.release_name }}"
  kubernetes.core.helm_repository:
    name: "{{ helm.repo_name }}"
    repo_url: "{{ helm.chart_repo_url }}"

- name: "Configure release {{ helm.release_name }} on {{ ocp_cluster.env }} Cluster"
  kubernetes.core.helm:
    api_key: "{{ login_info.openshift_auth.api_key }}"
    host: "https://api.{{ ocp_cluster.env }}.{{ install_config.base_domain }}:6443"
    validate_certs: false
    release_state: present
    chart_ref: "{{ helm.chart_ref }}"
    name: "{{ helm.release_name }}"
    release_namespace: "{{ helm.release_namespace }}"
    chart_repo_url: "{{ helm.chart_repo_url }}"
    chart_version: "{{ helm.chart_version }}"
    create_namespace: true
    update_repo_cache: true
    values_files: 
      - "{{ helm.chart_values }}"

helm > defaults > main.yaml

---
helm:
  release_name: cert-manager
  release_namespace: cert-manager
  chart_repo_url: https://charts.jetstack.io/
  chart_version: 1.8.2
  create_namespace: true
  update_repo_cache: true
  chart_values: "{{ playbook_dir }}/helm-values/cert-manager-values.yaml"
  repo_name: jetstack

playbook:

---
- name: Get host info for base plays
  hosts: localhost
  gather_facts: false

  vars_files:
    - charts.yaml

  tasks:

    - name: Login to cluster
      ansible.builtin.import_role:
        name: login_to_cluster

    - name: Deploy Helm Charts
      ansible.builtin.include_role:
        name: helm
      vars:
        helm:
          release_name: "{{ item.release_name }}"
          release_namespace: "{{ item.release_namespace }}"
          chart_repo_url: "{{ item.chart_repo_url }}" 
          chart_version: "{{ item.chart_version }}"
          chart_values: "{{ item.chart_values }}"
          repo_name: "{{ item.repo_name }}"
          chart_ref: "{{ item.chart_ref }}"
      loop: "{{ charts }}"
plegg-rh commented 1 year ago

I've tried again in a venv with the following packages, and I'm still getting the same result.

Package             Version
------------------- --------
attrs               22.2.0
cachetools          4.2.4
certifi             2023.5.7
charset-normalizer  2.0.12
google-auth         2.20.0
idna                3.4
importlib-metadata  4.8.3
jsonschema          3.2.0
kubernetes          26.1.0
kubernetes-validate 1.26.0
oauthlib            3.2.2
openshift           0.13.1
pip                 21.3.1
pyasn1              0.5.0
pyasn1-modules      0.3.0
pyrsistent          0.18.0
python-dateutil     2.8.2
python-string-utils 1.0.0
PyYAML              6.0
requests            2.27.1
requests-oauthlib   1.3.1
rsa                 4.9
setuptools          39.2.0
six                 1.16.0
typing_extensions   4.1.1
urllib3             1.26.16
websocket-client    1.3.1
zipp                3.6.0
gravesm commented 1 year ago

I tried again to reproduce this using python 3.9, ansible 2.13 and kubernetes.core 2.4. With the role you provided I'm able to successfully deploy multiple helm charts into openshift. One thing I noticed from the exception you posted is that the module is being executed with python 3.6, though you list python 3.9 in your ansible version output. I also tried reproducing with python 3.6 but it still works for me. It might be worth setting ansible_python_interpreter just to be absolutely certain which python interpreter and environment is being used.
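
For example, a minimal sketch of pinning the interpreter at the play level (the path below is an assumption; point it at whichever Python has the collection's dependencies installed):

- name: Get host info for base plays
  hosts: localhost
  gather_facts: false
  vars:
    # assumed path; use the interpreter that has the kubernetes/openshift packages installed
    ansible_python_interpreter: /usr/bin/python3.9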

plegg-rh commented 1 year ago

I was able to get it working by using the python3.6 interpreter on this machine.

I was also able to get the playbook to work in a ubi8 container with python3.9.

TL;DR: it looks like this is a machine-specific issue.
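
If anyone else hits this, a quick way to confirm which interpreter actually executes the module payload (a sketch; the task names are illustrative):

- name: Show the Python running the controller
  ansible.builtin.debug:
    var: ansible_playbook_python

- name: Show the Python that executes modules on this host
  ansible.builtin.command: >-
    {{ ansible_python_interpreter | default('/usr/bin/python3') }}
    -c "import sys; print(sys.executable, sys.version)"
  changed_when: false
  register: module_python

- name: Print the module-side interpreter details
  ansible.builtin.debug:
    var: module_python.stdout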

GuilhermeCamposo commented 2 months ago

I'm facing the same problem.

ansible [core 2.17.3]
  python version = 3.12.4 (main, Jun 7 2024, 00:00:00) [GCC 14.1.1 20240607 (Red Hat 14.1.1-5)] (/usr/bin/python3)
  jinja version = 3.1.4

Collection      Version
--------------- -------
kubernetes.core 5.0.0