Open Danmanny opened 3 months ago
Here is my awx.yaml. I have my own Harbor registry housing all the images.
```yaml
AWX:
  enabled: true
  name: awx
  spec:
    admin_user: admin
    image: harbor.xx/ansible/awx
    image_version: "23.1.0"
    postgres_image: harbor.xx/library/postgres
    postgres_image_version: "13"
    init_container_image: harbor.xx/ansible/awx-ee
    init_container_image_version: "23.1.04"
    ee_images:
```
I've also tried to helm install awx-operator straight from the actual Git repo, with the same result. Am I missing something in the cleanup?
I've deleted the CRDs, RoleBindings, and ClusterRoleBindings; I'm at a loss. I've deployed this same exact version in my closed environment and it works great.
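For reference, a sketch of the cleanup described above, assuming the chart's default names (release `awx-operator`, namespace `awx`) and the operator's standard CRDs. The commands are printed rather than executed so they can be reviewed first; pipe the output to `sh` to actually run them.

```shell
#!/bin/sh
# Sketch of a full awx-operator cleanup before reinstalling.
# Assumptions: release "awx-operator", namespace "awx", default CRD names.
# Commands are echoed, not run, so you can review them first.

cleanup_commands() {
  echo "helm uninstall awx-operator --namespace awx"
  # helm uninstall does not remove CRDs; delete them explicitly
  for crd in awxs.awx.ansible.com awxbackups.awx.ansible.com awxrestores.awx.ansible.com; do
    echo "kubectl delete crd $crd"
  done
  # list any cluster-scoped RBAC left behind by a previous install
  echo "kubectl get clusterrole,clusterrolebinding -o name | grep -i awx"
}

cleanup_commands
```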
Also, here are some logs from the awx-operator pod:
```
bash-4.4$ ansible-playbook main.yml -vvv
ansible-playbook 2.9.27
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/usr/share/ansible/openshift']
  ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.8.13 (default, Jun 14 2022, 17:49:07) [GCC 8.5.0 20210514 (Red Hat 8.5.0-13)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
ERROR! 'k8s_info' is not a valid attribute for a Play

The error appears to be in '/opt/ansible/roles/installer/tasks/main.yml': line 2, column 3, but may be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:
```
Do you still have these same issues with more recent versions of awx-operator?
@fosterseth
Tried it with awx-operator 2.8.0 and 2.14.0; 2.14.0 worked perfectly, I just had to set the fsGroup for the postgres pod.
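For anyone hitting the same postgres permission issue, the fsGroup workaround can be expressed in the chart values. A minimal sketch, assuming the `postgres_security_context_settings` spec field available in recent awx-operator releases; the group ID 26 is illustrative and must match the group your postgres image actually runs as:

```yaml
AWX:
  enabled: true
  name: awx
  spec:
    # Hypothetical values: check your postgres image's UID/GID
    postgres_security_context_settings:
      fsGroup: 26
```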
Please confirm the following
Bug Summary
Good afternoon,
I'm trying to helm install awx-operator version 2.5.3 on my k8s clusters running v1.27.10+rke2r.
AWX Operator version
2.5.3
AWX version
23.1.0
Kubernetes platform
kubernetes
Kubernetes/Platform version
rke2
Modifications
no
Steps to reproduce
Not sure if KubeVirt would have any issues with this; I've seen one post, from about a year ago, about KubeVirt messing up the API version. But my cluster is bare bones: it had an Argo CD-deployed AWX on it, but that was removed. I deleted all the CRDs and cluster roles; there is nothing left on the cluster.
Expected results
I expected AWX to install; I've done this before on another cluster.
Actual results
The operator just cycles a few times and stops.
Additional information
These are the steps I took to get where I'm at:
1. `helm fetch awx-operator/awx-operator --version 2.5.3`
2. Put the .tgz on the node
3. Put values.yaml on the node
4. `helm install awx-operator --create-namespace --namespace awx ./awx-operator-2.5.3.tgz -f awx.yaml`
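The steps above can be collected into a small script. A sketch with a `DRY_RUN` guard (on by default) so the commands are only printed until you're ready to run them; the `scp` target node name is hypothetical and assumes the `awx-operator` helm repo is already added:

```shell
#!/bin/sh
# Sketch of the install steps from this issue, collected into one script.
# DRY_RUN defaults to 1 so the script only prints the commands;
# set DRY_RUN= (empty) to actually execute them.
DRY_RUN=${DRY_RUN-1}

run() {
  echo "+ $*"
  [ -n "$DRY_RUN" ] || "$@"
}

install_awx() {
  # 1) Pull the chart locally (assumes the awx-operator helm repo is added)
  run helm fetch awx-operator/awx-operator --version 2.5.3
  # 2/3) Copy the .tgz and the values file to the target node
  #      ("node" is a hypothetical hostname)
  run scp awx-operator-2.5.3.tgz awx.yaml node:/tmp/
  # 4) Install from the local chart archive with the custom values
  run helm install awx-operator --create-namespace --namespace awx \
    ./awx-operator-2.5.3.tgz -f awx.yaml
}

install_awx
```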
Operator Logs
This is what I get from the awx-operator logs
```
--------------------------- Ansible Task StdOut -------------------------------
2024-04-04T12:38:00.279469258-04:00
2024-04-04T12:38:00.279477705-04:00 TASK [Check for presence of awx-task Deployment] ****
2024-04-04T12:38:00.279480612-04:00 An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: too many values to unpack (expected 2)
2024-04-04T12:38:00.279501174-04:00 fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"/opt/ansible/.ansible/tmp/ansible-tmp-1712248679.6658406-808-85479366350307/AnsiballZ_k8s_info.py\", line 102, in\n    _ansiballz_main()\n  File \"/opt/ansible/.ansible/tmp/ansible-tmp-1712248679.6658406-808-85479366350307/AnsiballZ_k8s_info.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/opt/ansible/.ansible/tmp/ansible-tmp-1712248679.6658406-808-85479366350307/AnsiballZ_k8s_info.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.kubernetes.core.plugins.modules.k8s_info', init_globals=None, run_name='main', alter_sys=True)\n  File \"/usr/lib64/python3.8/runpy.py\", line 207, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib64/python3.8/runpy.py\", line 97, in _run_module_code\n    _run_code(code, mod_globals, init_globals,\n  File \"/usr/lib64/python3.8/runpy.py\", line 87, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_k8s_info_payload_05wsq_9a/ansible_k8s_info_payload.zip/ansible_collections/kubernetes/core/plugins/modules/k8s_info.py\", line 217, in \n  File \"/tmp/ansible_k8s_info_payload_05wsq_9a/ansible_k8s_info_payload.zip/ansible_collections/kubernetes/core/plugins/modules/k8s_info.py\", line 211, in main\n  File \"/tmp/ansible_k8s_info_payload_05wsq_9a/ansible_k8s_info_payload.zip/ansible_collections/kubernetes/core/plugins/modules
```