openshift / openshift-ansible

Install and config an OpenShift 3.x cluster
https://try.openshift.com
Apache License 2.0

ImportError: No module named yaml #1444

Closed simon3z closed 7 years ago

simon3z commented 8 years ago

If PyYAML is missing on the nodes openshift_facts fails with:

...
TASK: [openshift_facts | Gather Cluster facts and set is_containerized if needed] *** 
failed: [vm-18-239.eng.lab.tlv.redhat.com] => {"failed": true, "parsed": false}
Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-tmp-1455838900.0-240646320943626/openshift_facts", line 24, in <module>
    import yaml
ImportError: No module named yaml
OpenSSH_7.1p2, OpenSSL 1.0.2f-fips  28 Jan 2016
debug1: Reading configuration data /home/simon/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 56: Applying options for *
debug1: auto-mux: Trying existing master
debug1: mux_client_request_session: master session id: 2
Shared connection to vm-18-239.eng.lab.tlv.redhat.com closed.

failed: [vm-18-188.eng.lab.tlv.redhat.com] => {"failed": true, "parsed": false}
Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-tmp-1455838900.0-275097852138431/openshift_facts", line 24, in <module>
    import yaml
ImportError: No module named yaml
OpenSSH_7.1p2, OpenSSL 1.0.2f-fips  28 Jan 2016
debug1: Reading configuration data /home/simon/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 56: Applying options for *
debug1: auto-mux: Trying existing master
debug1: mux_client_request_session: master session id: 2
Shared connection to vm-18-188.eng.lab.tlv.redhat.com closed.

FATAL: all hosts have already failed -- aborting

This should probably be fixed by installing PyYAML before it is used.

cc @detiber

detiber commented 8 years ago

@simon3z is this an install on an atomic host?

The package should be installed for non-atomic hosts: https://github.com/openshift/openshift-ansible/blob/240c57525ba8a43286181c6b95518d509ae48a2a/roles/openshift_facts/tasks/main.yml#L20-L22
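For reference, a rough paraphrase of what a task like the one linked above looks like (a sketch only, not the exact contents of roles/openshift_facts/tasks/main.yml; the condition variable name here is assumed for illustration):

- name: Ensure PyYAML is installed
  package:
    name: PyYAML
    state: present
  # hypothetical condition name; the role uses its own containerized/atomic check
  when: not openshift_is_atomic | bool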

simon3z commented 8 years ago

@detiber is it possible that it's used before being installed? I think it's easy to reproduce: we just need to remove the package and test the playbook.

detiber commented 8 years ago

@simon3z it is not, but I'm wondering if you may be hitting the yum issue where the package isn't installed but is reported as installed. Can you replicate the issue after doing a yum update to the latest RHEL 7.2 packages on the hosts before installation?
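A quick way to check for that mismatch on an affected host (suggested commands, not from the original report) is to compare what rpm reports with what Python can actually import, then refresh the packages and retry:

$ rpm -q PyYAML
$ python -c 'import yaml; print(yaml.__file__)'
$ yum clean all && yum update -y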

abutcher commented 8 years ago

That issue is https://github.com/openshift/openshift-ansible/issues/1138

tbielawa commented 7 years ago

According to this comment, the issue should be fixed in a newer Ansible version:

https://github.com/ansible/ansible-modules-core/issues/3066#issuecomment-219048424

This issue has been inactive for quite some time. Please update and reopen this issue if this is still a priority you would like to see action on.

cooktheryan commented 7 years ago

@tbielawa I'm seeing this on Fedora 25 Cloud Image when running a containerized installation.

{"changed": false, "failed": true, "module_stderr": "Shared connection to ose-master01.rcook-aws.sysdeseng.com closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n  File \"/tmp/ansible_ZkmM3M/ansible_module_openshift_facts.py\", line 24, in <module>\r\n    import yaml\r\nImportError: No module named yaml\r\n", "msg": "MODULE FAILURE"}

ansible==2.2.0.0 installed through pip; openshift-ansible at commit 1deb6b06608e46f5c47bc127a148b89f6a12b63b.

Play failure TASK [openshift_facts : Gather Cluster facts and set is_containerized if needed] ***

tbielawa commented 7 years ago

@cooktheryan can you try an experiment for me?

$ ansible -i YOUR_HOSTS_PATH -a "python -c 'import yaml; print dir(yaml)'" masters:nodes
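Note that the print statement in that one-liner assumes a Python 2 interpreter on the remote hosts; on Python 3-only hosts a roughly equivalent check would be:

$ ansible -i YOUR_HOSTS_PATH -a "python3 -c 'import yaml; print(dir(yaml))'" masters:nodes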

detiber commented 7 years ago

@cooktheryan f25 uses python3 by default I think, which might be contributing to the problem here.

cooktheryan commented 7 years ago

The good part about the f25 image is the addition of the Python components that allow base Ansible playbooks to run, versus the f24 cloud image, which required a bit of work to get the host off the ground.

detiber commented 7 years ago

@dustymabe is this an issue that you have run into as well?

dustymabe commented 7 years ago

I think I did see this problem when trying to use python2. On Fedora 25 Atomic Host, this is the command I used to tell it to use python3 as the interpreter:

ansible-playbook -i myinventory playbooks/byo/config.yml -e 'ansible_python_interpreter=/usr/bin/python3'
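The same setting can also be persisted in the inventory instead of being passed with -e on every run; a minimal sketch, using the [OSEv3:vars] group that appears in the inventories in this thread:

[OSEv3:vars]
ansible_python_interpreter=/usr/bin/python3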

tbielawa commented 7 years ago

@dustymabe @cooktheryan @simon3z can any of you please try to reproduce this error again and let us know the results? I'd like to close this issue out, but only if we can no longer reproduce it.

simon3z commented 7 years ago

@tbielawa I haven't seen this since I reported it (at that time it was 100% reproducible with my env and VM images). I'll let you know if I run into it again.

dustymabe commented 7 years ago

I say we close this.

tbielawa commented 7 years ago

Two yeas. I'm closing this.

gtema commented 6 years ago

With Fedora 27 Atomic, without passing -e 'ansible_python_interpreter=/usr/bin/python3', the problem still exists.

bojleros commented 6 years ago

TASK [Run variable sanity checks] *****
Monday 26 February 2018  15:27:40 +0100 (0:00:00.044)  0:00:05.335 *
fatal: [192.168.122.201]: FAILED! => {"msg": "last_checked_host: 192.168.122.201, last_checked_var: ansible_python;openshift-ansible requires Python 3 for Fedora; For information on enabling Python 3 with Ansible, see https://docs.ansible.com/ansible/python_3_support.html"}

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
openshift_deployment_type=origin
openshift_release=v3.6.0
osm_cluster_network_cidr=10.128.0.0/14
openshift_portal_net=172.30.0.0/16
osm_host_subnet_length=9
openshift_disable_check=disk_availability,memory_availability
ansible_python_interpreter=/usr/bin/python3

[masters]
192.168.122.201

[etcd]
192.168.122.201

[nodes]
192.168.122.201 openshift_schedulable=true openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

I have it too on current Fedora 27 Atomic.

dustymabe commented 6 years ago

Without passing -e 'ansible_python_interpreter=/usr/bin/python3' the problem still exists

Yes. This is expected. Rather than include both the python2 and python3 versions of every required library, we only include the python3 version. You need to use python3.

openshift-ansible requires Python 3 for Fedora; For information on enabling Python 3 with Ansible

^^ this error message is helpful
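A quick sanity check on a Fedora Atomic host (assuming /usr/bin/python3 is present) is to confirm that the python3 build of PyYAML is importable:

$ python3 -c 'import yaml; print(yaml.__version__)'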

Iodun commented 6 years ago

I am trying to install OpenShift on Fedora 27 Atomic. With python2 I am getting the error discussed in this issue. But with python3 the "openshift-logging" playbook fails to run properly:

TASK [openshift_logging : Gather OpenShift Logging Facts] ***********************************************************************************************************************************************************************************************************************
fatal: [master.hellshift.net]: FAILED! => {"changed": false, "msg": "There was an exception trying to run the command '/usr/local/bin/oc get rolebindings logging-elasticsearch-view-role -n logging --user=system:admin/master-hellshift-net:3443 --config=/etc/origin/master/admin.kubeconfig -o json' a bytes-like object is required, not 'str'"}
    to retry, use: --limit @/root/openshift-ansible/playbooks/openshift-logging/config.retry

Any ideas?
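For context (an illustration only, not the openshift-ansible code itself): that message is the standard Python 3 TypeError raised when code written for Python 2 mixes bytes, such as raw subprocess output, with str:

output = b'{"items": []}'              # bytes, e.g. raw output of a subprocess call
'"items"' in output                     # TypeError: a bytes-like object is required, not 'str'
'"items"' in output.decode("utf-8")     # works once the bytes are decoded to str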

onknows commented 6 years ago

This problem can be reproduced easily with https://github.com/drhelius/terraform-azure-openshift.

YAML is a key component of Ansible. If Ansible cannot import the yaml module, that is a reason for concern. YAML works everywhere except in this one specific task?

debianmaster commented 6 years ago

Running into the same issue.