openshift / openshift-ansible

Install and config an OpenShift 3.x cluster
https://try.openshift.com
Apache License 2.0

Openshift Origin ansible-playbook Installation Failed. #6305

Closed garyyang6 closed 4 years ago

garyyang6 commented 6 years ago

Description

Per the OpenShift Origin installation guide, https://docs.openshift.org/latest/install_config/install/advanced_install.html, section "Running the RPM-based Installer", I ran the command:

ansible-playbook ~/openshift-ansible/playbooks/byo/config.yml

I got:

fatal: [docker01.works.com]: FAILED! => {
    "changed": false,
    "failed": true,
    "invocation": {
        "module_args": {
            "allow_downgrade": false,
            "conf_file": null,
            "disable_gpg_check": false,
            "disablerepo": null,
            "enablerepo": null,
            "exclude": null,
            "install_repoquery": true,
            "installroot": "/",
            "list": null,
            "name": ["origin-clients"],
            "security": false,
            "skip_broken": false,
            "state": "present",
            "update_cache": false,
            "validate_certs": true
        }
    },
    "msg": "Repository 'puppet-deps' is missing name in configuration, using id\nRepository base is listed more than once in the configuration\nRepository epel is listed more than once in the configuration\nRepository updates is listed more than once in the configuration\n\n\nTransaction check error:\n file /usr/bin/kubectl from install of origin-clients-3.6.1-1.0.008f2d5.x86_64 conflicts with file from package kubectl-1.8.4-0.x86_64\n\nError Summary\n-------------\n\n",
    "rc": 1,
    "results": ["Loaded plugins: fastestmirror, langpacks\nLoading mirror speeds from cached hostfile\n * base: mirrors.cmich.edu\n * extras: linux.mirrors.es.net\n * updates: mirror.compevo.com\nResolving Dependencies\n--> Running transaction check\n---> Package origin-clients.x86_64 0:3.6.1-1.0.008f2d5 will be installed\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n origin-clients x86_64 3.6.1-1.0.008f2d5 centos-openshift-origin 41 M\n\nTransaction Summary\n================================================================================\nInstall 1 Package\n\nTotal download size: 41 M\nInstalled size: 267 M\nDownloading packages:\nRunning transaction check\nRunning transaction test\n"]
}
to retry, use: --limit @/root/openshift-ansible/playbooks/byo/config.retry

PLAY RECAP **
localhost            : ok=11   changed=0   unreachable=0   failed=0
docker01.works.com   : ok=82   changed=3   unreachable=0   failed=1
docker02.works.com   : ok=96   changed=5   unreachable=0   failed=0

INSTALLER STATUS ****
Initialization    : Complete (0:02:21)
Health Check      : Complete (0:00:52)
etcd Install      : Complete (0:01:40)
Master Install    : In Progress (0:01:19)
        This phase can be restarted by running: playbooks/byo/openshift-master/config.yml

Failure summary:

  1. Hosts:    docker01.works.com
     Play:     Create OpenShift certificates for master hosts
     Task:     Install clients
     Message:  Repository 'puppet-deps' is missing name in configuration, using id
               Repository base is listed more than once in the configuration
               Repository epel is listed more than once in the configuration
               Repository updates is listed more than once in the configuration

           Transaction check error:
             file /usr/bin/kubectl from install of origin-clients-3.6.1-1.0.008f2d5.x86_64 conflicts with file from package kubectl-1.8.4-0.x86_64
    
           Error Summary
           -------------
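The transaction failure itself is a plain RPM file conflict: origin-clients wants to own /usr/bin/kubectl, which the already installed kubectl-1.8.4 package provides. One way past it, as a rough sketch and assuming the standalone kubectl 1.8.4 RPM is not needed on that host, is below; the duplicated base/epel/updates repo definitions and the unnamed 'puppet-deps' repo are only warnings, but cleaning them up under /etc/yum.repos.d/ removes the noise.

rpm -qf /usr/bin/kubectl      # confirm which package currently owns the file
yum remove -y kubectl         # drop the standalone kubectl so origin-clients can install
# then restart the failed phase, as suggested in the installer status above
ansible-playbook ~/openshift-ansible/playbooks/byo/openshift-master/config.yml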
Version

Please put the following version information in the code block indicated below.

cat /etc/ansible/hosts

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]

# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_become must be set to true
ansible_become=true

openshift_deployment_type=origin

openshift_disable_check=memory_availability,disk_availability
openshift_disable_check=docker_storage

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
docker01.works.com

# host group for etcd
[etcd]
docker02.works.com

# host group for nodes, includes region info
[nodes]
docker01.works.com
docker02.works.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
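Note that openshift_disable_check is assigned twice in this inventory; when the same variable is set twice in an INI inventory the later assignment wins, so only docker_storage actually ends up disabled. If the intent is to skip all three checks, a single merged line does it:

openshift_disable_check=memory_availability,disk_availability,docker_storage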

VERSION INFORMATION HERE PLEASE
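The version information requested by the template is usually the output of something like the following, run on the host where the playbook is executed:

ansible --version
cd ~/openshift-ansible && git describe    # or rpm -q openshift-ansible for an RPM install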
Steps To Reproduce
  1. Download the OpenShift Origin package. Create /etc/ansible/hosts.
  2. On docker01.works.com, as root, run the command: ansible-playbook ~/openshift-ansible/playbooks/byo/config.yml

Expected Results

OpenShift Origin should be installed.

Example command and output or error messages
 Message:  Repository 'puppet-deps' is missing name in configuration, using id
           Repository base is listed more than once in the configuration
           Repository epel is listed more than once in the configuration
           Repository updates is listed more than once in the configuration

           Transaction check error:
             file /usr/bin/kubectl from install of origin-clients-3.6.1-1.0.008f2d5.x86_64 conflicts with file from package kubectl-1.8.4-0.x86_64

For long output or logs, consider using a gist

Additional Information

Provide any additional information which may help us diagnose the issue.

EXTRA INFORMATION GOES HERE
aizuddin85 commented 6 years ago

Which Origin version are you trying to install? Your inventory is very minimal; a lot of information is missing.

Here is mine, proven to work; note that I use the containerized method to deploy OCP:

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
containerized=true
openshift_release=v3.6.1
openshift_image_tag=v3.6.1
openshift_public_hostname=master.devopshumans.com
openshift_master_default_subdomain=cloudapps.devopshumans.com
openshift_hosted_metrics_deploy=true
openshift_disable_check=docker_storage,memory_availability,disk_availability
openshift_master_overwrite_named_certificates=true
openshift_hosted_router_selector='region=infra'
enable_excluders=false
openshift_logging_kibana_hostname=logging.devopshumans.com
openshift_logging_es_cluster_size=1

[masters]
master.devopshumans.com openshift_schedulable=true

[etcd]
master.devopshumans.com

[nodes]
master.devopshumans.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true

My checked out branch:

[root@master openshift-ansible]# git status
# On branch release-3.6
nothing to commit, working directory clean
[root@master openshift-ansible]# git describe
openshift-ansible-3.6.173.0.82-1
[root@master openshift-ansible]# 
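For reference, lining the installer up with a target release is roughly:

cd ~/openshift-ansible          # path is an example; use your checkout location
git fetch origin
git checkout release-3.6        # or whichever release branch matches the version you want to install
git describe                    # should report a matching openshift-ansible-3.6.x tag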
kpritam commented 6 years ago

It seems like you are trying to install the latest OpenShift Origin. Here is my inventory.erb file, which works fine and installs Origin v3.8:

[OSEv3:children]
masters
nodes
etcd
nfs

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
containerized=true
openshift_release=v3.8.0
openshift_image_tag=v3.8.0-alpha.1
openshift_enable_excluders=false
openshift_public_hostname=openshift.tmt.org
openshift_master_default_subdomain=apps.openshift.tmt.org.xip.io
openshift_metrics_install_metrics=true
openshift_logging_install_logging=true
openshift_hosted_prometheus_deploy=true
openshift_disable_check=docker_image_availability,memory_availability,disk_availability,docker_storage,docker_storage_driver

openshift_metrics_hawkular_limits_memory=0.5G
openshift_metrics_hawkular_requests_memory=0.5G
openshift_metrics_heapster_limits_memory=0.5G
openshift_metrics_heapster_requests_memory=0.5G
openshift_logging_es_ops_memory_limit=1Gi

openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_nfs_directory=/exports
openshift_metrics_storage_nfs_options='*(rw,root_squash)'
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=1Gi

openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_nfs_directory=/exports
openshift_logging_storage_nfs_options='*(rw,root_squash)'
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=1Gi
openshift_logging_storage_labels={'storage': 'logging'}

openshift_prometheus_storage_kind=nfs
openshift_prometheus_storage_access_modes=['ReadWriteOnce']
openshift_prometheus_storage_nfs_directory=/exports
openshift_prometheus_storage_nfs_options='*(rw,root_squash)'
openshift_prometheus_storage_volume_name=prometheus
openshift_prometheus_storage_volume_size=1Gi
openshift_prometheus_storage_labels={'storage': 'prometheus'}
openshift_prometheus_storage_type='pvc'

openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=1Gi

openshift_hosted_etcd_storage_kind=nfs
openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)"
openshift_hosted_etcd_storage_nfs_directory=/opt/osev3-etcd
openshift_hosted_etcd_storage_volume_name=etcd-vol2
openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
openshift_hosted_etcd_storage_volume_size=1G
openshift_hosted_etcd_storage_labels={'storage': 'etcd'}

[masters]
openshift.tmt.org openshift_schedulable=true

[etcd]
openshift.tmt.org

[nfs]
openshift.tmt.org

[nodes]
openshift.tmt.org openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true
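With an inventory like this saved as, for example, /etc/ansible/hosts, the run is the usual:

ansible-playbook -i /etc/ansible/hosts ~/openshift-ansible/playbooks/byo/config.yml

(On newer openshift-ansible branches the entry point is playbooks/deploy_cluster.yml rather than playbooks/byo/config.yml.)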

nileshalhat commented 5 years ago

Still getting an error:

[root@node ~]# python3 $(which ansible-playbook) -i /etc/ansible/hosts /origin/openshift-ansible/playbooks/openshift-master/config.yml
[WARNING]: Could not match supplied host pattern, ignoring: oo_masters_to_config

[WARNING]: Could not match supplied host pattern, ignoring: oo_masters

PLAY [Initialization Checkpoint Start] **

TASK [Set install initialization 'In Progress'] ***** ok: [node.lab.example.com]

PLAY [Populate config host groups] **

TASK [Load group name mapping variables] **** ok: [localhost]

TASK [Evaluate groups - g_etcd_hosts or g_new_etcd_hosts required] ** skipping: [localhost]

TASK [Evaluate groups - g_master_hosts or g_new_master_hosts required] ** skipping: [localhost]

TASK [Evaluate groups - g_node_hosts or g_new_node_hosts required] ** skipping: [localhost]

TASK [Evaluate groups - g_lb_hosts required] **** skipping: [localhost]

TASK [Evaluate groups - g_nfs_hosts required] *** skipping: [localhost]

TASK [Evaluate groups - g_nfs_hosts is single host] ***** skipping: [localhost]

TASK [Evaluate groups - g_glusterfs_hosts required] ***** skipping: [localhost]

TASK [Evaluate groups - Fail if no etcd hosts group is defined] ***** skipping: [localhost]

TASK [Evaluate oo_all_hosts] **** ok: [localhost] => (item=node.lab.example.com) ok: [localhost] => (item=node1.lab.example.com) ok: [localhost] => (item=localhost-py3)

TASK [Evaluate oo_masters] ** ok: [localhost] => (item=node.lab.example.com)

TASK [Evaluate oo_first_master] ***** ok: [localhost]

TASK [Evaluate oo_new_etcd_to_config] ***

TASK [Evaluate oo_masters_to_config] **** ok: [localhost] => (item=node.lab.example.com)

TASK [Evaluate oo_etcd_to_config] *** ok: [localhost] => (item=node.lab.example.com)

TASK [Evaluate oo_first_etcd] *** ok: [localhost]

TASK [Evaluate oo_etcd_hosts_to_upgrade] **** ok: [localhost] => (item=node.lab.example.com)

TASK [Evaluate oo_etcd_hosts_to_backup] ***** ok: [localhost] => (item=node.lab.example.com)

TASK [Evaluate oo_nodes_to_config] ** ok: [localhost] => (item=node.lab.example.com) ok: [localhost] => (item=node1.lab.example.com) ok: [localhost] => (item=localhost-py3)

TASK [Add master to oo_nodes_to_config] ***** skipping: [localhost] => (item=node.lab.example.com)

TASK [Evaluate oo_lb_to_config] *****

TASK [Evaluate oo_nfs_to_config] ****

TASK [Evaluate oo_glusterfs_to_config] **

TASK [Evaluate oo_etcd_to_migrate] ** ok: [localhost] => (item=node.lab.example.com)
[WARNING]: Could not match supplied host pattern, ignoring: oo_lb_to_config

[WARNING]: Could not match supplied host pattern, ignoring: oo_nfs_to_config

PLAY [Ensure that all non-node hosts are accessible] ****

TASK [Gathering Facts] ** ok: [node.lab.example.com]

PLAY [Initialize basic host facts] **

TASK [Gathering Facts] ** ok: [node1.lab.example.com] ok: [node.lab.example.com] ok: [localhost-py3]

TASK [openshift_sanitize_inventory : include_tasks] ***** included: /origin/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations.yml for node.lab.example.com, node1.lab.example.com, localhost-py3

TASK [openshift_sanitize_inventory : Check for usage of deprecated variables] *** ok: [node.lab.example.com] ok: [node1.lab.example.com] ok: [localhost-py3]

TASK [openshift_sanitize_inventory : debug] ***** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [openshift_sanitize_inventory : set_stats] ***** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [openshift_sanitize_inventory : Assign deprecated variables to correct counterparts] *** included: /origin/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations_logging.yml for node.lab.example.com, node1.lab.example.com, localhost-py3 => (item=/origin/openshift-ansible/roles/openshift_sanitize_inventory/tasks/../tasks/__deprecations_logging.yml) included: /origin/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations_metrics.yml for node.lab.example.com, node1.lab.example.com, localhost-py3 => (item=/origin/openshift-ansible/roles/openshift_sanitize_inventory/tasks/../tasks/__deprecations_metrics.yml)

TASK [openshift_sanitize_inventory : conditional_set_fact] ** ok: [node.lab.example.com] ok: [node1.lab.example.com] ok: [localhost-py3]

TASK [openshift_sanitize_inventory : set_fact] ** ok: [node.lab.example.com] ok: [node1.lab.example.com] ok: [localhost-py3]

TASK [openshift_sanitize_inventory : conditional_set_fact] ** ok: [node.lab.example.com] ok: [node1.lab.example.com] ok: [localhost-py3]

TASK [openshift_sanitize_inventory : Standardize on latest variable names] ** ok: [node.lab.example.com] ok: [node1.lab.example.com] ok: [localhost-py3]

TASK [openshift_sanitize_inventory : Normalize openshift_release] *** ok: [node.lab.example.com] ok: [node1.lab.example.com] ok: [localhost-py3]

TASK [openshift_sanitize_inventory : Abort when openshift_release is invalid] *** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [openshift_sanitize_inventory : include_tasks] ***** included: /origin/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml for node.lab.example.com, node1.lab.example.com, localhost-py3

TASK [openshift_sanitize_inventory : Ensure that openshift_use_dnsmasq is true] ***** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [openshift_sanitize_inventory : Ensure that openshift_node_dnsmasq_install_network_manager_hook is true] *** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [openshift_sanitize_inventory : set_fact] **

TASK [openshift_sanitize_inventory : Ensure that dynamic provisioning is set if using dynamic storage] ** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] **** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] **** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [openshift_sanitize_inventory : Ensure clusterid is set along with the cloudprovider] ** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [openshift_sanitize_inventory : Ensure ansible_service_broker_remove and ansible_service_broker_install are mutually exclusive] **** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [openshift_sanitize_inventory : Ensure template_service_broker_remove and template_service_broker_install are mutually exclusive] ** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [openshift_sanitize_inventory : Ensure that all requires vsphere configuration variables are set] ** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [openshift_sanitize_inventory : Ensure removed web console extension variables are not set] **** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [openshift_sanitize_inventory : Ensure that web console port matches API server port] ** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [openshift_sanitize_inventory : At least one master is schedulable] **** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [Detecting Operating System from ostree_booted] **** ok: [node1.lab.example.com] ok: [localhost-py3] ok: [node.lab.example.com]

TASK [set openshift_deployment_type if unset] *** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [initialize_facts set fact openshift_is_atomic and openshift_is_containerized] ***** ok: [node.lab.example.com] ok: [node1.lab.example.com] ok: [localhost-py3]

TASK [Determine Atomic Host Docker Version] ***** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [assert atomic host docker version is 1.12 or later] *** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

PLAY [Initialize special first-master variables] ****

TASK [Gathering Facts] ** ok: [node.lab.example.com]

TASK [stat] ***** ok: [node.lab.example.com]

TASK [slurp] **** skipping: [node.lab.example.com]

TASK [set_fact] ***** skipping: [node.lab.example.com]

TASK [set_fact] ***** ok: [node.lab.example.com]

PLAY [Disable web console if required] **

TASK [set_fact] ***** skipping: [node.lab.example.com]

PLAY [Install packages necessary for installer] *****

TASK [Gathering Facts] ** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [Ensure openshift-ansible installer package deps are installed] **** skipping: [node.lab.example.com] => (item=iproute) skipping: [node.lab.example.com] => (item=python3-dbus) skipping: [node.lab.example.com] => (item=python3-PyYAML) skipping: [node.lab.example.com] => (item=) skipping: [node1.lab.example.com] => (item=iproute) skipping: [node.lab.example.com] => (item=yum-utils) skipping: [node1.lab.example.com] => (item=python3-dbus) skipping: [node1.lab.example.com] => (item=python3-PyYAML) skipping: [node1.lab.example.com] => (item=) skipping: [node1.lab.example.com] => (item=yum-utils) skipping: [localhost-py3] => (item=iproute) skipping: [localhost-py3] => (item=python3-dbus) skipping: [localhost-py3] => (item=python3-PyYAML) skipping: [localhost-py3] => (item=) skipping: [localhost-py3] => (item=yum-utils)

TASK [Ensure various deps for running system containers are installed] ** skipping: [node.lab.example.com] => (item=atomic) skipping: [node.lab.example.com] => (item=ostree) skipping: [node.lab.example.com] => (item=runc) skipping: [node1.lab.example.com] => (item=atomic) skipping: [node1.lab.example.com] => (item=ostree) skipping: [node1.lab.example.com] => (item=runc) skipping: [localhost-py3] => (item=atomic) skipping: [localhost-py3] => (item=ostree) skipping: [localhost-py3] => (item=runc)

PLAY [Initialize cluster facts] *****

TASK [Gathering Facts] ** ok: [node1.lab.example.com] ok: [node.lab.example.com] ok: [localhost-py3]

TASK [Gather Cluster facts] ***** ok: [node1.lab.example.com] ok: [localhost-py3] ok: [node.lab.example.com]

TASK [Set fact of no_proxy_internal_hostnames] ** skipping: [node.lab.example.com] skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [Initialize openshift.node.sdn_mtu] **** ok: [node1.lab.example.com] ok: [node.lab.example.com] ok: [localhost-py3]

PLAY [Initialize etcd host variables] ***

TASK [Gathering Facts] ** ok: [node.lab.example.com]

TASK [set_fact] ***** ok: [node.lab.example.com]

TASK [set_fact] ***** ok: [node.lab.example.com]

PLAY [Determine openshift_version to configure on first master] *****

TASK [Gathering Facts] ** ok: [node.lab.example.com]

TASK [include_role : openshift_version] *****

TASK [openshift_version : Use openshift.common.version fact as version to configure if already installed] *** skipping: [node.lab.example.com]

TASK [openshift_version : include_tasks] **** included: /origin/openshift-ansible/roles/openshift_version/tasks/first_master_containerized_version.yml for node.lab.example.com

TASK [openshift_version : Set containerized version to configure if openshift_image_tag specified] ** ok: [node.lab.example.com]

TASK [openshift_version : Set containerized version to configure if openshift_release specified] **** skipping: [node.lab.example.com]

TASK [openshift_version : Lookup latest containerized version if no version specified] ** skipping: [node.lab.example.com]

TASK [openshift_version : set_fact] ***** skipping: [node.lab.example.com]

TASK [openshift_version : Set precise containerized version to configure if openshift_release specified] **** skipping: [node.lab.example.com]

TASK [openshift_version : set_fact] ***** skipping: [node.lab.example.com]

TASK [openshift_version : set_fact] ***** ok: [node.lab.example.com]

TASK [openshift_version : debug] **** ok: [node.lab.example.com] => { "msg": "openshift_pkg_version was not defined. Falling back to -1.5.0" }

TASK [openshift_version : set_fact] ***** ok: [node.lab.example.com]

TASK [openshift_version : debug] **** skipping: [node.lab.example.com]

TASK [openshift_version : set_fact] ***** skipping: [node.lab.example.com]

TASK [openshift_version : debug] **** ok: [node.lab.example.com] => { "openshift_release": "1.5" }

TASK [openshift_version : debug] **** ok: [node.lab.example.com] => { "openshift_image_tag": "v1.5.0" }

TASK [openshift_version : debug] **** ok: [node.lab.example.com] => { "openshift_pkg_version": "-1.5.0" }

TASK [openshift_version : debug] **** ok: [node.lab.example.com] => { "openshift_version": "1.5.0" }

PLAY [Set openshift_version for etcd, node, and master hosts] ***

TASK [Gathering Facts] ** ok: [localhost-py3] ok: [node1.lab.example.com]

TASK [set_fact] ***** ok: [node1.lab.example.com] ok: [localhost-py3]

PLAY [Ensure the requested version packages are available.] *****

TASK [Gathering Facts] ** ok: [localhost-py3] ok: [node1.lab.example.com]

TASK [include_role : openshift_version] *****

TASK [openshift_version : Check openshift_version for rpm installation] ***** skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [openshift_version : Fail if rpm version and docker image version are different] *** skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [openshift_version : For an RPM install, abort when the release requested does not match the available version.] *** skipping: [node1.lab.example.com] skipping: [localhost-py3]

TASK [openshift_version : debug] **** ok: [node1.lab.example.com] => { "openshift_release": "1.5" } ok: [localhost-py3] => { "openshift_release": "1.5" }

TASK [openshift_version : debug] **** ok: [node1.lab.example.com] => { "openshift_image_tag": "v1.5.0" } ok: [localhost-py3] => { "openshift_image_tag": "v1.5.0" }

TASK [openshift_version : debug] **** ok: [node1.lab.example.com] => { "openshift_pkg_version": "-1.5.0" } ok: [localhost-py3] => { "openshift_pkg_version": "-1.5.0" }

TASK [openshift_version : debug] **** ok: [node1.lab.example.com] => { "openshift_version": "1.5.0" } ok: [localhost-py3] => { "openshift_version": "1.5.0" }

PLAY [Verify Requirements] **

TASK [Gathering Facts] ** ok: [node.lab.example.com]

TASK [Run variable sanity checks] *** ok: [node.lab.example.com]

PLAY [Initialization Checkpoint End] ****

TASK [Set install initialization 'Complete'] **** ok: [node.lab.example.com]

PLAY [Master Install Checkpoint Start] **

TASK [Set Master install 'In Progress'] ***** ok: [node.lab.example.com]

PLAY [Create OpenShift certificates for master hosts] ***

TASK [Gathering Facts] ** ok: [node.lab.example.com]

TASK [openshift_master_facts : Verify required variables are set] *** skipping: [node.lab.example.com]

TASK [openshift_master_facts : Set g_metrics_hostname] ** ok: [node.lab.example.com]

TASK [openshift_master_facts : set_fact] **** skipping: [node.lab.example.com]

TASK [openshift_master_facts : Set master facts] **** ok: [node.lab.example.com]

TASK [openshift_master_facts : Determine if scheduler config present] *** ok: [node.lab.example.com]

TASK [openshift_master_facts : Set Default scheduler predicates and priorities] ***** fatal: [node.lab.example.com]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'openshift_master_facts_default_predicates'. Error was a <class 'ansible.errors.AnsibleError'>, original message: Unknown short_version 1.5"} to retry, use: --limit @/origin/openshift-ansible/playbooks/openshift-master/config.retry

PLAY RECAP **
localhost               : ok=11   changed=0   unreachable=0   failed=0
localhost-py3           : ok=23   changed=0   unreachable=0   failed=0
node.lab.example.com    : ok=42   changed=0   unreachable=0   failed=1
node1.lab.example.com   : ok=23   changed=0   unreachable=0   failed=0

INSTALLER STATUS ****
Initialization   : Complete (0:00:18)
Master Install   : In Progress (0:00:02)
        This phase can be restarted by running: playbooks/openshift-master/config.yml
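The fatal task above fails because the run ended up configured for version 1.5 (see the openshift_version debug output: openshift_release 1.5, openshift_image_tag v1.5.0), and the openshift_master_facts_default_predicates lookup in this checkout does not know that short_version. Pinning the inventory to a release the checked-out openshift-ansible branch actually supports avoids that; a sketch, where the v3.9 values are only an example and should be replaced with whatever matches your branch:

openshift_deployment_type=origin
openshift_release=v3.9
openshift_image_tag=v3.9.0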

openshift-bot commented 4 years ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-bot commented 4 years ago

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten /remove-lifecycle stale

openshift-bot commented 4 years ago

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen. Mark the issue as fresh by commenting /remove-lifecycle rotten. Exclude this issue from closing again by commenting /lifecycle frozen.

/close

openshift-ci-robot commented 4 years ago

@openshift-bot: Closing this issue.

In response to [this](https://github.com/openshift/openshift-ansible/issues/6305#issuecomment-666201041): >Rotten issues close after 30d of inactivity. > >Reopen the issue by commenting `/reopen`. >Mark the issue as fresh by commenting `/remove-lifecycle rotten`. >Exclude this issue from closing again by commenting `/lifecycle frozen`. > >/close Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.