openshift / openshift-ansible

Install and config an OpenShift 3.x cluster
https://try.openshift.com
Apache License 2.0

3.11 prerequisites.yml fails when openshift_node_labels happens to be set #11680

Closed: adelton closed this issue 4 years ago

adelton commented 5 years ago

Description

The release-3.11 README at https://github.com/openshift/openshift-ansible/blob/release-3.11/README.md says:

the old openshift_node_labels value is effectively ignored.

In 3.10, I was able to use an ansible inventory that set both openshift_node_labels and openshift_node_group_name. I run the same test code against various OpenShift versions, and if the value is indeed ignored, having it in the inventory should not cause any problem.

However, with 3.11, that started to fail.
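
For context, this is the style of [nodes] entry that worked under 3.10 (an abbreviated excerpt of the inventory attached under Additional Information below); the 3.11 sanity check rejects it because openshift_node_labels is still set, rather than ignoring it:

[nodes]
master.example.com openshift_schedulable=true openshift_node_labels="{ 'node-router': 'true', ... }" openshift_node_group_name='node-all-on-one'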

Version


ansible 2.6.17
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, May 20 2019, 12:21:26) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]

If you're operating from a git clone:

openshift-ansible-3.11.117-1-2-gfc9e800

If you're running from playbooks installed via RPM:

package openshift-ansible is not installed

Steps To Reproduce
  1. Run ansible-playbook -i ansible-inventory /path/to/openshift-ansible/playbooks/prerequisites.yml

Expected Results


The ansible-playbook should pass.

Observed Results


PLAY [Verify Requirements] *****************************************************

TASK [Run variable sanity checks] **********************************************
Monday 10 June 2019  12:27:26 -0400 (0:00:00.077)       0:00:05.694 *********** 
fatal: [master.example.com]: FAILED! => {"msg": "last_checked_host: master.example.com, last_checked_var: openshift_master_identity_providers;Found removed variables: openshift_node_labels is replaced by openshift_node_groups[<item>].labels; "}

PLAY RECAP *********************************************************************
master.example.com : ok=30   changed=0    unreachable=0    failed=1   
localhost                  : ok=11   changed=0    unreachable=0    failed=0   

Additional Information


RHEL 7.6

[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin
openshift_disable_check=memory_availability,docker_storage,disk_availability,package_version
openshift_repos_enable_testing=true
openshift_enable_excluders=false

osm_cluster_network_cidr=10.128.0.0/14
openshift_portal_net=172.30.0.0/16
osm_host_subnet_length=9

openshift_router_selector='node-router=true'
openshift_registry_selector='node-registry=true'
template_service_broker_selector={"node-app": "true"}
openshift_web_console_nodeselector={"node-app": "true"}

openshift_node_groups=[ { 'name': 'node-all-on-one', 'labels': [ 'node-router=true', 'node-registry=true', 'node-app=true', 'node-role.kubernetes.io/master=true', 'node-role.kubernetes.io/infra=true', 'node-role.kubernetes.io/compute=true' ] }, { 'name': 'node-router', 'labels': [ 'node-router=true', 'node-role.kubernetes.io/infra=true' ] }, { 'name': 'node-registry', 'labels': [ 'node-registry=true', 'node-role.kubernetes.io/infra=true' ] }, { 'name': 'node-app', 'labels': [ 'node-app=true', 'node-role.kubernetes.io/compute=true' ] }, { 'name': 'node-master', 'labels': [ 'node-role.kubernetes.io/master=true' ] } ]

openshift_master_identity_providers=[{'name': 'allow_all', 'login': 'true', 'challenge': 'true', 'kind': 'AllowAllPasswordIdentityProvider'}]

openshift_release="3.11"
[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com openshift_schedulable=true openshift_node_labels="{ 'node-router': 'true', 'node-registry': 'true', 'node-app': 'true', 'node-master': 'true' }" openshift_node_group_name='node-all-on-one'
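
For reference, the sanity check only objects to openshift_node_labels being set at all; a sketch of the same [nodes] entry with that variable dropped (the labels themselves are already carried by the node-all-on-one entry in openshift_node_groups above), which should get past this particular check:

[nodes]
master.example.com openshift_schedulable=true openshift_node_group_name='node-all-on-one'
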
openshift-bot commented 4 years ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-bot commented 4 years ago

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten /remove-lifecycle stale

openshift-bot commented 4 years ago

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen. Mark the issue as fresh by commenting /remove-lifecycle rotten. Exclude this issue from closing again by commenting /lifecycle frozen.

/close

openshift-ci-robot commented 4 years ago

@openshift-bot: Closing this issue.

In response to [this](https://github.com/openshift/openshift-ansible/issues/11680#issuecomment-667324344):

> Rotten issues close after 30d of inactivity.
>
> Reopen the issue by commenting `/reopen`.
> Mark the issue as fresh by commenting `/remove-lifecycle rotten`.
> Exclude this issue from closing again by commenting `/lifecycle frozen`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
Naveenca1611 commented 1 year ago

Hi, please help me troubleshoot this issue.

Installation guide I followed: https://devpress.csdn.net/cicd/62ec1b2b19c509286f416433.html

OpenShift version 3.11, CentOS 7

Error:

PLAY [Verify Requirements] *****

TASK [Gathering Facts] *****
ok: [10.11.95.82]

TASK [Run variable sanity checks] **
fatal: [10.11.95.82]: FAILED! => {"msg": "last_checked_host: 10.11.95.82, last_checked_var: openshift_master_identity_providers;Found removed variables: openshift_node_labels is replaced by openshift_node_groups[<item>].labels; "}

PLAY RECAP *****
10.11.95.81 : ok=30 changed=4 unreachable=0 failed=0 skipped=35 rescued=0 ignored=0
10.11.95.82 : ok=49 changed=4 unreachable=0 failed=1 skipped=46 rescued=0 ignored=0
10.11.95.84 : ok=30 changed=4 unreachable=0 failed=0 skipped=35 rescued=0 ignored=0
localhost : ok=8 changed=0 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0

Ansible inventory file:

[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=root
ansible_become=true
deployment_type=origin

[nodes:vars]
openshift_disable_check=disk_availability,memory_availability,docker_storage

[masters:vars]
openshift_disable_check=disk_availability,memory_availability,docker_storage

openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

containerized=true
openshift_release=3.11
openshift_image_tag=v3.11
openshift_public_hostname=10.11.95.82
openshift_master_default_subdomain=apps.10.11.95.82

# host group for masters
[masters]
10.11.95.82 openshift_node_group="{'region': 'infra', 'zone': 'default'}"

# host group for nodes, includes region info
[nodes]
10.11.95.82 openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
10.11.95.81 openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
10.11.95.84 openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
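
One possible way past the same sanity check for this inventory, sketched on the assumption that the default 3.11 node group names (node-config-master-infra, node-config-compute) fit this topology, is to drop openshift_node_labels from the [nodes] lines and select a group per host with openshift_node_group_name; custom region/zone labels would instead need an explicit openshift_node_groups definition like the one in the first inventory above:

[nodes]
10.11.95.82 openshift_node_group_name='node-config-master-infra'
10.11.95.81 openshift_node_group_name='node-config-compute'
10.11.95.84 openshift_node_group_name='node-config-compute'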