Closed: adelton's issue, closed 4 years ago
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
Hi, please help me troubleshoot this issue.
Installation guide referenced: https://devpress.csdn.net/cicd/62ec1b2b19c509286f416433.html
OpenShift version 3.11 on CentOS 7.
Error:
PLAY [Verify Requirements] *****

TASK [Gathering Facts] *****
ok: [10.11.95.82]

TASK [Run variable sanity checks] *****
fatal: [10.11.95.82]: FAILED! => {"msg": "last_checked_host: 10.11.95.82, last_checked_var: openshift_master_identity_providers; Found removed variables: openshift_node_labels is replaced by openshift_node_groups[

PLAY RECAP *****
10.11.95.81 : ok=30 changed=4 unreachable=0 failed=0 skipped=35 rescued=0 ignored=0
10.11.95.82 : ok=49 changed=4 unreachable=0 failed=1 skipped=46 rescued=0 ignored=0
10.11.95.84 : ok=30 changed=4 unreachable=0 failed=0 skipped=35 rescued=0 ignored=0
localhost : ok=8 changed=0 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=root
ansible_become=true
deployment_type=origin

[nodes:vars]
openshift_disable_check=disk_availability,memory_availability,docker_storage

[masters:vars]
openshift_disable_check=disk_availability,memory_availability,docker_storage
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
containerized=true
openshift_release=3.11
openshift_image_tag=v3.11
openshift_public_hostname=10.11.95.82
openshift_master_default_subdomain=apps.10.11.95.82

[masters]
10.11.95.82 openshift_node_group="{'region': 'infra', 'zone': 'default'}"
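
For context, the 3.11 sanity checks reject the removed per-host label variables; labels now come from node group definitions, and each host in [nodes] selects one with openshift_node_group_name. A minimal sketch of that style for the host above, assuming the stock node group names shipped with openshift-ansible 3.11 (node-config-master-infra combines the master and infra roles; this is not a complete inventory):

# Hypothetical 3.11-style node entry: labels come from the named group,
# not from inline variables. Custom groups can be defined via
# openshift_node_groups; node-config-master-infra is one of the defaults.
[nodes]
10.11.95.82 openshift_node_group_name="node-config-master-infra"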
Description
The https://github.com/openshift/openshift-ansible/blob/release-3.11/README.md says
In 3.10, I was able to use an ansible-inventory that used both openshift_node_labels and openshift_node_group_name. I run the same test code against various OpenShift versions, and if a value is ignored, there shouldn't be any problem having it there. However, with 3.11, that started to fail.
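
As an illustration (hostname hypothetical, based on the behavior described above), a host entry along these lines passed in 3.10 but now fails the Run variable sanity checks task in 3.11:

# 3.10: both variables could sit on the host line, the label one reportedly ignored
master.example.com openshift_node_group_name="node-config-master" openshift_node_labels="{'region': 'infra'}"

# 3.11: openshift_node_labels is reported as removed, so only the group name may stay
master.example.com openshift_node_group_name="node-config-master"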
Version
Please put the following version information in the code block indicated below.
ansible --version
If you're operating from a git clone:
git describe
If you're running from playbooks installed via RPM:
rpm -q openshift-ansible
Steps To Reproduce
ansible-playbook -i ansible-inventory /path/to/openshift-ansible/playbooks/prerequisites.yml
Expected Results
Describe what you expected to happen.
The ansible-playbook should pass.
Observed Results
Describe what is actually happening.
Additional Information
Provide any additional information which may help us diagnose the issue.
$ cat /etc/redhat-release
RHEL 7.6