openshift / openshift-ansible

Install and config an OpenShift 3.x cluster
https://try.openshift.com
Apache License 2.0

OSCP 3.3 - installer fails when creating volumes #2553

Closed: raffaelespazzoli closed this issue 6 years ago

raffaelespazzoli commented 8 years ago

The installer fails with the following output:

PLAY [Create persistent volumes] ***********************************************

TASK [setup] *******************************************************************
ok: [master1.c.openshift-enablement-exam.internal]

TASK [openshift_facts : Detecting Operating System] ****************************
fatal: [master1.c.openshift-enablement-exam.internal]: FAILED! => {"failed": true, "msg": "The conditional check 'persistent_volumes | length > 0 or persistent_volume_claims | length > 0' failed. The error was: '{{ hostvars[groups.oo_first_master.0] | oo_persistent_volumes(groups) }}: create_pv'"}

Version

atomic-openshift-utils-3.3.28-1.git.0.762256b.el7.noarch
openshift-ansible-3.3.28-1.git.0.762256b.el7.noarch
ansible 2.2.0

All the nodes and the ansible host are running: Linux ose-bastion 3.10.0-327.36.1.el7.x86_64 #1 SMP Wed Aug 17 03:02:37 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux

I'm installing on Google Cloud Platform.

Steps To Reproduce

ansible-playbook -v -i hosts /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml

The hosts file is provided below.

Current Result

The error shown above.

Expected Result

A complete installation.

Additional Information

# This is an example of a bring your own (byo) host inventory

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd
nfs

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a
# password. If using ssh key based auth, then the key should be managed by an
# ssh agent.
ansible_ssh_user=rspazzol

# If ansible_ssh_user is not root, ansible_become must be set to true and the
# user must be configured for passwordless sudo
ansible_become=yes

# Debug level for all OpenShift components (Defaults to 2)
debug_level=2

# deployment type valid values are origin, online, atomic-enterprise, and openshift-enterprise
deployment_type=openshift-enterprise

# Specify the generic release of OpenShift to install. This is used mainly just during installation, after which we
# rely on the version running on the first master. Works best for containerized installs where we can usually
# use this to lookup the latest exact version of the container images, which is the tag actually used to configure
# the cluster. For RPM installations we just verify the version detected in your configured repos matches this
# release.
openshift_release=v3.3

# Specify an exact container image tag to install or configure.
# WARNING: This value will be used for all hosts in containerized environments, even those that have another version installed.
# This could potentially trigger an upgrade and downtime, so be careful with modifying this value after the cluster is set up.
#openshift_image_tag=v3.2.0.46

# Specify an exact rpm version to install or configure.
# WARNING: This value will be used for all hosts in RPM based environments, even those that have another version installed.
# This could potentially trigger an upgrade and downtime, so be careful with modifying this value after the cluster is set up.
#openshift_pkg_version=-3.2.0.46

# Install the openshift examples
#openshift_install_examples=true

# Configure logoutURL in the master config for console customization
# See: https://docs.openshift.org/latest/install_config/web_console_customization.html#changing-the-logout-url
#openshift_master_logout_url=http://example.com

# Configure extensionScripts in the master config for console customization
# See: https://docs.openshift.org/latest/install_config/web_console_customization.html#loading-custom-scripts-and-stylesheets
#openshift_master_extension_scripts=['/path/to/script1.js','/path/to/script2.js']

# Configure extensionStylesheets in the master config for console customization
# See: https://docs.openshift.org/latest/install_config/web_console_customization.html#loading-custom-scripts-and-stylesheets
#openshift_master_extension_stylesheets=['/path/to/stylesheet1.css','/path/to/stylesheet2.css']

# Configure extensions in the master config for console customization
# See: https://docs.openshift.org/latest/install_config/web_console_customization.html#serving-static-files
#openshift_master_extensions=[{'name': 'images', 'sourceDirectory': '/path/to/my_images'}]

# Configure the OAuth login page template in the master config
# See: https://docs.openshift.org/latest/install_config/web_console_customization.html#serving-static-files
#openshift_master_oauth_template=/path/to/login-template.html

# Configure imagePolicyConfig in the master config
# See: https://godoc.org/github.com/openshift/origin/pkg/cmd/server/api#ImagePolicyConfig
#openshift_master_image_policy_config={"maxImagesBulkImportedPerRepository": 3, "disableScheduledImport": true}

# Docker Configuration
# Add additional, insecure, and blocked registries to global docker configuration
# For enterprise deployment types we ensure that registry.access.redhat.com is
# included if you do not include it
#openshift_docker_additional_registries=registry.example.com
#openshift_docker_insecure_registries=registry.example.com
#openshift_docker_blocked_registries=registry.hacker.com
# Disable pushing to dockerhub
#openshift_docker_disable_push_dockerhub=True
# Items added, as is, to end of /etc/sysconfig/docker OPTIONS
# Default value: "--log-driver=json-file --log-opt max-size=50m"
#openshift_docker_options="-l warn --ipv6=false"

# Specify exact version of Docker to configure or upgrade to.
# Downgrades are not supported and will error out. Be careful when upgrading docker from < 1.10 to > 1.10.
# docker_version="1.10.3"

# Skip upgrading Docker during an OpenShift upgrade, leaves the current Docker version alone.
# docker_upgrade=False

# Alternate image format string, useful if you've got your own registry mirror
#oreg_url=example.com/openshift3/ose-${component}:${version}
# If oreg_url points to a registry other than registry.access.redhat.com we can
# modify image streams to point at that registry by setting the following to true
#openshift_examples_modify_imagestreams=true

# Additional yum repos to install
#openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://example.com/puddle/build/AtomicOpenShift/3.1/latest/RH7-RHOSE-3.0/$basearch/os', 'enabled': 1, 'gpgcheck': 0}]

# htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
# Defining htpasswd users
#openshift_master_htpasswd_users={'user1': '<pre-hashed password>', 'user2': '<pre-hashed password>'}
# or
#openshift_master_htpasswd_file=<path to local pre-generated htpasswd file>

# Allow all auth
#openshift_master_identity_providers=[{'name': 'allow_all', 'login': 'true', 'challenge': 'true', 'kind': 'AllowAllPasswordIdentityProvider'}]

# LDAP auth
#openshift_master_identity_providers=[{'name': 'my_ldap_provider', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': '', 'bindPassword': '', 'ca': '', 'insecure': 'false', 'url': 'ldap://ldap.example.com:389/ou=users,dc=example,dc=com?uid'}]
# Configuring the ldap ca certificate
#openshift_master_ldap_ca=<ca text>
# or
#openshift_master_ldap_ca_file=<path to local ca file to use>

# Available variables for configuring certificates for other identity providers:
#openshift_master_openid_ca
#openshift_master_openid_ca_file
#openshift_master_request_header_ca
#openshift_master_request_header_ca_file

# Cloud Provider Configuration
#
# Note: You may make use of environment variables rather than store
# sensitive configuration within the ansible inventory.
# For example:
#openshift_cloudprovider_aws_access_key="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
#openshift_cloudprovider_aws_secret_key="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
#
# AWS
#openshift_cloudprovider_kind=aws
# Note: IAM profiles may be used instead of storing API credentials on disk.
#openshift_cloudprovider_aws_access_key=aws_access_key_id
#openshift_cloudprovider_aws_secret_key=aws_secret_access_key
#
# Openstack
#openshift_cloudprovider_kind=openstack
#openshift_cloudprovider_openstack_auth_url=http://openstack.example.com:35357/v2.0/
#openshift_cloudprovider_openstack_username=username
#openshift_cloudprovider_openstack_password=password
#openshift_cloudprovider_openstack_domain_id=domain_id
#openshift_cloudprovider_openstack_domain_name=domain_name
#openshift_cloudprovider_openstack_tenant_id=tenant_id
#openshift_cloudprovider_openstack_tenant_name=tenant_name
#openshift_cloudprovider_openstack_region=region
#openshift_cloudprovider_openstack_lb_subnet_id=subnet_id
#
# GCE
openshift_cloudprovider_kind=gce

# Project Configuration
#osm_project_request_message=''
#osm_project_request_template=''
#osm_mcs_allocator_range='s0:/2'
#osm_mcs_labels_per_project=5
#osm_uid_allocator_range='1000000000-1999999999/10000'

# Configure additional projects
#openshift_additional_projects={'my-project': {'default_node_selector': 'label=value'}}

# Enable cockpit
osm_use_cockpit=true
#
# Set cockpit plugins
osm_cockpit_plugins=['cockpit-kubernetes']

# Native high availability cluster method with optional load balancer.
# If no lb group is defined, the installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=master.10.128.0.10.xip.io
openshift_master_cluster_public_hostname=master.104.197.199.131.xip.io

# Pacemaker high availability cluster method.
# Pacemaker HA environment must be able to self provision the
# configured VIP. For installation openshift_master_cluster_hostname
# must resolve to the configured VIP.
#openshift_master_cluster_method=pacemaker
#openshift_master_cluster_password=openshift_cluster
#openshift_master_cluster_vip=192.168.133.25
#openshift_master_cluster_public_vip=192.168.133.25
#openshift_master_cluster_hostname=openshift-ansible.test.example.com
#openshift_master_cluster_public_hostname=openshift-ansible.test.example.com

# Override the default controller lease ttl
#osm_controller_lease_ttl=30

# Configure controller arguments
#osm_controller_args={'resource-quota-sync-period': ['10s']}

# Configure api server arguments
#osm_api_server_args={'max-requests-inflight': ['400']}

# default subdomain to use for exposed routes
openshift_master_default_subdomain=apps.104.198.35.122.xip.io

# additional cors origins
#osm_custom_cors_origins=['foo.example.com', 'bar.example.com']

# default project node selector
osm_default_node_selector='region=primary'

# Override the default pod eviction timeout
#openshift_master_pod_eviction_timeout=5m

# Override the default oauth tokenConfig settings:
# openshift_master_access_token_max_seconds=86400
# openshift_master_auth_token_max_seconds=500

# Override master servingInfo.maxRequestsInFlight
#openshift_master_max_requests_inflight=500

# default storage plugin dependencies to install, by default the ceph and
# glusterfs plugin dependencies will be installed, if available.
#osn_storage_plugin_deps=['ceph','glusterfs']

# OpenShift Router Options
#
# An OpenShift router will be created during install if there are
# nodes present with labels matching the default router selector,
# "region=infra". Set openshift_node_labels per node as needed in
# order to label nodes.
#
# Example:
# [nodes]
# node.example.com openshift_node_labels="{'region': 'infra'}"
#
# Router selector (optional)
# Router will only be created if nodes matching this label are present.
# Default value: 'region=infra'
openshift_hosted_router_selector='region=infra'
#
# Router replicas (optional)
# Unless specified, openshift-ansible will calculate the replica count
# based on the number of nodes matching the openshift router selector.
#openshift_hosted_router_replicas=2
#
# Router force subdomain (optional)
# A router path format to force on all routes used by this router
# (will ignore the route host value)
#openshift_hosted_router_force_subdomain='${name}-${namespace}.apps.example.com'
#
# Router certificate (optional)
# Provide local certificate paths which will be configured as the
# router's default certificate.
#openshift_hosted_router_certificate={"certfile": "/path/to/router.crt", "keyfile": "/path/to/router.key", "cafile": "/path/to/router-ca.crt"}
#
# Disable management of the OpenShift Router
#openshift_hosted_manage_router=false

# Openshift Registry Options
#
# An OpenShift registry will be created during install if there are
# nodes present with labels matching the default registry selector,
# "region=infra". Set openshift_node_labels per node as needed in
# order to label nodes.
#
# Example:
# [nodes]
# node.example.com openshift_node_labels="{'region': 'infra'}"
#
# Registry selector (optional)
# Registry will only be created if nodes matching this label are present.
# Default value: 'region=infra'
openshift_hosted_registry_selector='region=infra'
#
# Registry replicas (optional)
# Unless specified, openshift-ansible will calculate the replica count
# based on the number of nodes matching the openshift registry selector.
#openshift_hosted_registry_replicas=2
#
# Disable management of the OpenShift Registry
#openshift_hosted_manage_registry=false

# Registry Storage Options
#
# NFS Host Group
# An NFS volume will be created with path "nfs_directory/volume_name"
# on the host within the [nfs] host group.  For example, the volume
# path using these options would be "/exports/registry"
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
#
# External NFS Host
# NFS volume must already exist with path "nfs_directory/volume_name" on
# the storage_host. For example, the remote volume path using these
# options would be "nfs.example.com:/exports/registry"
#openshift_hosted_registry_storage_kind=nfs
#openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
#openshift_hosted_registry_storage_host=nfs.example.com
#openshift_hosted_registry_storage_nfs_directory=/exports
#openshift_hosted_registry_storage_volume_name=registry
#openshift_hosted_registry_storage_volume_size=10Gi
#
# Openstack
# Volume must already exist.
#openshift_hosted_registry_storage_kind=openstack
#openshift_hosted_registry_storage_access_modes=['ReadWriteOnce']
#openshift_hosted_registry_storage_openstack_filesystem=ext4
#openshift_hosted_registry_storage_openstack_volumeID=3a650b4f-c8c5-4e0a-8ca5-eaee11f16c57
#openshift_hosted_registry_storage_volume_size=10Gi
#
# AWS S3
# S3 bucket must already exist.
#openshift_hosted_registry_storage_kind=object
#openshift_hosted_registry_storage_provider=s3
#openshift_hosted_registry_storage_s3_accesskey=aws_access_key_id
#openshift_hosted_registry_storage_s3_secretkey=aws_secret_access_key
#openshift_hosted_registry_storage_s3_bucket=bucket_name
#openshift_hosted_registry_storage_s3_region=bucket_region
#openshift_hosted_registry_storage_s3_chunksize=26214400
#openshift_hosted_registry_storage_s3_rootdirectory=/registry
#openshift_hosted_registry_pullthrough=true
#openshift_hosted_registry_acceptschema2=true
#openshift_hosted_registry_enforcequota=true

# Metrics deployment
# See: https://docs.openshift.com/enterprise/latest/install_config/cluster_metrics.html
#
# By default metrics are not automatically deployed, set this to enable them
openshift_hosted_metrics_deploy=true
#
# Storage Options
# If openshift_hosted_metrics_storage_kind is unset then metrics will be stored
# in an EmptyDir volume and will be deleted when the cassandra pod terminates.
# Storage options A & B currently support only one cassandra pod which is
# generally enough for up to 1000 pods. Additional volumes can be created
# manually after the fact and metrics scaled per the docs.
#
# Option A - NFS Host Group
# An NFS volume will be created with path "nfs_directory/volume_name"
# on the host within the [nfs] host group.  For example, the volume
# path using these options would be "/exports/metrics"
#openshift_hosted_metrics_storage_kind=nfs
#openshift_hosted_metrics_storage_access_modes=['ReadWriteOnce']
#openshift_hosted_metrics_storage_nfs_directory=/exports
#openshift_hosted_metrics_storage_nfs_options='*(rw,root_squash)'
#openshift_hosted_metrics_storage_volume_name=metrics
#openshift_hosted_metrics_storage_volume_size=10Gi
#
# Option B - External NFS Host
# NFS volume must already exist with path "nfs_directory/volume_name" on
# the storage_host. For example, the remote volume path using these
# options would be "nfs.example.com:/exports/metrics"
#openshift_hosted_metrics_storage_kind=nfs
#openshift_hosted_metrics_storage_access_modes=['ReadWriteOnce']
#openshift_hosted_metrics_storage_host=nfs.example.com
#openshift_hosted_metrics_storage_nfs_directory=/exports
#openshift_hosted_metrics_storage_volume_name=metrics
#openshift_hosted_metrics_storage_volume_size=10Gi
#
# Option C - Dynamic -- If openshift supports dynamic volume provisioning for
# your cloud platform use this.
openshift_hosted_metrics_storage_kind=dynamic
#
# Override metricsPublicURL in the master config for cluster metrics
# Defaults to https://hawkular-metrics.{{openshift_master_default_subdomain}}/hawkular/metrics
# Currently, you may only alter the hostname portion of the url; altering the
# `/hawkular/metrics` path will break installation of metrics.
#openshift_hosted_metrics_public_url=https://hawkular-metrics.example.com/hawkular/metrics

# Logging deployment
#
# Currently logging deployment is disabled by default, enable it by setting this
openshift_hosted_logging_deploy=true
#
# Logging storage config
# Option A - NFS Host Group
# An NFS volume will be created with path "nfs_directory/volume_name"
# on the host within the [nfs] host group.  For example, the volume
# path using these options would be "/exports/logging"
#openshift_hosted_logging_storage_kind=nfs
#openshift_hosted_logging_storage_access_modes=['ReadWriteOnce']
#openshift_hosted_logging_storage_nfs_directory=/exports
#openshift_hosted_logging_storage_nfs_options='*(rw,root_squash)'
#openshift_hosted_logging_storage_volume_name=logging
#openshift_hosted_logging_storage_volume_size=10Gi
#
# Option B - External NFS Host
# NFS volume must already exist with path "nfs_directory/volume_name" on
# the storage_host. For example, the remote volume path using these
# options would be "nfs.example.com:/exports/logging"
#openshift_hosted_logging_storage_kind=nfs
#openshift_hosted_logging_storage_access_modes=['ReadWriteOnce']
#openshift_hosted_logging_storage_host=nfs.example.com
#openshift_hosted_logging_storage_nfs_directory=/exports
#openshift_hosted_logging_storage_volume_name=logging
#openshift_hosted_logging_storage_volume_size=10Gi
#
# Option C - Dynamic -- If openshift supports dynamic volume provisioning for
# your cloud platform use this.
openshift_hosted_logging_storage_kind=dynamic
#
# Option D - none -- Logging will use emptydir volumes which are destroyed when
# pods are deleted
#
# Other Logging Options -- Common items you may wish to reconfigure, for the complete
# list of options please see roles/openshift_hosted_logging/README.md
#
# Configure loggingPublicURL in the master config for aggregate logging, defaults
# to https://kibana.{{ openshift_master_default_subdomain }}
#openshift_master_logging_public_url=https://kibana.example.com
# Configure the number of elastic search nodes, unless you're using dynamic provisioning
# this value must be 1
#openshift_hosted_logging_elasticsearch_cluster_size=1
#openshift_hosted_logging_hostname=logging.apps.example.com
# Configure the prefix and version for the deployer image
#openshift_hosted_logging_deployer_prefix=registry.example.com:8888/openshift3/
#openshift_hosted_logging_deployer_version=3.3.0

# Configure the multi-tenant SDN plugin (default is 'redhat/openshift-ovs-subnet')
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'

# Disable the OpenShift SDN plugin
# openshift_use_openshift_sdn=False

# Configure SDN cluster network and kubernetes service CIDR blocks. These
# network blocks should be private and should not conflict with network blocks
# in your infrastructure that pods may require access to. Can not be changed
# after deployment.
#osm_cluster_network_cidr=10.1.0.0/16
#openshift_portal_net=172.30.0.0/16

# ExternalIPNetworkCIDRs controls what values are acceptable for the
# service external IP field. If empty, no externalIP may be set. It
# may contain a list of CIDRs which are checked for access. If a CIDR
# is prefixed with !, IPs in that CIDR will be rejected. Rejections
# will be applied first, then the IP checked against one of the
# allowed CIDRs. You should ensure this range does not overlap with
# your nodes, pods, or service CIDRs for security reasons.
#openshift_master_external_ip_network_cidrs=['0.0.0.0/0']

# Configure number of bits to allocate to each host’s subnet e.g. 8
# would mean a /24 network on the host.
#osm_host_subnet_length=8

# Configure master API and console ports.
#openshift_master_api_port=8443
#openshift_master_console_port=8443

# set RPM version for debugging purposes
#openshift_pkg_version=-3.1.0.0

# Configure custom ca certificate
#openshift_master_ca_certificate={'certfile': '/path/to/ca.crt', 'keyfile': '/path/to/ca.key'}
#
# NOTE: CA certificate will not be replaced with existing clusters.
# This option may only be specified when creating a new cluster or
# when redeploying cluster certificates with the redeploy-certificates
# playbook. If replacing the CA certificate in an existing cluster
# with a custom ca certificate, the following variable must also be
# set.
#openshift_certificates_redeploy_ca=true

# Configure custom named certificates (SNI certificates)
#
# https://docs.openshift.com/enterprise/latest/install_config/certificate_customization.html
#
# NOTE: openshift_master_named_certificates is cached on masters and is an
# additive fact, meaning that each run with a different set of certificates
# will add the newly provided certificates to the cached set of certificates.
#
# An optional CA may be specified for each named certificate. CAs will
# be added to the OpenShift CA bundle which allows for the named
# certificate to be served for internal cluster communication.
#
# If you would like openshift_master_named_certificates to be overwritten with
# the provided value, specify openshift_master_overwrite_named_certificates.
#openshift_master_overwrite_named_certificates=true
#
# Provide local certificate paths which will be deployed to masters
#openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "cafile": "/path/to/custom-ca1.crt"}]
#
# Detected names may be overridden by specifying the "names" key
#openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "names": ["public-master-host.com"], "cafile": "/path/to/custom-ca1.crt"}]

# Session options
#openshift_master_session_name=ssn
#openshift_master_session_max_seconds=3600

# An authentication and encryption secret will be generated if secrets
# are not provided. If provided, openshift_master_session_auth_secrets
# and openshift_master_encryption_secrets must be equal length.
#
# Signing secrets, used to authenticate sessions using
# HMAC. Recommended to use secrets with 32 or 64 bytes.
#openshift_master_session_auth_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
#
# Encrypting secrets, used to encrypt sessions. Must be 16, 24, or 32
# characters long, to select AES-128, AES-192, or AES-256.
#openshift_master_session_encryption_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']

# configure how often node iptables rules are refreshed
#openshift_node_iptables_sync_period=5s

# Configure nodeIP in the node config
# This is needed in cases where node traffic is desired to go over an
# interface other than the default network interface.
#openshift_node_set_node_ip=True

# Force setting of system hostname when configuring OpenShift
# This works around issues related to installations that do not have valid dns
# entries for the interfaces attached to the host.
#openshift_set_hostname=True

# Configure dnsIP in the node config
#openshift_dns_ip=172.30.0.1

# Configure node kubelet arguments
#openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}

# Configure logrotate scripts
# See: https://github.com/nickhammond/ansible-logrotate
#logrotate_scripts=[{"name": "syslog", "path": "/var/log/cron\n/var/log/maillog\n/var/log/messages\n/var/log/secure\n/var/log/spooler\n", "options": ["daily", "rotate 7", "compress", "sharedscripts", "missingok"], "scripts": {"postrotate": "/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true"}}]

# openshift-ansible will wait indefinitely for your input when it detects that the
# value of openshift_hostname resolves to an IP address not bound to any local
# interfaces. This mis-configuration is problematic for any pod leveraging host
# networking and liveness or readiness probes.
# Setting this variable to true will override that check.
#openshift_override_hostname_check=true

# Configure dnsmasq for cluster dns, switch the host's local resolver to use dnsmasq
# and configure node's dnsIP to point at the node's local dnsmasq instance. Defaults
# to True for Origin 1.2 and OSE 3.2. False for 1.1 / 3.1 installs, this cannot
# be used with 1.0 and 3.0.
#openshift_use_dnsmasq=False
# Define an additional dnsmasq.conf file to deploy to /etc/dnsmasq.d/openshift-ansible.conf
# This is useful for POC environments where DNS may not actually be available yet.
#openshift_node_dnsmasq_additional_config_file=/home/bob/ose-dnsmasq.conf

# Global Proxy Configuration
# These options configure HTTP_PROXY, HTTPS_PROXY, and NOPROXY environment
# variables for docker and master services.
#openshift_http_proxy=http://USER:PASSWORD@IPADDR:PORT
#openshift_https_proxy=https://USER:PASSWORD@IPADDR:PORT
#openshift_no_proxy='.hosts.example.com,some-host.com'
#
# Most environments don't require a proxy between openshift masters, nodes, and
# etcd hosts. So automatically add those hostnames to the openshift_no_proxy list.
# If all of your hosts share a common domain you may wish to disable this and
# specify that domain above.
#openshift_generate_no_proxy_hosts=True
#
# These options configure the BuildDefaults admission controller which injects
# environment variables into Builds. These values will default to the global proxy
# config values. You only need to set these if they differ from the global settings
# above. See BuildDefaults
# documentation at https://docs.openshift.org/latest/admin_guide/build_defaults_overrides.html
#openshift_builddefaults_http_proxy=http://USER:PASSWORD@HOST:PORT
#openshift_builddefaults_https_proxy=https://USER:PASSWORD@HOST:PORT
#openshift_builddefaults_no_proxy=build_defaults
#openshift_builddefaults_git_http_proxy=http://USER:PASSWORD@HOST:PORT
#openshift_builddefaults_git_https_proxy=https://USER:PASSWORD@HOST:PORT
# Or you may optionally define your own, serialized as JSON
#openshift_builddefaults_json='{"BuildDefaults":{"configuration":{"kind":"BuildDefaultsConfig","apiVersion":"v1","gitHTTPSProxy":"http://proxy.example.com.redhat.com:3128","gitHTTPProxy":"http://proxy.example.com.redhat.com:3128","env":[{"name":"HTTP_PROXY","value":"http://proxy.example.com.redhat.com:3128"},{"name":"NO_PROXY","value":"ose3-master.example.com"}]}}}'
# masterConfig.volumeConfig.dynamicProvisioningEnabled, configurable as of 1.2/3.2, enabled by default
#openshift_master_dynamic_provisioning_enabled=False

# Configure usage of openshift_clock role.
#openshift_clock_enabled=true

# OpenShift Per-Service Environment Variables
# Environment variables are added to /etc/sysconfig files for
# each OpenShift service: node, master (api and controllers).
# API and controllers environment variables are merged in single
# master environments.
#openshift_master_api_env_vars={"ENABLE_HTTP2": "true"}
#openshift_master_controllers_env_vars={"ENABLE_HTTP2": "true"}
#openshift_node_env_vars={"ENABLE_HTTP2": "true"}

# Enable API service auditing, available as of 3.2
#openshift_master_audit_config={"basicAuditEnabled": true}

# host group for masters
[masters]
master[1:3].c.openshift-enablement-exam.internal

[etcd]
master[1:3].c.openshift-enablement-exam.internal

[nfs]
ose-bastion.c.openshift-enablement-exam.internal

# NOTE: Currently we require that masters be part of the SDN which requires that they also be nodes
# However, in order to ensure that your masters are not burdened with running pods you should
# make them unschedulable by adding openshift_schedulable=False to any node that's also a master.
[nodes]
master[1:3].c.openshift-enablement-exam.internal openshift_schedulable=false
node[1:3].c.openshift-enablement-exam.internal openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
infranode[1:2].c.openshift-enablement-exam.internal openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

abutcher commented 8 years ago

Hey @raffaelespazzoli, is that the full error message? It looks like it has been truncated; ansible may have chopped it off.

raffaelespazzoli commented 8 years ago

Here is another snapshot of the log. This run used -vv, but there isn't much additional detail:

PLAY [Create persistent volumes] ***********************************************

TASK [setup] *******************************************************************
ok: [master1.c.openshift-enablement-exam.internal]

TASK [openshift_facts : Detecting Operating System] ****************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_facts/tasks/main.yml:2
fatal: [master1.c.openshift-enablement-exam.internal]: FAILED! => {"failed": true, "msg": "The conditional check 'persistent_volumes | length > 0 or persistent_volume_claims | length > 0' failed. The error was: '{{ hostvars[groups.oo_first_master.0] | oo_persistent_volumes(groups) }}: create_pv'"}
    to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/byo/config.retry

PLAY RECAP *********************************************************************
infranode1.c.openshift-enablement-exam.internal : ok=134  changed=3    unreachable=0    failed=0   
infranode2.c.openshift-enablement-exam.internal : ok=134  changed=3    unreachable=0    failed=0   
localhost                  : ok=15   changed=9    unreachable=0    failed=0   
master1.c.openshift-enablement-exam.internal : ok=388  changed=23   unreachable=0    failed=1   
master2.c.openshift-enablement-exam.internal : ok=292  changed=14   unreachable=0    failed=0   
master3.c.openshift-enablement-exam.internal : ok=292  changed=14   unreachable=0    failed=0   
node1.c.openshift-enablement-exam.internal : ok=134  changed=3    unreachable=0    failed=0   
node2.c.openshift-enablement-exam.internal : ok=134  changed=3    unreachable=0    failed=0   
node3.c.openshift-enablement-exam.internal : ok=134  changed=3    unreachable=0    failed=0   
ose-bastion.c.openshift-enablement-exam.internal : ok=69   changed=1    unreachable=0    failed=0

To add more information: I've provisioned the environment on Google Cloud Platform following the reference architecture described here. I'm adhering to the availability zone scheme. I haven't provisioned the external load balancer for the masters yet because I need the final certificates.

abutcher commented 8 years ago

I'd expect a stack trace based on the failure. We pass hostvars into these filters to generate the list of volumes and claims to create, and the failure is occurring in oo_persistent_volumes. I wonder if we'd get the stack trace with more verbosity.

raffaelespazzoli commented 8 years ago

I reran the installer with -vvvv; below is what I get:

PLAY [Create persistent volumes] ***********************************************

TASK [setup] *******************************************************************
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/system/setup.py
<master1.c.openshift-enablement-exam.internal> ESTABLISH SSH CONNECTION FOR USER: rspazzol
<master1.c.openshift-enablement-exam.internal> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=rspazzol -o ConnectTimeout=10 -o ControlPath=/home/rspazzol/.ansible/cp/ansible-ssh-%h-%p-%r master1.c.openshift-enablement-exam.internal '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1475713625.09-251520290060066 `" && echo ansible-tmp-1475713625.09-251520290060066="` echo $HOME/.ansible/tmp/ansible-tmp-1475713625.09-251520290060066 `" ) && sleep 0'"'"''
<master1.c.openshift-enablement-exam.internal> PUT /tmp/tmp04o5OC TO /home/rspazzol/.ansible/tmp/ansible-tmp-1475713625.09-251520290060066/setup.py
<master1.c.openshift-enablement-exam.internal> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=rspazzol -o ConnectTimeout=10 -o ControlPath=/home/rspazzol/.ansible/cp/ansible-ssh-%h-%p-%r '[master1.c.openshift-enablement-exam.internal]'
<master1.c.openshift-enablement-exam.internal> ESTABLISH SSH CONNECTION FOR USER: rspazzol
<master1.c.openshift-enablement-exam.internal> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=rspazzol -o ConnectTimeout=10 -o ControlPath=/home/rspazzol/.ansible/cp/ansible-ssh-%h-%p-%r master1.c.openshift-enablement-exam.internal '/bin/sh -c '"'"'chmod u+x /home/rspazzol/.ansible/tmp/ansible-tmp-1475713625.09-251520290060066/ /home/rspazzol/.ansible/tmp/ansible-tmp-1475713625.09-251520290060066/setup.py && sleep 0'"'"''
<master1.c.openshift-enablement-exam.internal> ESTABLISH SSH CONNECTION FOR USER: rspazzol
<master1.c.openshift-enablement-exam.internal> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=rspazzol -o ConnectTimeout=10 -o ControlPath=/home/rspazzol/.ansible/cp/ansible-ssh-%h-%p-%r -tt master1.c.openshift-enablement-exam.internal '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-ofmngrycrffxsqaxuendlbvoimsgwuqj; /usr/bin/python /home/rspazzol/.ansible/tmp/ansible-tmp-1475713625.09-251520290060066/setup.py; rm -rf "/home/rspazzol/.ansible/tmp/ansible-tmp-1475713625.09-251520290060066/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
ok: [master1.c.openshift-enablement-exam.internal]

TASK [openshift_facts : Detecting Operating System] ****************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_facts/tasks/main.yml:2
fatal: [master1.c.openshift-enablement-exam.internal]: FAILED! => {
    "failed": true, 
    "msg": "The conditional check 'persistent_volumes | length > 0 or persistent_volume_claims | length > 0' failed. The error was: '{{ hostvars[groups.oo_first_master.0] | oo_persistent_volumes(groups) }}: create_pv'"
}
    to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/byo/config.retry

PLAY RECAP *********************************************************************
infranode1.c.openshift-enablement-exam.internal : ok=134  changed=2    unreachable=0    failed=0   
infranode2.c.openshift-enablement-exam.internal : ok=134  changed=2    unreachable=0    failed=0   
localhost                  : ok=15   changed=9    unreachable=0    failed=0   
master1.c.openshift-enablement-exam.internal : ok=388  changed=22   unreachable=0    failed=1   
master2.c.openshift-enablement-exam.internal : ok=292  changed=13   unreachable=0    failed=0   
master3.c.openshift-enablement-exam.internal : ok=292  changed=14   unreachable=0    failed=0   
node1.c.openshift-enablement-exam.internal : ok=134  changed=2    unreachable=0    failed=0   
node2.c.openshift-enablement-exam.internal : ok=134  changed=2    unreachable=0    failed=0   
node3.c.openshift-enablement-exam.internal : ok=134  changed=2    unreachable=0    failed=0   

raffaelespazzoli commented 8 years ago

Instructions on how to reproduce the issue can be found here:

https://github.com/raffaelespazzoli/openshift-enablement-exam

raffaelespazzoli commented 8 years ago

One update: the installation was successful with the latest playbooks from https://github.com/openshift/openshift-ansible HEAD, so I'd say this issue is worth investigating.

raffaelespazzoli commented 8 years ago

An update: I'm now hitting this issue at a customer site. We are installing OpenShift on VMware machines, so the issue is not Google Cloud related. Also, my workaround of using the upstream ansible installer no longer works, because the logging and metrics deployers have been bumped to 3.3.1 and those images don't exist in registry.access.redhat.com yet.

I urgently need a workaround for this.

Attached is the customer's hosts file: hosts.txt

ghost commented 8 years ago

If you add 'nfs' to the '[OSEv3:children]' section and an '[nfs]' group containing the first master, does the error go away? We had the same problem, and after doing this at least the registry problem was gone.

BTW: next time, could you please remove the commented-out lines? It would make the hosts file much more readable.

raffaelespazzoli commented 8 years ago

@cw-aleks ,

I'm not sure I understand what you mean. Can you explain in more detail? "If you add 'nfs' to the '[OSEv3:children]' section and an '[nfs]' group containing the first master, does the error go away?"

'nfs' with the first master? What does that mean? For us the first master is not an NFS server. Please give an example of what you mean.

ghost commented 8 years ago

I mean something like this:

[OSEv3:children]
masters
nodes
etcd
lb
nfs

... other data.

[nfs]
master1

sdodson commented 8 years ago

@raffaelespazzoli You can set openshift_hosted_logging_deployer_version and openshift_hosted_metrics_deployer_version to '3.3.0' as a workaround.
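
Expressed in the inventory, that workaround would look roughly like this (a sketch, using the variable names @sdodson gives above; 3.3.0 is the tag he suggests, so adjust it if your registry carries a different one):

[OSEv3:vars]
# Pin the logging/metrics deployer images to a tag that exists in
# registry.access.redhat.com, instead of the missing 3.3.1 tag
openshift_hosted_logging_deployer_version=3.3.0
openshift_hosted_metrics_deployer_version=3.3.0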

raffaelespazzoli commented 8 years ago

@sdodson thanks for the suggestion. We tried that yesterday; it worked for metrics but not for logging. The ansible installer continues if metrics are not successfully deployed, but fails if logging is not successfully deployed (not sure why there is this difference).

raffaelespazzoli commented 8 years ago

@sdodson actually we tried these: openshift_hosted_logging_image_version and openshift_hosted_metrics_image_version. I'll give it a try with the two attributes that you mention.

ghost commented 8 years ago

@raffaelespazzoli I was referring to the pv/pvc error, not to the image version error.

I had the same error with ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml. After I added the first master as an NFS server, the setup finished.

I then changed the PV for the registry to point at the remote NFS server, following something like this: https://docs.openshift.com/container-platform/3.3/install_config/registry/deploy_registry_existing_clusters.html#storage-for-the-registry
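
In inventory terms that corresponds to the commented "External NFS Host" example already shown in the hosts file above; uncommented, it would look something like this (nfs.example.com and /exports are the placeholders from that example):

openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_host=nfs.example.com
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi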

I'm not sure this is the best solution, but we had to finish the setup, so it was a working one.

raffaelespazzoli commented 8 years ago

An update: @sdodson's workaround of using openshift_hosted_logging_deployer_version and openshift_hosted_metrics_deployer_version worked, thanks. Still, it does not look good that at a customer site we cannot use the stock ansible installer from the RPMs.

sdodson commented 8 years ago

@raffaelespazzoli Please ensure that if you're using a GitHub checkout you remove all openshift-ansible RPMs (yum remove -y openshift-ansible\*), and vice versa: if you're using the RPM versions, make sure you're not running ansible from within a GitHub checkout directory.

lucamaf commented 6 years ago

Had a similar issue with Origin v1.5.0; switching to v3.6.1 solved it, but I'm unsure what the underlying issue was.