Closed Zuldajri closed 5 years ago
Provide a brief description of your issue here. For example:
Trying to automate the deployment of OpenShift Container Platform 3.11 on Azure, I get a failure during the "Run variable sanity checks" task:

```
TASK [Run variable sanity checks] **
fatal: [williamcluster-master-0]: FAILED! => {"msg": "last_checked_host: williamcluster-master-0, last_checked_var: openshift_master_identity_providers;Found removed variables: openshift_hostname is replaced byRemoved: See documentation; "}
```
I am reproducing Microsoft's deployment: https://github.com/Microsoft/openshift-container-platform
Please put the following version information in the code block indicated below.

* `ansible --version`

If you're operating from a git clone:

* `git describe`

If you're running from playbooks installed via RPM:

* `rpm -q openshift-ansible`

Place the output between the code block below:
```
ansible 2.6.13
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/usr/share/ansible/openshift-ansible/roles/lib_utils/library', u'/usr/share/ansible/openshift-ansible/library']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible
  python version = 2.7.5 (default, May 31 2018, 09:41:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]

rpm -q openshift-ansible
openshift-ansible-3.11.82-3.git.0.9718d0a.el7.noarch
```
Describe what you expected to happen.
A working deployment.
Describe what is actually happening.
```
fatal: [williamcluster-master-0]: FAILED! => {"msg": "last_checked_host: williamcluster-master-0, last_checked_var: openshift_master_identity_providers;Found removed variables: openshift_hostname is replaced byRemoved: See documentation; "}
```
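For context on what the sanity check is objecting to: the generated inventory sets `openshift_hostname` on every node entry, and that variable was removed from openshift-ansible 3.11. A minimal sketch of a corrected `[nodes]` entry, assuming `openshift_kubelet_name_override` is the replacement accepted by your openshift-ansible release (please verify against its release notes before using):

```ini
# Hypothetical [nodes] entry: openshift_hostname is removed in openshift-ansible 3.11;
# openshift_kubelet_name_override is the documented replacement (verify for your release)
williamcluster-master-0 openshift_kubelet_name_override=williamcluster-master-0 openshift_node_group_name='node-config-master'
```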
For long output or logs, consider using a gist
Provide any additional information which may help us diagnose the issue.
```
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.5 (Maipo)
```

**The deployment script:**

```bash
#!/bin/bash
echo $(date) " - Starting Script"

set -e

export SUDOUSER=$1
export PASSWORD="$2"
export MASTER=$3
export MASTERPUBLICIPHOSTNAME=$4
export MASTERPUBLICIPADDRESS=$5
export INFRA=$6
export NODE=$7
export NODECOUNT=$8
export INFRACOUNT=$9
export MASTERCOUNT=${10}
export ROUTING=${11}
export REGISTRYSA=${12}
export ACCOUNTKEY="${13}"
export METRICS=${14}
export LOGGING=${15}
export TENANTID=${16}
export SUBSCRIPTIONID=${17}
export AADCLIENTID=${18}
export AADCLIENTSECRET="${19}"
export RESOURCEGROUP=${20}
export LOCATION=${21}
export AZURE=${22}
export STORAGEKIND=${23}
export ENABLECNS=${24}
export CNS=${25}
export CNSCOUNT=${26}
export VNETNAME=${27}
export NODENSG=${28}
export NODEAVAILIBILITYSET=${29}
export MASTERCLUSTERTYPE=${30}
export PRIVATEIP=${31}
export PRIVATEDNS=${32}
export MASTERPIPNAME=${33}
export ROUTERCLUSTERTYPE=${34}
export INFRAPIPNAME=${35}
export IMAGEURL=${36}
export WEBSTORAGE=${37}
export CUSTOMROUTINGCERTTYPE=${38}
export CUSTOMMASTERCERTTYPE=${39}
export PROXYSETTING=${40}
export HTTPPROXYENTRY="${41}"
export HTTSPPROXYENTRY="${42}"
export NOPROXYENTRY="${43}"
export BASTION=$(hostname)

# Set CNS to default storage type. Will be overridden later if Azure is true
export CNS_DEFAULT_STORAGE=true

# Setting DOMAIN variable
export DOMAIN=`domainname -d`

# Determine if Commercial Azure or Azure Government
CLOUD=$( curl -H Metadata:true "http://169.254.169.254/metadata/instance/compute/location?api-version=2017-04-02&format=text" | cut -c 1-2 )
export CLOUD=${CLOUD^^}

export MASTERLOOP=$((MASTERCOUNT - 1))
export INFRALOOP=$((INFRACOUNT - 1))
export NODELOOP=$((NODECOUNT - 1))

echo $(date) " - Configuring SSH ControlPath to use shorter path name"
sed -i -e "s/^# control_path = %(directory)s\/%%h-%%r/control_path = %(directory)s\/%%h-%%r/" /etc/ansible/ansible.cfg
sed -i -e "s/^#host_key_checking = False/host_key_checking = False/" /etc/ansible/ansible.cfg
sed -i -e "s/^#pty=False/pty=False/" /etc/ansible/ansible.cfg
sed -i -e "s/^#stdout_callback = skippy/stdout_callback = skippy/" /etc/ansible/ansible.cfg
sed -i -e "s/^#pipelining = False/pipelining = True/" /etc/ansible/ansible.cfg

# echo $(date) " - Modifying sudoers"
sed -i -e "s/Defaults requiretty/# Defaults requiretty/" /etc/sudoers
sed -i -e '/Defaults env_keep += "LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"/aDefaults env_keep += "PATH"' /etc/sudoers

# Create docker registry config based on Commercial Azure or Azure Government
if [[ $CLOUD == "US" ]]
then
    DOCKERREGISTRYYAML=dockerregistrygov.yaml
    export CLOUDNAME="AzureUSGovernmentCloud"
else
    DOCKERREGISTRYYAML=dockerregistrypublic.yaml
    export CLOUDNAME="AzurePublicCloud"
fi

# Logging into Azure CLI
if [ "$AADCLIENTID" != "" ]
then
    echo $(date) " - Logging into Azure CLI"
    az login --service-principal -u $AADCLIENTID -p $AADCLIENTSECRET -t $TENANTID
    az account set -s $SUBSCRIPTIONID
    # Adding Storage Extension
    az extension add --name storage-preview
fi

# Setting the default openshift_cloudprovider_kind if Azure enabled
if [[ $AZURE == "true" ]]
then
    CLOUDKIND="openshift_cloudprovider_kind=azure openshift_cloudprovider_azure_client_id=\"{{ aadClientId }}\" openshift_cloudprovider_azure_client_secret=\"{{ aadClientSecret }}\" openshift_cloudprovider_azure_tenant_id=\"{{ tenantId }}\" openshift_cloudprovider_azure_subscription_id=\"{{ subscriptionId }}\" openshift_cloudprovider_azure_cloud=$CLOUDNAME openshift_cloudprovider_azure_vnet_name=$VNETNAME openshift_cloudprovider_azure_security_group_name=$NODENSG openshift_cloudprovider_azure_availability_set_name=$NODEAVAILIBILITYSET openshift_cloudprovider_azure_resource_group=$RESOURCEGROUP openshift_cloudprovider_azure_location=$LOCATION"
    CNS_DEFAULT_STORAGE=false
    if [[ $STORAGEKIND == "managed" ]]
    then
        SCKIND="openshift_storageclass_parameters={'kind': 'managed', 'storageaccounttype': 'Premium_LRS'}"
    else
        SCKIND="openshift_storageclass_parameters={'kind': 'shared', 'storageaccounttype': 'Premium_LRS'}"
    fi
fi

# Configure PROXY settings for OpenShift cluster
if [[ $PROXYSETTING == "custom" ]]
then
    PROXY="openshift_http_proxy=$HTTPPROXYENTRY openshift_https_proxy=$HTTSPPROXYENTRY openshift_no_proxy='$NOPROXYENTRY'"
fi

# Cloning Ansible playbook repository
echo $(date) " - Cloning Ansible playbook repository"
((cd /home/$SUDOUSER && git clone https://github.com/Microsoft/openshift-container-platform-playbooks.git) || (cd /home/$SUDOUSER/openshift-container-platform-playbooks && git pull))

if [ -d /home/${SUDOUSER}/openshift-container-platform-playbooks ]
then
    echo " - Retrieved playbooks successfully"
else
    echo " - Retrieval of playbooks failed"
    exit 99
fi

# Configure custom routing certificate
echo $(date) " - Create variable for routing certificate based on certificate type"
if [[ $CUSTOMROUTINGCERTTYPE == "custom" ]]
then
    ROUTINGCERTIFICATE="openshift_hosted_router_certificate={\"cafile\": \"/tmp/routingca.pem\", \"certfile\": \"/tmp/routingcert.pem\", \"keyfile\": \"/tmp/routingkey.pem\"}"
else
    ROUTINGCERTIFICATE=""
fi

# Configure custom master API certificate
echo $(date) " - Create variable for master api certificate based on certificate type"
if [[ $CUSTOMMASTERCERTTYPE == "custom" ]]
then
    MASTERCERTIFICATE="openshift_master_overwrite_named_certificates=true openshift_master_named_certificates=[{\"names\": [\"$MASTERPUBLICIPHOSTNAME\"], \"cafile\": \"/tmp/masterca.pem\", \"certfile\": \"/tmp/mastercert.pem\", \"keyfile\": \"/tmp/masterkey.pem\"}]"
else
    MASTERCERTIFICATE=""
fi

# Configure master cluster address information based on Cluster type (private or public)
echo $(date) " - Create variable for master cluster address based on cluster type"
if [[ $MASTERCLUSTERTYPE == "private" ]]
then
    MASTERCLUSTERADDRESS="openshift_master_cluster_hostname=$MASTER-0 openshift_master_cluster_public_hostname=$PRIVATEDNS openshift_master_cluster_public_vip=$PRIVATEIP"
else
    MASTERCLUSTERADDRESS="openshift_master_cluster_hostname=$MASTERPUBLICIPHOSTNAME openshift_master_cluster_public_hostname=$MASTERPUBLICIPHOSTNAME openshift_master_cluster_public_vip=$MASTERPUBLICIPADDRESS"
fi

# Create Master nodes grouping
echo $(date) " - Creating Master nodes grouping"
for (( c=0; c<$MASTERCOUNT; c++ ))
do
    mastergroup="$mastergroup $MASTER-$c openshift_hostname=$MASTER-$c openshift_node_group_name='node-config-master'"
done

# Create Infra nodes grouping
echo $(date) " - Creating Infra nodes grouping"
for (( c=0; c<$INFRACOUNT; c++ ))
do
    infragroup="$infragroup $INFRA-$c openshift_hostname=$INFRA-$c openshift_node_group_name='node-config-infra'"
done

# Create Nodes grouping
echo $(date) " - Creating Nodes grouping"
for (( c=0; c<$NODECOUNT; c++ ))
do
    nodegroup="$nodegroup $NODE-$c openshift_hostname=$NODE-$c openshift_node_group_name='node-config-compute'"
done

# Create CNS nodes grouping if CNS is enabled
if [[ $ENABLECNS == "true" ]]
then
    echo $(date) " - Creating CNS nodes grouping"
    for (( c=0; c<$CNSCOUNT; c++ ))
    do
        cnsgroup="$cnsgroup $CNS-$c openshift_hostname=$CNS-$c openshift_node_group_name='node-config-compute'"
    done
fi

# Setting the HA Mode if more than one master
if [ $MASTERCOUNT != 1 ]
then
    echo $(date) " - Enabling HA mode for masters"
    export HAMODE="openshift_master_cluster_method=native"
fi

# Create Temp Ansible Hosts File
echo $(date) " - Create Ansible Hosts file"

cat > /etc/ansible/hosts <<EOF
[tempnodes]
$mastergroup $infragroup $nodegroup $cnsgroup
EOF

# Run a loop playbook to ensure DNS Hostname resolution is working prior to continuing with script
echo $(date) " - Running DNS Hostname resolution check"
runuser -l $SUDOUSER -c "ansible-playbook ~/openshift-container-platform-playbooks/check-dns-host-name-resolution.yaml"

# Working with custom header logo can only happen if Azure is enabled
IMAGECT=nope
if [ $AZURE == "true" ]
then
    # Enabling static web site on the web storage account
    echo "Custom Header: Enabling a static-website in the web storage account"
    az storage blob service-properties update --account-name $WEBSTORAGE --static-website
    # Retrieving URL
    WEBSTORAGEURL=$(az storage account show -n $WEBSTORAGE --query primaryEndpoints.web -o tsv)
else
    # If it's not a valid HTTP or HTTPS URL set it to empty
    echo "Custom Header: Invalid http or https URL"
    IMAGEURL=""
fi

# Getting the image type assuming a valid URL
# Failing is ok it will just default to the standard image
if [[ $IMAGEURL =~ ^http ]]
then
    # If this curl fails then the script will just use the default image
    # no retries required
    IMAGECT=$(curl --head $IMAGEURL | grep -i content-type: | awk '{print $NF}' | tr -d '\r') || true
    IMAGETYPE=$(echo $IMAGECT | awk -F/ '{print $2}' | awk -F+ '{print $1}')
    echo "Custom Header: $IMAGETYPE identified"
else
    echo "Custom Header: No Valid Image URL specified"
fi

# Create base CSS file
cat > /tmp/customlogo.css <<EOF
#header-logo { background-image: url("${WEBSTORAGEURL}customlogo.${IMAGETYPE}"); height: 20px; }
EOF

# If there is an image then transfer it
if [[ $IMAGECT =~ ^image ]]
then
    # If this curl fails then the script will just use the default image
    # no retries required
    echo "Custom Header: $IMAGETYPE downloaded"
    curl -o /tmp/originallogo.$IMAGETYPE $IMAGEURL || true
    convert /tmp/originallogo.$IMAGETYPE -geometry x20 /tmp/customlogo.$IMAGETYPE || true
    # Uploading the custom css and image
    echo "Custom Header: Uploading a logo of type $IMAGECT"
    az storage blob upload-batch -s /tmp --pattern customlogo.* -d \$web --account-name $WEBSTORAGE
fi

# If there is an image then activate it in the install
CUSTOMCSS=""
if [ -f /tmp/customlogo.$IMAGETYPE ]
then
    # To be added to /etc/ansible/hosts
    echo "Custom Header: Adding Image to Ansible Hosts file"
    CUSTOMCSS="openshift_web_console_extension_stylesheet_urls=['${WEBSTORAGEURL}customlogo.css']"
fi

# Create glusterfs configuration if CNS is enabled
if [[ $ENABLECNS == "true" ]]
then
    echo $(date) " - Creating glusterfs configuration"
    for (( c=0; c<$CNSCOUNT; c++ ))
    do
        runuser $SUDOUSER -c "ssh-keyscan -H $CNS-$c >> ~/.ssh/known_hosts"
        drive=$(runuser $SUDOUSER -c "ssh $CNS-$c 'sudo /usr/sbin/fdisk -l'" | awk '$1 == "Disk" && $2 ~ /^\// && ! /mapper/ {if (drive) print drive; drive = $2; sub(":", "", drive);} drive && /^\// {drive = ""} END {if (drive) print drive;}')
        drive1=$(echo $drive | cut -d ' ' -f 1)
        drive2=$(echo $drive | cut -d ' ' -f 2)
        drive3=$(echo $drive | cut -d ' ' -f 3)
        cnsglusterinfo="$cnsglusterinfo $CNS-$c glusterfs_devices='[ \"${drive1}\", \"${drive2}\", \"${drive3}\" ]'"
    done
fi

# Create Ansible Hosts File
echo $(date) " - Create Ansible Hosts file"

cat > /etc/ansible/hosts <<EOF
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd
master0
glusterfs
new_nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=$SUDOUSER
ansible_become=yes
openshift_install_examples=true
deployment_type=openshift-enterprise
openshift_release=v3.11
#openshift_image_tag=v3.11
#openshift_pkg_version=-3.11
docker_udev_workaround=True
openshift_use_dnsmasq=true
openshift_master_default_subdomain=$ROUTING
openshift_override_hostname_check=true
osm_use_cockpit=true
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
openshift_master_api_port=443
openshift_master_console_port=443
osm_default_node_selector='node-role.kubernetes.io/compute=true'
openshift_disable_check=memory_availability,docker_image_availability
$CLOUDKIND
$SCKIND
$CUSTOMCSS
$ROUTINGCERTIFICATE
$MASTERCERTIFICATE
$PROXY

# Workaround for docker image failure
# https://access.redhat.com/solutions/3480921
oreg_url=registry.access.redhat.com/openshift3/ose-\${component}:\${version}
openshift_examples_modify_imagestreams=true

# default selectors for router and registry services
openshift_router_selector='node-role.kubernetes.io/infra=true'
openshift_registry_selector='node-role.kubernetes.io/infra=true'

$registrygluster

# Deploy Service Catalog
openshift_enable_service_catalog=false

# Type of clustering being used by OCP
$HAMODE

# Addresses for connecting to the OpenShift master nodes
$MASTERCLUSTERADDRESS

# Enable HTPasswdPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# Setup metrics
openshift_metrics_install_metrics=false
openshift_metrics_start_cluster=true
openshift_metrics_hawkular_nodeselector={"node-role.kubernetes.io/infra":"true"}
openshift_metrics_cassandra_nodeselector={"node-role.kubernetes.io/infra":"true"}
openshift_metrics_heapster_nodeselector={"node-role.kubernetes.io/infra":"true"}

# Setup logging
openshift_logging_install_logging=false
openshift_logging_fluentd_nodeselector={"logging":"true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra":"true"}
openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra":"true"}
openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra":"true"}
openshift_logging_master_public_url=https://$MASTERPUBLICIPHOSTNAME

# host group for masters
[masters]
$MASTER-[0:${MASTERLOOP}]

# host group for etcd
[etcd]
$MASTER-[0:${MASTERLOOP}]

[master0]
$MASTER-0

# Only populated when CNS is enabled
[glusterfs]
$cnsglusterinfo

# host group for nodes
[nodes]
$mastergroup $infragroup $nodegroup $cnsgroup

# host group for adding new nodes
[new_nodes]
EOF

# Setup NetworkManager to manage eth0
echo $(date) " - Running NetworkManager playbook"
runuser -l $SUDOUSER -c "ansible-playbook -f 30 /usr/share/ansible/openshift-ansible/playbooks/openshift-node/network_manager.yml"

# Configure DNS so it always has the domain name
echo $(date) " - Adding $DOMAIN to search for resolv.conf"
runuser $SUDOUSER -c "ansible all -o -f 30 -b -m lineinfile -a 'dest=/etc/sysconfig/network-scripts/ifcfg-eth0 line=\"DOMAIN=$DOMAIN\"'"

# Configure resolv.conf on all hosts through NetworkManager
echo $(date) " - Restarting NetworkManager"
runuser -l $SUDOUSER -c "ansible all -o -f 30 -b -m service -a \"name=NetworkManager state=restarted\""
echo $(date) " - NetworkManager configuration complete"

# Run OpenShift Container Platform prerequisites playbook
echo $(date) " - Running Prerequisites via Ansible Playbook"
runuser -l $SUDOUSER -c "ansible-playbook -e openshift_cloudprovider_azure_client_id=$AADCLIENTID -e openshift_cloudprovider_azure_client_secret=\"$AADCLIENTSECRET\" -e openshift_cloudprovider_azure_tenant_id=$TENANTID -e openshift_cloudprovider_azure_subscription_id=$SUBSCRIPTIONID -f 30 /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml"
echo $(date) " - Prerequisites check complete"

# Initiating installation of OpenShift Container Platform using Ansible Playbook
echo $(date) " - Installing OpenShift Container Platform via Ansible Playbook"
runuser -l $SUDOUSER -c "ansible-playbook -e openshift_cloudprovider_azure_client_id=$AADCLIENTID -e openshift_cloudprovider_azure_client_secret=\"$AADCLIENTSECRET\" -e openshift_cloudprovider_azure_tenant_id=$TENANTID -e openshift_cloudprovider_azure_subscription_id=$SUBSCRIPTIONID -f 30 /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml"

if [ $? -eq 0 ]
then
    echo $(date) " - OpenShift Cluster installed successfully"
else
    echo $(date) " - OpenShift Cluster failed to install"
    exit 6
fi

# Install OpenShift Atomic Client
cd /root
mkdir .kube
runuser ${SUDOUSER} -c "scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${SUDOUSER}@${MASTER}-0:~/.kube/config /tmp/kube-config"
cp /tmp/kube-config /root/.kube/config
mkdir /home/${SUDOUSER}/.kube
cp /tmp/kube-config /home/${SUDOUSER}/.kube/config
chown --recursive ${SUDOUSER} /home/${SUDOUSER}/.kube
rm -f /tmp/kube-config
yum -y install atomic-openshift-clients

# Adding user to OpenShift authentication file
echo $(date) " - Adding OpenShift user"
runuser $SUDOUSER -c "ansible-playbook -f 30 ~/openshift-container-platform-playbooks/addocpuser.yaml"

# Assigning cluster admin rights to OpenShift user
echo $(date) " - Assigning cluster admin rights to user"
runuser $SUDOUSER -c "ansible-playbook -f 30 ~/openshift-container-platform-playbooks/assignclusteradminrights.yaml"

# Configure Docker Registry to use Azure Storage Account
echo $(date) " - Configuring Docker Registry to use Azure Storage Account"
runuser $SUDOUSER -c "ansible-playbook -f 30 ~/openshift-container-platform-playbooks/$DOCKERREGISTRYYAML"

# Reconfigure glusterfs storage class
if [ $CNS_DEFAULT_STORAGE == "true" ]
then
    echo $(date) "- Create default glusterfs storage class"
    cat > /home/$SUDOUSER/default-glusterfs-storage.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "$CNS_DEFAULT_STORAGE"
  name: default-glusterfs-storage
parameters:
  resturl: http://heketi-storage-glusterfs.${ROUTING}
  restuser: admin
  secretName: heketi-storage-admin-secret
  secretNamespace: glusterfs
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
EOF

    runuser -l $SUDOUSER -c "oc create -f /home/$SUDOUSER/default-glusterfs-storage.yaml"

    echo $(date) " - Sleep for 10"
    sleep 10
fi

# Ensuring selinux is configured properly
if [[ $ENABLECNS == "true" ]]
then
    # Setting selinux to allow gluster-fusefs access
    echo $(date) " - Setting selinux to allow gluster-fuse access"
    runuser -l $SUDOUSER -c "ansible all -o -f 30 -b -a 'sudo setsebool -P virt_sandbox_use_fusefs on'" || true
# End of CNS specific section
fi

# Adding some labels back because they go missing
echo $(date) " - Adding api and logging labels"
runuser -l $SUDOUSER -c "oc label --overwrite nodes $MASTER-0 openshift-infra=apiserver"
runuser -l $SUDOUSER -c "oc label --overwrite nodes --all logging-infra-fluentd=true logging=true"

# Restarting things so everything is clean before installing anything else
echo $(date) " - Rebooting cluster to complete installation"
runuser -l $SUDOUSER -c "ansible-playbook -f 30 ~/openshift-container-platform-playbooks/reboot-master.yaml"
runuser -l $SUDOUSER -c "ansible-playbook -f 30 ~/openshift-container-platform-playbooks/reboot-nodes.yaml"
sleep 20

# Installing Service Catalog, Ansible Service Broker and Template Service Broker
if [[ $AZURE == "true" || $ENABLECNS == "true" ]]
then
    runuser -l $SUDOUSER -c "ansible-playbook -e openshift_cloudprovider_azure_client_id=$AADCLIENTID -e openshift_cloudprovider_azure_client_secret=\"$AADCLIENTSECRET\" -e openshift_cloudprovider_azure_tenant_id=$TENANTID -e openshift_cloudprovider_azure_subscription_id=$SUBSCRIPTIONID -e openshift_enable_service_catalog=true -f 30 /usr/share/ansible/openshift-ansible/playbooks/openshift-service-catalog/config.yml"
fi

# Adding Open Service Broker for Azure (requires service catalog)
if [[ $AZURE == "true" ]]
then
    oc new-project osba
    oc process -f https://raw.githubusercontent.com/Azure/open-service-broker-azure/master/contrib/openshift/osba-os-template.yaml \
        -p ENVIRONMENT=AzurePublicCloud \
        -p AZURE_SUBSCRIPTION_ID=$SUBSCRIPTIONID \
        -p AZURE_TENANT_ID=$TENANTID \
        -p AZURE_CLIENT_ID=$AADCLIENTID \
        -p AZURE_CLIENT_SECRET=$AADCLIENTSECRET \
        | oc create -f -
fi

# Configure Metrics
if [[ $METRICS == "true" ]]
then
    sleep 30
    echo $(date) "- Deploying Metrics"
    if [[ $AZURE == "true" || $ENABLECNS == "true" ]]
    then
        runuser -l $SUDOUSER -c "ansible-playbook -e openshift_cloudprovider_azure_client_id=$AADCLIENTID -e openshift_cloudprovider_azure_client_secret=\"$AADCLIENTSECRET\" -e openshift_cloudprovider_azure_tenant_id=$TENANTID -e openshift_cloudprovider_azure_subscription_id=$SUBSCRIPTIONID -e openshift_metrics_install_metrics=True -e openshift_metrics_cassandra_storage_type=dynamic -f 30 /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml"
    else
        runuser -l $SUDOUSER -c "ansible-playbook -e openshift_metrics_install_metrics=True /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml"
    fi
    if [ $? -eq 0 ]
    then
        echo $(date) " - Metrics configuration completed successfully"
    else
        echo $(date) " - Metrics configuration failed"
        exit 11
    fi
fi

# Configure Logging
if [[ $LOGGING == "true" ]]
then
    sleep 60
    echo $(date) "- Deploying Logging"
    if [[ $AZURE == "true" || $ENABLECNS == "true" ]]
    then
        runuser -l $SUDOUSER -c "ansible-playbook -e openshift_cloudprovider_azure_client_id=$AADCLIENTID -e openshift_cloudprovider_azure_client_secret=\"$AADCLIENTSECRET\" -e openshift_cloudprovider_azure_tenant_id=$TENANTID -e openshift_cloudprovider_azure_subscription_id=$SUBSCRIPTIONID -e openshift_logging_install_logging=True -e openshift_logging_es_pvc_dynamic=true -f 30 /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml"
    else
        runuser -l $SUDOUSER -c "ansible-playbook -e openshift_logging_install_logging=True -f 30 /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml"
    fi
    if [ $? -eq 0 ]
    then
        echo $(date) " - Logging configuration completed successfully"
    else
        echo $(date) " - Logging configuration failed"
        exit 12
    fi
fi

# Configure cluster for private masters
if [[ $MASTERCLUSTERTYPE == "private" ]]
then
    echo $(date) " - Configure cluster for private masters"
    runuser -l $SUDOUSER -c "ansible-playbook -f 30 ~/openshift-container-platform-playbooks/activate-private-lb.31x.yaml"
    echo $(date) " - Delete Master Public IP if cluster is using private masters"
    az network public-ip delete -g $RESOURCEGROUP -n $MASTERPIPNAME
fi

# Delete Router / Infra Public IP if cluster is using private router
if [[ $ROUTERCLUSTERTYPE == "private" ]]
then
    echo $(date) " - Delete Router / Infra Public IP address"
    az network public-ip delete -g $RESOURCEGROUP -n $INFRAPIPNAME
fi

# Setting Masters to non-schedulable
echo $(date) " - Setting Masters to non-schedulable"
runuser -l $SUDOUSER -c "ansible-playbook -f 10 ~/openshift-container-platform-playbooks/reset-masters-non-schedulable.yaml"

# Re-enabling requiretty
echo $(date) " - Re-enabling requiretty"
sed -i -e "s/# Defaults requiretty/Defaults requiretty/" /etc/sudoers

# Delete yaml files
echo $(date) " - Deleting unnecessary files"
rm -rf /home/${SUDOUSER}/openshift-container-platform-playbooks

# Delete pem files
echo $(date) " - Delete pem files"
rm -rf /tmp/*.pem

echo $(date) " - Sleep for 30"
sleep 30

echo $(date) " - Script complete"
```

**Output of the deployment script:**

```
Wed Feb 27 17:42:20 UTC 2019 - Starting Script
Wed Feb 27 17:42:20 UTC 2019 - Configuring SSH ControlPath to use shorter path name
Wed Feb 27 17:42:20 UTC 2019 - Logging into Azure CLI
[
  {
    "cloudName": "AzureCloud",
    "id": "§§§§§§§§§§§§§§§",
    "isDefault": true,
    "name": "Microsoft Azure",
    "state": "Enabled",
    "tenantId": "§§§§§§§§§§§§§§§§§§§§",
    "user": {
      "name": "§§§§§§§§§§§§§§§§§§§§§§§§§§",
      "type": "servicePrincipal"
    }
  }
]
Wed Feb 27 17:42:29 UTC 2019 - Cloning Ansible playbook repository
Cloning into 'openshift-container-platform-playbooks'...
 - Retrieved playbooks successfully
Wed Feb 27 17:42:30 UTC 2019 - Create variable for routing certificate based on certificate type
Wed Feb 27 17:42:30 UTC 2019 - Create variable for master api certificate based on certificate type
Wed Feb 27 17:42:30 UTC 2019 - Create variable for master cluster address based on cluster type
Wed Feb 27 17:42:30 UTC 2019 - Creating Master nodes grouping
Wed Feb 27 17:42:30 UTC 2019 - Creating Infra nodes grouping
Wed Feb 27 17:42:30 UTC 2019 - Creating Nodes grouping
Wed Feb 27 17:42:30 UTC 2019 - Create Ansible Hosts file
Wed Feb 27 17:42:30 UTC 2019 - Running DNS Hostname resolution check

PLAY [all] *********************************************************************

TASK [Wait for DNS hostname resolution - will try for up to 33 minutes] ********
ok: [williamcluster-infra-0]
ok: [williamcluster-node-0]
ok: [williamcluster-master-0]

PLAY RECAP *********************************************************************
williamcluster-infra-0     : ok=1    changed=0    unreachable=0    failed=0
williamcluster-master-0    : ok=1    changed=0    unreachable=0    failed=0
williamcluster-node-0      : ok=1    changed=0    unreachable=0    failed=0

Custom Header: Enabling a static-website in the web storage account
{
  "cors": [],
  "deleteRetentionPolicy": {
    "days": null,
    "enabled": false
  },
  "hourMetrics": {
    "enabled": true,
    "includeApis": true,
    "retentionPolicy": {
      "days": 7,
      "enabled": true
    },
    "version": "1.0"
  },
  "logging": {
    "delete": false,
    "read": false,
    "retentionPolicy": {
      "days": null,
      "enabled": false
    },
    "version": "1.0",
    "write": false
  },
  "minuteMetrics": {
    "enabled": false,
    "includeApis": null,
    "retentionPolicy": {
      "days": null,
      "enabled": false
    },
    "version": "1.0"
  },
  "staticWebsite": {
    "enabled": true,
    "errorDocument_404Path": null,
    "indexDocument": null
  }
}
Custom Header: No Valid Image URL specified
Wed Feb 27 17:42:36 UTC 2019 - Create Ansible Hosts file
Wed Feb 27 17:42:36 UTC 2019 - Running NetworkManager playbook

PLAY [Populate config host groups] *********************************************

TASK [Load group name mapping variables] ***************************************
ok: [localhost]

TASK [Evaluate groups - g_nfs_hosts is single host] ****************************

TASK [Evaluate oo_all_hosts] ***************************************************
ok: [localhost] => (item=williamcluster-master-0)
ok: [localhost] => (item=williamcluster-infra-0)
ok: [localhost] => (item=williamcluster-node-0)

TASK [Evaluate oo_masters] *****************************************************
ok: [localhost] => (item=williamcluster-master-0)

TASK [Evaluate oo_first_master] ************************************************
ok: [localhost]

TASK [Evaluate oo_new_etcd_to_config] ******************************************

TASK [Evaluate oo_masters_to_config] *******************************************
ok: [localhost] => (item=williamcluster-master-0)

TASK [Evaluate oo_etcd_to_config] **********************************************
ok: [localhost] => (item=williamcluster-master-0)

TASK [Evaluate oo_first_etcd] **************************************************
ok: [localhost]

TASK [Evaluate oo_etcd_hosts_to_upgrade] ***************************************
ok: [localhost] => (item=williamcluster-master-0)

TASK [Evaluate oo_etcd_hosts_to_backup] ****************************************
ok: [localhost] => (item=williamcluster-master-0)

TASK [Evaluate oo_nodes_to_config] *********************************************
ok: [localhost] => (item=williamcluster-master-0)
ok: [localhost] => (item=williamcluster-infra-0)
ok: [localhost] => (item=williamcluster-node-0)

TASK [Evaluate oo_lb_to_config] ************************************************

TASK [Evaluate oo_nfs_to_config] ***********************************************

TASK [Evaluate oo_glusterfs_to_config] *****************************************

TASK [Evaluate oo_etcd_to_migrate] *********************************************
ok: [localhost] => (item=williamcluster-master-0)

PLAY [Install and configure NetworkManager] ************************************

TASK [Gathering Facts] *********************************************************
ok: [williamcluster-node-0]
ok: [williamcluster-master-0]
ok: [williamcluster-infra-0]

TASK [Detecting Operating System] **********************************************
changed: [williamcluster-node-0]
changed: [williamcluster-master-0]
changed: [williamcluster-infra-0]

TASK [install NetworkManager] **************************************************
ok: [williamcluster-node-0]
ok: [williamcluster-infra-0]
ok: [williamcluster-master-0]

TASK [configure NetworkManager] ************************************************
changed: [williamcluster-node-0] => (item=USE_PEERDNS)
changed: [williamcluster-infra-0] => (item=USE_PEERDNS)
changed: [williamcluster-master-0] => (item=USE_PEERDNS)
ok: [williamcluster-node-0] => (item=NM_CONTROLLED)
ok: [williamcluster-infra-0] => (item=NM_CONTROLLED)
ok: [williamcluster-master-0] => (item=NM_CONTROLLED)

TASK [enable and start NetworkManager] *****************************************
ok: [williamcluster-node-0]
ok: [williamcluster-master-0]
ok: [williamcluster-infra-0]

PLAY RECAP *********************************************************************
localhost                  : ok=11   changed=0    unreachable=0    failed=0
williamcluster-infra-0     : ok=5    changed=2    unreachable=0    failed=0
williamcluster-master-0    : ok=5    changed=2    unreachable=0    failed=0
williamcluster-node-0      : ok=5    changed=2    unreachable=0    failed=0

Wed Feb 27 17:42:50 UTC 2019 - Adding rsqnkdbefxkelbuixhhtz2wode.ax.internal.cloudapp.net to search for resolv.conf
williamcluster-node-0 | CHANGED => {"backup": "", "changed": true, "msg": "line added"}
williamcluster-infra-0 | CHANGED => {"backup": "", "changed": true, "msg": "line added"}
williamcluster-master-0 | CHANGED => {"backup": "", "changed": true, "msg": "line added"}
Wed Feb 27 17:42:51 UTC 2019 - Restarting NetworkManager
williamcluster-node-0 | CHANGED => {"changed": true, "name": "NetworkManager", "state": "started", "status": {"ActiveEnterTimestamp": "Wed 2019-02-27 17:30:37 UTC", "ActiveEnterTimestampMonotonic": "36267307", "ActiveExitTimestamp": "Wed 2019-02-27 17:30:37 UTC", "ActiveExitTimestampMonotonic": "36191412", "ActiveState": "active", "After": "basic.target systemd-journald.socket dbus.service network-pre.target system.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2019-02-27 17:30:37 UTC", "AssertTimestampMonotonic": "36239115", "Before": "NetworkManager-wait-online.service network.service shutdown.target multi-user.target network.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "BusName": "org.freedesktop.NetworkManager", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "537212130", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2019-02-27 17:30:37 UTC", "ConditionTimestampMonotonic": "36239115", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/NetworkManager.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Network Manager", "DevicePolicy": "auto", "Documentation": "man:NetworkManager(8)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "1240", "ExecMainStartTimestamp": "Wed 2019-02-27 17:30:37 UTC", "ExecMainStartTimestampMonotonic": "36239618", "ExecMainStatus": "0", "ExecReload": "{ path=/usr/bin/dbus-send ; argv[]=/usr/bin/dbus-send --print-reply --system --type=method_call --dest=org.freedesktop.NetworkManager /org/freedesktop/NetworkManager org.freedesktop.NetworkManager.Reload uint32:0 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/NetworkManager ; argv[]=/usr/sbin/NetworkManager --no-daemon ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/NetworkManager.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "NetworkManager.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2019-02-27 17:30:37 UTC", "InactiveEnterTimestampMonotonic": "36235092", "InactiveExitTimestamp": "Wed 2019-02-27 17:30:37 UTC", "InactiveExitTimestampMonotonic": "36240102", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "31777", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "31777", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "1240", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "NetworkManager.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "read-only", "ProtectSystem": "yes", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "NetworkManager-wait-online.service", "Requires": "basic.target", "Restart": "on-failure", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "network.target system.slice", "WatchdogTimestamp": "Wed 2019-02-27 17:30:37 UTC", "WatchdogTimestampMonotonic": "36267261", "WatchdogUSec": "0"}}
williamcluster-infra-0 | CHANGED => {"changed": true, "name": "NetworkManager", "state": "started", "status": {"ActiveEnterTimestamp": "Wed 2019-02-27 17:30:40 UTC", "ActiveEnterTimestampMonotonic": "36075354", "ActiveExitTimestamp": "Wed 2019-02-27 17:30:40 UTC", "ActiveExitTimestampMonotonic": "35982874", "ActiveState": "active", "After": "basic.target network-pre.target dbus.service systemd-journald.socket system.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2019-02-27 17:30:40 UTC", "AssertTimestampMonotonic": "36043869", "Before": "shutdown.target network.service NetworkManager-wait-online.service network.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "BusName": "org.freedesktop.NetworkManager", "CPUAccounting": "no",
```
"CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart":"yes", "CanStop": "yes", "CapabilityBoundingSet": "537212130", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2019-02-27 17:30:40 UTC", "ConditionTimestampMonotonic": "36043869", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/NetworkManager.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Network Manager", "DevicePolicy": "auto", "Documentation": "man:NetworkManager(8)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "1243", "ExecMainStartTimestamp": "Wed 2019-02-27 17:30:40 UTC", "ExecMainStartTimestampMonotonic": "36044384", "ExecMainStatus": "0", "ExecReload": "{ path=/usr/bin/dbus-send ; argv[]=/usr/bin/dbus-send --print-reply --system --type=method_call --dest=org.freedesktop.NetworkManager /org/freedesktop/NetworkManager org.freedesktop.NetworkManager.Reload uint32:0 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/NetworkManager ; argv[]=/usr/sbin/NetworkManager --no-daemon ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/NetworkManager.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "NetworkManager.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2019-02-27 17:30:40 UTC", "InactiveEnterTimestampMonotonic": "36040864", "InactiveExitTimestamp": "Wed 2019-02-27 17:30:40 UTC", "InactiveExitTimestampMonotonic": "36045026", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": 
"18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "31777", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "31777", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "1243", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "NetworkManager.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "read-only", "ProtectSystem": "yes", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "NetworkManager-wait-online.service", "Requires": "basic.target", "Restart": "on-failure", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": 
"1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "network.target system.slice", "WatchdogTimestamp": "Wed 2019-02-27 17:30:40 UTC", "WatchdogTimestampMonotonic": "36075263", "WatchdogUSec": "0"}} williamcluster-master-0 | CHANGED => {"changed": true, "name": "NetworkManager", "state": "started", "status": {"ActiveEnterTimestamp": "Wed 2019-02-27 17:30:56 UTC", "ActiveEnterTimestampMonotonic": "34068220", "ActiveExitTimestamp": "Wed 2019-02-27 17:30:56 UTC", "ActiveExitTimestampMonotonic": "33992600", "ActiveState": "active", "After": "systemd-journald.socket system.slice dbus.service basic.target network-pre.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2019-02-27 17:30:56 UTC", "AssertTimestampMonotonic": "34038643", "Before": "NetworkManager-wait-online.service network.service shutdown.target network.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "BusName": "org.freedesktop.NetworkManager", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "537212130", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2019-02-27 17:30:56 UTC", "ConditionTimestampMonotonic": "34038643", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/NetworkManager.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Network Manager", "DevicePolicy": "auto", "Documentation": "man:NetworkManager(8)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "1242", "ExecMainStartTimestamp": "Wed 2019-02-27 17:30:56 UTC", 
"ExecMainStartTimestampMonotonic": "34039089", "ExecMainStatus": "0", "ExecReload": "{ path=/usr/bin/dbus-send ; argv[]=/usr/bin/dbus-send --print-reply --system --type=method_call --dest=org.freedesktop.NetworkManager /org/freedesktop/NetworkManager org.freedesktop.NetworkManager.Reload uint32:0 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/NetworkManager ; argv[]=/usr/sbin/NetworkManager --no-daemon ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/NetworkManager.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "NetworkManager.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2019-02-27 17:30:56 UTC", "InactiveEnterTimestampMonotonic": "34035564", "InactiveExitTimestamp": "Wed 2019-02-27 17:30:56 UTC", "InactiveExitTimestampMonotonic": "34039551", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "31777", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "31777", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "1242", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "NetworkManager.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", 
"OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "read-only", "ProtectSystem": "yes", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "NetworkManager-wait-online.service", "Requires": "basic.target", "Restart": "on-failure", "RestartUSec": "100ms","Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "network.target system.slice", "WatchdogTimestamp": "Wed 2019-02-27 17:30:56 UTC", "WatchdogTimestampMonotonic": "34068158", "WatchdogUSec": "0"}} Wed Feb 27 17:42:53 UTC 2019 - NetworkManager configuration complete Wed Feb 27 17:42:53 UTC 2019 - Running Prerequisites via Ansible Playbook PLAY [Fail openshift_kubelet_name_override for new hosts] ********************** TASK [Gathering Facts] ********************************************************* ok: [williamcluster-infra-0] ok: [williamcluster-node-0] ok: [williamcluster-master-0] TASK [Fail when openshift_kubelet_name_override is defined] 
******************** PLAY [Initialization Checkpoint Start] ***************************************** TASK [Set install initialization 'In Progress'] ******************************** ok: [williamcluster-master-0] PLAY [Populate config host groups] ********************************************* TASK [Load group name mapping variables] *************************************** ok: [localhost] TASK [Evaluate groups - g_nfs_hosts is single host] **************************** TASK [Evaluate oo_all_hosts] *************************************************** ok: [localhost] => (item=williamcluster-master-0) ok: [localhost] => (item=williamcluster-infra-0) ok: [localhost] => (item=williamcluster-node-0) TASK [Evaluate oo_masters] ***************************************************** ok: [localhost] => (item=williamcluster-master-0) TASK [Evaluate oo_first_master] ************************************************ ok: [localhost] TASK [Evaluate oo_new_etcd_to_config] ****************************************** TASK [Evaluate oo_masters_to_config] ******************************************* ok: [localhost] => (item=williamcluster-master-0) TASK [Evaluate oo_etcd_to_config] ********************************************** ok: [localhost] => (item=williamcluster-master-0) TASK [Evaluate oo_first_etcd] ************************************************** ok: [localhost] TASK [Evaluate oo_etcd_hosts_to_upgrade] *************************************** ok: [localhost] => (item=williamcluster-master-0) TASK [Evaluate oo_etcd_hosts_to_backup] **************************************** ok: [localhost] => (item=williamcluster-master-0) TASK [Evaluate oo_nodes_to_config] ********************************************* ok: [localhost] => (item=williamcluster-master-0) ok: [localhost] => (item=williamcluster-infra-0) ok: [localhost] => (item=williamcluster-node-0) TASK [Evaluate oo_lb_to_config] ************************************************ TASK [Evaluate oo_nfs_to_config] 
*********************************************** TASK [Evaluate oo_glusterfs_to_config] ***************************************** TASK [Evaluate oo_etcd_to_migrate] ********************************************* ok: [localhost] => (item=williamcluster-master-0) PLAY [Ensure that all non-node hosts are accessible] *************************** TASK [Gathering Facts] ********************************************************* ok: [williamcluster-master-0] PLAY [Initialize basic host facts] ********************************************* TASK [Gathering Facts] ********************************************************* ok: [williamcluster-master-0] ok: [williamcluster-node-0] ok: [williamcluster-infra-0] TASK [openshift_sanitize_inventory : include_tasks] **************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations.yml for williamcluster-master-0, williamcluster-infra-0, williamcluster-node-0 TASK [openshift_sanitize_inventory : Check for usage of deprecated variables] *** ok: [williamcluster-master-0] ok: [williamcluster-infra-0] ok: [williamcluster-node-0] TASK [openshift_sanitize_inventory : debug] ************************************ TASK [openshift_sanitize_inventory : set_stats] ******************************** TASK [openshift_sanitize_inventory : set_fact] ********************************* ok: [williamcluster-master-0] ok: [williamcluster-infra-0] ok: [williamcluster-node-0] TASK [openshift_sanitize_inventory : Standardize on latest variable names] ***** ok: [williamcluster-master-0] ok: [williamcluster-infra-0] ok: [williamcluster-node-0] TASK [openshift_sanitize_inventory : Normalize openshift_release] ************** ok: [williamcluster-master-0] ok: [williamcluster-infra-0] ok: [williamcluster-node-0] TASK [openshift_sanitize_inventory : Abort when openshift_release is invalid] *** TASK [openshift_sanitize_inventory : include_tasks] **************************** included: 
/usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml for williamcluster-master-0, williamcluster-infra-0, williamcluster-node-0 TASK [openshift_sanitize_inventory : set_fact] ********************************* TASK [openshift_sanitize_inventory : Ensure that dynamic provisioning is set if using dynamic storage] *** TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] *** TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] *** TASK [openshift_sanitize_inventory : Check for deprecated prometheus/grafana install] *** TASK [openshift_sanitize_inventory : Ensure clusterid is set along with the cloudprovider] *** TASK [openshift_sanitize_inventory : Ensure ansible_service_broker_remove and ansible_service_broker_install are mutually exclusive] *** TASK [openshift_sanitize_inventory : Ensure template_service_broker_remove and template_service_broker_install are mutually exclusive] *** TASK [openshift_sanitize_inventory : Ensure that all requires vsphere configuration variables are set] *** TASK [openshift_sanitize_inventory : ensure provider configuration variables are defined] *** TASK [openshift_sanitize_inventory : Ensure removed web console extension variables are not set] *** TASK [openshift_sanitize_inventory : Ensure that web console port matches API server port] *** TASK [openshift_sanitize_inventory : At least one master is schedulable] ******* TASK [Detecting Operating System from ostree_booted] *************************** ok: [williamcluster-node-0] ok: [williamcluster-infra-0] ok: [williamcluster-master-0] TASK [set openshift_deployment_type if unset] ********************************** ok: [williamcluster-master-0] ok: [williamcluster-infra-0] ok: [williamcluster-node-0] TASK [initialize_facts set fact openshift_is_atomic] *************************** ok: [williamcluster-master-0] ok: [williamcluster-infra-0] 
ok: [williamcluster-node-0] TASK [Determine Atomic Host Docker Version] ************************************ TASK [assert atomic host docker version is 1.12 or later] ********************** PLAY [Retrieve existing master configs and validate] *************************** TASK [openshift_control_plane : stat] ****************************************** ok: [williamcluster-master-0] TASK [openshift_control_plane : slurp] ***************************************** TASK [openshift_control_plane : set_fact] ************************************** TASK [openshift_control_plane : Check for file paths outside of /etc/origin/master in master's config] *** TASK [openshift_control_plane : set_fact] ************************************** TASK [set_fact] **************************************************************** TASK [set_fact] **************************************************************** PLAY [Initialize special first-master variables] ******************************* TASK [Gathering Facts] ********************************************************* ok: [williamcluster-master-0] TASK [set_fact] **************************************************************** TASK [set_fact] **************************************************************** ok: [williamcluster-master-0] PLAY [Disable web console if required] ***************************************** TASK [set_fact] **************************************************************** PLAY [Setup yum repositories for all hosts] ************************************ TASK [rhel_subscribe : fail] *************************************************** TASK [rhel_subscribe : Install Red Hat Subscription manager] ******************* TASK [rhel_subscribe : Is host already registered?] 
**************************** TASK [rhel_subscribe : Register host using user/password] ********************** TASK [rhel_subscribe : Register host using activation key] ********************* TASK [rhel_subscribe : Determine if OpenShift Pool Already Attached] *********** TASK [rhel_subscribe : Attach to OpenShift Pool] ******************************* TASK [rhel_subscribe : Satellite preparation] ********************************** TASK [openshift_repos : Ensure libselinux-python is installed] ***************** ok: [williamcluster-node-0] ok: [williamcluster-infra-0] ok: [williamcluster-master-0] TASK [openshift_repos : Remove openshift_additional.repo file] ***************** ok: [williamcluster-node-0] ok: [williamcluster-infra-0] ok: [williamcluster-master-0] TASK [openshift_repos : Create any additional repos that are defined] ********** TASK [openshift_repos : include_tasks] ***************************************** TASK [openshift_repos : include_tasks] ***************************************** TASK [openshift_repos : Ensure clean repo cache in the event repos have been changed manually] *** changed: [williamcluster-master-0] => { "msg": "First run of openshift_repos" } changed: [williamcluster-infra-0] => { "msg": "First run of openshift_repos" } changed: [williamcluster-node-0] => { "msg": "First run of openshift_repos" } TASK [openshift_repos : Record that openshift_repos already ran] *************** ok: [williamcluster-master-0] ok: [williamcluster-infra-0] ok: [williamcluster-node-0] RUNNING HANDLER [openshift_repos : refresh cache] ****************************** changed: [williamcluster-infra-0] changed: [williamcluster-node-0] changed: [williamcluster-master-0] PLAY [Install packages necessary for installer] ******************************** TASK [Gathering Facts] ********************************************************* ok: [williamcluster-master-0] ok: [williamcluster-infra-0] ok: [williamcluster-node-0] TASK [Determine if chrony is installed] 
**************************************** changed: [williamcluster-master-0] changed: [williamcluster-infra-0] changed: [williamcluster-node-0] TASK [Install ntp package] ***************************************************** TASK [Start and enable ntpd/chronyd] ******************************************* changed: [williamcluster-master-0] changed: [williamcluster-infra-0] changed: [williamcluster-node-0] TASK [Ensure openshift-ansible installer package deps are installed] *********** changed: [williamcluster-node-0] changed: [williamcluster-master-0] changed: [williamcluster-infra-0] PLAY [Initialize cluster facts] ************************************************ TASK [Gathering Facts] ********************************************************* ok: [williamcluster-master-0] ok: [williamcluster-infra-0] ok: [williamcluster-node-0] TASK [get openshift_current_version] ******************************************* ok: [williamcluster-node-0] ok: [williamcluster-infra-0] ok: [williamcluster-master-0] TASK [set_fact openshift_portal_net if present on masters] ********************* TASK [Gather Cluster facts] **************************************************** changed: [williamcluster-node-0] changed: [williamcluster-infra-0] changed: [williamcluster-master-0] TASK [Set fact of no_proxy_internal_hostnames] ********************************* TASK [Initialize openshift.node.sdn_mtu] *************************************** changed: [williamcluster-master-0] changed: [williamcluster-infra-0] changed: [williamcluster-node-0] TASK [set_fact l_kubelet_node_name] ******************************************** ok: [williamcluster-master-0] ok: [williamcluster-infra-0] ok: [williamcluster-node-0] PLAY [Initialize etcd host variables] ****************************************** TASK [Gathering Facts] ********************************************************* ok: [williamcluster-master-0] TASK [set_fact] **************************************************************** ok: 
[williamcluster-master-0] TASK [set_fact] **************************************************************** ok: [williamcluster-master-0] PLAY [Determine openshift_version to configure on first master] **************** TASK [Gathering Facts] ********************************************************* ok: [williamcluster-master-0] TASK [include_role : openshift_version] **************************************** TASK [openshift_version : Use openshift_current_version fact as version to configure if already installed] *** TASK [openshift_version : Set openshift_version to openshift_release if undefined] *** ok: [williamcluster-master-0] TASK [openshift_version : debug] *********************************************** ok: [williamcluster-master-0] => { "msg": "openshift_pkg_version was not defined. Falling back to -3.11" } TASK [openshift_version : set_fact] ******************************************** ok: [williamcluster-master-0] TASK [openshift_version : debug] *********************************************** ok: [williamcluster-master-0] => { "msg": "openshift_image_tag was not defined. 
Falling back to v3.11" }

TASK [openshift_version : set_fact] ********************************************
ok: [williamcluster-master-0]

TASK [openshift_version : assert openshift_release in openshift_image_tag] *****
ok: [williamcluster-master-0] => { "changed": false, "msg": "All assertions passed" }

TASK [openshift_version : assert openshift_release in openshift_pkg_version] ***
ok: [williamcluster-master-0] => { "changed": false, "msg": "All assertions passed" }

TASK [openshift_version : debug] ***********************************************
ok: [williamcluster-master-0] => { "openshift_release": "3.11" }

TASK [openshift_version : debug] ***********************************************
ok: [williamcluster-master-0] => { "openshift_image_tag": "v3.11" }

TASK [openshift_version : debug] ***********************************************
ok: [williamcluster-master-0] => { "openshift_pkg_version": "-3.11*" }

TASK [openshift_version : debug] ***********************************************
ok: [williamcluster-master-0] => { "openshift_version": "3.11" }

PLAY [Set openshift_version for etcd, node, and master hosts] ******************

TASK [Gathering Facts] *********************************************************
ok: [williamcluster-infra-0]
ok: [williamcluster-node-0]

TASK [set_fact] ****************************************************************
ok: [williamcluster-infra-0]
ok: [williamcluster-node-0]

PLAY [Verify Requirements] *****************************************************

TASK [Gathering Facts] *********************************************************
ok: [williamcluster-master-0]

TASK [Run variable sanity checks] **********************************************
fatal: [williamcluster-master-0]: FAILED!
=> {"msg": "last_checked_host: williamcluster-master-0, last_checked_var: openshift_master_identity_providers;Found removed variables: openshift_hostname is replaced byRemoved: See documentation; "} PLAY RECAP ********************************************************************* localhost : ok=11 changed=0 unreachable=0 failed=0 williamcluster-infra-0 : ok=27 changed=7 unreachable=0 failed=0 williamcluster-master-0 : ok=46 changed=7 unreachable=0 failed=1 williamcluster-node-0 : ok=27 changed=7 unreachable=0 failed=0 INSTALLER STATUS *************************************************************** Initialization : In Progress (0:02:23)
Dupe of #11097
Hello Vadim,
Thanks for pointing me to this issue. I still don't understand what I did wrong here, though. Could you be a bit more specific?
Thanks,
William
https://docs.okd.io/3.10/upgrading/automated_upgrades.html#upgrades-updating-host-name-parameters
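In practical terms, the linked page means the removed `openshift_hostname` variable has to be dropped from the Ansible inventory. A minimal sketch of the change (the inventory path, host name, and node group value below are illustrative, taken from this thread's deployment; check the generated inventory from the Azure template for the real entries):

```ini
; /etc/ansible/hosts (sketch -- adapt to your generated inventory)
[masters]
williamcluster-master-0

[nodes]
; Before (fails the 3.11 variable sanity check):
;   williamcluster-master-0 openshift_hostname=williamcluster-master-0 ...
; After: remove openshift_hostname entirely. Per the docs, set
; openshift_kubelet_name_override ONLY when upgrading an existing cluster
; whose node names must be preserved -- never on a fresh install.
williamcluster-master-0 openshift_node_group_name='node-config-master'
```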
Thanks Vadim,
I will check it out.
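Before re-running the deployment, it can help to confirm the inventory no longer sets the removed variable. A quick hedged sketch (the variable list and the throwaway inventory path are assumptions for illustration, not part of openshift-ansible):

```shell
#!/bin/sh
# Sketch: report lines in an inventory file that still set variables
# removed in openshift-ansible 3.11 (here just openshift_hostname).
check_removed_vars() {
    # grep exits 0 and prints matching lines with line numbers if found
    grep -n 'openshift_hostname' "$1" || echo "no removed variables found"
}

# Demonstrate against a throwaway example inventory:
cat > /tmp/hosts.example <<'EOF'
[masters]
williamcluster-master-0 openshift_hostname=williamcluster-master-0
EOF
check_removed_vars /tmp/hosts.example
```

Point `check_removed_vars` at the real inventory the Azure template generates; any line it prints must be edited before the sanity check will pass.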