openshift / openshift-ansible

Install and config an OpenShift 3.x cluster
https://try.openshift.com
Apache License 2.0

OSE 3.10 - Service Catalog install failed with multi-master #10959

Closed dalisani closed 5 years ago

dalisani commented 5 years ago

Description

On a multi-master install, the OpenShift 3.10 installation has failed at the Service Catalog install step about a dozen times now. The last change I made was to resolve the master hostname (osint.sql.hpe.com) to the load balancer (10.210.44.34).

DNS entries:

    c1-master1  Host(A)  10.210.44.31
    ospub       Host(A)  10.210.44.31
    c1-master2  Host(A)  10.210.44.32
    c1-master3  Host(A)  10.210.44.33
    c1-lb       Host(A)  10.210.44.34
    osint       Host(A)  10.210.44.34
    c1-node5    Host(A)  10.210.44.35
    c1-node6    Host(A)  10.210.44.36
    c1-node7    Host(A)  10.210.44.37
    c1-node8    Host(A)  10.210.44.38
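Before re-running the install, it is worth sanity-checking that each of these records resolves the same way from every node. A minimal sketch (the hostnames and IPs below mirror the entries above; adjust for your zone and run on each host):

```shell
#!/bin/sh
# Check that each hostname resolves to the expected A record.
# Hostname/IP pairs mirror the DNS entries listed above.
check() {
  host=$1; expected=$2
  # getent consults the same resolver path the cluster uses (nsswitch).
  actual=$(getent hosts "$host" | awk '{print $1; exit}')
  if [ "$actual" = "$expected" ]; then
    echo "OK   $host -> $actual"
  else
    echo "FAIL $host -> ${actual:-<no record>} (expected $expected)"
  fi
}

check c1-master1 10.210.44.31
check c1-lb      10.210.44.34
check osint      10.210.44.34
```

Running this on all masters and nodes quickly shows whether the osint-to-load-balancer change actually took effect everywhere.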

Version
Ansible 2.4.6.0
openshift-ansible-3.10.83-1.git.0.12699eb.el7.noarch
Steps To Reproduce
  1. Node setup: 3 masters, 3 co-located etcd, 1 load balancer, 2 nodes, 2 infra nodes -- inventory file being used
Expected Results

The OpenShift installation should complete successfully.
Observed Results

INSTALLER STATUS *****
Initialization              : Complete (0:00:17)
Health Check                : Complete (0:00:37)
Node Bootstrap Preparation  : Complete (0:06:55)
etcd Install                : Complete (0:00:30)
Load Balancer Install       : Complete (0:00:46)
Master Install              : Complete (0:09:08)
Master Additional Install   : Complete (0:01:16)
Node Join                   : Complete (0:00:31)
Hosted Install              : Complete (0:00:24)
Web Console Install         : Complete (0:00:25)
Service Catalog Install     : In Progress (0:11:27)

Failure summary:

  1. Hosts:    c1-master1.sql.hpe.com
     Play:     Service Catalog
     Task:     Report errors
     Message:  Catalog install failed.
    
    Install debug output:
    fatal: [c1-master1.sql.hpe.com]: FAILED! => {"attempts": 60, "changed": false, "cmd": ["curl", "-k", "https://apiserver.kube-service-catalog.svc/healthz"], "delta": "0:00:01.013341", "end": "2019-01-04 09:39:55.707792", "failed": true, "msg": "non-zero return code", "rc": 7, "start": "2019-01-04 09:39:54.694451", "stderr": "  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n\r  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0curl: (7) Failed connect to apiserver.kube-service-catalog.svc:443; Connection refused", "stderr_lines": ["  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current", "                                 Dload  Upload   Total   Spent    Left  Speed", "", "  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0", "  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0curl: (7) Failed connect to apiserver.kube-service-catalog.svc:443; Connection refused"], "stdout": "", "stdout_lines": []}
    ...ignoring

TASK [openshift_service_catalog : debug] **** Friday 04 January 2019 09:39:55 -0600 (0:00:00.236) 0:11:37.184 **** ok: [c1-master1.sql.hpe.com] => { "msg": [ "In project kube-service-catalog on server https://osint.sql.hpe.com:443", "", "https://apiserver-kube-service-catalog.sql.hpe.com (passthrough) to pod port secure (svc/apiserver)", " daemonset/apiserver manages registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83", " generation #2 running for 31 minutes - 0/3 pods growing to 3", " pod/apiserver-849qf runs registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83", " pod/apiserver-jc2s8 runs registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83", " pod/apiserver-qgqtv runs registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83", "", "svc/controller-manager - 172.30.230.109:443 -> 6443", " daemonset/controller-manager manages registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83", " generation #2 running for 31 minutes - 0/3 pods growing to 3", " pod/controller-manager-hb2mx runs registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83", " pod/controller-manager-lgt6t runs registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83", " pod/controller-manager-j8tvz runs registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83", "", "Errors:", " pod/apiserver-qgqtv is crash-looping", " pod/controller-manager-j8tvz is crash-looping", " * pod/controller-manager-lgt6t is crash-looping", "", "3 errors, 3 warnings identified, use 'oc status -v' to see details." ] } TASK [openshift_service_catalog : Get pods in the kube-service-catalog namespace] *** Friday 04 January 2019 09:39:56 -0600 (0:00:00.036) 0:11:37.220 **** changed: [c1-master1.sql.hpe.com]

TASK [openshift_service_catalog : debug] **** Friday 04 January 2019 09:39:56 -0600 (0:00:00.274) 0:11:37.495 **** ok: [c1-master1.sql.hpe.com] => { "msg": [ "NAME READY STATUS RESTARTS AGE IP NODE", "apiserver-849qf 0/1 Running 7 11m 10.128.0.6 c1-master1", "apiserver-jc2s8 0/1 Running 7 11m 10.130.0.18 c1-master2", "apiserver-qgqtv 0/1 CrashLoopBackOff 7 11m 10.129.0.6 c1-master3", "controller-manager-hb2mx 0/1 Running 7 11m 10.129.0.5 c1-master3", "controller-manager-j8tvz 0/1 CrashLoopBackOff 7 11m 10.128.0.7 c1-master1", "controller-manager-lgt6t 0/1 CrashLoopBackOff 7 11m 10.130.0.19 c1-master2" ] }
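The crash-looping pods can be pulled out of a table like the one above with a one-liner (a sketch; `pods.txt` is a hypothetical file holding saved `oc get pods -o wide` output):

```shell
# Print the name and node of every pod stuck in CrashLoopBackOff.
# Column 3 is STATUS and the last column ($NF) is NODE in
# `oc get pods -o wide` output saved to pods.txt (hypothetical filename).
awk '$3 == "CrashLoopBackOff" { print $1, "on", $NF }' pods.txt
```

From there, `oc logs <pod> -n kube-service-catalog` and `oc describe pod <pod> -n kube-service-catalog` on each listed pod are the next step, since every master has at least one crash-looping catalog pod here.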

TASK [openshift_service_catalog : Get events in the kube-service-catalog namespace] ***** Friday 04 January 2019 09:39:56 -0600 (0:00:00.034) 0:11:37.529 **** changed: [c1-master1.sql.hpe.com]

TASK [openshift_service_catalog : debug] **** Friday 04 January 2019 09:39:56 -0600 (0:00:00.291) 0:11:37.821 **** ok: [c1-master1.sql.hpe.com] => { "msg": [ "LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE", "31m 31m 1 apiserver-49wx5.1576adb04da257b1 Pod spec.containers{apiserver} Normal Pulling kubelet, c1-master1 pulling image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\"", "31m 31m 1 apiserver-49wx5.1576adb20a79d7c5 Pod spec.containers{apiserver} Normal Pulled kubelet, c1-master1 Successfully pulled image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\"", "29m 31m 3 apiserver-49wx5.1576adb20c02881b Pod spec.containers{apiserver} Normal Created kubelet, c1-master1 Created container", "30m 31m 2 apiserver-49wx5.1576adb2125fe374 Pod spec.containers{apiserver} Normal Started kubelet, c1-master1 Started container", "26m 30m 20 apiserver-49wx5.1576adba5c020aab Pod spec.containers{apiserver} Warning Unhealthy kubelet, c1-master1 Readiness probe failed: HTTP probe failed with statuscode: 500", "29m 30m 5 apiserver-49wx5.1576adbad38d91cb Pod spec.containers{apiserver} Warning Unhealthy kubelet, c1-master1 Liveness probe failed: HTTP probe failed with statuscode: 500", "29m 30m 2 apiserver-49wx5.1576adbf93c90219 Pod spec.containers{apiserver} Normal Killing kubelet, c1-master1 Killing container with id docker://apiserver:Container failed liveness probe.. 
Container will be killed and recreated.", "29m 30m 2 apiserver-49wx5.1576adbf93e6a61c Pod spec.containers{apiserver} Normal Pulled kubelet, c1-master1 Container image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\" already present on machine", "11m 26m 59 apiserver-49wx5.1576adf9adbacdaa Pod spec.containers{apiserver} Warning BackOff kubelet, c1-master1 Back-off restarting failed container", "10m 10m 1 apiserver-849qf.1576aed1db434674 Pod spec.containers{apiserver} Normal Pulling kubelet, c1-master1 pulling image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\"", "10m 10m 1 apiserver-849qf.1576aed21cda28e2 Pod spec.containers{apiserver} Normal Pulled kubelet, c1-master1 Successfully pulled image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\"", "9m 10m 2 apiserver-849qf.1576aed21edf689e Pod spec.containers{apiserver} Normal Created kubelet, c1-master1 Created container", "9m 10m 2 apiserver-849qf.1576aed2255b0b57 Pod spec.containers{apiserver} Normal Started kubelet, c1-master1 Started container", "9m 10m 9 apiserver-849qf.1576aeda4a13e354 Pod spec.containers{apiserver} Warning Unhealthy kubelet, c1-master1 Readiness probe failed: HTTP probe failed with statuscode: 500", "5m 10m 12 apiserver-849qf.1576aedb38e1373d Pod spec.containers{apiserver} Warning Unhealthy kubelet, c1-master1 Liveness probe failed: HTTP probe failed with statuscode: 500", "9m 9m 1 apiserver-849qf.1576aee01fbae937 Pod spec.containers{apiserver} Warning Unhealthy kubelet, c1-master1 Readiness probe failed: Get https://10.128.0.6:6443/healthz: dial tcp 10.128.0.6:6443: getsockopt: connection refused", "9m 9m 2 apiserver-849qf.1576aee0222c3b75 Pod spec.containers{apiserver} Normal Killing kubelet, c1-master1 Killing container with id docker://apiserver:Container failed liveness probe.. 
Container will be killed and recreated.", "9m 9m 2 apiserver-849qf.1576aee0225490eb Pod spec.containers{apiserver} Normal Pulled kubelet, c1-master1 Container image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\" already present on machine", "48s 5m 22 apiserver-849qf.1576af1a5d7c8ca9 Pod spec.containers{apiserver} Warning BackOff kubelet, c1-master1 Back-off restarting failed container", "31m 31m 1 apiserver-h4kx8.1576adb04b0a850e Pod spec.containers{apiserver} Normal Pulling kubelet, c1-master2 pulling image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\"", "31m 31m 1 apiserver-h4kx8.1576adb1e62a1818 Pod spec.containers{apiserver} Normal Pulled kubelet, c1-master2 Successfully pulled image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\"", "29m 31m 3 apiserver-h4kx8.1576adb1e854f5e4 Pod spec.containers{apiserver} Normal Created kubelet, c1-master2 Created container", "30m 31m 2 apiserver-h4kx8.1576adb1eec09769 Pod spec.containers{apiserver} Normal Started kubelet, c1-master2 Started container", "29m 31m 5 apiserver-h4kx8.1576adb94ba0c594 Pod spec.containers{apiserver} Warning Unhealthy kubelet, c1-master2 Liveness probe failed: HTTP probe failed with statuscode: 500", "26m 31m 19 apiserver-h4kx8.1576adb9fed21ef0 Pod spec.containers{apiserver} Warning Unhealthy kubelet, c1-master2 Readiness probe failed: HTTP probe failed with statuscode: 500", "29m 30m 2 apiserver-h4kx8.1576adbeaa1f0d70 Pod spec.containers{apiserver} Warning Unhealthy kubelet, c1-master2 Readiness probe failed: Get https://10.130.0.16:6443/healthz: http2: server sent GOAWAY and closed the connection; LastStreamID=1, ErrCode=NO_ERROR, debug=\"\"", "29m 30m 2 apiserver-h4kx8.1576adbeaca8778d Pod spec.containers{apiserver} Normal Killing kubelet, c1-master2 Killing container with id docker://apiserver:Container failed liveness probe.. 
Container will be killed and recreated.", "29m 30m 2 apiserver-h4kx8.1576adbeacc6bd48 Pod spec.containers{apiserver} Normal Pulled kubelet, c1-master2 Container image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\" already present on machine", "11m 26m 58 apiserver-h4kx8.1576adf8b7bfb82d Pod spec.containers{apiserver} Warning BackOff kubelet, c1-master2 Back-off restarting failed container", "10m 10m 1 apiserver-jc2s8.1576aed1e119f5a5 Pod spec.containers{apiserver} Normal Pulling kubelet, c1-master2 pulling image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\"", "10m 10m 1 apiserver-jc2s8.1576aed39328074c Pod spec.containers{apiserver} Normal Pulled kubelet, c1-master2 Successfully pulled image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\"", "9m 10m 2 apiserver-jc2s8.1576aed394c4bf33 Pod spec.containers{apiserver} Normal Created kubelet, c1-master2 Created container", "9m 10m 2 apiserver-jc2s8.1576aed39b3f674c Pod spec.containers{apiserver} Normal Started kubelet, c1-master2 Started container", "5m 10m 23 apiserver-jc2s8.1576aedb65c437b0 Pod spec.containers{apiserver} Warning Unhealthy kubelet, c1-master2 Readiness probe failed: HTTP probe failed with statuscode: 500", "9m 10m 5 apiserver-jc2s8.1576aedcda8b5fef Pod spec.containers{apiserver} Warning Unhealthy kubelet, c1-master2 Liveness probe failed: HTTP probe failed with statuscode: 500", "9m 9m 2 apiserver-jc2s8.1576aee197f6fb83 Pod spec.containers{apiserver} Normal Killing kubelet, c1-master2 Killing container with id docker://apiserver:Container failed liveness probe.. 
Container will be killed and recreated.", "9m 9m 2 apiserver-jc2s8.1576aee1981a48fc Pod spec.containers{apiserver} Normal Pulled kubelet, c1-master2 Container image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\" already present on machine", "57s 5m 21 apiserver-jc2s8.1576af1bcd36baca Pod spec.containers{apiserver} Warning BackOff kubelet, c1-master2 Back-off restarting failed container", "26m 26m 1 apiserver-q74n2.1576adf43c1da30d Pod spec.containers{apiserver} Normal Pulling kubelet, c1-master3 pulling image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\"", "26m 26m 1 apiserver-q74n2.1576adf5ce41f8fb Pod spec.containers{apiserver} Normal Pulled kubelet, c1-master3 Successfully pulled image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\"", "24m 26m 3 apiserver-q74n2.1576adf5d04b97b2 Pod spec.containers{apiserver} Normal Created kubelet, c1-master3 Created container", "25m 26m 2 apiserver-q74n2.1576adf5d6a6dabc Pod spec.containers{apiserver} Normal Started kubelet, c1-master3 Started container", "25m 26m 9 apiserver-q74n2.1576adfd6e13fa31 Pod spec.containers{apiserver} Warning Unhealthy kubelet, c1-master3 Readiness probe failed: HTTP probe failed with statuscode: 500", "21m 26m 12 apiserver-q74n2.1576adfda9fe251f Pod spec.containers{apiserver} Warning Unhealthy kubelet, c1-master3 Liveness probe failed: HTTP probe failed with statuscode: 500", "24m 25m 2 apiserver-q74n2.1576ae02698905fa Pod spec.containers{apiserver} Normal Killing kubelet, c1-master3 Killing container with id docker://apiserver:Container failed liveness probe.. 
Container will be killed and recreated.", "16m 25m 7 apiserver-q74n2.1576ae0269a8a3ab Pod spec.containers{apiserver} Normal Pulled kubelet, c1-master3 Container image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\" already present on machine", "6m 21m 59 apiserver-q74n2.1576ae3cc2a8a350 Pod spec.containers{apiserver} Warning BackOff kubelet, c1-master3 Back-off restarting failed container", "6m 6m 1 apiserver-qgqtv.1576af15f89c0485 Pod spec.containers{apiserver} Normal Pulling kubelet, c1-master3 pulling image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\"", "5m 5m 1 apiserver-qgqtv.1576af179501dd31 Pod spec.containers{apiserver} Normal Pulled kubelet, c1-master3 Successfully pulled image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\"", "4m 5m 3 apiserver-qgqtv.1576af17969b809b Pod spec.containers{apiserver} Normal Created kubelet, c1-master3 Created container", "5m 5m 2 apiserver-qgqtv.1576af179ce74369 Pod spec.containers{apiserver} Normal Started kubelet, c1-master3 Started container", "1m 5m 12 apiserver-qgqtv.1576af1fe5164d8c Pod spec.containers{apiserver} Warning Unhealthy kubelet, c1-master3 Liveness probe failed: HTTP probe failed with statuscode: 500", "4m 5m 8 apiserver-qgqtv.1576af20209e6f12 Pod spec.containers{apiserver} Warning Unhealthy kubelet, c1-master3 Readiness probe failed: HTTP probe failed with statuscode: 500", "4m 5m 2 apiserver-qgqtv.1576af24ce6ee124 Pod spec.containers{apiserver} Normal Killing kubelet, c1-master3 Killing container with id docker://apiserver:Container failed liveness probe.. 
Container will be killed and recreated.", " 5m 7 apiserver-qgqtv.1576af24ce939038 Pod spec.containers{apiserver} Normal Pulled kubelet, c1-master3 Container image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\" already present on machine", "4m 4m 1 apiserver-qgqtv.1576af3074294456 Pod spec.containers{apiserver} Warning Unhealthy kubelet, c1-master3 Readiness probe failed: Get https://10.129.0.6:6443/healthz: dial tcp 10.129.0.6:6443: getsockopt: connection refused", "31m 31m 1 apiserver.1576adafc382e973 DaemonSet Normal SuccessfulCreate daemonset-controller Created pod: apiserver-h4kx8", "31m 31m 1 apiserver.1576adafc3d20890 DaemonSet Normal SuccessfulCreate daemonset-controller Created pod: apiserver-q74n2", "31m 31m 1 apiserver.1576adafc3eca4a6 DaemonSet Normal SuccessfulCreate daemonset-controller Created pod: apiserver-49wx5", "11m 11m 1 apiserver.1576aece99db8963 DaemonSet Normal SuccessfulDelete daemonset-controller Deleted pod: apiserver-q74n2", "11m 11m 1 apiserver.1576aece99e90f61 DaemonSet Normal SuccessfulDelete daemonset-controller Deleted pod: apiserver-49wx5", "11m 11m 1 apiserver.1576aece99eacf04 DaemonSet Normal SuccessfulDelete daemonset-controller Deleted pod: apiserver-h4kx8", "11m 11m 1 apiserver.1576aed151be341c DaemonSet Normal SuccessfulCreate daemonset-controller Created pod: apiserver-jc2s8", "11m 11m 1 apiserver.1576aed1520d3a3f DaemonSet Normal SuccessfulCreate daemonset-controller Created pod: apiserver-qgqtv", "11m 11m 1 apiserver.1576aed15212ae68 DaemonSet Normal SuccessfulCreate daemonset-controller Created pod: apiserver-849qf", "31m 31m 2 controller-manager-6j9s4.1576adb068febc9c Pod Warning FailedMount kubelet, c1-master1 MountVolume.SetUp failed for volume \"service-catalog-ssl\" : secrets \"controllermanager-ssl\" not found", "31m 31m 1 controller-manager-6j9s4.1576adb12f09f67a Pod spec.containers{controller-manager} Normal Pulling kubelet, c1-master1 pulling image 
\"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\"", "31m 31m 1 controller-manager-6j9s4.1576adb238597a11 Pod spec.containers{controller-manager} Normal Pulled kubelet, c1-master1 Successfully pulled image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\"", "30m 31m 2 controller-manager-6j9s4.1576adb239d61679 Pod spec.containers{controller-manager} Normal Created kubelet, c1-master1 Created container", "30m 31m 2 controller-manager-6j9s4.1576adb2405b5f00 Pod spec.containers{controller-manager} Normal Started kubelet, c1-master1 Started container", "26m 31m 24 controller-manager-6j9s4.1576adb968c08481 Pod spec.containers{controller-manager} Warning Unhealthy kubelet, c1-master1 Readiness probe failed: HTTP probe failed with statuscode: 500", "29m 30m 5 controller-manager-6j9s4.1576adbb33f76129 Pod spec.containers{controller-manager} Warning Unhealthy kubelet, c1-master1 Liveness probe failed: HTTP probe failed with statuscode: 500", "30m 30m 1 controller-manager-6j9s4.1576adbff1539ad5 Pod spec.containers{controller-manager} Normal Killing kubelet, c1-master1 Killing container with id docker://controller-manager:Container failed liveness probe.. 
Container will be killed and recreated.", "30m 30m 1 controller-manager-6j9s4.1576adbff174718d Pod spec.containers{controller-manager} Normal Pulled kubelet, c1-master1 Container image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\" already present on machine", "11m 26m 58 controller-manager-6j9s4.1576adfa26fefdf1 Pod spec.containers{controller-manager} Warning BackOff kubelet, c1-master1 Back-off restarting failed container", "31m 31m 2 controller-manager-f9x6r.1576adb04e1cad13 Pod Warning FailedMount kubelet, c1-master2 MountVolume.SetUp failed for volume \"service-catalog-ssl\" : secrets \"controllermanager-ssl\" not found", "31m 31m 1 controller-manager-f9x6r.1576adb1140ace29 Pod spec.containers{controller-manager} Normal Pulling kubelet, c1-master2 pulling image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\"", "31m 31m 1 controller-manager-f9x6r.1576adb221da68ff Pod spec.containers{controller-manager} Normal Pulled kubelet, c1-master2 Successfully pulled image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\"", "30m 31m 3 controller-manager-f9x6r.1576adb223959028 Pod spec.containers{controller-manager} Normal Created kubelet, c1-master2 Created container", "30m 31m 3 controller-manager-f9x6r.1576adb229fe9766 Pod spec.containers{controller-manager} Normal Started kubelet, c1-master2 Started container", "30m 31m 5 controller-manager-f9x6r.1576adb90744884f Pod spec.containers{controller-manager} Warning Unhealthy kubelet, c1-master2 Liveness probe failed: HTTP probe failed with statuscode: 500", "21m 31m 19 controller-manager-f9x6r.1576adb94265dbd0 Pod spec.containers{controller-manager} Warning Unhealthy kubelet, c1-master2 Readiness probe failed: HTTP probe failed with statuscode: 500", "30m 30m 2 controller-manager-f9x6r.1576adbdc435d424 Pod spec.containers{controller-manager} Normal Killing kubelet, c1-master2 Killing container with id docker://controller-manager:Container failed liveness probe.. 
Container will be killed and recreated.", "30m 30m 2 controller-manager-f9x6r.1576adbdc4579b63 Pod spec.containers{controller-manager} Normal Pulled kubelet, c1-master2 Container image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\" already present on machine", "11m 27m 63 controller-manager-f9x6r.1576adec54e761bc Pod spec.containers{controller-manager} Warning BackOff kubelet, c1-master2 Back-off restarting failed container", "6m 6m 1 controller-manager-hb2mx.1576af15e496129d Pod spec.containers{controller-manager} Normal Pulling kubelet, c1-master3 pulling image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\"", "5m 5m 1 controller-manager-hb2mx.1576af1764b4fe16 Pod spec.containers{controller-manager} Normal Pulled kubelet, c1-master3 Successfully pulled image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\"", "4m 5m 3 controller-manager-hb2mx.1576af1766d4ddc4 Pod spec.containers{controller-manager} Normal Created kubelet, c1-master3 Created container", "4m 5m 3 controller-manager-hb2mx.1576af176d903578 Pod spec.containers{controller-manager} Normal Started kubelet, c1-master3 Started container", "4m 5m 5 controller-manager-hb2mx.1576af1ecd63438d Pod spec.containers{controller-manager} Warning Unhealthy kubelet, c1-master3 Liveness probe failed: HTTP probe failed with statuscode: 500", "1m 5m 23 controller-manager-hb2mx.1576af1f7e4ff5e4 Pod spec.containers{controller-manager} Warning Unhealthy kubelet, c1-master3 Readiness probe failed: HTTP probe failed with statuscode: 500", "4m 5m 2 controller-manager-hb2mx.1576af238ab6a287 Pod spec.containers{controller-manager} Normal Killing kubelet, c1-master3 Killing container with id docker://controller-manager:Container failed liveness probe.. 
Container will be killed and recreated.", "4m 5m 2 controller-manager-hb2mx.1576af238ad7c50f Pod spec.containers{controller-manager} Normal Pulled kubelet, c1-master3 Container image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\" already present on machine", " 57s 22 controller-manager-hb2mx.1576af5dbfcf48fc Pod spec.containers{controller-manager} Warning BackOff kubelet, c1-master3 Back-off restarting failed container", "10m 10m 1 controller-manager-j8tvz.1576aed1f507e917 Pod spec.containers{controller-manager} Normal Pulling kubelet, c1-master1 pulling image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\"", "10m 10m 1 controller-manager-j8tvz.1576aed2563293f3 Pod spec.containers{controller-manager} Normal Pulled kubelet, c1-master1 Successfully pulled image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\"", "9m 10m 3 controller-manager-j8tvz.1576aed257b9e48c Pod spec.containers{controller-manager} Normal Created kubelet, c1-master1 Created container", "10m 10m 2 controller-manager-j8tvz.1576aed25e24a8f8 Pod spec.containers{controller-manager} Normal Started kubelet, c1-master1 Started container", "31s 10m 30 controller-manager-j8tvz.1576aed95f28601a Pod spec.containers{controller-manager} Warning Unhealthy kubelet, c1-master1 Readiness probe failed: HTTP probe failed with statuscode: 500", "9m 10m 5 controller-manager-j8tvz.1576aed9de72d48d Pod spec.containers{controller-manager} Warning Unhealthy kubelet, c1-master1 Liveness probe failed: HTTP probe failed with statuscode: 500", "9m 10m 2 controller-manager-j8tvz.1576aede9bff3150 Pod spec.containers{controller-manager} Normal Killing kubelet, c1-master1 Killing container with id docker://controller-manager:Container failed liveness probe.. 
Container will be killed and recreated.", "9m 10m 2 controller-manager-j8tvz.1576aede9c231012 Pod spec.containers{controller-manager} Normal Pulled kubelet, c1-master1 Container image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\" already present on machine", "10m 10m 1 controller-manager-lgt6t.1576aed1dd0cfec8 Pod spec.containers{controller-manager} Normal Pulling kubelet, c1-master2 pulling image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\"", "10m 10m 1 controller-manager-lgt6t.1576aed34cab9a98 Pod spec.containers{controller-manager} Normal Pulled kubelet, c1-master2 Successfully pulled image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\"", "9m 10m 3 controller-manager-lgt6t.1576aed34e78bc67 Pod spec.containers{controller-manager} Normal Created kubelet, c1-master2 Created container", "9m 10m 2 controller-manager-lgt6t.1576aed354e7b8fa Pod spec.containers{controller-manager} Normal Started kubelet, c1-master2 Started container", "5m 10m 23 controller-manager-lgt6t.1576aedb3172558b Pod spec.containers{controller-manager} Warning Unhealthy kubelet, c1-master2 Readiness probe failed: HTTP probe failed with statuscode: 500", "9m 10m 5 controller-manager-lgt6t.1576aedbaa3193ba Pod spec.containers{controller-manager} Warning Unhealthy kubelet, c1-master2 Liveness probe failed: HTTP probe failed with statuscode: 500", "9m 9m 2 controller-manager-lgt6t.1576aee0673bc8b2 Pod spec.containers{controller-manager} Normal Killing kubelet, c1-master2 Killing container with id docker://controller-manager:Container failed liveness probe.. 
Container will be killed and recreated.", "47s 9m 7 controller-manager-lgt6t.1576aee067674fab Pod spec.containers{controller-manager} Normal Pulled kubelet, c1-master2 Container image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10.83\" already present on machine", "26m 26m 2 controller-manager-th4s7.1576adf434e9b925 Pod Warning FailedMount kubelet, c1-master3 MountVolume.SetUp failed for volume \"service-catalog-ssl\" : secrets \"controllermanager-ssl\" not found", "26m 26m 1 controller-manager-th4s7.1576adf4e87c129d Pod spec.containers{controller-manager} Normal Pulling kubelet, c1-master3 pulling image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\"", "26m 26m 1 controller-manager-th4s7.1576adf5fb022057 Pod spec.containers{controller-manager} Normal Pulled kubelet, c1-master3 Successfully pulled image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\"", "25m 26m 3 controller-manager-th4s7.1576adf5fccf0f8b Pod spec.containers{controller-manager} Normal Created kubelet, c1-master3 Created container", "25m 26m 2 controller-manager-th4s7.1576adf6022b788a Pod spec.containers{controller-manager} Normal Started kubelet, c1-master3 Started container", "16m 26m 21 controller-manager-th4s7.1576adfdfa9fa1d6 Pod spec.containers{controller-manager} Warning Unhealthy kubelet, c1-master3 Readiness probe failed: HTTP probe failed with statuscode: 500", "25m 26m 5 controller-manager-th4s7.1576adfe34d17cc3 Pod spec.containers{controller-manager} Warning Unhealthy kubelet, c1-master3 Liveness probe failed: HTTP probe failed with statuscode: 500", "25m 25m 2 controller-manager-th4s7.1576ae02f2422eba Pod spec.containers{controller-manager} Normal Killing kubelet, c1-master3 Killing container with id docker://controller-manager:Container failed liveness probe.. 
Container will be killed and recreated.", "25m 25m 2 controller-manager-th4s7.1576ae02f25f6d43 Pod spec.containers{controller-manager} Normal Pulled kubelet, c1-master3 Container image \"registry.access.redhat.com/openshift3/ose-service-catalog:v3.10\" already present on machine", "6m 22m 63 controller-manager-th4s7.1576ae31835052fd Pod spec.containers{controller-manager} Warning BackOff kubelet, c1-master3 Back-off restarting failed container", "31m 31m 1 controller-manager.1576adb046fb4674 DaemonSet Normal SuccessfulCreate daemonset-controller Created pod: controller-manager-f9x6r", "31m 31m 1 controller-manager.1576adb0474a85fb DaemonSet Normal SuccessfulCreate daemonset-controller Created pod: controller-manager-6j9s4", "31m 31m 1 controller-manager.1576adb04756e982 DaemonSet Normal SuccessfulCreate daemonset-controller Created pod: controller-manager-th4s7", "11m 11m 1 controller-manager.1576aecf18b5fe96 DaemonSet Normal SuccessfulDelete daemonset-controller Deleted pod: controller-manager-f9x6r", "11m 11m 1 controller-manager.1576aecf18cb3ae1 DaemonSet Normal SuccessfulDelete daemonset-controller Deleted pod: controller-manager-th4s7", "11m 11m 1 controller-manager.1576aecf18d64545 DaemonSet Normal SuccessfulDelete daemonset-controller Deleted pod: controller-manager-6j9s4", "11m 11m 1 controller-manager.1576aed15268b4df DaemonSet Normal SuccessfulCreate daemonset-controller Created pod: controller-manager-lgt6t", "11m 11m 1 controller-manager.1576aed152c6b1d0 DaemonSet Normal SuccessfulCreate daemonset-controller Created pod: controller-manager-hb2mx", "11m 11m 1 controller-manager.1576aed152cb1617 DaemonSet Normal SuccessfulCreate daemonset-controller Created pod: controller-manager-j8tvz", "26m 26m 1 service-catalog-controller-manager.1576adf299a6939e ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-6j9s4-external-service-catalog-controller became leader", "26m 26m 1 service-catalog-controller-manager.1576adf605f5b736 
ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-th4s7-external-service-catalog-controller became leader", "25m 25m 1 service-catalog-controller-manager.1576ae02fcf61860 ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-th4s7-external-service-catalog-controller became leader", "25m 25m 1 service-catalog-controller-manager.1576ae063ffe010b ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-f9x6r-external-service-catalog-controller became leader", "25m 25m 1 service-catalog-controller-manager.1576ae0c4ea9e957 ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-th4s7-external-service-catalog-controller became leader", "24m 24m 1 service-catalog-controller-manager.1576ae1270ff96de ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-6j9s4-external-service-catalog-controller became leader", "24m 24m 1 service-catalog-controller-manager.1576ae159d6ffd4e ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-th4s7-external-service-catalog-controller became leader", "23m 23m 1 service-catalog-controller-manager.1576ae1eed4efe4c ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-th4s7-external-service-catalog-controller became leader", "23m 23m 1 service-catalog-controller-manager.1576ae283f464aa4 ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-th4s7-external-service-catalog-controller became leader", "21m 21m 1 service-catalog-controller-manager.1576ae3812f94c33 ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-f9x6r-external-service-catalog-controller became leader", "21m 21m 1 service-catalog-controller-manager.1576ae44f73a1581 ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-6j9s4-external-service-catalog-controller became 
leader", "16m 16m 1 service-catalog-controller-manager.1576ae8a44cbdfef ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-f9x6r-external-service-catalog-controller became leader", "15m 15m 1 service-catalog-controller-manager.1576ae91aeb9683a ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-f9x6r-external-service-catalog-controller became leader", "14m 14m 1 service-catalog-controller-manager.1576aea138b3f5b3 ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-6j9s4-external-service-catalog-controller became leader", "10m 10m 1 service-catalog-controller-manager.1576aed66e74f18e ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-j8tvz-external-service-catalog-controller became leader", "10m 10m 1 service-catalog-controller-manager.1576aedea70e16eb ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-j8tvz-external-service-catalog-controller became leader", "9m 9m 1 service-catalog-controller-manager.1576aeea4ceca3d5 ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-j8tvz-external-service-catalog-controller became leader", "8m 8m 1 service-catalog-controller-manager.1576aef5f0a6c093 ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-j8tvz-external-service-catalog-controller became leader", "7m 7m 1 service-catalog-controller-manager.1576af0194afdb03 ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-j8tvz-external-service-catalog-controller became leader", "6m 6m 1 service-catalog-controller-manager.1576af0d392f8ad7 ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-j8tvz-external-service-catalog-controller became leader", "4m 4m 1 service-catalog-controller-manager.1576af2b9acf8f36 ConfigMap Normal LeaderElection service-catalog-controller-manager 
controller-manager-j8tvz-external-service-catalog-controller became leader", "1m 1m 1 service-catalog-controller-manager.1576af5cf81f2ebf ConfigMap Normal LeaderElection service-catalog-controller-manager controller-manager-j8tvz-external-service-catalog-controller became leader" ] }

TASK [openshift_service_catalog : Get pod logs] ***** Friday 04 January 2019 09:39:56 -0600 (0:00:00.047) 0:11:37.868 **** changed: [c1-master1.sql.hpe.com]

TASK [openshift_service_catalog : debug] **** Friday 04 January 2019 09:39:56 -0600 (0:00:00.267) 0:11:38.136 **** ok: [c1-master1.sql.hpe.com] => { "msg": [ "I0104 15:39:20.854178 1 feature_gate.go:190] feature gates: map[OriginatingIdentity:true]", "I0104 15:39:20.854274 1 hyperkube.go:192] Service Catalog version v3.10.83 (built 2018-12-01T13:14:54Z)", "W0104 15:39:21.555149 1 util.go:111] OpenAPI spec will not be served", "I0104 15:39:21.556087 1 util.go:181] Admission control plugin names: [NamespaceLifecycle MutatingAdmissionWebhook ValidatingAdmissionWebhook ServicePlanChangeValidator BrokerAuthSarCheck KubernetesNamespaceLifecycle DefaultServicePlan ServiceBindingsLifecycle]", "I0104 15:39:21.556398 1 plugins.go:149] Loaded 8 admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ServicePlanChangeValidator,BrokerAuthSarCheck,KubernetesNamespaceLifecycle,DefaultServicePlan,ServiceBindingsLifecycle.", "I0104 15:39:21.558139 1 storage_factory.go:285] storing {servicecatalog.k8s.io clusterservicebrokers} in servicecatalog.k8s.io/v1beta1, reading as servicecatalog.k8s.io/internal from storagebackend.Config{Type:\"\", Prefix:\"/registry\", ServerList:[]string{\"https://c1-master1.sql.hpe.com:2379\", \"https://c1-master2.sql.hpe.com:2379\", \"https://c1-master3.sql.hpe.com:2379\"}, KeyFile:\"/etc/origin/master/master.etcd-client.key\", CertFile:\"/etc/origin/master/master.etcd-client.crt\", CAFile:\"/etc/origin/master/master.etcd-ca.crt\", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}", "I0104 15:39:21.558190 1 storage_factory.go:285] storing {servicecatalog.k8s.io clusterserviceclasses} in servicecatalog.k8s.io/v1beta1, reading as servicecatalog.k8s.io/__internal from storagebackend.Config{Type:\"\", Prefix:\"/registry\", 
ServerList:[]string{\"https://c1-master1.sql.hpe.com:2379\", \"https://c1-master2.sql.hpe.com:2379\", \"https://c1-master3.sql.hpe.com:2379\"}, KeyFile:\"/etc/origin/master/master.etcd-client.key\", CertFile:\"/etc/origin/master/master.etcd-client.crt\", CAFile:\"/etc/origin/master/master.etcd-ca.crt\", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}", "I0104 15:39:21.558222 1 storage_factory.go:285] storing {servicecatalog.k8s.io clusterserviceplans} in servicecatalog.k8s.io/v1beta1, reading as servicecatalog.k8s.io/internal from storagebackend.Config{Type:\"\", Prefix:\"/registry\", ServerList:[]string{\"https://c1-master1.sql.hpe.com:2379\", \"https://c1-master2.sql.hpe.com:2379\", \"https://c1-master3.sql.hpe.com:2379\"}, KeyFile:\"/etc/origin/master/master.etcd-client.key\", CertFile:\"/etc/origin/master/master.etcd-client.crt\", CAFile:\"/etc/origin/master/master.etcd-ca.crt\", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}", "I0104 15:39:21.558248 1 storage_factory.go:285] storing {servicecatalog.k8s.io serviceinstances} in servicecatalog.k8s.io/v1beta1, reading as servicecatalog.k8s.io/internal from storagebackend.Config{Type:\"\", Prefix:\"/registry\", ServerList:[]string{\"https://c1-master1.sql.hpe.com:2379\", \"https://c1-master2.sql.hpe.com:2379\", \"https://c1-master3.sql.hpe.com:2379\"}, KeyFile:\"/etc/origin/master/master.etcd-client.key\", CertFile:\"/etc/origin/master/master.etcd-client.crt\", CAFile:\"/etc/origin/master/master.etcd-ca.crt\", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}", "I0104 15:39:21.558273 1 storage_factory.go:285] 
storing {servicecatalog.k8s.io servicebindings} in servicecatalog.k8s.io/v1beta1, reading as servicecatalog.k8s.io/__internal from storagebackend.Config{Type:\"\", Prefix:\"/registry\", ServerList:[]string{\"https://c1-master1.sql.hpe.com:2379\", \"https://c1-master2.sql.hpe.com:2379\", \"https://c1-master3.sql.hpe.com:2379\"}, KeyFile:\"/etc/origin/master/master.etcd-client.key\", CertFile:\"/etc/origin/master/master.etcd-client.crt\", CAFile:\"/etc/origin/master/master.etcd-ca.crt\", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}", "I0104 15:39:21.564227 1 storage_factory.go:285] storing {settings.servicecatalog.k8s.io podpresets} in settings.servicecatalog.k8s.io/v1alpha1, reading as settings.servicecatalog.k8s.io/internal from storagebackend.Config{Type:\"\", Prefix:\"/registry\", ServerList:[]string{\"https://c1-master1.sql.hpe.com:2379\", \"https://c1-master2.sql.hpe.com:2379\", \"https://c1-master3.sql.hpe.com:2379\"}, KeyFile:\"/etc/origin/master/master.etcd-client.key\", CertFile:\"/etc/origin/master/master.etcd-client.crt\", CAFile:\"/etc/origin/master/master.etcd-ca.crt\", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}", "W0104 15:39:21.564259 1 genericapiserver.go:342] Skipping API settings.servicecatalog.k8s.io/v1alpha1 because it has no resources.", "I0104 15:39:21.564301 1 etcd_config.go:132] Finished installing API groups", "I0104 15:39:21.564310 1 run_server.go:119] Running the API server", "[restful] 2019/01/04 15:39:21 log.go:33: [restful/swagger] listing is available at https://:6443/swaggerapi", "[restful] 2019/01/04 15:39:21 log.go:33: [restful/swagger] https://:6443/swaggerui/ is mapped to folder /swagger-ui/", "I0104 15:39:21.566351 1 serve.go:96] Serving securely on 
[::]:6443", "I0104 15:39:21.566422 1 util.go:231] Starting shared informers", "I0104 15:39:21.566441 1 util.go:236] Started shared informers", "I0104 15:39:21.566590 1 reflector.go:202] Starting reflector v1beta1.MutatingWebhookConfiguration (10m0s) from github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/client-go/informers/factory.go:87", "I0104 15:39:21.566616 1 reflector.go:240] Listing and watching v1beta1.MutatingWebhookConfiguration from github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/client-go/informers/factory.go:87", "I0104 15:39:21.566695 1 reflector.go:202] Starting reflector servicecatalog.ClusterServicePlan (10m0s) from github.com/kubernetes-incubator/service-catalog/pkg/client/informers_generated/internalversion/factory.go:75", "I0104 15:39:21.566706 1 reflector.go:240] Listing and watching servicecatalog.ClusterServicePlan from github.com/kubernetes-incubator/service-catalog/pkg/client/informers_generated/internalversion/factory.go:75", "I0104 15:39:21.566813 1 reflector.go:202] Starting reflector v1.Namespace (10m0s) from github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/client-go/informers/factory.go:87", "I0104 15:39:21.566827 1 reflector.go:240] Listing and watching v1.Namespace from github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/client-go/informers/factory.go:87", "I0104 15:39:21.566836 1 reflector.go:202] Starting reflector v1beta1.ValidatingWebhookConfiguration (10m0s) from github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/client-go/informers/factory.go:87", "I0104 15:39:21.566850 1 reflector.go:240] Listing and watching v1beta1.ValidatingWebhookConfiguration from github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/client-go/informers/factory.go:87", "I0104 15:39:21.566850 1 reflector.go:202] Starting reflector servicecatalog.ClusterServiceClass (10m0s) from github.com/kubernetes-incubator/service-catalog/pkg/client/informers_generated/internalversion/factory.go:75", 
"I0104 15:39:21.566868 1 reflector.go:240] Listing and watching servicecatalog.ClusterServiceClass from github.com/kubernetes-incubator/service-catalog/pkg/client/informers_generated/internalversion/factory.go:75", "I0104 15:39:21.566866 1 reflector.go:202] Starting reflector servicecatalog.ServiceInstance (10m0s) from github.com/kubernetes-incubator/service-catalog/pkg/client/informers_generated/internalversion/factory.go:75", "I0104 15:39:21.566958 1 reflector.go:240] Listing and watching servicecatalog.ServiceInstance from github.com/kubernetes-incubator/service-catalog/pkg/client/informers_generated/internalversion/factory.go:75", "I0104 15:39:51.983134 1 run_server.go:136] etcd checker called", "E0104 15:39:54.626433 1 run_server.go:145] etcd failed to reach any server", "I0104 15:39:54.627258 1 wrap.go:42] GET /healthz: (2.650074519s) 500", "goroutine 1129 [running]:", "github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/httplog.(respLogger).recordStatus(0xc420aba0e0, 0x1f4)", "\t/builddir/build/BUILD/atomic-enterprise-service-catalog-git-1450.074e221/_output/local/go/src/github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:207 +0xdd", "github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/httplog.(respLogger).WriteHeader(0xc420aba0e0, 0x1f4)", "\t/builddir/build/BUILD/atomic-enterprise-service-catalog-git-1450.074e221/_output/local/go/src/github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:186 +0x35", "github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/filters.(baseTimeoutWriter).WriteHeader(0xc420d2c180, 0x1f4)", "\t/builddir/build/BUILD/atomic-enterprise-service-catalog-git-1450.074e221/_output/local/go/src/github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:188 +0xac", "net/http.Error(0x7f701b043580, 0xc420f4a008, 
0xc420e1e0c0, 0xb3, 0x1f4)", "\t/usr/lib/golang/src/net/http/server.go:1930 +0xda", "github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t/builddir/build/BUILD/atomic-enterprise-service-catalog-git-1450.074e221/_output/local/go/src/github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/healthz/healthz.go:121 +0x508", "net/http.HandlerFunc.ServeHTTP(0xc4204452c0, 0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t/usr/lib/golang/src/net/http/server.go:1918 +0x44", "github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/mux.(pathHandler).ServeHTTP(0xc42077e540, 0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t/builddir/build/BUILD/atomic-enterprise-service-catalog-git-1450.074e221/_output/local/go/src/github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:241 +0x55a", "github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/mux.(PathRecorderMux).ServeHTTP(0xc4202a1d50, 0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t/builddir/build/BUILD/atomic-enterprise-service-catalog-git-1450.074e221/_output/local/go/src/github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0xa1", "github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x1b44be1, 0x19, 0xc4200c7b00, 0xc4202a1d50, 0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t/builddir/build/BUILD/atomic-enterprise-service-catalog-git-1450.074e221/_output/local/go/src/github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/handler.go:160 +0x6ad", "github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server.(director).ServeHTTP(0xc4206fe4e0, 0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t:1 +0x75", 
"github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t/builddir/build/BUILD/atomic-enterprise-service-catalog-git-1450.074e221/_output/local/go/src/github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:52 +0x37d", "net/http.HandlerFunc.ServeHTTP(0xc4206e6d20, 0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t/usr/lib/golang/src/net/http/server.go:1918 +0x44", "github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t/builddir/build/BUILD/atomic-enterprise-service-catalog-git-1450.074e221/_output/local/go/src/github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:165 +0x42a", "net/http.HandlerFunc.ServeHTTP(0xc420417dc0, 0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t/usr/lib/golang/src/net/http/server.go:1918 +0x44", "github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t/builddir/build/BUILD/atomic-enterprise-service-catalog-git-1450.074e221/_output/local/go/src/github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:49 +0x203a", "net/http.HandlerFunc.ServeHTTP(0xc4206e6d70, 0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t/usr/lib/golang/src/net/http/server.go:1918 +0x44", "github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t/builddir/build/BUILD/atomic-enterprise-service-catalog-git-1450.074e221/_output/local/go/src/github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:78 +0x2b1", 
"net/http.HandlerFunc.ServeHTTP(0xc4206e6dc0, 0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t/usr/lib/golang/src/net/http/server.go:1918 +0x44", "github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/endpoints/request.WithRequestContext.func1(0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t/builddir/build/BUILD/atomic-enterprise-service-catalog-git-1450.074e221/_output/local/go/src/github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/endpoints/request/requestcontext.go:110 +0xcb", "net/http.HandlerFunc.ServeHTTP(0xc4206fe500, 0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t/usr/lib/golang/src/net/http/server.go:1918 +0x44", "github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/filters.WithCORS.func1(0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t/builddir/build/BUILD/atomic-enterprise-service-catalog-git-1450.074e221/_output/local/go/src/github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/filters/cors.go:75 +0x17b", "net/http.HandlerFunc.ServeHTTP(0xc4203eede0, 0x7f701b043580, 0xc420f4a008, 0xc42020d800)", "\t/usr/lib/golang/src/net/http/server.go:1918 +0x44", "github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/filters.(timeoutHandler).ServeHTTP.func1(0xc4206fe580, 0x33ff5c0, 0xc420f4a008, 0xc42020d800, 0xc420a2c2a0)", "\t/builddir/build/BUILD/atomic-enterprise-service-catalog-git-1450.074e221/_output/local/go/src/github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:93 +0x8d", "created by github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/filters.(timeoutHandler).ServeHTTP", "\t/builddir/build/BUILD/atomic-enterprise-service-catalog-git-1450.074e221/_output/local/go/src/github.com/kubernetes-incubator/service-catalog/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:92 +0x1ab", "", "logging error output: \"[+]ping 
ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-service-catalog-apiserver-informers ok\n[-]etcd failed: reason withheld\nhealthz check failed\n\"", " [[kube-probe/1.10+] 10.128.0.1:49784]", "I0104 15:39:56.317521 1 run_server.go:136] etcd checker called" ] }

TASK [openshift_service_catalog : Report errors] **** Friday 04 January 2019 09:39:56 -0600 (0:00:00.041) 0:11:38.177 **** fatal: [c1-master1.sql.hpe.com]: FAILED! => {"changed": false, "failed": true, "msg": "Catalog install failed."}

PLAY RECAP **
c1-lb.sql.hpe.com       : ok=1   changed=0   unreachable=0   failed=0
c1-master1.sql.hpe.com  : ok=92  changed=24  unreachable=0   failed=1
c1-master2.sql.hpe.com  : ok=31  changed=0   unreachable=0   failed=0
c1-master3.sql.hpe.com  : ok=31  changed=0   unreachable=0   failed=0
c1-node5.sql.hpe.com    : ok=0   changed=0   unreachable=0   failed=0
c1-node6.sql.hpe.com    : ok=0   changed=0   unreachable=0   failed=0
c1-node7.sql.hpe.com    : ok=0   changed=0   unreachable=0   failed=0
c1-node8.sql.hpe.com    : ok=0   changed=0   unreachable=0   failed=0
localhost               : ok=12  changed=0   unreachable=0   failed=0
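For reference, the check that the installer retries 60 times here is just `curl -k https://apiserver.kube-service-catalog.svc/healthz`. When debugging outside Ansible, the same probe can be reproduced from a master node; this is a minimal Python sketch, not part of openshift-ansible (`wait_for_healthz` is an illustrative helper, and the attempt count and delay only mirror the installer's task defaults):

```python
import ssl
import time
import urllib.request

def wait_for_healthz(url, attempts=60, delay=10):
    """Poll a /healthz endpoint until it returns 200, skipping TLS
    verification the way `curl -k` does. Returns True on success."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, context=ctx, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            # connection refused, timeout, TLS failure, etc. -> retry
            pass
        time.sleep(delay)
    return False

# e.g. from a master:
# wait_for_healthz("https://apiserver.kube-service-catalog.svc/healthz")
```

In this report the endpoint never comes up because the catalog apiserver's own etcd health check is failing (`[-]etcd failed: reason withheld` in the pod logs above), so retrying longer does not help.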



##### Additional Information


Red Hat Enterprise Linux Server release 7.6 (Maipo)

/etc/ansible/hosts:

```
[OSEv3:children]
masters
nodes
etcd
lb

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise
deployment_type=openshift-enterprise
openshift_release=v3.10

# set networking variables
openshift_master_cluster_method=native
openshift_master_cluster_hostname=osint.sql.hpe.com
openshift_master_cluster_public_hostname=ospub.sql.hpe.com
openshift_master_api_port=8443
openshift_master_console_port=8443
openshift_portal_net=172.30.0.0/16
osm_cluster_network_cidr=10.128.0.0/14
osm_host_subnet_length=9

# htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# Defining htpasswd users
openshift_master_htpasswd_users={'admin': '$apr1$.FOXBFF.$buE/syq17WoBGXbtzJ3ec/', 'moe': '$apr1$HpfeYYbZ$mNvQhP6IUPnrizsTyhwsT.'}

# Wildcard DNS entry for infrastructure (router) nodes;
# a wildcard DNS entry needs to exist under a unique subdomain
openshift_master_default_subdomain=sql.hpe.com

[masters]
c1-master1.sql.hpe.com ansible_connection=local
c1-master2.sql.hpe.com
c1-master3.sql.hpe.com

[etcd]
c1-master1.sql.hpe.com ansible_connection=local
c1-master2.sql.hpe.com
c1-master3.sql.hpe.com

# Specify load balancer host
[lb]
c1-lb.sql.hpe.com

# host group for nodes, includes region info
[nodes]
c1-master1.sql.hpe.com ansible_connection=local openshift_node_group_name='node-config-master'
c1-master2.sql.hpe.com openshift_node_group_name='node-config-master'
c1-master3.sql.hpe.com openshift_node_group_name='node-config-master'
c1-node5.sql.hpe.com openshift_node_group_name='node-config-compute'
c1-node6.sql.hpe.com openshift_node_group_name='node-config-compute'
c1-node7.sql.hpe.com openshift_node_group_name='node-config-infra'
c1-node8.sql.hpe.com openshift_node_group_name='node-config-infra'
```
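Since the last change before this failure was re-pointing osint.sql.hpe.com at the load balancer IP, it is worth confirming that each inventory hostname resolves to the intended address before re-running the playbook. A minimal sketch (`resolves_to` is an illustrative helper; the hostname/IP pair in the comment is taken from the DNS table above):

```python
import socket

def resolves_to(hostname, expected_ip):
    """Return True if hostname currently resolves (IPv4) to expected_ip."""
    try:
        return socket.gethostbyname(hostname) == expected_ip
    except socket.gaierror:
        # hostname does not resolve at all
        return False

# e.g. the internal cluster hostname should point at the load balancer:
# resolves_to("osint.sql.hpe.com", "10.210.44.34")
```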

taisph commented 5 years ago

@dalisani What was the problem? I seem to have a similar issue.

dalisani commented 5 years ago

Once I enabled the etcd PV for the service broker, the Service Catalog installed successfully. See https://docs.openshift.com/container-platform/3.11/install/configuring_inventory_file.html#configuring-oab-storage.

I added these lines to the inventory file:

```
# Service catalog entries
openshift_service_catalog_version=v3.11

# Enable registry storage
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi

# Enable etcd PV for service broker
openshift_hosted_etcd_storage_kind=nfs
openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)"
openshift_hosted_etcd_storage_nfs_directory=/opt/osev3-etcd
openshift_hosted_etcd_storage_volume_name=etcd-vol2
openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
openshift_hosted_etcd_storage_volume_size=1G
openshift_hosted_etcd_storage_labels={'storage': 'etcd'}

[nfs]
c4-lb.cluster4.hpe.com
```