Closed: Tirke closed this issue 6 years ago.
@openshift/sig-ansible-service-broker
(Adding @fabianvf and @jmontleon so they are aware)
@Tirke sorry for delay, just saw this issue was open.
Sounds like you ran into an issue with the registry adapter configuration in the Broker.
The Broker can be configured to talk to several different types of registries, for example Dockerhub (using the upstream ansibleplaybookbundle org) or the downstream registry.access.redhat.com.
Here's an example of the variables openshift-ansible uses to talk to upstream Dockerhub and the ansibleplaybookbundle org.
If you are still seeing problems please let us know.
I see the same issue.
# oc version
oc v3.7.0+7ed6862
kubernetes v1.7.6+a08f5eeb62
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server
openshift v3.7.0+7ed6862
kubernetes v1.7.6+a08f5eeb62
# oc describe clusterservicebroker ansible-service-broker
Name: ansible-service-broker
Namespace:
Labels: <none>
Annotations: <none>
API Version: servicecatalog.k8s.io/v1beta1
Kind: ClusterServiceBroker
Metadata:
Creation Timestamp: 2018-03-14T08:38:40Z
Finalizers:
kubernetes-incubator/service-catalog
Generation: 1
Resource Version: 3792
Self Link: /apis/servicecatalog.k8s.io/v1beta1/clusterservicebrokers/ansible-service-broker
UID: 18b03efa-2763-11e8-8d38-0a580a800071
Spec:
Auth Info:
Bearer:
Secret Ref:
Name: asb-client
Namespace: openshift-ansible-service-broker
Ca Bundle: <ca-bundle-here>
Relist Behavior: Duration
Relist Duration: 15m0s
Relist Requests: 0
URL: https://asb.openshift-ansible-service-broker.svc:1338/ansible-service-broker
Status:
Conditions:
Last Transition Time: 2018-03-14T08:38:43Z
Message: Error syncing catalog from ServiceBroker. Error getting catalog payload for broker "ansible-service-broker"; received zero services; at least one service is required
Reason: ErrorSyncingCatalog
Status: False
Type: Ready
Reconciled Generation: 0
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
21m 20m 13 service-catalog-controller-manager Warning ErrorFetchingCatalog Error getting broker catalog: Get https://asb.openshift-ansible-service-broker.svc:1338/ansible-service-broker/v2/catalog: dial tcp 172.30.209.230:1338: getsockopt: no route to host
19m 1m 79 service-catalog-controller-manager Warning ErrorSyncingCatalog Error getting catalog payload for broker "ansible-service-broker"; received zero services; at least one service is required
# oc logs -n openshift-ansible-service-broker asb-etcd-1-dmlwp
2018-03-14 08:38:47.615313 I | etcdmain: etcd Version: 3.3.1
2018-03-14 08:38:47.626002 I | etcdmain: Git SHA: 28f3f26c0
2018-03-14 08:38:47.626010 I | etcdmain: Go Version: go1.9.4
2018-03-14 08:38:47.626017 I | etcdmain: Go OS/Arch: linux/amd64
2018-03-14 08:38:47.626028 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2018-03-14 08:38:47.627001 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2018-03-14 08:38:47.634750 I | embed: listening for peers on http://localhost:2380
2018-03-14 08:38:47.634873 I | embed: listening for client requests on 0.0.0.0:2379
2018-03-14 08:38:47.756766 I | etcdserver: recovered store from snapshot at index 4100041
2018-03-14 08:38:47.760792 I | etcdserver: name = default
2018-03-14 08:38:47.760820 I | etcdserver: data dir = /data
2018-03-14 08:38:47.760828 I | etcdserver: member dir = /data/member
2018-03-14 08:38:47.760838 I | etcdserver: heartbeat = 100ms
2018-03-14 08:38:47.760842 I | etcdserver: election = 1000ms
2018-03-14 08:38:47.760845 I | etcdserver: snapshot count = 100000
2018-03-14 08:38:47.760860 I | etcdserver: advertise client URLs = https://asb-etcd.openshift-ansible-service-broker.svc:2379
2018-03-14 08:38:48.106412 I | etcdserver: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 4110265
2018-03-14 08:38:48.106905 I | raft: 8e9e05c52164694d became follower at term 11
2018-03-14 08:38:48.106928 I | raft: newRaft 8e9e05c52164694d [peers: [8e9e05c52164694d], term: 11, commit: 4110265, applied: 4100041, lastindex: 4110265, lastterm: 11]
2018-03-14 08:38:48.107079 I | etcdserver/api: enabled capabilities for version 3.3
2018-03-14 08:38:48.107099 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32 from store
2018-03-14 08:38:48.107108 I | etcdserver/membership: set the cluster version to 3.3 from store
2018-03-14 08:38:48.119255 W | auth: simple token is not cryptographically signed
2018-03-14 08:38:48.132170 I | etcdserver: starting server... [version: 3.3.1, cluster version: 3.3]
2018-03-14 08:38:48.136284 I | embed: ClientTLS: cert = /etc/tls/private/tls.crt, key = /etc/tls/private/tls.key, ca = , trusted-ca = /var/run/etcd-auth-secret/ca.crt, client-cert-auth = true, crl-file =
2018-03-14 08:38:48.612334 I | raft: 8e9e05c52164694d is starting a new election at term 11
2018-03-14 08:38:48.612370 I | raft: 8e9e05c52164694d became candidate at term 12
2018-03-14 08:38:48.612392 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 12
2018-03-14 08:38:48.612410 I | raft: 8e9e05c52164694d became leader at term 12
2018-03-14 08:38:48.612420 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 12
2018-03-14 08:38:48.612813 I | etcdserver: published {Name:default ClientURLs:[https://asb-etcd.openshift-ansible-service-broker.svc:2379]} to cluster cdf818194e3a8c32
2018-03-14 08:38:48.612833 I | embed: ready to serve client requests
2018-03-14 08:38:48.614904 I | embed: serving client requests on [::]:2379
2018-03-14 08:38:48.627537 I | embed: rejected connection from "127.0.0.1:36616" (error "tls: failed to verify client's certificate: x509: certificate signed by unknown authority", ServerName "")
WARNING: 2018/03/14 08:38:48 Failed to dial 0.0.0.0:2379: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate"; please retry.
@gouyang please look at the registry configuration for your Broker instance.
Here is a documentation snippet showing an example of what it would look like if using the upstream APB examples from dockerhub: https://github.com/openshift/ansible-service-broker/blob/master/docs/config.md#dockerhub-registry
Here is another example of how we create the config map for our run_latest script: https://github.com/openshift/ansible-service-broker/blob/master/templates/deploy-ansible-service-broker.template.yaml#L309-L316
Sorry for the late answer. After a fresh install on a stable openshift-ansible tag (openshift-ansible-3.9.0-0.37.0), I still had exactly the same problem.
I fixed it by editing the broker-config config map and setting the white_list entry to ['.*'] for both dockerhub and local_openshift. After redeployment, everything worked without any error. Unfortunately, I can't remember what the previous value was for either white_list param.
registry:
- type: dockerhub
name: dh
url:
org: ansibleplaybookbundle
tag: latest
white_list: ['.*']
auth_type: ""
auth_name: ""
- type: local_openshift
name: localregistry
namespaces: ['openshift']
white_list: ['.*']
Previous value:
registry:
- type: dockerhub
name: dh
url:
org: ansibleplaybookbundle
tag: latest
white_list:
- ".*-apb$"
auth_type: ""
auth_name: ""
- type: local_openshift
name: localregistry
namespaces: ['openshift']
white_list:
- ".*-apb$"
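The practical effect of the two whitelist values can be sketched with a small regex check. The image names below are hypothetical examples, and the Broker's exact matching semantics are an assumption here; this only illustrates why a `.*-apb$` filter can drop images while `.*` keeps everything:

```python
import re

def whitelisted(image, patterns):
    """Return True if the image name matches any whitelist pattern,
    mirroring how a regex-based white_list filter would behave."""
    return any(re.search(p, image) for p in patterns)

images = ["postgresql-apb", "mediawiki-apb", "busybox"]

# The restrictive whitelist only keeps names ending in "-apb".
assert [i for i in images if whitelisted(i, [r".*-apb$"])] == [
    "postgresql-apb",
    "mediawiki-apb",
]

# The permissive whitelist keeps everything.
assert [i for i in images if whitelisted(i, [r".*"])] == images
```

If the whitelist entry is malformed (for example, a stray diff marker or bad indentation in the config map), the filter can end up matching nothing, which would explain a catalog with zero services.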
The issue does not exist on openshift-ansible-3.9.0-0.40.0; I haven't tested it with 3.7 yet.
The asb and asb-etcd pods are running in 3.9, but provisioning an APB from the OpenShift web console fails. The provisioning error is exactly the same as https://github.com/openshift/ansible-service-broker/issues/813.
The broker configmap is
# oc describe cm broker-config -n openshift-ansible-service-broker
Name: broker-config
Namespace: openshift-ansible-service-broker
Labels: app=openshift-ansible-service-broker
Annotations: <none>
Data
====
broker-config:
----
registry:
- type: dockerhub
name: dh
url: registry.hub.docker.com
org: ansibleplaybookbundle
tag: latest
white_list: [.*]
auth_type: ""
auth_name: ""
- type: local_openshift
name: localregistry
namespaces: ['openshift']
white_list: [.*]
dao:
etcd_host: asb-etcd.openshift-ansible-service-broker.svc
etcd_port: 2379
etcd_ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
etcd_client_cert: /var/run/asb-etcd-auth/client.crt
etcd_client_key: /var/run/asb-etcd-auth/client.key
log:
stdout: true
....
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now, please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now, please do so with /close.
/lifecycle rotten /remove-lifecycle stale
/remove-lifecycle rotten
To work around this problem, I added two lines to my Ansible inventory. Maybe we should use these values as the default.
ansible_service_broker_local_registry_whitelist=['.*-apb$']
ansible_service_broker_registry_whitelist=['.*-apb$']
This is reproducible on OKD 3.10, even after applying the workaround that @Reamer described.
This is what the broker reports:
Status:
Conditions:
Last Transition Time: 2018-08-19T00:53:31Z
Message: Error fetching catalog. Error getting broker catalog: Status: 404; ErrorMessage: <nil>; Description: <nil>; ResponseError: <nil>
Reason: ErrorFetchingCatalog
Status: False
Type: Ready
Operation Start Time: 2018-08-19T00:53:33Z
Reconciled Generation: 0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ErrorGettingAuthCredentials 20m service-catalog-controller-manager Error getting broker auth credentials: secrets "asb-client" not found
Warning ErrorFetchingCatalog 14m service-catalog-controller-manager Error getting broker catalog: Get https://asb.openshift-ansible-service-broker.svc:1338/ansible-service-broker/v2/catalog: dial tcp 172.30.163.112:1338: getsockopt: no route to host
Warning ErrorFetchingCatalog 4m (x161 over 1h) service-catalog-controller-manager Error getting broker catalog: Status: 404; ErrorMessage: <nil>; Description: <nil>; ResponseError: <nil>
It seems that the container image with the tag "latest" is broken. I currently use the tag "ansible-service-broker-1.3.7-1", which works on my cluster.
@Reamer @slaterx I'll be picking up this issue to see what's going on. A lot of the information above seems quite old, so I'd like to get a better idea of what folks are doing to get OKD and the broker running.
1) What are you using to launch OKD 3.10? oc cluster up? If so, what parameters? If using openshift-ansible, what did you give it for input?
2) What registries are you trying to connect to?
@jmrodri I've used openshift-ansible on a multi-master cluster. This is what I've added to my hosts file:
openshift_enable_service_catalog=true
openshift_template_service_broker_namespaces=['openshift']
template_service_broker_selector={"env":"infra"}
ansible_service_broker_local_registry_whitelist=['.*-apb$']
ansible_service_broker_registry_whitelist=['.*-apb$']
I am accessing the default registries, dockerhub and local.
@slaterx @Reamer so far things are working for me, so I need a little more information to see if my setup might have fixed something.
What are the values of ansible_service_broker_registry_tag and ansible_service_broker_image_tag?
It would also be helpful to see the configmap:
oc describe configmap broker-config -n openshift-ansible-service-broker
So in this comment https://github.com/openshift/origin/issues/18332#issuecomment-414107904, the asb-client warning is suspicious:
Warning ErrorGettingAuthCredentials 20m service-catalog-controller-manager Error getting broker auth credentials: secrets "asb-client" not found
That's how the catalog and the broker are configured to talk to each other.
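For illustration, the bearer flow amounts to the service-catalog reading the token out of the asb-client secret and attaching it to the catalog request. A minimal sketch of that request follows; it uses urllib purely as an example client (this is not the service-catalog's actual code), with the broker URL taken from the output above and a dummy token standing in for the real secret value:

```python
import base64
import urllib.request

def catalog_request(url, token):
    """Build an OSB v2 catalog request authenticated with a bearer token."""
    req = urllib.request.Request(url + "/v2/catalog")
    req.add_header("Authorization", "Bearer " + token)
    req.add_header("X-Broker-API-Version", "2.13")
    return req

# Kubernetes stores secret data base64-encoded; a token from the
# asb-client secret would be decoded like this (dummy value here).
token = base64.b64decode(base64.b64encode(b"dummy-token")).decode()
req = catalog_request(
    "https://asb.openshift-ansible-service-broker.svc:1338/ansible-service-broker",
    token,
)
assert req.get_header("Authorization") == "Bearer dummy-token"
```

If the asb-client secret is missing, there is no token to attach, which is exactly the ErrorGettingAuthCredentials warning shown above.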
[jesusr@transam aos-3.10{master}]$ oc get clusterservicebrokers ansible-service-broker -o yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceBroker
metadata:
creationTimestamp: 2018-08-23T15:34:20Z
finalizers:
- kubernetes-incubator/service-catalog
generation: 1
name: ansible-service-broker
resourceVersion: "43445"
selfLink: /apis/servicecatalog.k8s.io/v1beta1/clusterservicebrokers/ansible-service-broker
uid: 00c5015f-a6ea-11e8-b683-0a580a800008
spec:
authInfo:
bearer:
secretRef:
name: asb-client
namespace: openshift-ansible-service-broker
caBundle: <ca-bundle-here>
relistBehavior: Duration
relistDuration: 15m0s
relistRequests: 0
url: https://asb.openshift-ansible-service-broker.svc:1338/ansible-service-broker
status:
conditions:
- lastTransitionTime: 2018-08-23T15:35:04Z
message: Successfully fetched catalog entries from broker.
reason: FetchedCatalog
status: "True"
type: Ready
lastCatalogRetrievalTime: 2018-08-23T21:31:14Z
reconciledGeneration: 1
I would not expect that to be an error if things are going well. It should look more like the one above.
I've successfully run the service-catalog and the broker with no issues. Our QE has had no problems either. I'm going to close this bug as fixed in the current release.
Sorry for the delay, @jmrodri, I was on leave.
So, I've run our playbook twice from scratch (new VMs) against two different clusters, and we were still not able to get the asb running, this time with no route to host. This was the report from the oc client:
Conditions:
Last Transition Time: 2018-09-02T13:59:31Z
Message: Error fetching catalog. Error getting broker catalog: Status: 404; ErrorMessage: <nil>; Description: <nil>; ResponseError: <nil>
Reason: ErrorFetchingCatalog
Status: False
Type: Ready
Operation Start Time: 2018-09-02T13:59:31Z
Reconciled Generation: 0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ErrorFetchingCatalog 2m (x4686 over 23h) service-catalog-controller-manager Error getting broker catalog: Get https://asb.openshift-ansible-service-broker.svc:1338/ansible-service-broker/v2/catalog: dial tcp 172.30.239.64:1338: getsockopt: no route to host
I do not have ansible_service_broker_registry_tag and ansible_service_broker_image_tag in my hosts file. I have added ansible_service_broker_local_registry_whitelist=['.*-apb$'] and ansible_service_broker_registry_whitelist=['.*-apb$'] as suggested by @Reamer.
The config map is below:
Name: broker-config
Namespace: openshift-ansible-service-broker
Labels: app=openshift-ansible-service-broker
Annotations: <none>
Data
====
broker-config:
----
registry:
- type: dockerhub
name: dh
url:
org: ansibleplaybookbundle
tag: latest
white_list: [.*-apb$]
auth_type: ""
auth_name: ""
- type: local_openshift
name: localregistry
namespaces: ['openshift']
white_list: [.*-apb$]
dao:
type: crd
log:
stdout: true
level: info
color: true
openshift:
host: ""
ca_file: ""
bearer_token_file: ""
namespace: openshift-ansible-service-broker
sandbox_role: edit
image_pull_policy: Always
keep_namespace: false
keep_namespace_on_error: true
broker:
dev_broker: false
bootstrap_on_startup: true
refresh_interval: 600s
launch_apb_on_bind: false
output_request: false
recovery: true
ssl_cert_key: /etc/tls/private/tls.key
ssl_cert: /etc/tls/private/tls.crt
auto_escalate: False
auth:
- type: basic
enabled: false
Events: <none>
And the secrets error is gone in both of my installations.
Troubleshooting the no route to host error, I noticed that the health checks were restarting the container before startup completed. Inside the container, I could see all playbooks being whitelisted and the container being killed before that could finish.
So, I increased the readiness/liveness checks to 60/15 (previously 15/1). With that, I managed to get the asb container running. Then the 404 error came back:
Status:
Conditions:
Last Transition Time: 2018-09-02T13:59:31Z
Message: Error fetching catalog. Error getting broker catalog: Status: 404; ErrorMessage: <nil>; Description: <nil>; ResponseError: <nil>
Reason: ErrorFetchingCatalog
Status: False
Type: Ready
Operation Start Time: 2018-09-02T13:59:31Z
Reconciled Generation: 0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ErrorFetchingCatalog 6m (x4686 over 23h) service-catalog-controller-manager Error getting broker catalog: Get https://asb.openshift-ansible-service-broker.svc:1338/ansible-service-broker/v2/catalog: dial tcp 172.30.239.64:1338: getsockopt: no route to host
Warning ErrorFetchingCatalog 1m (x18 over 4m) service-catalog-controller-manager Error getting broker catalog: Status: 404; ErrorMessage: <nil>; Description: <nil>; ResponseError: <nil>
Therefore, the whitelist and probe changes made the configuration good enough for the container to start up, but fetching the catalog still returns 404.
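The probe change described a few lines up would look roughly like this in the asb deployment config. This is only a sketch: the field names follow the standard Kubernetes probe spec, the 60/15 values come from the comment (mapped here to initial delay and timeout, which is an assumption), and the health path/port are guesses for illustration:

```yaml
# Sketch: probes relaxed so broker bootstrap can finish before the
# kubelet restarts the container. Path and port are assumptions.
livenessProbe:
  httpGet:
    path: /healthz
    port: 1338
    scheme: HTTPS
  initialDelaySeconds: 60
  timeoutSeconds: 15
readinessProbe:
  httpGet:
    path: /healthz
    port: 1338
    scheme: HTTPS
  initialDelaySeconds: 60
  timeoutSeconds: 15
```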
I've tried to curl https://asb.openshift-ansible-service-broker.svc:1338/ansible-service-broker/v2/catalog from inside a container in OpenShift and obtained a 403, which suggests the endpoint exists and requires valid credentials.
Thus, my suggestion is that we should reopen this bug.
Let me know if you would like any additional testing.
Can we get the full inventory so we can take another try at reproducing this?
Here it is; the second cluster is pretty much like this one, but on other VMs with different valid domains/certs/etc.:
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd
glusterfs
lb
infra
workers
[OSEv3:vars]
#
# OpenShift installation properties
#
openshift_deployment_type=origin
deployment_type=origin
openshift_release=v3.10
openshift_pkg_version=-3.10.0
ansible_ssh_user=root
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
openshift_install_examples=true
openshift_set_hostname=true
openshift_use_dnsmasq=true
os_firewall_use_firewalld=True
openshift_docker_use_system_container=false
openshift_use_etcd_system_container=True
openshift_disable_check=memory_availability,disk_availability
#
# Multi-master and HA properties
#
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-dev.example.com
openshift_master_cluster_public_hostname=openshift-dev.example.com
openshift_master_api_port=443
openshift_master_console_port=443
openshift_rolling_restart_mode=system
os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant
openshift_master_default_subdomain=subdomain.example.com
openshift_master_named_certificates=[{"certfile": "./cert.crt", "keyfile": "./cert.key", "names": ["openshift-dev.example.com"], "cafile": "ca.crt"}]
openshift_master_overwrite_named_certificates=true
osm_default_node_selector='node-role.kubernetes.io/compute=true'
osm_custom_cors_origins=['wlg-openshift-dev-master-01.example.com', 'wlg-openshift-dev-master-02.example.com', 'wlg-openshift-dev-master-03.example.com', '127.0.0.1', 'kubernetes.default.svc.cluster.local']
openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true']}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true',]}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true']}, {'name': 'node-config-routing', 'labels': ['node-role.kubernetes.io/routing=true']}, {'name': 'node-config-monitoring', 'labels': ['node-role.kubernetes.io/monitoring=true']},{'name': 'node-config-gluster', 'labels': ['node-role.kubernetes.io/gluster=true']},{'name': 'node-config-infra-routing', 'labels': ['node-role.kubernetes.io/infra=true,node-role.kubernetes.io/routing=true']},{'name': 'node-config-infra-monitoring', 'labels': ['node-role.kubernetes.io/infra=true,node-role.kubernetes.io/monitoring=true']}]
openshift_master_identity_providers=[{some-ldap-settings}]
#
# Routing and Registry
#
openshift_hosted_manage_registry=true
openshift_hosted_manage_router=true
openshift_router_selector=node-role.kubernetes.io/infra=true
openshift_registry_selector=node-role.kubernetes.io/infra=true
openshift_hosted_router_replicas=2
openshift_hosted_router_force_subdomain='${name}.subdomain.example.com'
openshift_hosted_registry_routehost=registry.subdomain.example.com
openshift_hosted_registry_routetermination=passthrough
openshift_hosted_registry_storage_kind=glusterfs
openshift_hosted_registry_storage_volume_size=20Gi
#
# Cockpit
#
osm_use_cockpit=true
osm_cockpit_plugins=['cockpit-kubernetes']
#
# Logging
#
openshift_logging_install_logging=false
openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_curator_replica_count=2
openshift_logging_kibana_hostname=logging.subdomain.example.com
openshift_logging_kibana_replica_count=2
openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_cluster_size=1
openshift_logging_es_memory_limit=4G
openshift_logging_es_pvc_dynamic=true
#
# Dynamic Storage
#
osn_storage_plugin_deps=['ceph','glusterfs']
openshift_storage_glusterfs_namespace=dynamic-storage
openshift_storage_glusterfs_name=storage
openshift_storage_glusterfs_nodeselector='node-role.kubernetes.io/gluster=true'
openshift_storage_glusterfs_image='gluster/gluster-centos'
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=true
#
# Service Broker
#
openshift_enable_service_catalog=false
template_service_broker_install=false
openshift_template_service_broker_namespaces=['openshift']
template_service_broker_selector='node-role.kubernetes.io/infra=true'
ansible_service_broker_local_registry_whitelist=['.*-apb$']
ansible_service_broker_registry_whitelist=['.*-apb$']
#
# Metrics
#
openshift_metrics_install_metrics=false
openshift_hosted_metrics_deployer_version=v3.10.0-rc.0
#openshift_hosted_metrics_deployer_prefix=openshift/origin-
openshift_override_hostname_check=true
openshift_metrics_hawkular_hostname=metrics.subdomain.example.com
#openshift_hosted_metrics_public_url=https://metrics.subdomain.example.com/hawkular/metrics
openshift_metrics_master_url='https://openshift-dev.example.com'
openshift_metrics_startup_timeout=900
openshift_metrics_hawkular_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_cassandra_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_cassandra_storage_type=dynamic
openshift_metrics_cassandra_image=docker.io/openshift/origin-metrics-cassandra:v3.10.0-rc.0
openshift_metrics_hawkular_metrics_image=docker.io/openshift/origin-metrics-hawkular-metrics:v3.10.0-rc.0
openshift_metrics_heapster_image=docker.io/openshift/origin-metrics-heapster:v3.10.0-rc.0
#
# Prometheus
#
openshift_hosted_prometheus_deploy=false
openshift_cluster_monitoring_operator_install=true
openshift_cluster_monitoring_operator_node_selector={"node-role.kubernetes.io/infra": "true"}
openshift_prometheus_storage_type=pvc
openshift_prometheus_storage_class=glusterfs-storage
openshift_prometheus_alertmanager_pvc_name=alertmanager
openshift_prometheus_alertbuffer_pvc_size=10G
openshift_prometheus_pvc_access_modes=['ReadWriteOnce']
openshift_prometheus_node_selector={"node-role.kubernetes.io/infra": "true"}
[masters:children]
master01
master02
master03
# host group for masters
[master01]
wlg-openshift-dev-master-01.example.com openshift_node_group_name='node-config-master'
[master02]
wlg-openshift-dev-master-02.example.com openshift_node_group_name='node-config-master'
[master03]
wlg-openshift-dev-master-03.example.com openshift_node_group_name='node-config-master'
# host group for etcd
[etcd]
wlg-openshift-dev-master-01.example.com openshift_node_group_name='node-config-master'
wlg-openshift-dev-master-02.example.com openshift_node_group_name='node-config-master'
wlg-openshift-dev-master-03.example.com openshift_node_group_name='node-config-master'
[glusterfs]
wlg-openshift-dev-storage-01.example.com gluster_ip=1.1.1.2 glusterfs_devices='[ "/dev/vdc"]' openshift_node_group_name='node-config-gluster'
wlg-openshift-dev-storage-02.example.com gluster_ip=1.1.1.3 glusterfs_devices='[ "/dev/vdc"]' openshift_node_group_name='node-config-gluster'
wlg-openshift-dev-storage-03.example.com gluster_ip=1.1.1.4 glusterfs_devices='[ "/dev/vdc"]' openshift_node_group_name='node-config-gluster'
# host group for load balancing
[lb]
wlg-openshift-dev-lb-01.example.com
wlg-openshift-dev-lb-02.example.com
[infra]
wlg-openshift-dev-infra-01.example.com openshift_node_group_name='node-config-infra'
wlg-openshift-dev-infra-02.example.com openshift_node_group_name='node-config-infra'
[workers]
wlg-openshift-dev-worker-01.example.com openshift_node_group_name='node-config-compute'
wlg-openshift-dev-worker-02.example.com openshift_node_group_name='node-config-compute'
wlg-openshift-dev-worker-03.example.com openshift_node_group_name='node-config-compute'
# host group for nodes, includes region info
[nodes]
wlg-openshift-dev-master-01.example.com openshift_node_group_name='node-config-master'
wlg-openshift-dev-master-02.example.com openshift_node_group_name='node-config-master'
wlg-openshift-dev-master-03.example.com openshift_node_group_name='node-config-master'
wlg-openshift-dev-infra-01.example.com openshift_node_group_name='node-config-infra'
wlg-openshift-dev-infra-02.example.com openshift_node_group_name='node-config-infra'
wlg-openshift-dev-storage-01.example.com openshift_node_group_name='node-config-gluster'
wlg-openshift-dev-storage-02.example.com openshift_node_group_name='node-config-gluster'
wlg-openshift-dev-storage-03.example.com openshift_node_group_name='node-config-gluster'
wlg-openshift-dev-worker-01.example.com openshift_node_group_name='node-config-compute'
wlg-openshift-dev-worker-02.example.com openshift_node_group_name='node-config-compute'
wlg-openshift-dev-worker-03.example.com openshift_node_group_name='node-config-compute'
Is openshift_enable_service_catalog=false intentional? This should be true.
@slaterx I noticed that @Reamer mentioned that version 1.3.7-1 works for him. Looking at our commit history, 1.3.7-1 and earlier used the /ansible-service-broker prefix; later versions use /osb. I'm wondering if you might be hitting a scenario where the broker is serving a route with /osb but the service-catalog is being told to use /ansible-service-broker.
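One way to check which prefix a given broker build actually serves is to probe both candidate catalog URLs. A sketch, assuming the service host from the clusterservicebroker spec in this thread; the curl probe itself would have to run inside the cluster:

```shell
base="https://asb.openshift-ansible-service-broker.svc:1338"
# 1.3.7-1 and earlier served the OSB API under /ansible-service-broker;
# later builds moved it to /osb. Build both candidate catalog endpoints,
# then probe each one from inside the cluster, e.g.
#   curl -k -H "Authorization: Bearer $TOKEN" "$url"
# The prefix that answers 200 instead of 404 is the one the broker serves.
urls=""
for prefix in ansible-service-broker osb; do
  url="$base/$prefix/v2/catalog"
  echo "$url"
  urls="$urls $url"
done
```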
@slaterx, could we get the logs from the ansible-service-broker? That will help narrow this down.
@jmontleon, that was me trying to troubleshoot the installation. You can disregard it.
@jmrodri, maybe you're right; the URI could be the reason we're getting a 404. Here is the clusterservicebroker description from oc, and the endpoint is definitely the old one:
➜ oc describe clusterservicebroker ansible-service-broker
Name: ansible-service-broker
Namespace:
Labels: <none>
Annotations: <none>
API Version: servicecatalog.k8s.io/v1beta1
Kind: ClusterServiceBroker
Metadata:
Creation Timestamp: 2018-09-01T10:15:32Z
Finalizers:
kubernetes-incubator/service-catalog
Generation: 1
Resource Version: 704840
Self Link: /apis/servicecatalog.k8s.io/v1beta1/clusterservicebrokers/ansible-service-broker
UID: f5d56063-adcf-11e8-8705-0a580a810005
Spec:
Auth Info:
Bearer:
Secret Ref:
Name: asb-client
Namespace: openshift-ansible-service-broker
Ca Bundle: <ca-bundle-here>
Relist Behavior: Duration
Relist Duration: 15m0s
Relist Requests: 0
URL: https://asb.openshift-ansible-service-broker.svc:1338/ansible-service-broker
Status:
Conditions:
Last Transition Time: 2018-09-01T10:15:37Z
Message: Error fetching catalog. Error getting broker catalog: Status: 404; ErrorMessage: <nil>; Description: <nil>; ResponseError: <nil>
Reason: ErrorFetchingCatalog
Status: False
Type: Ready
Operation Start Time: 2018-09-01T10:15:37Z
Reconciled Generation: 0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ErrorFetchingCatalog 3m (x11183 over 2d) service-catalog-controller-manager Error getting broker catalog: Status: 404; ErrorMessage: <nil>; Description: <nil>; ResponseError: <nil>
My next questions are: how do we configure that, and can we also configure it through openshift-ansible?
Anyway, here are the ansible-service-broker logs - just catalog refreshes and no hits:
time="2018-09-06T09:53:58Z" level=info msg="Broker configured to refresh specs every 10m0s seconds"
--
| time="2018-09-06T09:53:58Z" level=info msg="Attempting bootstrap at 2018-09-06 09:53:58.945703794 +0000 UTC"
| time="2018-09-06T09:53:58Z" level=info msg="AnsibleBroker::Bootstrap"
| time="2018-09-06T09:53:59Z" level=info msg="0 specs deleted"
| time="2018-09-06T09:54:02Z" level=info msg="Bundles filtered by white/blacklist filter:\n\t-> ansibleplaybookbundle/hello-world\n\t-> ansibleplaybookbundle/origin-ansible-service-broker\n\t-> ansibleplaybookbundle/origin-service-catalog\n\t-> ansibleplaybookbundle/mediawiki123\n\t-> ansibleplaybookbundle/asb-installer\n\t-> ansibleplaybookbundle/apb-assets-base\n\t-> ansibleplaybookbundle/apb-base\n\t-> ansibleplaybookbundle/photo-album-demo-app\n\t-> ansibleplaybookbundle/ansible-service-broker\n\t-> ansibleplaybookbundle/helm-bundle-base\n\t-> ansibleplaybookbundle/apb-tools\n\t-> ansibleplaybookbundle/origin\n\t-> ansibleplaybookbundle/manageiq-apb-runner\n\t-> ansibleplaybookbundle/py-zip-demo\n\t-> ansibleplaybookbundle/photo-album-demo-api\n\t-> ansibleplaybookbundle/mediawiki\n\t-> ansibleplaybookbundle/vnc-desktop\n\t-> ansibleplaybookbundle/deploy-broker\n\t-> ansibleplaybookbundle/origin-deployer\n\t-> ansibleplaybookbundle/vnc-client\n\t-> ansibleplaybookbundle/origin-docker-registry\n\t-> ansibleplaybookbundle/origin-sti-builder\n\t-> ansibleplaybookbundle/origin-recycler\n\t-> ansibleplaybookbundle/origin-haproxy-router\n\t-> ansibleplaybookbundle/kubevirt-ansible\n\t-> ansibleplaybookbundle/origin-pod\n"
| time="2018-09-06T09:54:05Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/prometheus-apb:latest runtime is 2"
| time="2018-09-06T09:54:06Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/v2v-apb:latest runtime is 2"
| time="2018-09-06T09:54:06Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/standalone-cinder-apb:latest runtime is 2"
| time="2018-09-06T09:54:07Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/mariadb-apb:latest runtime is 2"
| time="2018-09-06T09:54:07Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/mssql-apb:latest runtime is 2"
| time="2018-09-06T09:54:08Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/iscsi-demo-target-apb:latest runtime is 2"
| time="2018-09-06T09:54:08Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/thelounge-apb:latest runtime is 2"
| time="2018-09-06T09:54:09Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/etherpad-apb:latest runtime is 2"
| time="2018-09-06T09:54:09Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/eclipse-che-apb:latest runtime is 2"
| time="2018-09-06T09:54:10Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/import-vm-apb:latest runtime is 2"
| time="2018-09-06T09:54:10Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/kubevirt-apb:latest runtime is 2"
| time="2018-09-06T09:54:12Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/galera-apb:latest runtime is 2"
| time="2018-09-06T09:54:12Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/mysql-apb:latest runtime is 2"
| time="2018-09-06T09:54:14Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/photo-album-demo-ext-api-apb:latest runtime is 2"
| time="2018-09-06T09:54:14Z" level=info msg="Didn't find encoded Spec label. Assuming image is not APB and skipping"
| time="2018-09-06T09:54:15Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/kibana-apb:latest runtime is 2"
| time="2018-09-06T09:54:15Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/mssql-remote-apb:latest runtime is 2"
| time="2018-09-06T09:54:16Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/hastebin-apb:latest runtime is 2"
| time="2018-09-06T09:54:16Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/hello-world-apb:latest runtime is 2"
| time="2018-09-06T09:54:17Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/manageiq-apb:latest runtime is 2"
| time="2018-09-06T09:54:18Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/jenkins-apb:latest runtime is 2"
| time="2018-09-06T09:54:18Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/virtualmachines-apb:latest runtime is 2"
| time="2018-09-06T09:54:19Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/keycloak-apb:latest runtime is 2"
| time="2018-09-06T09:54:20Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/proxy-config-apb:latest runtime is 2"
| time="2018-09-06T09:54:20Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/photo-album-demo-app-apb:latest runtime is 2"
| time="2018-09-06T09:54:21Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/mongodb-apb:latest runtime is 2"
| time="2018-09-06T09:54:21Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/tiller-apb:latest runtime is 2"
| time="2018-09-06T09:54:22Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/hello-world-db-apb:latest runtime is 2"
| time="2018-09-06T09:54:22Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/es-apb:latest runtime is 2"
| time="2018-09-06T09:54:23Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/rhpam-apb:latest runtime is 2"
| time="2018-09-06T09:54:24Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/wordpress-ha-apb:latest runtime is 2"
| time="2018-09-06T09:54:24Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/openshift-logging-apb:latest runtime is 2"
| time="2018-09-06T09:54:25Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/nginx-apb:latest runtime is 2"
| time="2018-09-06T09:54:25Z" level=info msg="Didn't find encoded Spec label. Assuming image is not APB and skipping"
| time="2018-09-06T09:54:26Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/gluster-s3object-apb:latest runtime is 2"
| time="2018-09-06T09:54:26Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/dynamic-apb:latest runtime is 2"
| time="2018-09-06T09:54:26Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/awx-apb:latest runtime is 2"
| time="2018-09-06T09:54:27Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/photo-album-demo-api-apb:latest runtime is 2"
| time="2018-09-06T09:54:28Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/rds-postgres-apb:latest runtime is 2"
| time="2018-09-06T09:54:29Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/pyzip-demo-apb:latest runtime is 2"
| time="2018-09-06T09:54:30Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/postgresql-apb:latest runtime is 2"
| time="2018-09-06T09:54:30Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/mediawiki-apb:latest runtime is 2"
| time="2018-09-06T09:54:31Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/pyzip-demo-db-apb:latest runtime is 2"
| time="2018-09-06T09:54:31Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/rocketchat-apb:latest runtime is 2"
| time="2018-09-06T09:54:32Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/vnc-desktop-apb:latest runtime is 2"
| time="2018-09-06T09:54:32Z" level=info msg="Validating specs..."
| time="2018-09-06T09:54:32Z" level=info msg="All specs passed validation!"
| time="2018-09-06T09:54:32Z" level=info msg="Bundles filtered by white/blacklist filter:\n\t-> openshift/dotnet\n\t-> openshift/dotnet-runtime\n\t-> openshift/httpd\n\t-> openshift/jenkins\n\t-> openshift/mariadb\n\t-> openshift/mongodb\n\t-> openshift/mysql\n\t-> openshift/nginx\n\t-> openshift/nodejs\n\t-> openshift/perl\n\t-> openshift/php\n\t-> openshift/postgresql\n\t-> openshift/python\n\t-> openshift/redis\n\t-> openshift/ruby\n\t-> openshift/wildfly\n"
| time="2018-09-06T09:54:32Z" level=info msg="Validating specs..."
| time="2018-09-06T09:54:32Z" level=info msg="All specs passed validation!"
| time="2018-09-06T09:54:32Z" level=info msg="update spec: 9519424a4ffdd1d2f77797987166a9f8\|dh-blankvm-apb to crd"
| time="2018-09-06T09:54:32Z" level=info msg="update spec: ddd528762894b277001df310a126d5ad\|dh-mysql-apb to crd"
| time="2018-09-06T09:54:32Z" level=info msg="update spec: ab24ffd54da0aefdea5277e0edce8425\|dh-hastebin-apb to crd"
| time="2018-09-06T09:54:32Z" level=info msg="update spec: 192097962f2955b0582b5d53ddb942e4\|dh-galera-apb to crd"
| time="2018-09-06T09:54:32Z" level=info msg="update spec: e9c042c4925dd0c7c25ceca4f5179e1c\|dh-mongodb-apb to crd"
| time="2018-09-06T09:54:32Z" level=info msg="update spec: a946a139a9308a59bf642ac52b4ba317\|dh-wordpress-ha-apb to crd"
| time="2018-09-06T09:54:33Z" level=info msg="update spec: 0e991006d21029e47abe71acc255e807\|dh-pyzip-demo-apb to crd"
| time="2018-09-06T09:54:33Z" level=info msg="update spec: ba9c2d4db404ce97111bea80225de968\|dh-rocketchat-apb to crd"
| time="2018-09-06T09:54:33Z" level=info msg="update spec: c4ef25f81a0c275c8f1bee1b736f3068\|dh-mssql-apb to crd"
| time="2018-09-06T09:54:34Z" level=info msg="update spec: 595db86e75325f430afa4fa3f7d69af9\|dh-iscsi-demo-target-apb to crd"
| time="2018-09-06T09:54:34Z" level=info msg="update spec: 11bbd6c120e197ea6acacf7165749629\|dh-openshift-logging-apb to crd"
| time="2018-09-06T09:54:35Z" level=info msg="update spec: 08ccf37be271fba38b1a70f87302297f\|dh-rhpam-apb to crd"
| time="2018-09-06T09:54:35Z" level=info msg="update spec: d889087d9f39d5b09a06842518f5d9e2\|dh-dynamic-apb to crd"
| time="2018-09-06T09:54:35Z" level=info msg="update spec: 4408b368ae2f09a5358340a2a10e197b\|dh-v2v to crd"
| time="2018-09-06T09:54:36Z" level=info msg="update spec: 60836f0ce3bd7d325587211dd7791f5b\|dh-import-vm-apb to crd"
| time="2018-09-06T09:54:36Z" level=info msg="update spec: b95513950bb3f132de25d58fb75f8dca\|dh-keycloak-apb to crd"
| time="2018-09-06T09:54:37Z" level=info msg="update spec: f830fb63f6df99c7bfae34b295b43108\|dh-tiller-apb to crd"
| time="2018-09-06T09:54:37Z" level=info msg="update spec: 880ef3b4ba5fa8d80908e9974228e603\|dh-awx-apb to crd"
| time="2018-09-06T09:54:37Z" level=info msg="update spec: 1882ffca5d72b1084e9107e3485f5066\|dh-eclipse-che-apb to crd"
| time="2018-09-06T09:54:38Z" level=info msg="update spec: fd9b21a9caa8bf8b42b27bb0c90d3b74\|dh-virtualization to crd"
| time="2018-09-06T09:54:38Z" level=info msg="update spec: eebf92c7670f30007a4b8db3a8166d5c\|dh-thelounge-apb to crd"
| time="2018-09-06T09:54:39Z" level=info msg="update spec: 3f85c20e073a9c761d3f8560b4c5180b\|dh-demo-ext-api-apb to crd"
| time="2018-09-06T09:54:39Z" level=info msg="update spec: 5d0062cce443e5ecb8438ca5f664dcd7\|dh-kibana-apb to crd"
| time="2018-09-06T09:54:39Z" level=info msg="update spec: 2c79572fbf83125231198451c26e7cf9\|dh-mssql-remote-apb to crd"
| time="2018-09-06T09:54:40Z" level=info msg="update spec: f4509733ca0636df3d69b6af53260160\|dh-jenkins-apb to crd"
| time="2018-09-06T09:54:40Z" level=info msg="update spec: aff6d7bb9c7f57c9ce8b742228e4caa3\|dh-es-apb to crd"
| time="2018-09-06T09:54:41Z" level=info msg="update spec: f755257efed3e3d43c8b82afd0db1181\|dh-prometheus-apb to crd"
| time="2018-09-06T09:54:41Z" level=info msg="update spec: 53046edd737292ba731e28556bec3a38\|dh-standalone-cinder-apb to crd"
| time="2018-09-06T09:54:41Z" level=info msg="update spec: 21e1bfbf09d5a7fb8a54042f504f26be\|dh-demo-api-apb to crd"
| time="2018-09-06T09:54:42Z" level=info msg="update spec: c65fbd4e701cb71d74fd2cc35e14432b\|dh-rds-postgres-apb to crd"
| time="2018-09-06T09:54:42Z" level=info msg="update spec: 1dda1477cace09730bd8ed7a6505607e\|dh-postgresql-apb to crd"
| time="2018-09-06T09:54:43Z" level=info msg="update spec: 1dd62d51c52cc2ac404d58abc0c8fa94\|dh-vnc-desktop-apb to crd"
| time="2018-09-06T09:54:43Z" level=info msg="update spec: 135bd0df0401e2fdd52fd136935014fb\|dh-nginx-apb to crd"
| time="2018-09-06T09:54:43Z" level=info msg="update spec: 09628db4757fd1a2db85d465106b9f82\|dh-gluster-s3-apb to crd"
| time="2018-09-06T09:54:44Z" level=info msg="update spec: 9f7da06f179b895a8ee5f9a3ce4af7ef\|dh-hello-world-apb to crd"
| time="2018-09-06T09:54:44Z" level=info msg="update spec: d4684c1b61cd094af9aa6ec4a90b4d69\|dh-demo-app-apb to crd"
| time="2018-09-06T09:54:45Z" level=info msg="update spec: b43a4272a6efcaaa3e0b9616324f1099\|dh-hello-world-db-apb to crd"
| time="2018-09-06T09:54:45Z" level=info msg="update spec: f6c4486b7fb0cdac4b58e193607f7011\|dh-mediawiki-apb to crd"
| time="2018-09-06T09:54:45Z" level=info msg="update spec: 693cb128e68365830c913631300deac0\|dh-pyzip-demo-db-apb to crd"
| time="2018-09-06T09:54:46Z" level=info msg="update spec: 67042296c7c95e84142f21f58da2ebfe\|dh-mariadb-apb to crd"
| time="2018-09-06T09:54:46Z" level=info msg="update spec: ca91b61da8476984f18fc13883ae2fdb\|dh-etherpad-apb to crd"
| time="2018-09-06T09:54:47Z" level=info msg="update spec: 6df7afbd132c094704b4a8bfd44378c0\|dh-manageiq-apb to crd"
| time="2018-09-06T09:54:47Z" level=info msg="update spec: 1830d9181b425e281b36efbf22f378a4\|dh-proxy-config-apb to crd"
| time="2018-09-06T09:54:47Z" level=info msg="Broker successfully bootstrapped"
| time="2018-09-06T10:03:58Z" level=info msg="Broker configured to refresh specs every 10m0s seconds"
| time="2018-09-06T10:03:58Z" level=info msg="Attempting bootstrap at 2018-09-06 10:03:58.945701574 +0000 UTC"
| time="2018-09-06T10:03:58Z" level=info msg="AnsibleBroker::Bootstrap"
| time="2018-09-06T10:03:59Z" level=info msg="0 specs deleted"
| time="2018-09-06T10:04:01Z" level=info msg="Bundles filtered by white/blacklist filter:\n\t-> ansibleplaybookbundle/origin-ansible-service-broker\n\t-> ansibleplaybookbundle/hello-world\n\t-> ansibleplaybookbundle/mediawiki123\n\t-> ansibleplaybookbundle/apb-base\n\t-> ansibleplaybookbundle/ansible-service-broker\n\t-> ansibleplaybookbundle/apb-tools\n\t-> ansibleplaybookbundle/manageiq-apb-runner\n\t-> ansibleplaybookbundle/py-zip-demo\n\t-> ansibleplaybookbundle/mediawiki\n\t-> ansibleplaybookbundle/asb-installer\n\t-> ansibleplaybookbundle/apb-assets-base\n\t-> ansibleplaybookbundle/photo-album-demo-app\n\t-> ansibleplaybookbundle/helm-bundle-base\n\t-> ansibleplaybookbundle/origin\n\t-> ansibleplaybookbundle/photo-album-demo-api\n\t-> ansibleplaybookbundle/deploy-broker\n\t-> ansibleplaybookbundle/vnc-client\n\t-> ansibleplaybookbundle/origin-service-catalog\n\t-> ansibleplaybookbundle/vnc-desktop\n\t-> ansibleplaybookbundle/origin-deployer\n\t-> ansibleplaybookbundle/origin-docker-registry\n\t-> ansibleplaybookbundle/origin-haproxy-router\n\t-> ansibleplaybookbundle/origin-pod\n\t-> ansibleplaybookbundle/origin-sti-builder\n\t-> ansibleplaybookbundle/origin-recycler\n\t-> ansibleplaybookbundle/kubevirt-ansible\n"
| time="2018-09-06T10:04:04Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/thelounge-apb:latest runtime is 2"
| time="2018-09-06T10:04:05Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/keycloak-apb:latest runtime is 2"
| time="2018-09-06T10:04:05Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/mssql-apb:latest runtime is 2"
| time="2018-09-06T10:04:06Z" level=info msg="Didn't find encoded Spec label. Assuming image is not APB and skipping"
| time="2018-09-06T10:04:07Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/hello-world-db-apb:latest runtime is 2"
| time="2018-09-06T10:04:09Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/pyzip-demo-apb:latest runtime is 2"
| time="2018-09-06T10:04:09Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/jenkins-apb:latest runtime is 2"
| time="2018-09-06T10:04:10Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/es-apb:latest runtime is 2"
| time="2018-09-06T10:04:10Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/mediawiki-apb:latest runtime is 2"
| time="2018-09-06T10:04:11Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/virtualmachines-apb:latest runtime is 2"
| time="2018-09-06T10:04:11Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/proxy-config-apb:latest runtime is 2"
| time="2018-09-06T10:04:12Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/galera-apb:latest runtime is 2"
| time="2018-09-06T10:04:12Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/mariadb-apb:latest runtime is 2"
| time="2018-09-06T10:04:13Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/hastebin-apb:latest runtime is 2"
| time="2018-09-06T10:04:13Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/openshift-logging-apb:latest runtime is 2"
| time="2018-09-06T10:04:14Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/postgresql-apb:latest runtime is 2"
| time="2018-09-06T10:04:14Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/vnc-desktop-apb:latest runtime is 2"
| time="2018-09-06T10:04:16Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/rhpam-apb:latest runtime is 2"
| time="2018-09-06T10:04:16Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/prometheus-apb:latest runtime is 2"
| time="2018-09-06T10:04:17Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/photo-album-demo-app-apb:latest runtime is 2"
| time="2018-09-06T10:04:17Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/wordpress-ha-apb:latest runtime is 2"
| time="2018-09-06T10:04:18Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/standalone-cinder-apb:latest runtime is 2"
| time="2018-09-06T10:04:18Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/gluster-s3object-apb:latest runtime is 2"
| time="2018-09-06T10:04:19Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/photo-album-demo-ext-api-apb:latest runtime is 2"
| time="2018-09-06T10:04:20Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/iscsi-demo-target-apb:latest runtime is 2"
| time="2018-09-06T10:04:20Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/tiller-apb:latest runtime is 2"
| time="2018-09-06T10:04:21Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/awx-apb:latest runtime is 2"
| time="2018-09-06T10:04:21Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/mysql-apb:latest runtime is 2"
| time="2018-09-06T10:04:22Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/rocketchat-apb:latest runtime is 2"
| time="2018-09-06T10:04:22Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/manageiq-apb:latest runtime is 2"
| time="2018-09-06T10:04:23Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/pyzip-demo-db-apb:latest runtime is 2"
| time="2018-09-06T10:04:23Z" level=info msg="Didn't find encoded Spec label. Assuming image is not APB and skipping"
| time="2018-09-06T10:04:24Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/eclipse-che-apb:latest runtime is 2"
| time="2018-09-06T10:04:24Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/hello-world-apb:latest runtime is 2"
| time="2018-09-06T10:04:25Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/kubevirt-apb:latest runtime is 2"
| time="2018-09-06T10:04:26Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/photo-album-demo-api-apb:latest runtime is 2"
| time="2018-09-06T10:04:26Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/import-vm-apb:latest runtime is 2"
| time="2018-09-06T10:04:26Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/v2v-apb:latest runtime is 2"
| time="2018-09-06T10:04:27Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/mongodb-apb:latest runtime is 2"
| time="2018-09-06T10:04:28Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/nginx-apb:latest runtime is 2"
| time="2018-09-06T10:04:28Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/kibana-apb:latest runtime is 2"
| time="2018-09-06T10:04:29Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/mssql-remote-apb:latest runtime is 2"
| time="2018-09-06T10:04:29Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/dynamic-apb:latest runtime is 2"
| time="2018-09-06T10:04:30Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/rds-postgres-apb:latest runtime is 2"
| time="2018-09-06T10:04:30Z" level=info msg="adapter::configToSpec -> Image docker.io/ansibleplaybookbundle/etherpad-apb:latest runtime is 2"
| time="2018-09-06T10:04:30Z" level=info msg="Validating specs..."
| time="2018-09-06T10:04:30Z" level=info msg="All specs passed validation!"
| time="2018-09-06T10:04:30Z" level=info msg="Bundles filtered by white/blacklist filter:\n\t-> openshift/dotnet\n\t-> openshift/dotnet-runtime\n\t-> openshift/httpd\n\t-> openshift/jenkins\n\t-> openshift/mariadb\n\t-> openshift/mongodb\n\t-> openshift/mysql\n\t-> openshift/nginx\n\t-> openshift/nodejs\n\t-> openshift/perl\n\t-> openshift/php\n\t-> openshift/postgresql\n\t-> openshift/python\n\t-> openshift/redis\n\t-> openshift/ruby\n\t-> openshift/wildfly\n"
| time="2018-09-06T10:04:30Z" level=info msg="Validating specs..."
| time="2018-09-06T10:04:30Z" level=info msg="All specs passed validation!"
| time="2018-09-06T10:04:30Z" level=info msg="update spec: 135bd0df0401e2fdd52fd136935014fb\|dh-nginx-apb to crd"
| time="2018-09-06T10:04:30Z" level=info msg="update spec: 1dda1477cace09730bd8ed7a6505607e\|dh-postgresql-apb to crd"
| time="2018-09-06T10:04:30Z" level=info msg="update spec: f755257efed3e3d43c8b82afd0db1181\|dh-prometheus-apb to crd"
| time="2018-09-06T10:04:30Z" level=info msg="update spec: 1882ffca5d72b1084e9107e3485f5066\|dh-eclipse-che-apb to crd"
| time="2018-09-06T10:04:30Z" level=info msg="update spec: fd9b21a9caa8bf8b42b27bb0c90d3b74\|dh-virtualization to crd"
| time="2018-09-06T10:04:30Z" level=info msg="update spec: 60836f0ce3bd7d325587211dd7791f5b\|dh-import-vm-apb to crd"
| time="2018-09-06T10:04:31Z" level=info msg="update spec: 4408b368ae2f09a5358340a2a10e197b\|dh-v2v to crd"
| time="2018-09-06T10:04:31Z" level=info msg="update spec: e9c042c4925dd0c7c25ceca4f5179e1c\|dh-mongodb-apb to crd"
| time="2018-09-06T10:04:32Z" level=info msg="update spec: ddd528762894b277001df310a126d5ad\|dh-mysql-apb to crd"
| time="2018-09-06T10:04:32Z" level=info msg="update spec: ba9c2d4db404ce97111bea80225de968\|dh-rocketchat-apb to crd"
| time="2018-09-06T10:04:32Z" level=info msg="update spec: ca91b61da8476984f18fc13883ae2fdb\|dh-etherpad-apb to crd"
| time="2018-09-06T10:04:33Z" level=info msg="update spec: 11bbd6c120e197ea6acacf7165749629\|dh-openshift-logging-apb to crd"
| time="2018-09-06T10:04:33Z" level=info msg="update spec: c65fbd4e701cb71d74fd2cc35e14432b\|dh-rds-postgres-apb to crd"
| time="2018-09-06T10:04:34Z" level=info msg="update spec: 53046edd737292ba731e28556bec3a38\|dh-standalone-cinder-apb to crd"
| time="2018-09-06T10:04:34Z" level=info msg="update spec: 880ef3b4ba5fa8d80908e9974228e603\|dh-awx-apb to crd"
| time="2018-09-06T10:04:34Z" level=info msg="update spec: b95513950bb3f132de25d58fb75f8dca\|dh-keycloak-apb to crd"
| time="2018-09-06T10:04:35Z" level=info msg="update spec: 21e1bfbf09d5a7fb8a54042f504f26be\|dh-demo-api-apb to crd"
| time="2018-09-06T10:04:35Z" level=info msg="update spec: 67042296c7c95e84142f21f58da2ebfe\|dh-mariadb-apb to crd"
| time="2018-09-06T10:04:36Z" level=info msg="update spec: ab24ffd54da0aefdea5277e0edce8425\|dh-hastebin-apb to crd"
| time="2018-09-06T10:04:36Z" level=info msg="update spec: 09628db4757fd1a2db85d465106b9f82\|dh-gluster-s3-apb to crd"
| time="2018-09-06T10:04:36Z" level=info msg="update spec: c4ef25f81a0c275c8f1bee1b736f3068\|dh-mssql-apb to crd"
| time="2018-09-06T10:04:37Z" level=info msg="update spec: 9519424a4ffdd1d2f77797987166a9f8\|dh-blankvm-apb to crd"
| time="2018-09-06T10:04:37Z" level=info msg="update spec: 5d0062cce443e5ecb8438ca5f664dcd7\|dh-kibana-apb to crd"
| time="2018-09-06T10:04:38Z" level=info msg="update spec: d4684c1b61cd094af9aa6ec4a90b4d69\|dh-demo-app-apb to crd"
| time="2018-09-06T10:04:38Z" level=info msg="update spec: a946a139a9308a59bf642ac52b4ba317\|dh-wordpress-ha-apb to crd"
| time="2018-09-06T10:04:38Z" level=info msg="update spec: 595db86e75325f430afa4fa3f7d69af9\|dh-iscsi-demo-target-apb to crd"
| time="2018-09-06T10:04:39Z" level=info msg="update spec: 9f7da06f179b895a8ee5f9a3ce4af7ef\|dh-hello-world-apb to crd"
| time="2018-09-06T10:04:39Z" level=info msg="update spec: d889087d9f39d5b09a06842518f5d9e2\|dh-dynamic-apb to crd"
| time="2018-09-06T10:04:40Z" level=info msg="update spec: 1dd62d51c52cc2ac404d58abc0c8fa94\|dh-vnc-desktop-apb to crd"
| time="2018-09-06T10:04:40Z" level=info msg="update spec: 08ccf37be271fba38b1a70f87302297f\|dh-rhpam-apb to crd"
| time="2018-09-06T10:04:40Z" level=info msg="update spec: eebf92c7670f30007a4b8db3a8166d5c\|dh-thelounge-apb to crd"
| time="2018-09-06T10:04:41Z" level=info msg="update spec: 3f85c20e073a9c761d3f8560b4c5180b\|dh-demo-ext-api-apb to crd"
| time="2018-09-06T10:04:41Z" level=info msg="update spec: f830fb63f6df99c7bfae34b295b43108\|dh-tiller-apb to crd"
| time="2018-09-06T10:04:42Z" level=info msg="update spec: f4509733ca0636df3d69b6af53260160\|dh-jenkins-apb to crd"
| time="2018-09-06T10:04:42Z" level=info msg="update spec: 1830d9181b425e281b36efbf22f378a4\|dh-proxy-config-apb to crd"
| time="2018-09-06T10:04:42Z" level=info msg="update spec: 6df7afbd132c094704b4a8bfd44378c0\|dh-manageiq-apb to crd"
| time="2018-09-06T10:04:43Z" level=info msg="update spec: 693cb128e68365830c913631300deac0\|dh-pyzip-demo-db-apb to crd"
| time="2018-09-06T10:04:43Z" level=info msg="update spec: 192097962f2955b0582b5d53ddb942e4\|dh-galera-apb to crd"
| time="2018-09-06T10:04:44Z" level=info msg="update spec: 2c79572fbf83125231198451c26e7cf9\|dh-mssql-remote-apb to crd"
| time="2018-09-06T10:04:44Z" level=info msg="update spec: b43a4272a6efcaaa3e0b9616324f1099\|dh-hello-world-db-apb to crd"
| time="2018-09-06T10:04:44Z" level=info msg="update spec: 0e991006d21029e47abe71acc255e807\|dh-pyzip-demo-apb to crd"
| time="2018-09-06T10:04:45Z" level=info msg="update spec: aff6d7bb9c7f57c9ce8b742228e4caa3\|dh-es-apb to crd"
| time="2018-09-06T10:04:45Z" level=info msg="update spec: f6c4486b7fb0cdac4b58e193607f7011\|dh-mediawiki-apb to crd"
| time="2018-09-06T10:04:45Z" level=info msg="Broker successfully bootstrapped"
@slaterx thanks. What do your routes look like? I'm trying to see what routes we set for the broker. Clearly the broker is starting up, but the catalog can't make the connection; something is amiss and I can't tell why.
@jmrodri, you can find the route and service descriptions from the openshift-ansible-service-broker project below. I believe we do not set them anywhere except inside the container. How can we configure the broker on the server side? That is the component that needs updating if the broker serves on a new path.
➜ oc describe service asb
Name: asb
Namespace: openshift-ansible-service-broker
Labels: app=openshift-ansible-service-broker
service=asb
Annotations: service.alpha.openshift.io/serving-cert-secret-name=asb-tls
service.alpha.openshift.io/serving-cert-signed-by=openshift-service-serving-signer@1535795421
Selector: app=openshift-ansible-service-broker,service=asb
Type: ClusterIP
IP: 172.30.70.254
Port: port-1338 1338/TCP
TargetPort: 1338/TCP
Endpoints: 10.129.2.23:1338
Port: port-1337 1337/TCP
TargetPort: 1337/TCP
Endpoints: 10.129.2.23:1337
Session Affinity: None
Events: <none>
➜ oc describe route asb-1338
Name: asb-1338
Namespace: openshift-ansible-service-broker
Created: 5 days ago
Labels: app=openshift-ansible-service-broker
service=asb
Annotations: openshift.io/host.generated=true
Requested Host: asb-1338-openshift-ansible-service-broker.example.com
exposed on router router 5 days ago
Path: <none>
TLS Termination: reencrypt
Insecure Policy: <none>
Endpoint Port: 1338
Service: asb
Weight: 100 (100%)
Endpoints: 10.129.2.23:1338, 10.129.2.23:1337
Hi @jmrodri, further to this: I've edited openshift-ansible-service-broker and replaced all the wrong entries with /osb. However, only the URL was successfully updated, and following that I'm now getting a 403:
oc describe clusterservicebroker ansible-service-broker
Name: ansible-service-broker
Namespace:
Labels: <none>
Annotations: <none>
API Version: servicecatalog.k8s.io/v1beta1
Kind: ClusterServiceBroker
Metadata:
Creation Timestamp: 2018-09-01T10:15:32Z
Finalizers:
kubernetes-incubator/service-catalog
Generation: 2
Resource Version: 2547046
Self Link: /apis/servicecatalog.k8s.io/v1beta1/clusterservicebrokers/ansible-service-broker
UID: f5d56063-adcf-11e8-8705-0a580a810005
Spec:
Auth Info:
Bearer:
Secret Ref:
Name: asb-client
Namespace: openshift-ansible-service-broker
Ca Bundle: <ca-bundle>
Relist Behavior: Duration
Relist Duration: 15m0s
Relist Requests: 0
URL: https://asb.openshift-ansible-service-broker.svc:1338/osb
Status:
Conditions:
Last Transition Time: 2018-09-01T10:15:37Z
Message: Error fetching catalog. Error getting broker catalog: Status: 403; ErrorMessage: <nil>; Description: <nil>; ResponseError: <nil>
Reason: ErrorFetchingCatalog
Status: False
Type: Ready
Operation Start Time: 2018-09-08T10:16:00Z
Reconciled Generation: 1
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ErrorFetchingCatalog 12s (x16 over 1m) service-catalog-controller-manager Error getting broker catalog: Status: 403; ErrorMessage: <nil>; Description: <nil>; ResponseError: <nil>
If I could configure the URI to /osb during installation, would that fix this 403 issue?
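A sketch of how the URL could be patched in place rather than edited by hand (the /osb path and resource name are taken from the output above; the service catalog will re-fetch the catalog immediately after the patch):

```shell
# Point the service catalog at the broker's /osb prefix
oc patch clusterservicebroker ansible-service-broker --type=merge \
  -p '{"spec":{"url":"https://asb.openshift-ansible-service-broker.svc:1338/osb"}}'

# Watch the catalog controller retry the fetch
oc describe clusterservicebroker ansible-service-broker
```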
I found an issue open in openshift-ansible: https://github.com/openshift/openshift-ansible/issues/9960
Basically, the installer refers to the latest version of the ansibleplaybookbundle/origin-ansible-service-broker image, when it should actually point to ansible-service-broker-1.2.17-1.
Manually replacing the version inside the deployment config fixed the issue:
➜ oc describe clusterservicebroker ansible-service-broker
Name: ansible-service-broker
Namespace:
Labels: <none>
Annotations: <none>
API Version: servicecatalog.k8s.io/v1beta1
Kind: ClusterServiceBroker
Metadata:
Creation Timestamp: 2018-09-01T10:15:32Z
Finalizers:
kubernetes-incubator/service-catalog
Generation: 3
Resource Version: 2552256
Self Link: /apis/servicecatalog.k8s.io/v1beta1/clusterservicebrokers/ansible-service-broker
UID: f5d56063-adcf-11e8-8705-0a580a810005
Spec:
Auth Info:
Bearer:
Secret Ref:
Name: asb-client
Namespace: openshift-ansible-service-broker
Ca Bundle: <ca-bundle>
Relist Behavior: Duration
Relist Duration: 15m0s
Relist Requests: 0
URL: https://asb.openshift-ansible-service-broker.svc:1338/ansible-service-broker
Status:
Conditions:
Last Transition Time: 2018-09-10T23:42:52Z
Message: Successfully fetched catalog entries from broker.
Reason: FetchedCatalog
Status: True
Type: Ready
Last Catalog Retrieval Time: 2018-09-10T23:42:52Z
Reconciled Generation: 3
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FetchedCatalog 12m service-catalog-controller-manager Successfully fetched catalog entries from broker.
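The manual edit described above can also be done non-interactively. A minimal sketch, assuming the broker runs from a DeploymentConfig named `asb` with a container also named `asb` (check `oc get dc -n openshift-ansible-service-broker` first, as these names are assumptions):

```shell
# Pin the broker image to a known-good tag instead of latest
oc set image dc/asb \
  asb=docker.io/ansibleplaybookbundle/origin-ansible-service-broker:ansible-service-broker-1.2.17-1 \
  -n openshift-ansible-service-broker

# Roll out the change so the pod restarts with the pinned image
oc rollout latest dc/asb -n openshift-ansible-service-broker
```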
Hi there,
I would like to say I came across this issue in a brand new clean setup.
After deploying cluster with deploy_cluster.yml, I noticed openshift-ansible-service-broker had failed to deploy.
So I edited the deployment and changed the image from:
docker.io/ansibleplaybookbundle/origin-ansible-service-broker:v3.11.0
to
docker.io/ansibleplaybookbundle/origin-ansible-service-broker:ansible-service-broker-1.2.17-1
and that fixed it. But this issue is still live.
# oc version
oc v3.11.0+62803d0-1
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
openshift v3.11.0+e5dbec2-186
kubernetes v1.11.0+d4cacc0
This image does not work for me: docker.io/ansibleplaybookbundle/origin-ansible-service-broker:ansible-service-broker-1.2.17-1
but this one fixed my issues: docker.io/ansibleplaybookbundle/origin-ansible-service-broker:ansible-service-broker-1.3.23-1
Hello,
I deployed OpenShift Origin with Ansible using tag openshift-ansible-3.9.0-0.22.0, because the latest (openshift-ansible-3.9.0-0.31.0) fails to deploy the web console. After the install finishes, I get the usual broken asb: the asb-etcd PVC fails to bind a volume because openshift-ansible doesn't take into account this line in my inventory:
openshift_hosted_etcd_storage_kind=glusterfs
I then manually fix asb by deleting the existing etcd PVC and creating a new one with the same name using my glusterfs storage class.
After that, I restart the two deployments (asb + asb-etcd). Everything is OK and asb-etcd works well with the new PVC.
The problem is that I get a recurring error when the service catalog tries to fetch the ansible service broker catalog.
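The PVC swap described above might look roughly like this. This is a sketch; the PVC name `etcd`, the requested size, and the storage class name `glusterfs-storage` are assumptions — match them to what `oc get pvc -n openshift-ansible-service-broker` and `oc get storageclass` show on your cluster:

```shell
# Remove the pending PVC that never bound
oc delete pvc etcd -n openshift-ansible-service-broker

# Recreate it with the same name, backed by the glusterfs storage class
cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: etcd
  namespace: openshift-ansible-service-broker
spec:
  storageClassName: glusterfs-storage
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
EOF

# Restart both deployments so they pick up the bound volume
oc rollout latest dc/asb -n openshift-ansible-service-broker
oc rollout latest dc/asb-etcd -n openshift-ansible-service-broker
```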
Here is the
oc describe clusterservicebroker ansible-service-broker
output. This issue references the same problem: Bug 1509366 - Error syncing catalog from ServiceBroker.
But:
Version
Expected Result
OAB working with no errors ...