redhat-iot / iot-assettracking-demo

IoT Asset Tracking Demo
Eclipse Public License 1.0

Not able to access dashboard through its url #9

Closed kamranIoTDeveloper closed 6 years ago

kamranIoTDeveloper commented 6 years ago

All applications deployed successfully, but I am not able to access the dashboard through its URL, i.e. http://dashboard-redhat-iot.domain (server not found)

jamesfalkner commented 6 years ago

Need more info. Can you share the output of

oc describe nodes
oc describe routes
oc describe pods

Also, double-check that your domain is resolvable and step through Troubleshooting OpenShift SDN
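A "server not found" browser error usually means the route hostname is not resolving at all, so it is worth ruling out DNS before digging into the router. A minimal sketch, assuming a Linux client with `getent`; the hostname below is the one from the report, so substitute your own domain:

```shell
# Check whether a route hostname resolves at all; "does not resolve" points
# at a DNS/wildcard-DNS problem rather than an OpenShift router problem.
check_dns() {
  if getent hosts "$1" > /dev/null 2>&1; then
    echo "resolves"
  else
    echo "does not resolve"
  fi
}

check_dns dashboard-redhat-iot.domain
```

If the hostname does not resolve, no amount of router debugging will help until wildcard DNS (or a hosts-file entry) is in place.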

kamranIoTDeveloper commented 6 years ago

Error: The route is not accepting traffic yet because it has not been admitted by a router.

Thanks @jamesfalkner , will share logs soon

kamranIoTDeveloper commented 6 years ago

# oc describe nodes

Name:               tekfocal-precision-workstation-t3500
Role:
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=tekfocal-precision-workstation-t3500
Annotations:        volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:
CreationTimestamp:  Thu, 26 Oct 2017 19:56:20 +0500
Phase:
Conditions:
  Type            Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
  OutOfDisk       False   Fri, 27 Oct 2017 13:26:14 +0500  Thu, 26 Oct 2017 19:56:20 +0500  KubeletHasSufficientDisk    kubelet has sufficient disk space available
  MemoryPressure  False   Fri, 27 Oct 2017 13:26:14 +0500  Thu, 26 Oct 2017 19:56:20 +0500  KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Fri, 27 Oct 2017 13:26:14 +0500  Thu, 26 Oct 2017 19:56:20 +0500  KubeletHasNoDiskPressure    kubelet has no disk pressure
  Ready           True    Fri, 27 Oct 2017 13:26:14 +0500  Thu, 26 Oct 2017 19:56:30 +0500  KubeletReady                kubelet is posting ready status. AppArmor enabled
Addresses:          192.168.10.3,192.168.10.3,tekfocal-precision-workstation-t3500
Capacity:
  cpu:     8
  memory:  16424668Ki
  pods:    80
Allocatable:
  cpu:     8
  memory:  16322268Ki
  pods:    80
System Info:
  Machine ID:                 4463f4bbbf01403785925dcb2a794d1a
  System UUID:                44454C4C-5000-105A-8044-C7C04F373253
  Boot ID:                    39e5e72d-2f53-49ad-926c-601082b5f273
  Kernel Version:             4.10.0-37-generic
  OS Image:                   Ubuntu 16.04.3 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://Unknown
  Kubelet Version:            v1.6.1+5115d708d7
  Kube-Proxy Version:         v1.6.1+5115d708d7
ExternalID:           tekfocal-precision-workstation-t3500
Non-terminated Pods:  (11 in total)
  Namespace   Name                     CPU Requests  CPU Limits  Memory Requests  Memory Limits
  default     docker-registry-1-p06lp  100m (1%)     0 (0%)      256Mi (1%)       0 (0%)
  default     docker-registry-1-vw677  100m (1%)     0 (0%)      256Mi (1%)       0 (0%)
  redhat-iot  dashboard-1-px705        0 (0%)        0 (0%)      0 (0%)           0 (0%)
  redhat-iot  datastore-1-bq62n        0 (0%)        0 (0%)      0 (0%)           0 (0%)
  redhat-iot  datastore-proxy-1-zfk74  0 (0%)        0 (0%)      0 (0%)           0 (0%)
  redhat-iot  elasticsearch-1-wlfc8    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  redhat-iot  kapua-api-1-48cdh        0 (0%)        0 (0%)      0 (0%)           0 (0%)
  redhat-iot  kapua-broker-1-tcvml     0 (0%)        0 (0%)      0 (0%)           0 (0%)
  redhat-iot  kapua-console-1-3htgf    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  redhat-iot  simulator-2-7t9dr        0 (0%)        0 (0%)      0 (0%)           0 (0%)
  redhat-iot  sql-1-gvhh6              0 (0%)        0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  200m (2%)     0 (0%)      512Mi (3%)       0 (0%)
Events:
  FirstSeen  LastSeen  Count  From  SubObjectPath  Type  Reason  Message
  37m  37m  1  kubelet, tekfocal-precision-workstation-t3500  Normal   Starting                 Starting kubelet.
  37m  37m  1  kubelet, tekfocal-precision-workstation-t3500  Warning  ImageGCFailed            unable to find data for container /
  37m  37m  1  kubelet, tekfocal-precision-workstation-t3500  Normal   NodeHasSufficientDisk    Node tekfocal-precision-workstation-t3500 status is now: NodeHasSufficientDisk
  37m  37m  1  kubelet, tekfocal-precision-workstation-t3500  Normal   NodeHasSufficientMemory  Node tekfocal-precision-workstation-t3500 status is now: NodeHasSufficientMemory
  37m  37m  1  kubelet, tekfocal-precision-workstation-t3500  Normal   NodeHasNoDiskPressure    Node tekfocal-precision-workstation-t3500 status is now: NodeHasNoDiskPressure
  12m  12m  1  kubelet, tekfocal-precision-workstation-t3500  Warning  ImageGCFailed            wanted to free 561745920, but freed 704147051 space with errors in image deletion: [rpc error: code = 2 desc = Error response from daemon: {"message":"conflict: unable to delete b0948ecacc39 (cannot be forced) - image has dependent child images"}, rpc error: code = 2 desc = Error response from daemon: {"message":"conflict: unable to delete b32bed17196d (cannot be forced) - image has dependent child images"}, rpc error: code = 2 desc = Error response from daemon: {"message":"conflict: unable to delete 77b5b3e452aa (cannot be forced) - image is being used by running container fb0cfe3c4f44"}, rpc error: code = 2 desc = Error response from daemon: {"message":"conflict: unable to delete 2ba7189700c8 (must be forced) - image is being used by stopped container 21aa1b66c0fc"}]
  7m   7m   1  kubelet, tekfocal-precision-workstation-t3500  Warning  ImageGCFailed            wanted to free 406884352, but freed 704147232 space with errors in image deletion: [rpc error: code = 2 desc = Error response from daemon: {"message":"conflict: unable to delete 77b5b3e452aa (cannot be forced) - image is being used by running container ee0ea62abb49"}, rpc error: code = 2 desc = Error response from daemon: {"message":"conflict: unable to delete b0948ecacc39 (cannot be forced) - image has dependent child images"}, rpc error: code = 2 desc = Error response from daemon: {"message":"conflict: unable to delete b32bed17196d (cannot be forced) - image has dependent child images"}]

kamranIoTDeveloper commented 6 years ago

# oc describe routes

Nothing appears
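`oc describe routes` only looks at the currently selected project, so an empty result from `default` is expected; the demo's routes live in the `redhat-iot` namespace. A minimal sketch, guarded so it degrades gracefully when the `oc` CLI or a cluster is unavailable:

```shell
# List the demo's routes by naming the namespace explicitly; without -n,
# `oc describe routes` searches only the current project and shows nothing.
list_iot_routes() {
  if command -v oc > /dev/null 2>&1; then
    oc get routes -n redhat-iot 2>&1 || true
  else
    echo "oc CLI not found"
  fi
}

list_iot_routes
```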

kamranIoTDeveloper commented 6 years ago

# oc describe pods

Name:            docker-registry-1-p06lp
Namespace:       default
Security Policy: restricted
Node:            tekfocal-precision-workstation-t3500/192.168.10.3
Start Time:      Thu, 26 Oct 2017 20:30:43 +0500
Labels:          deployment=docker-registry-1
                 deploymentconfig=docker-registry
                 docker-registry=default
Annotations:     kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"docker-registry-1","uid":"9ff1bf01-ba62-11e7-9229-b8a...
                 openshift.io/deployment-config.latest-version=1
                 openshift.io/deployment-config.name=docker-registry
                 openshift.io/deployment.name=docker-registry-1
                 openshift.io/scc=restricted
Status:          Running
IP:              172.17.0.5
Controllers:     ReplicationController/docker-registry-1
Containers:
  registry:
    Container ID:  docker://2608f91c4ccf652757377758734acdd4cf2172fff9b76e08ab5f934199f91a54
    Image:         openshift/origin-docker-registry:v3.6.1
    Image ID:      docker-pullable://openshift/origin-docker-registry@sha256:462acb1b568584fa6d84e4dd2b1ccea637f73b474ab6d0f3c87faa62aea322ab
    Port:          5000/TCP
    State:         Running
      Started:     Fri, 27 Oct 2017 13:06:56 +0500
    Last State:    Terminated
      Reason:      Error
      Exit Code:   2
      Started:     Fri, 27 Oct 2017 13:02:01 +0500
      Finished:    Fri, 27 Oct 2017 13:06:32 +0500
    Ready:         True
    Restart Count: 9
    Requests:
      cpu:     100m
      memory:  256Mi
    Liveness:   http-get http://:5000/healthz delay=10s timeout=5s period=10s #success=1 #failure=3
    Readiness:  http-get http://:5000/healthz delay=0s timeout=5s period=10s #success=1 #failure=3
    Environment:
      REGISTRY_HTTP_ADDR:    :5000
      REGISTRY_HTTP_NET:     tcp
      REGISTRY_HTTP_SECRET:  rgDcskYMD64CfQw+9cwIgPyPgS9y1HBvZgBLsqBey0g=
      REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA:  false
    Mounts:
      /registry from registry-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from registry-token-9r8js (ro)
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  registry-storage:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  registry-token-9r8js:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  registry-token-9r8js
    Optional:    false
QoS Class:       Burstable
Node-Selectors:
Tolerations:
Events:
  FirstSeen  LastSeen  Count  From  SubObjectPath  Type  Reason  Message
  2h   2h   1   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Warning  Unhealthy       Liveness probe failed: Get http://172.17.0.5:5000/healthz: dial tcp 172.17.0.5:5000: getsockopt: connection refused
  2h   2h   1   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Warning  Unhealthy       Readiness probe failed: Get http://172.17.0.5:5000/healthz: dial tcp 172.17.0.5:5000: getsockopt: connection refused
  2h   2h   1   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Warning  Unhealthy       Liveness probe failed: Get http://172.17.0.4:5000/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  2h   2h   1   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Warning  Unhealthy       Readiness probe failed: Get http://172.17.0.4:5000/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  2h   57m  5   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Normal   Pulled          Container image "openshift/origin-docker-registry:v3.6.1" already present on machine
  2h   57m  5   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Normal   Created         Created container
  2h   57m  5   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Normal   Started         Started container
  2h   51m  10  kubelet, tekfocal-precision-workstation-t3500  Normal   SandboxChanged  Pod sandbox changed, it will be killed and re-created.
  2h   51m  10  kubelet, tekfocal-precision-workstation-t3500  Warning  FailedSync      Error syncing pod
  2h   51m  6   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Warning  BackOff         Back-off restarting failed container
  46m  46m  4   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Warning  BackOff         Back-off restarting failed container
  35m  35m  1   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Warning  Unhealthy       Liveness probe failed: Get http://172.17.0.2:5000/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  35m  35m  1   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Warning  Unhealthy       Readiness probe failed: Get http://172.17.0.2:5000/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  46m  35m  7   kubelet, tekfocal-precision-workstation-t3500  Warning  FailedSync      Error syncing pod
  48m  35m  3   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Normal   Pulled          Container image "openshift/origin-docker-registry:v3.6.1" already present on machine
  48m  35m  3   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Normal   Created         Created container
  48m  35m  3   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Normal   Started         Started container
  46m  30m  6   kubelet, tekfocal-precision-workstation-t3500  Normal   SandboxChanged  Pod sandbox changed, it will be killed and re-created.

Name:            docker-registry-1-vw677
Namespace:       default
Security Policy: restricted
Node:            tekfocal-precision-workstation-t3500/192.168.10.3
Start Time:      Thu, 26 Oct 2017 20:30:43 +0500
Labels:          deployment=docker-registry-1
                 deploymentconfig=docker-registry
                 docker-registry=default
Annotations:     kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"docker-registry-1","uid":"9ff1bf01-ba62-11e7-9229-b8a...
                 openshift.io/deployment-config.latest-version=1
                 openshift.io/deployment-config.name=docker-registry
                 openshift.io/deployment.name=docker-registry-1
                 openshift.io/scc=restricted
Status:          Running
IP:              172.17.0.11
Controllers:     ReplicationController/docker-registry-1
Containers:
  registry:
    Container ID:  docker://adfce541a1032b287574950432707e70180ad197c9a79975ebec812a50c51102
    Image:         openshift/origin-docker-registry:v3.6.1
    Image ID:      docker-pullable://openshift/origin-docker-registry@sha256:462acb1b568584fa6d84e4dd2b1ccea637f73b474ab6d0f3c87faa62aea322ab
    Port:          5000/TCP
    State:         Running
      Started:     Fri, 27 Oct 2017 13:08:46 +0500
    Last State:    Terminated
      Reason:      Error
      Exit Code:   2
      Started:     Fri, 27 Oct 2017 13:02:02 +0500
      Finished:    Fri, 27 Oct 2017 13:06:32 +0500
    Ready:         True
    Restart Count: 8
    Requests:
      cpu:     100m
      memory:  256Mi
    Liveness:   http-get http://:5000/healthz delay=10s timeout=5s period=10s #success=1 #failure=3
    Readiness:  http-get http://:5000/healthz delay=0s timeout=5s period=10s #success=1 #failure=3
    Environment:
      REGISTRY_HTTP_ADDR:    :5000
      REGISTRY_HTTP_NET:     tcp
      REGISTRY_HTTP_SECRET:  rgDcskYMD64CfQw+9cwIgPyPgS9y1HBvZgBLsqBey0g=
      REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA:  false
    Mounts:
      /registry from registry-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from registry-token-9r8js (ro)
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  registry-storage:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  registry-token-9r8js:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  registry-token-9r8js
    Optional:    false
QoS Class:       Burstable
Node-Selectors:
Tolerations:
Events:
  FirstSeen  LastSeen  Count  From  SubObjectPath  Type  Reason  Message
  2h   2h   1   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Warning  Unhealthy       Readiness probe failed: Get http://172.17.0.7:5000/healthz: dial tcp 172.17.0.7:5000: getsockopt: connection refused
  2h   2h   1   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Warning  Unhealthy       Liveness probe failed: Get http://172.17.0.7:5000/healthz: dial tcp 172.17.0.7:5000: getsockopt: connection refused
  2h   2h   1   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Warning  Unhealthy       Readiness probe failed: Get http://172.17.0.5:5000/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  2h   2h   1   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Warning  Unhealthy       Liveness probe failed: Get http://172.17.0.5:5000/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  2h   57m  4   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Normal   Pulled          Container image "openshift/origin-docker-registry:v3.6.1" already present on machine
  2h   57m  4   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Normal   Created         Created container
  2h   57m  4   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Normal   Started         Started container
  2h   51m  12  kubelet, tekfocal-precision-workstation-t3500  Normal   SandboxChanged  Pod sandbox changed, it will be killed and re-created.
  2h   51m  10  kubelet, tekfocal-precision-workstation-t3500  Warning  FailedSync      Error syncing pod
  51m  51m  3   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Warning  BackOff         Back-off restarting failed container
  46m  46m  4   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Warning  BackOff         Back-off restarting failed container
  35m  35m  1   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Warning  Unhealthy       Liveness probe failed: Get http://172.17.0.3:5000/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  35m  35m  1   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Warning  Unhealthy       Readiness probe failed: Get http://172.17.0.3:5000/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  46m  35m  7   kubelet, tekfocal-precision-workstation-t3500  Warning  FailedSync      Error syncing pod
  46m  35m  5   kubelet, tekfocal-precision-workstation-t3500  Normal   SandboxChanged  Pod sandbox changed, it will be killed and re-created.
  48m  35m  3   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Normal   Pulled          Container image "openshift/origin-docker-registry:v3.6.1" already present on machine
  48m  35m  3   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Normal   Created         Created container
  48m  35m  3   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Normal   Started         Started container
  30m  30m  1   kubelet, tekfocal-precision-workstation-t3500  spec.containers{registry}  Warning  Unhealthy       Liveness probe failed: Get http://172.17.0.4:5000/healthz: dial tcp 172.17.0.4:5000: getsockopt: connection refused

kamranIoTDeveloper commented 6 years ago

The dashboard is accessible through its IP address, but not through the hostname.
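Reachable by IP but not by hostname is the classic symptom of missing wildcard DNS: each route hostname must resolve to the router's IP. A stopgap sketch under assumptions from this report (ROUTER_IP from the node address, ROUTE_HOST from the route output; HOSTS_FILE defaults to a local demo file so the sketch is safe to run, but in practice it would be /etc/hosts, edited with sudo):

```shell
# Map the route hostname to the router/node IP in a hosts file so the
# hostname resolves without wildcard DNS. Values are assumptions from the
# report; adjust for your environment.
ROUTER_IP=192.168.10.3
ROUTE_HOST=dashboard-redhat-iot.router.default.svc.cluster.local
HOSTS_FILE="${HOSTS_FILE:-./hosts.demo}"

# Append the entry only if it is not already present.
grep -q "$ROUTE_HOST" "$HOSTS_FILE" 2> /dev/null \
  || echo "$ROUTER_IP $ROUTE_HOST" >> "$HOSTS_FILE"

cat "$HOSTS_FILE"
```

Alternatively, `curl --resolve "$ROUTE_HOST:80:$ROUTER_IP" "http://$ROUTE_HOST/"` tests the route through the router without touching any hosts file.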

jamesfalkner commented 6 years ago

I need to see the output from the above commands, but in the context of the redhat-iot project.

So, the output of:

oc describe routes,services,pods,deploymentconfigs -n redhat-iot
kamranIoTDeveloper commented 6 years ago

root@tekfocal-Precision-WorkStation-T3500:~# oc describe routes,services,pods,deploymentconfigs -n redhat-iot

Name:             api
Namespace:        redhat-iot
Created:          2 days ago
Labels:           app=kapua-api demo=summit2017
Annotations:      openshift.io/generated-by=OpenShiftNewApp
                  openshift.io/host.generated=true
Requested Host:   api-redhat-iot.router.default.svc.cluster.local
                    exposed on router router-west 2 days ago
                    exposed on router router 2 days ago
Path:
TLS Termination:
Insecure Policy:
Endpoint Port:    http

Service:    kapua-api
Weight:     100 (100%)
Endpoints:  172.17.0.7:8080

Name:             broker
Namespace:        redhat-iot
Created:          2 days ago
Labels:           app=kapua-broker demo=summit2017
Annotations:      openshift.io/generated-by=OpenShiftNewApp
                  openshift.io/host.generated=true
Requested Host:   broker-redhat-iot.router.default.svc.cluster.local
                    exposed on router router-west 2 days ago
                    exposed on router router 2 days ago
Path:
TLS Termination:
Insecure Policy:
Endpoint Port:    mqtt-websocket-tcp

Service:    kapua-broker
Weight:     100 (100%)
Endpoints:  172.17.0.5:1883, 172.17.0.5:61614, 172.17.0.5:8883

Name:             console
Namespace:        redhat-iot
Created:          2 days ago
Labels:           app=kapua-console demo=summit2017
Annotations:      openshift.io/generated-by=OpenShiftNewApp
                  openshift.io/host.generated=true
Requested Host:   console-redhat-iot.router.default.svc.cluster.local
                    exposed on router router-west 2 days ago
                    exposed on router router 2 days ago
Path:
TLS Termination:
Insecure Policy:
Endpoint Port:    http

Service:    kapua-console
Weight:     100 (100%)
Endpoints:  172.17.0.12:8080

Name:             dashboard
Namespace:        redhat-iot
Created:          2 days ago
Labels:           application=dashboard demo=summit2017
Annotations:      openshift.io/generated-by=OpenShiftNewApp
                  openshift.io/host.generated=true
Requested Host:   dashboard-redhat-iot.router.default.svc.cluster.local
                    exposed on router router-west 2 days ago
                    exposed on router router 2 days ago
Path:
TLS Termination:
Insecure Policy:
Endpoint Port:    8080-tcp

Service:    dashboard
Weight:     100 (100%)
Endpoints:

Name:             datastore-proxy
Namespace:        redhat-iot
Created:          2 days ago
Labels:           application=datastore-proxy demo=summit2017
Annotations:      openshift.io/generated-by=OpenShiftNewApp
                  openshift.io/host.generated=true
Requested Host:   datastore-proxy-redhat-iot.router.default.svc.cluster.local
                    exposed on router router-west 2 days ago
                    exposed on router router 2 days ago
Path:
TLS Termination:
Insecure Policy:
Endpoint Port:    8080-tcp

Service:    datastore-proxy
Weight:     100 (100%)
Endpoints:

Name:             search
Namespace:        redhat-iot
Created:          2 days ago
Labels:           app=elasticsearch demo=summit2017
Annotations:      openshift.io/generated-by=OpenShiftNewApp
                  openshift.io/host.generated=true
Requested Host:   search-redhat-iot.router.default.svc.cluster.local
                    exposed on router router-west 2 days ago
                    exposed on router router 2 days ago
Path:
TLS Termination:
Insecure Policy:
Endpoint Port:    http

Service:    elasticsearch
Weight:     100 (100%)
Endpoints:  172.17.0.3:9200, 172.17.0.3:9300

Name:              dashboard
Namespace:         redhat-iot
Labels:            app=dashboard demo=summit2017
Annotations:       openshift.io/generated-by=OpenShiftNewApp
Selector:          deploymentconfig=dashboard
Type:              ClusterIP
IP:                172.30.17.165
Port:              8080-tcp  8080/TCP
Endpoints:
Session Affinity:  None
Events:

Name:              datastore-hotrod
Namespace:         redhat-iot
Labels:            app=datastore demo=summit2017
Annotations:       openshift.io/generated-by=OpenShiftNewApp
Selector:          deploymentConfig=datastore
Type:              ClusterIP
IP:                172.30.162.165
Port:              11333/TCP
Endpoints:         172.17.0.2:11333
Session Affinity:  None
Events:

Name:              datastore-proxy
Namespace:         redhat-iot
Labels:            app=datastore-proxy demo=summit2017
Annotations:       openshift.io/generated-by=OpenShiftNewApp
Selector:          deploymentConfig=datastore-proxy
Type:              ClusterIP
IP:                172.30.208.19
Port:              8080-tcp  8080/TCP
Endpoints:
Session Affinity:  None
Events:

Name:              elasticsearch
Namespace:         redhat-iot
Labels:            app=elasticsearch demo=summit2017
Annotations:       openshift.io/generated-by=OpenShiftNewApp
Selector:          app=elasticsearch,deploymentconfig=elasticsearch
Type:              ClusterIP
IP:                172.30.21.109
Port:              http  9200/TCP
Endpoints:         172.17.0.3:9200
Port:              transport  9300/TCP
Endpoints:         172.17.0.3:9300
Session Affinity:  None
Events:

Name:              kapua-api
Namespace:         redhat-iot
Labels:            app=kapua-api demo=summit2017
Annotations:       openshift.io/generated-by=OpenShiftNewApp
                   service.alpha.openshift.io/dependencies=[{"name": "sql", "kind": "Service"}]
Selector:          app=kapua-api,deploymentconfig=kapua-api
Type:              ClusterIP
IP:                172.30.66.49
Port:              http  8080/TCP
Endpoints:         172.17.0.7:8080
Session Affinity:  None
Events:

Name:              kapua-broker
Namespace:         redhat-iot
Labels:            app=kapua-broker demo=summit2017
Annotations:       openshift.io/generated-by=OpenShiftNewApp
                   service.alpha.openshift.io/dependencies=[{"name": "sql", "kind": "Service"}, {"name": "elasticsearch", "kind": "Service"}]
Selector:          app=kapua-broker,deploymentconfig=kapua-broker
Type:              NodePort
IP:                172.30.49.211
Port:              mqtt-tcp  1883/TCP
NodePort:          mqtt-tcp  31883/TCP
Endpoints:         172.17.0.5:1883
Port:              mqtts-tcp  8883/TCP
NodePort:          mqtts-tcp  31885/TCP
Endpoints:         172.17.0.5:8883
Port:              mqtt-websocket-tcp  61614/TCP
NodePort:          mqtt-websocket-tcp  31614/TCP
Endpoints:         172.17.0.5:61614
Session Affinity:  None
Events:

Name:              kapua-console
Namespace:         redhat-iot
Labels:            app=kapua-console demo=summit2017
Annotations:       openshift.io/generated-by=OpenShiftNewApp
                   service.alpha.openshift.io/dependencies=[{"name": "sql", "kind": "Service"}]
Selector:          app=kapua-console,deploymentconfig=kapua-console
Type:              ClusterIP
IP:                172.30.14.234
Port:              http  8080/TCP
Endpoints:         172.17.0.12:8080
Session Affinity:  None
Events:

Name:              sql
Namespace:         redhat-iot
Labels:            app=sql demo=summit2017
Annotations:       openshift.io/generated-by=OpenShiftNewApp
Selector:          app=sql,deploymentconfig=sql
Type:              ClusterIP
IP:                172.30.177.38
Port:              h2-sql  3306/TCP
Endpoints:         172.17.0.4:3306
Port:              h2-web  8181/TCP
Endpoints:         172.17.0.4:8181
Session Affinity:  None
Events:

Name:            dashboard-1-px705
Namespace:       redhat-iot
Security Policy: restricted
Node:            tekfocal-precision-workstation-t3500/192.168.10.5
Start Time:      Fri, 27 Oct 2017 13:09:44 +0500
Labels:          app=dashboard
                 deployment=dashboard-1
                 deploymentconfig=dashboard
Annotations:     kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"redhat-iot","name":"dashboard-1","uid":"2f05b75b-baee-11e7-99c7-b8ac6f...
                 openshift.io/deployment-config.latest-version=1
                 openshift.io/deployment-config.name=dashboard
                 openshift.io/deployment.name=dashboard-1
                 openshift.io/generated-by=OpenShiftNewApp
                 openshift.io/scc=restricted
Status:          Running
IP:              172.17.0.11
Controllers:     ReplicationController/dashboard-1
Containers:
  dashboard:
    Container ID:  docker://357bbe61d5cbe93f3d4cc5b9415738342a82b474e70dfa19b5d8c07ee8639573
    Image:         172.30.120.0:5000/redhat-iot/dashboard@sha256:37c1ba8d47bd87c683dc237f61a225537c191f55f6b300403cb6e6cd84cc03a5
    Image ID:      docker-pullable://172.30.120.0:5000/redhat-iot/dashboard@sha256:37c1ba8d47bd87c683dc237f61a225537c191f55f6b300403cb6e6cd84cc03a5
    Port:          8080/TCP
    State:         Waiting
      Reason:      ImagePullBackOff
    Last State:    Terminated
      Reason:      Error
      Exit Code:   255
      Started:     Fri, 27 Oct 2017 16:09:02 +0500
      Finished:    Fri, 27 Oct 2017 18:32:59 +0500
    Ready:         False
    Restart Count: 1
    Liveness:   http-get http://:8080/ delay=120s timeout=5s period=5s #success=1 #failure=5
    Readiness:  http-get http://:8080/ delay=15s timeout=1s period=5s #success=1 #failure=5
    Environment:
      BROKER_WS_NAME:           broker
      BROKER_USERNAME:          demo-gw2
      BROKER_PASSWORD:          RedHat123
      DATASTORE_PROXY_SERVICE:  datastore-proxy
      GOOGLE_MAPS_API_KEY:      AIzaSyDpDtvyzzdXDYk5nt6CuOtjxmvBvwGq5D4
      DASHBOARD_WEB_TITLE:      Red Hat + Eurotech IoT Fleet Telematics Demo
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jnf1g (ro)
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  default-token-jnf1g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jnf1g
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
  FirstSeen  LastSeen  Count  From  SubObjectPath  Type  Reason  Message
  5m  5m   1   kubelet, tekfocal-precision-workstation-t3500  Normal   SandboxChanged  Pod sandbox changed, it will be killed and re-created.
  5m  2m   4   kubelet, tekfocal-precision-workstation-t3500  spec.containers{dashboard}  Normal   Pulling  pulling image "172.30.120.0:5000/redhat-iot/dashboard@sha256:37c1ba8d47bd87c683dc237f61a225537c191f55f6b300403cb6e6cd84cc03a5"
  4m  1m   10  kubelet, tekfocal-precision-workstation-t3500  Warning  FailedSync  Error syncing pod
  4m  1m   6   kubelet, tekfocal-precision-workstation-t3500  spec.containers{dashboard}  Normal   BackOff  Back-off pulling image "172.30.120.0:5000/redhat-iot/dashboard@sha256:37c1ba8d47bd87c683dc237f61a225537c191f55f6b300403cb6e6cd84cc03a5"
  4m  11s  5   kubelet, tekfocal-precision-workstation-t3500  spec.containers{dashboard}  Warning  Failed   Failed to pull image "172.30.120.0:5000/redhat-iot/dashboard@sha256:37c1ba8d47bd87c683dc237f61a225537c191f55f6b300403cb6e6cd84cc03a5": rpc error: code = 2 desc = Error response from daemon: {"message":"manifest for 172.30.120.0:5000/redhat-iot/dashboard@sha256:37c1ba8d47bd87c683dc237f61a225537c191f55f6b300403cb6e6cd84cc03a5 not found"}
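These events point at the actual blocker: the dashboard pod sits in ImagePullBackOff because the image manifest is missing from the internal registry, so the pod never becomes Ready and the dashboard route has no endpoints to send traffic to. One plausible fix (an assumption, not confirmed in this thread) is to rebuild the image so it is pushed back to the registry; a guarded sketch:

```shell
# Re-run the dashboard build so the image is rebuilt and pushed to the
# internal registry, which should clear the ImagePullBackOff. Guarded so the
# sketch degrades gracefully when the oc CLI or a cluster is unavailable.
rebuild_dashboard() {
  if command -v oc > /dev/null 2>&1; then
    oc start-build dashboard -n redhat-iot --follow 2>&1 || true
  else
    echo "oc CLI not found"
  fi
}

rebuild_dashboard
```

Once the pull succeeds and the pod reports Ready, the route's Endpoints list should populate and the hostname should start serving.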

Name:            dashboard-3-build
Namespace:       redhat-iot
Security Policy: privileged
Node:            tekfocal-precision-workstation-t3500/192.168.10.3
Start Time:      Fri, 27 Oct 2017 13:06:52 +0500
Labels:          openshift.io/build.name=dashboard-3
Annotations:     openshift.io/build.name=dashboard-3
                 openshift.io/scc=privileged
Status:          Succeeded
IP:              172.17.0.7
Controllers:
Containers:
  sti-build:
    Container ID:  docker://d04969059c5c2762ff2e50ebfde4ab139b7a1a580793f27f685cbd7c6fb5ff30
    Image:         openshift/origin-sti-builder:v3.6.1
    Image ID:      docker-pullable://openshift/origin-sti-builder@sha256:7cd8ae032a1814f24d5020f25a1ef942792bf906ebc7f567c63655b8ca222542
    Port:
    Args:          --loglevel=0
    State:         Terminated
      Reason:      Completed
      Exit Code:   0
      Started:     Fri, 27 Oct 2017 13:06:58 +0500
      Finished:    Fri, 27 Oct 2017 13:09:41 +0500
    Ready:         False
    Restart Count: 0
    Environment:
      BUILD: {"kind":"Build","apiVersion":"v1","metadata":{"name":"dashboard-3","namespace":"redhat-iot","selfLink":"/apis/build.openshift.io/v1/namespaces/redhat-iot/builds/dashboard-3","uid":"c9a47bc7-baed-11e7-99c7-b8ac6f99af47","resourceVersion":"7848","creationTimestamp":"2017-10-27T08:06:51Z","labels":{"application":"dashboard","buildconfig":"dashboard","demo":"summit2017","openshift.io/build-config.name":"dashboard","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"dashboard","openshift.io/build.number":"3"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"dashboard","uid":"a7461f4b-baeb-11e7-99c7-b8ac6f99af47","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/redhat-iot/summit2017","ref":"master"},"contextDir":"dashboard"},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"centos/nodejs-4-centos7@sha256:fff857dbce9c59af040b1462f0f4a5c991bda2c3f91d78edd1b0ac7184ee1f36"}}},"output":{"to":{"kind":"DockerImage","name":"172.30.120.0:5000/redhat-iot/dashboard:latest"},"pushSecret":{"name":"builder-dockercfg-mxl6v"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually triggered"}]},"status":{"phase":"New","outputDockerImageReference":"172.30.120.0:5000/redhat-iot/dashboard:latest","config":{"kind":"BuildConfig","namespace":"redhat-iot","name":"dashboard"},"output":{}}}

  SOURCE_REPOSITORY:    https://github.com/redhat-iot/summit2017
  SOURCE_URI:       https://github.com/redhat-iot/summit2017
  SOURCE_CONTEXT_DIR:   dashboard
  SOURCE_REF:       master
  ORIGIN_VERSION:       v3.6.1+008f2d5
  ALLOWED_UIDS:     1-
  DROP_CAPS:        KILL,MKNOD,SETGID,SETUID,SYS_CHROOT
  PUSH_DOCKERCFG_PATH:  /var/run/secrets/openshift.io/push
Mounts:
  /var/run/docker.sock from docker-socket (rw)
  /var/run/secrets/kubernetes.io/serviceaccount from builder-token-cvfgr (ro)
  /var/run/secrets/openshift.io/push from builder-dockercfg-mxl6v-push (ro)

Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  docker-socket:
    Type:  HostPath (bare host directory volume)
    Path:  /var/run/docker.sock
  builder-dockercfg-mxl6v-push:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  builder-dockercfg-mxl6v
    Optional:    false
  builder-token-cvfgr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  builder-token-cvfgr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:

Name:            datastore-1-bq62n
Namespace:       redhat-iot
Security Policy: restricted
Node:            tekfocal-precision-workstation-t3500/192.168.10.5
Start Time:      Fri, 27 Oct 2017 12:51:42 +0500
Labels:          application=datastore
                 deployment=datastore-1
                 deploymentConfig=datastore
                 deploymentconfig=datastore
Annotations:     kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"redhat-iot","name":"datastore-1","uid":"a78f3628-baeb-11e7-99c7-b8ac6f...
                 openshift.io/deployment-config.latest-version=1
                 openshift.io/deployment-config.name=datastore
                 openshift.io/deployment.name=datastore-1
                 openshift.io/generated-by=OpenShiftNewApp
                 openshift.io/scc=restricted
Status:          Running
IP:              172.17.0.2
Controllers:     ReplicationController/datastore-1
Containers:
  datastore:
    Container ID:  docker://bd848a2acfb5cac837a7dcc063097677695d84b6fe253b4a4edb5a9e54b7852a
    Image:         registry.access.redhat.com/jboss-datagrid-6/datagrid65-openshift@sha256:f86e02fb8c740b4ed1f59300e94be69783ee51a38cc9ce6ddb73b6f817e173b3
    Image ID:      docker-pullable://registry.access.redhat.com/jboss-datagrid-6/datagrid65-openshift@sha256:f86e02fb8c740b4ed1f59300e94be69783ee51a38cc9ce6ddb73b6f817e173b3
    Ports:         8778/TCP, 8080/TCP, 8888/TCP, 11211/TCP, 11222/TCP, 11333/TCP
    State:         Running
      Started:     Mon, 30 Oct 2017 10:49:44 +0500
    Last State:    Terminated
      Reason:      Error
      Exit Code:   255
      Started:     Fri, 27 Oct 2017 16:08:55 +0500
      Finished:    Fri, 27 Oct 2017 18:32:59 +0500
    Ready:         True
    Restart Count: 4
    Environment:
      USERNAME:                            rhiot
      PASSWORD:                            redhatiot1!
      OPENSHIFT_KUBE_PING_LABELS:          application=datastore
      OPENSHIFT_KUBE_PING_NAMESPACE:       redhat-iot (v1:metadata.namespace)
      INFINISPAN_CONNECTORS:               hotrod,rest
      CACHE_NAMES:                         customer,facility,operator,shipment,vehicle
      DATAVIRT_CACHE_NAMES:
      ENCRYPTION_REQUIRE_SSL_CLIENT_AUTH:
      HOTROD_SERVICE_NAME:                 datastore-hotrod
      MEMCACHED_CACHE:                     default
      REST_SECURITY_DOMAIN:
      JGROUPS_CLUSTER_PASSWORD:            YdbnXgne
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jnf1g (ro)
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  default-token-jnf1g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jnf1g
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
  FirstSeen  LastSeen  Count  From  SubObjectPath  Type  Reason  Message
  5m  5m  1  kubelet, tekfocal-precision-workstation-t3500  Normal  SandboxChanged  Pod sandbox changed, it will be killed and re-created.
  5m  5m  1  kubelet, tekfocal-precision-workstation-t3500  spec.containers{datastore}  Normal  Pulling  pulling image "registry.access.redhat.com/jboss-datagrid-6/datagrid65-openshift@sha256:f86e02fb8c740b4ed1f59300e94be69783ee51a38cc9ce6ddb73b6f817e173b3"
  5m  5m  1  kubelet, tekfocal-precision-workstation-t3500  spec.containers{datastore}  Normal  Pulled   Successfully pulled image "registry.access.redhat.com/jboss-datagrid-6/datagrid65-openshift@sha256:f86e02fb8c740b4ed1f59300e94be69783ee51a38cc9ce6ddb73b6f817e173b3"
  5m  5m  1  kubelet, tekfocal-precision-workstation-t3500  spec.containers{datastore}  Normal  Created  Created container
  5m  5m  1  kubelet, tekfocal-precision-workstation-t3500  spec.containers{datastore}  Normal  Started  Started container

```
Name:            datastore-proxy-2-build
Namespace:       redhat-iot
Security Policy: privileged
Node:            tekfocal-precision-workstation-t3500/192.168.10.3
Start Time:      Fri, 27 Oct 2017 13:10:32 +0500
Labels:          openshift.io/build.name=datastore-proxy-2
Annotations:     openshift.io/build.name=datastore-proxy-2
                 openshift.io/scc=privileged
Status:          Succeeded
IP:              172.17.0.12
Controllers:
Containers:
  sti-build:
    Container ID:   docker://3ec8af7ef282ef18b315b7c0fe9a32d65fed8141a2dd8b9c1aa1845fb74e9405
    Image:          openshift/origin-sti-builder:v3.6.1
    Image ID:       docker-pullable://openshift/origin-sti-builder@sha256:7cd8ae032a1814f24d5020f25a1ef942792bf906ebc7f567c63655b8ca222542
    Port:
    Args:           --loglevel=0
    State:          Terminated (Reason: Completed, Exit Code: 0, Started: Fri, 27 Oct 2017 13:10:34 +0500, Finished: Fri, 27 Oct 2017 13:15:23 +0500)
    Ready:          False
    Restart Count:  0
    Environment:
      BUILD: {"kind":"Build","apiVersion":"v1","metadata":{"name":"datastore-proxy-2","namespace":"redhat-iot","selfLink":"/apis/build.openshift.io/v1/namespaces/redhat-iot/builds/datastore-proxy-2","uid":"4da87f15-baee-11e7-99c7-b8ac6f99af47","resourceVersion":"8047","creationTimestamp":"2017-10-27T08:10:32Z","labels":{"application":"datastore-proxy","buildconfig":"datastore-proxy","demo":"summit2017","openshift.io/build-config.name":"datastore-proxy","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"datastore-proxy","openshift.io/build.number":"2"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"datastore-proxy","uid":"a74e35e3-baeb-11e7-99c7-b8ac6f99af47","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/redhat-iot/summit2017","ref":"master"},"contextDir":"dgproxy"},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"openshift/wildfly-101-centos7@sha256:90cf6003f5b2854c62f6dbf227d16543ed394fc97135c3913d2dafae3e1bb7d6"},"env":[{"name":"MAVEN_MIRROR_URL"}]}},"output":{"to":{"kind":"DockerImage","name":"172.30.120.0:5000/redhat-iot/datastore-proxy:latest"},"pushSecret":{"name":"builder-dockercfg-mxl6v"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually triggered"}]},"status":{"phase":"New","outputDockerImageReference":"172.30.120.0:5000/redhat-iot/datastore-proxy:latest","config":{"kind":"BuildConfig","namespace":"redhat-iot","name":"datastore-proxy"},"output":{}}}
      SOURCE_REPOSITORY:    https://github.com/redhat-iot/summit2017
      SOURCE_URI:           https://github.com/redhat-iot/summit2017
      SOURCE_CONTEXT_DIR:   dgproxy
      SOURCE_REF:           master
      ORIGIN_VERSION:       v3.6.1+008f2d5
      ALLOWED_UIDS:         1-
      DROP_CAPS:            KILL,MKNOD,SETGID,SETUID,SYS_CHROOT
      PUSH_DOCKERCFG_PATH:  /var/run/secrets/openshift.io/push
    Mounts:
      /var/run/docker.sock from docker-socket (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from builder-token-cvfgr (ro)
      /var/run/secrets/openshift.io/push from builder-dockercfg-mxl6v-push (ro)
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  docker-socket:
    Type:  HostPath (bare host directory volume)
    Path:  /var/run/docker.sock
  builder-dockercfg-mxl6v-push:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  builder-dockercfg-mxl6v
    Optional:    false
  builder-token-cvfgr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  builder-token-cvfgr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
```

```
Name:            elasticsearch-1-wlfc8
Namespace:       redhat-iot
Security Policy: restricted
Node:            tekfocal-precision-workstation-t3500/192.168.10.5
Start Time:      Fri, 27 Oct 2017 12:51:39 +0500
Labels:          app=elasticsearch deployment=elasticsearch-1 deploymentconfig=elasticsearch hawkular-openshift-agent=jolokia-kapua
Annotations:     kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"redhat-iot","name":"elasticsearch-1","uid":"a696725b-baeb-11e7-99c7-b8...
                 openshift.io/deployment-config.latest-version=1
                 openshift.io/deployment-config.name=elasticsearch
                 openshift.io/deployment.name=elasticsearch-1
                 openshift.io/generated-by=OpenShiftNewApp
                 openshift.io/scc=restricted
Status:          Running
IP:              172.17.0.3
Controllers:     ReplicationController/elasticsearch-1
Containers:
  elasticsearch:
    Container ID:   docker://b88031baa5daa0e5ea2cb3c5ab675e2ee3021db9039e9156b763b8ab155e7f0e
    Image:          elasticsearch:2.4
    Image ID:       docker-pullable://elasticsearch@sha256:b6e7542e20edf5b02cb82e82f2306379699de51c55df057fe55ee0aae1e6abe5
    Ports:          9200/TCP, 9300/TCP
    State:          Running (Started: Mon, 30 Oct 2017 10:49:48 +0500)
    Last State:     Terminated (Reason: Error, Exit Code: 255, Started: Fri, 27 Oct 2017 16:08:32 +0500, Finished: Fri, 27 Oct 2017 18:32:59 +0500)
    Ready:          True
    Restart Count:  4
    Readiness:      http-get http://:9200/ delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ES_HEAP_SIZE:  256m
      ES_JAVA_OPTS:  -Des.cluster.name=kapua-datastore -Des.http.cors.enabled=true -Des.http.cors.allow-origin=*
    Mounts:
      /usr/share/elasticsearch/data from elasticsearch-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jnf1g (ro)
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  hawkular-openshift-agent:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      hawkular-openshift-agent-kapua
    Optional:  false
  elasticsearch-data:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  default-token-jnf1g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jnf1g
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
  FirstSeen  LastSeen  Count  From                                           SubObjectPath                   Type    Reason          Message
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500                                  Normal  SandboxChanged  Pod sandbox changed, it will be killed and re-created.
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500  spec.containers{elasticsearch}  Normal  Pulling         pulling image "elasticsearch:2.4"
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500  spec.containers{elasticsearch}  Normal  Pulled          Successfully pulled image "elasticsearch:2.4"
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500  spec.containers{elasticsearch}  Normal  Created         Created container
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500  spec.containers{elasticsearch}  Normal  Started         Started container
```

```
Name:            kapua-api-1-48cdh
Namespace:       redhat-iot
Security Policy: restricted
Node:            tekfocal-precision-workstation-t3500/192.168.10.5
Start Time:      Fri, 27 Oct 2017 12:51:38 +0500
Labels:          app=kapua-api deployment=kapua-api-1 deploymentconfig=kapua-api hawkular-openshift-agent=jolokia-kapua
Annotations:     kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"redhat-iot","name":"kapua-api-1","uid":"a6896a99-baeb-11e7-99c7-b8ac6f...
                 openshift.io/deployment-config.latest-version=1
                 openshift.io/deployment-config.name=kapua-api
                 openshift.io/deployment.name=kapua-api-1
                 openshift.io/generated-by=OpenShiftNewApp
                 openshift.io/scc=restricted
Status:          Running
IP:              172.17.0.7
Controllers:     ReplicationController/kapua-api-1
Containers:
  kapua-console:
    Container ID:   docker://8933d3a58effe9d4b505e4365476de42413cfccd28ace6716a0920e44b14751c
    Image:          redhatiot/kapua-api-jetty:2017-04-08
    Image ID:       docker-pullable://redhatiot/kapua-api-jetty@sha256:be37b1d3f9504d8c53c84e26c151b7484dcc9ca34f2c57fb1d9300049016dffc
    Ports:          8778/TCP, 8080/TCP
    State:          Running (Started: Mon, 30 Oct 2017 10:49:58 +0500)
    Last State:     Terminated (Reason: Error, Exit Code: 255, Started: Fri, 27 Oct 2017 16:09:00 +0500, Finished: Fri, 27 Oct 2017 18:32:58 +0500)
    Ready:          True
    Restart Count:  4
    Readiness:      http-get http://:8080/ delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      JAVA_OPTS: -Ddatastore.elasticsearch.nodes=$ELASTICSEARCH_PORT_9200_TCP_ADDR -Dcommons.db.connection.host=$SQL_SERVICE_HOST -Dcommons.db.connection.port=$SQL_PORT_3306_TCP_PORT -Dbroker.host=$KAPUA_BROKER_SERVICE_HOST -javaagent:/jolokia-jvm-agent.jar=port=8778,protocol=https,caCert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt,clientPrincipal=cn=system:master-proxy,useSslClientAuthentication=true,extraClientCheck=true,host=0.0.0.0,discoveryEnabled=false,user=jolokia,password=7ARcvvynVlbscd0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jnf1g (ro)
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  hawkular-openshift-agent:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      hawkular-openshift-agent-kapua
    Optional:  false
  default-token-jnf1g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jnf1g
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
  FirstSeen  LastSeen  Count  From                                           SubObjectPath                   Type     Reason          Message
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500                                  Normal   SandboxChanged  Pod sandbox changed, it will be killed and re-created.
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500  spec.containers{kapua-console}  Normal   Pulling         pulling image "redhatiot/kapua-api-jetty:2017-04-08"
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500  spec.containers{kapua-console}  Normal   Pulled          Successfully pulled image "redhatiot/kapua-api-jetty:2017-04-08"
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500  spec.containers{kapua-console}  Normal   Created         Created container
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500  spec.containers{kapua-console}  Normal   Started         Started container
  4m         4m        2      kubelet, tekfocal-precision-workstation-t3500  spec.containers{kapua-console}  Warning  Unhealthy       Readiness probe failed: Get http://172.17.0.7:8080/: dial tcp 172.17.0.7:8080: getsockopt: connection refused
```

```
Name:            kapua-broker-1-tcvml
Namespace:       redhat-iot
Security Policy: restricted
Node:            tekfocal-precision-workstation-t3500/192.168.10.5
Start Time:      Fri, 27 Oct 2017 12:51:40 +0500
Labels:          app=kapua-broker deployment=kapua-broker-1 deploymentconfig=kapua-broker hawkular-openshift-agent=jolokia-kapua
Annotations:     kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"redhat-iot","name":"kapua-broker-1","uid":"a68f5f4c-baeb-11e7-99c7-b8a...
                 openshift.io/container.kapua-broker.image.entrypoint=["/maven/bin/activemq","console"]
                 openshift.io/deployment-config.latest-version=1
                 openshift.io/deployment-config.name=kapua-broker
                 openshift.io/deployment.name=kapua-broker-1
                 openshift.io/generated-by=OpenShiftNewApp
                 openshift.io/scc=restricted
Status:          Running
IP:              172.17.0.5
Controllers:     ReplicationController/kapua-broker-1
Containers:
  kapua-broker:
    Container ID:   docker://23fc436ad7dd022291bd3aa094aa98c78b6ee76c8658ba2a86db955b3b1905ec
    Image:          redhatiot/kapua-broker:2017-04-08
    Image ID:       docker-pullable://redhatiot/kapua-broker@sha256:1ecbd6e760124cd5d47322d9b161fb1b20c4dd6d52c29e383ea46d64755cab51
    Ports:          8778/TCP, 1883/TCP, 61614/TCP
    State:          Running (Started: Mon, 30 Oct 2017 10:50:54 +0500)
    Last State:     Terminated (Reason: Error, Exit Code: 1, Started: Mon, 30 Oct 2017 10:49:54 +0500, Finished: Mon, 30 Oct 2017 10:50:28 +0500)
    Ready:          True
    Restart Count:  9
    Readiness:      tcp-socket :1883 delay=15s timeout=1s period=10s #success=1 #failure=3
    Environment:
      ACTIVEMQ_OPTS: -Dcommons.db.connection.host=$SQL_SERVICE_HOST -Dcommons.db.connection.port=$SQL_PORT_3306_TCP_PORT -Ddatastore.elasticsearch.nodes=$ELASTICSEARCH_PORT_9200_TCP_ADDR -javaagent:/jolokia-jvm-agent.jar=port=8778,protocol=https,caCert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt,clientPrincipal=cn=system:master-proxy,useSslClientAuthentication=true,extraClientCheck=true,host=0.0.0.0,discoveryEnabled=false,user=jolokia,password=7ARcvvynVlbscd0
    Mounts:
      /maven/data from kapua-broker-volume-1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jnf1g (ro)
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  kapua-broker-volume-1:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  default-token-jnf1g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jnf1g
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
  FirstSeen  LastSeen  Count  From                                           SubObjectPath                  Type     Reason          Message
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500                                 Normal   SandboxChanged  Pod sandbox changed, it will be killed and re-created.
  4m         4m        2      kubelet, tekfocal-precision-workstation-t3500                                 Warning  FailedSync      Error syncing pod
  4m         4m        2      kubelet, tekfocal-precision-workstation-t3500  spec.containers{kapua-broker}  Warning  BackOff         Back-off restarting failed container
  5m         4m        2      kubelet, tekfocal-precision-workstation-t3500  spec.containers{kapua-broker}  Normal   Pulling         pulling image "redhatiot/kapua-broker:2017-04-08"
  5m         4m        2      kubelet, tekfocal-precision-workstation-t3500  spec.containers{kapua-broker}  Normal   Pulled          Successfully pulled image "redhatiot/kapua-broker:2017-04-08"
  5m         4m        2      kubelet, tekfocal-precision-workstation-t3500  spec.containers{kapua-broker}  Normal   Created         Created container
  5m         4m        2      kubelet, tekfocal-precision-workstation-t3500  spec.containers{kapua-broker}  Normal   Started         Started container
  4m         3m        4      kubelet, tekfocal-precision-workstation-t3500  spec.containers{kapua-broker}  Warning  Unhealthy       Readiness probe failed: dial tcp 172.17.0.5:1883: getsockopt: connection refused
```
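The kapua-broker pod above is crash-looping (Restart Count 9, `BackOff`, and readiness failures on port 1883), which will also break the simulator that connects to it. A quick way to see why it died is to pull the log of the previously terminated container (a command sketch; the pod name is taken from the output above and will differ on other clusters):

```shell
# Log of the *previous* container instance (-p), i.e. the one that crashed
oc logs -p kapua-broker-1-tcvml -n redhat-iot

# Follow the current instance while it restarts
oc logs -f dc/kapua-broker -n redhat-iot
```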

```
Name:            kapua-console-1-3htgf
Namespace:       redhat-iot
Security Policy: restricted
Node:            tekfocal-precision-workstation-t3500/192.168.10.5
Start Time:      Fri, 27 Oct 2017 12:51:37 +0500
Labels:          app=kapua-console deployment=kapua-console-1 deploymentconfig=kapua-console hawkular-openshift-agent=jolokia-kapua
Annotations:     kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"redhat-iot","name":"kapua-console-1","uid":"a692dd04-baeb-11e7-99c7-b8...
                 openshift.io/deployment-config.latest-version=1
                 openshift.io/deployment-config.name=kapua-console
                 openshift.io/deployment.name=kapua-console-1
                 openshift.io/generated-by=OpenShiftNewApp
                 openshift.io/scc=restricted
Status:          Running
IP:              172.17.0.12
Controllers:     ReplicationController/kapua-console-1
Containers:
  kapua-console:
    Container ID:   docker://1d0abd7cfde38f6b2880bf83df2a921f5d3e9ad9bf85070932c952539935d403
    Image:          redhatiot/kapua-console-jetty:2017-04-08
    Image ID:       docker-pullable://redhatiot/kapua-console-jetty@sha256:f15dc1e66082c4bf521aa30ef19d644f49b354085af0da689779eaefeb8cd6dc
    Ports:          8778/TCP, 8080/TCP
    State:          Running (Started: Mon, 30 Oct 2017 10:50:35 +0500)
    Last State:     Terminated (Reason: Error, Exit Code: 255, Started: Fri, 27 Oct 2017 16:08:47 +0500, Finished: Fri, 27 Oct 2017 18:32:58 +0500)
    Ready:          True
    Restart Count:  4
    Readiness:      http-get http://:8080/ delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      JAVA_OPTS: -Ddatastore.elasticsearch.nodes=$ELASTICSEARCH_PORT_9200_TCP_ADDR -Dcommons.db.connection.host=$SQL_SERVICE_HOST -Dcommons.db.connection.port=$SQL_PORT_3306_TCP_PORT -Dbroker.host=$KAPUA_BROKER_SERVICE_HOST -javaagent:/jolokia-jvm-agent.jar=port=8778,protocol=https,caCert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt,clientPrincipal=cn=system:master-proxy,useSslClientAuthentication=true,extraClientCheck=true,host=0.0.0.0,discoveryEnabled=false,user=jolokia,password=7ARcvvynVlbscd0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jnf1g (ro)
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  hawkular-openshift-agent:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      hawkular-openshift-agent-kapua
    Optional:  false
  default-token-jnf1g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jnf1g
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
  FirstSeen  LastSeen  Count  From                                           SubObjectPath                   Type    Reason          Message
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500                                  Normal  SandboxChanged  Pod sandbox changed, it will be killed and re-created.
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500  spec.containers{kapua-console}  Normal  Pulling         pulling image "redhatiot/kapua-console-jetty:2017-04-08"
  4m         4m        1      kubelet, tekfocal-precision-workstation-t3500  spec.containers{kapua-console}  Normal  Pulled          Successfully pulled image "redhatiot/kapua-console-jetty:2017-04-08"
  4m         4m        1      kubelet, tekfocal-precision-workstation-t3500  spec.containers{kapua-console}  Normal  Created         Created container
  4m         4m        1      kubelet, tekfocal-precision-workstation-t3500  spec.containers{kapua-console}  Normal  Started         Started container
```

```
Name:            simulator-2-7t9dr
Namespace:       redhat-iot
Security Policy: restricted
Node:            tekfocal-precision-workstation-t3500/192.168.10.5
Start Time:      Fri, 27 Oct 2017 13:17:30 +0500
Labels:          app=simulator deployment=simulator-2 deploymentconfig=simulator
Annotations:     kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"redhat-iot","name":"simulator-2","uid":"41bc47a1-baef-11e7-99c7-b8ac6f...
                 openshift.io/deployment-config.latest-version=2
                 openshift.io/deployment-config.name=simulator
                 openshift.io/deployment.name=simulator-2
                 openshift.io/generated-by=OpenShiftNewApp
                 openshift.io/scc=restricted
Status:          Running
IP:              172.17.0.10
Controllers:     ReplicationController/simulator-2
Containers:
  simulator:
    Container ID:   docker://15597ef1bbe6df57efe247afd560ff14b296556a15dd323b635b1822e232ea7e
    Image:          redhatiot/kura-simulator:2017-04-08
    Image ID:       docker-pullable://redhatiot/kura-simulator@sha256:8c1ce7a6d5f983327fa2de4487f97a2cdb3d7710b9d733cb4ba37f1d4f068220
    Port:
    State:          Running (Started: Mon, 30 Oct 2017 10:51:36 +0500)
    Last State:     Terminated (Reason: Error, Exit Code: 1, Started: Mon, 30 Oct 2017 10:50:58 +0500, Finished: Mon, 30 Oct 2017 10:51:08 +0500)
    Ready:          True
    Restart Count:  7
    Environment:
      KSIM_BROKER_PROTO:             $(KAPUA_BROKER_PORT_1883_TCP_PROTO)
      KSIM_BROKER_HOST:              $(KAPUA_BROKER_SERVICE_HOST)
      KSIM_BROKER_PORT:              $(KAPUA_BROKER_PORT_1883_TCP_PORT)
      KSIM_BROKER_USER:              demo-gw2
      KSIM_BROKER_PASSWORD:          RedHat123
      KSIM_BASE_NAME:                truck-
      KSIM_NAME_FACTORY:
      KSIM_NUM_GATEWAYS:             2
      KSIM_ACCOUNT_NAME:             Red-Hat
      KSIM_SIMULATION_CONFIGURATION: <set to the key 'ksim.simulator.configuration' of config map 'data-simulator-config'> Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jnf1g (ro)
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  default-token-jnf1g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jnf1g
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
  FirstSeen  LastSeen  Count  From                                           SubObjectPath               Type     Reason          Message
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500                              Normal   SandboxChanged  Pod sandbox changed, it will be killed and re-created.
  4m         3m        3      kubelet, tekfocal-precision-workstation-t3500  spec.containers{simulator}  Warning  BackOff         Back-off restarting failed container
  4m         3m        3      kubelet, tekfocal-precision-workstation-t3500                              Warning  FailedSync      Error syncing pod
  5m         3m        3      kubelet, tekfocal-precision-workstation-t3500  spec.containers{simulator}  Normal   Pulling         pulling image "redhatiot/kura-simulator:2017-04-08"
  4m         3m        3      kubelet, tekfocal-precision-workstation-t3500  spec.containers{simulator}  Normal   Pulled          Successfully pulled image "redhatiot/kura-simulator:2017-04-08"
  4m         3m        3      kubelet, tekfocal-precision-workstation-t3500  spec.containers{simulator}  Normal   Created         Created container
  4m         3m        3      kubelet, tekfocal-precision-workstation-t3500  spec.containers{simulator}  Normal   Started         Started container
```

```
Name:            sql-1-gvhh6
Namespace:       redhat-iot
Security Policy: restricted
Node:            tekfocal-precision-workstation-t3500/192.168.10.5
Start Time:      Fri, 27 Oct 2017 12:51:41 +0500
Labels:          app=sql deployment=sql-1 deploymentconfig=sql hawkular-openshift-agent=jolokia-kapua
Annotations:     kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"redhat-iot","name":"sql-1","uid":"a68968f5-baeb-11e7-99c7-b8ac6f99af47...
                 openshift.io/deployment-config.latest-version=1
                 openshift.io/deployment-config.name=sql
                 openshift.io/deployment.name=sql-1
                 openshift.io/generated-by=OpenShiftNewApp
                 openshift.io/scc=restricted
Status:          Running
IP:              172.17.0.4
Controllers:     ReplicationController/sql-1
Containers:
  sql:
    Container ID:   docker://64d0a4a513caa0bbf3bb74b07f12612ca595e6321f9d373818b4bfad014660a4
    Image:          redhatiot/kapua-sql:2017-04-08
    Image ID:       docker-pullable://redhatiot/kapua-sql@sha256:c81883457f96cbff79950563dffee29d60d350bb3220d340336fd32bb293fcfe
    Ports:          8778/TCP, 3306/TCP, 8181/TCP
    State:          Running (Started: Mon, 30 Oct 2017 10:49:51 +0500)
    Last State:     Terminated (Reason: Error, Exit Code: 255, Started: Fri, 27 Oct 2017 16:08:40 +0500, Finished: Fri, 27 Oct 2017 18:32:58 +0500)
    Ready:          True
    Restart Count:  4
    Readiness:      tcp-socket :3306 delay=15s timeout=1s period=10s #success=1 #failure=3
    Environment:
      H2_OPTS: -javaagent:/jolokia-jvm-agent.jar=port=8778,protocol=https,caCert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt,clientPrincipal=cn=system:master-proxy,useSslClientAuthentication=true,extraClientCheck=true,host=0.0.0.0,discoveryEnabled=false,user=jolokia,password=7ARcvvynVlbscd0
    Mounts:
      /opt/h2-data from sql-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jnf1g (ro)
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  sql-data:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  hawkular-openshift-agent:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      hawkular-openshift-agent-kapua
    Optional:  false
  default-token-jnf1g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jnf1g
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
  FirstSeen  LastSeen  Count  From                                           SubObjectPath         Type     Reason          Message
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500                        Normal   SandboxChanged  Pod sandbox changed, it will be killed and re-created.
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500  spec.containers{sql}  Normal   Pulling         pulling image "redhatiot/kapua-sql:2017-04-08"
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500  spec.containers{sql}  Normal   Pulled          Successfully pulled image "redhatiot/kapua-sql:2017-04-08"
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500  spec.containers{sql}  Normal   Created         Created container
  5m         5m        1      kubelet, tekfocal-precision-workstation-t3500  spec.containers{sql}  Normal   Started         Started container
  4m         4m        2      kubelet, tekfocal-precision-workstation-t3500  spec.containers{sql}  Warning  Unhealthy       Readiness probe failed: dial tcp 172.17.0.4:3306: getsockopt: connection refused
```
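Since the console reports "The route is not accepting traffic yet because it has not been admitted by a router", the first thing to verify (before digging into the pods above) is that a router is actually running and that it has admitted the dashboard route. A sketch, assuming the default router lives in the `default` project and the route is named `dashboard`:

```shell
# Is a router deployed and running at all?
oc get pods -n default -o wide | grep -i router

# Inspect the route: an admitted route shows an ingress entry
# with an Admitted=True condition in its status
oc describe route dashboard -n redhat-iot
oc get route dashboard -n redhat-iot -o yaml
```

If `oc get pods -n default` shows no router pod, the route can never be admitted, regardless of how healthy the application pods are.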

```
Name:           dashboard
Namespace:      redhat-iot
Created:        2 days ago
Labels:         app=dashboard demo=summit2017
Annotations:    openshift.io/generated-by=OpenShiftNewApp
Latest Version: 1
Selector:       deploymentconfig=dashboard
Replicas:       1
Triggers:       Image(dashboard@latest, auto=true), Config
Strategy:       Rolling
Template:
  Pod Template:
    Labels:       app=dashboard deploymentconfig=dashboard
    Annotations:  openshift.io/generated-by=OpenShiftNewApp
    Containers:
      dashboard:
        Image:      172.30.120.0:5000/redhat-iot/dashboard@sha256:37c1ba8d47bd87c683dc237f61a225537c191f55f6b300403cb6e6cd84cc03a5
        Port:       8080/TCP
        Liveness:   http-get http://:8080/ delay=120s timeout=5s period=5s #success=1 #failure=5
        Readiness:  http-get http://:8080/ delay=15s timeout=1s period=5s #success=1 #failure=5
        Environment:
          BROKER_WS_NAME:           broker
          BROKER_USERNAME:          demo-gw2
          BROKER_PASSWORD:          RedHat123
          DATASTORE_PROXY_SERVICE:  datastore-proxy
          GOOGLE_MAPS_API_KEY:      AIzaSyDpDtvyzzdXDYk5nt6CuOtjxmvBvwGq5D4
          DASHBOARD_WEB_TITLE:      Red Hat + Eurotech IoT Fleet Telematics Demo
        Mounts:
    Volumes:

Deployment #1 (latest):
  Name:         dashboard-1
  Created:      2 days ago
  Status:       Complete
  Replicas:     1 current / 1 desired
  Selector:     deployment=dashboard-1,deploymentconfig=dashboard
  Labels:       app=dashboard,demo=summit2017,openshift.io/deployment-config.name=dashboard
  Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed

Events:
```
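The dashboard DeploymentConfig above is healthy (1 Running, deployment Complete), so the pod itself is not the problem. If no router exists (common on a from-scratch Origin install), one can be created with the commands below, after which the route host (`dashboard-redhat-iot.<your-domain>`) must resolve to the node running the router, e.g. via a wildcard DNS record or a nip.io-style hostname. This is a sketch for OpenShift Origin 3.6 as cluster admin; the service-account name and the example hostname are assumptions to adapt to your setup:

```shell
# Create the router service account and grant it the required SCC
oc project default
oc adm policy add-scc-to-user hostnetwork -z router

# Deploy a default HAProxy router, which will admit routes cluster-wide
oc adm router router --replicas=1 --service-account=router

# Verify the dashboard host actually resolves from the client machine
# (example only; substitute your route's real hostname)
getent hosts dashboard-redhat-iot.192.168.10.3.nip.io
```

"server not found" in the browser specifically points at DNS resolution failing, which is independent of route admission, so both checks are worth doing.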

```
Name:           datastore
Namespace:      redhat-iot
Created:        2 days ago
Labels:         app=datastore application=datastore demo=summit2017
Annotations:    openshift.io/generated-by=OpenShiftNewApp
Latest Version: 1
Selector:       deploymentConfig=datastore
Replicas:       1
Triggers:       Image(jboss-datagrid65-openshift@1.4, auto=true), Config
Strategy:       Recreate
Template:
  Pod Template:
    Labels:       application=datastore deploymentConfig=datastore
    Annotations:  openshift.io/generated-by=OpenShiftNewApp
    Containers:
      datastore:
        Image:  registry.access.redhat.com/jboss-datagrid-6/datagrid65-openshift@sha256:f86e02fb8c740b4ed1f59300e94be69783ee51a38cc9ce6ddb73b6f817e173b3
        Ports:  8778/TCP, 8080/TCP, 8888/TCP, 11211/TCP, 11222/TCP, 11333/TCP
        Environment:
          USERNAME:                            rhiot
          PASSWORD:                            redhatiot1!
          OPENSHIFT_KUBE_PING_LABELS:          application=datastore
          OPENSHIFT_KUBE_PING_NAMESPACE:       (v1:metadata.namespace)
          INFINISPAN_CONNECTORS:               hotrod,rest
          CACHE_NAMES:                         customer,facility,operator,shipment,vehicle
          DATAVIRT_CACHE_NAMES:
          ENCRYPTION_REQUIRE_SSL_CLIENT_AUTH:
          HOTROD_SERVICE_NAME:                 datastore-hotrod
          MEMCACHED_CACHE:                     default
          REST_SECURITY_DOMAIN:
          JGROUPS_CLUSTER_PASSWORD:            YdbnXgne
        Mounts:
    Volumes:

Deployment #1 (latest):
  Name:         datastore-1
  Created:      2 days ago
  Status:       Complete
  Replicas:     1 current / 1 desired
  Selector:     deployment=datastore-1,deploymentConfig=datastore,deploymentconfig=datastore
  Labels:       app=datastore,application=datastore,demo=summit2017,openshift.io/deployment-config.name=datastore
  Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed

Events:
```

```
Name:           datastore-proxy
Namespace:      redhat-iot
Created:        2 days ago
Labels:         app=datastore-proxy application=datastore-proxy demo=summit2017
Annotations:    openshift.io/generated-by=OpenShiftNewApp
Latest Version: 1
Selector:       deploymentConfig=datastore-proxy
Replicas:       1
Triggers:       Image(datastore-proxy@latest, auto=true), Config
Strategy:       Recreate
Template:
  Pod Template:
    Labels:       application=datastore-proxy deploymentConfig=datastore-proxy
    Annotations:  openshift.io/generated-by=OpenShiftNewApp
    Containers:
      datastore-proxy:
        Image:      172.30.120.0:5000/redhat-iot/datastore-proxy@sha256:fa1bdf1170b89a097f64c69c62ebd2b4f291dce257ff52849e95e7f1552d8b57
        Ports:      8778/TCP, 8080/TCP, 8888/TCP
        Liveness:   http-get http://:8080/api/utils/health delay=120s timeout=5s period=10s #success=1 #failure=5
        Readiness:  http-get http://:8080/api/utils/health delay=15s timeout=1s period=10s #success=1 #failure=5
        Environment:
          OPENSHIFT_KUBE_PING_LABELS:     application=datastore-proxy
          OPENSHIFT_KUBE_PING_NAMESPACE:  (v1:metadata.namespace)
          MQ_CLUSTER_PASSWORD:            dR41F75A
          JGROUPS_CLUSTER_PASSWORD:       YdbnXgne
          AUTO_DEPLOY_EXPLODED:           false
          DATASTORE_HOST:                 datastore-hotrod
          DATASTORE_PORT:                 11333
          DATASTORE_CACHE:                customer,facility,operator,shipment,vehicle
          BROKER_USERNAME:                demo-gw2
          BROKER_PASSWORD:                RedHat123
          ADDITIONAL_SENSOR_IDS:
        Mounts:
    Volumes:

Deployment #1 (latest):
  Name:         datastore-proxy-1
  Created:      2 days ago
  Status:       Complete
  Replicas:     1 current / 1 desired
  Selector:     deployment=datastore-proxy-1,deploymentConfig=datastore-proxy,deploymentconfig=datastore-proxy
  Labels:       app=datastore-proxy,application=datastore-proxy,demo=summit2017,openshift.io/deployment-config.name=datastore-proxy
  Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed

Events:
```

```
Name:           elasticsearch
Namespace:      redhat-iot
Created:        2 days ago
Labels:         app=elasticsearch demo=summit2017 hawkular-openshift-agent=jolokia-kapua
Annotations:    openshift.io/generated-by=OpenShiftNewApp
Latest Version: 1
Selector:       app=elasticsearch,deploymentconfig=elasticsearch
Replicas:       1
Triggers:       Config
Strategy:       Rolling
Template:
  Pod Template:
    Labels:       app=elasticsearch deploymentconfig=elasticsearch hawkular-openshift-agent=jolokia-kapua
    Annotations:  openshift.io/generated-by=OpenShiftNewApp
    Containers:
      elasticsearch:
        Image:      elasticsearch:2.4
        Ports:      9200/TCP, 9300/TCP
        Readiness:  http-get http://:9200/ delay=15s timeout=5s period=10s #success=1 #failure=3
        Environment:
          ES_HEAP_SIZE:  256m
          ES_JAVA_OPTS:  -Des.cluster.name=kapua-datastore -Des.http.cors.enabled=true -Des.http.cors.allow-origin=*
        Mounts:
          /usr/share/elasticsearch/data from elasticsearch-data (rw)
    Volumes:
      hawkular-openshift-agent:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      hawkular-openshift-agent-kapua
        Optional:  false
      elasticsearch-data:
        Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:

Deployment #1 (latest):
  Name:         elasticsearch-1
  Created:      2 days ago
  Status:       Complete
  Replicas:     1 current / 1 desired
  Selector:     app=elasticsearch,deployment=elasticsearch-1,deploymentconfig=elasticsearch
  Labels:       app=elasticsearch,demo=summit2017,hawkular-openshift-agent=jolokia-kapua,openshift.io/deployment-config.name=elasticsearch
  Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed

Events:
```

```
Name:           kapua-api
Namespace:      redhat-iot
Created:        2 days ago
Labels:         app=kapua-api demo=summit2017 hawkular-openshift-agent=jolokia-kapua
Annotations:    openshift.io/generated-by=OpenShiftNewApp
Latest Version: 1
Selector:       app=kapua-api,deploymentconfig=kapua-api
Replicas:       1
Triggers:       Config
Strategy:       Rolling
Template:
  Pod Template:
    Labels:       app=kapua-api deploymentconfig=kapua-api hawkular-openshift-agent=jolokia-kapua
    Annotations:  openshift.io/generated-by=OpenShiftNewApp
    Containers:
      kapua-console:
        Image:      redhatiot/kapua-api-jetty:2017-04-08
        Ports:      8778/TCP, 8080/TCP
        Readiness:  http-get http://:8080/ delay=15s timeout=5s period=10s #success=1 #failure=3
        Environment:
          JAVA_OPTS: -Ddatastore.elasticsearch.nodes=$ELASTICSEARCH_PORT_9200_TCP_ADDR -Dcommons.db.connection.host=$SQL_SERVICE_HOST -Dcommons.db.connection.port=$SQL_PORT_3306_TCP_PORT -Dbroker.host=$KAPUA_BROKER_SERVICE_HOST -javaagent:/jolokia-jvm-agent.jar=port=8778,protocol=https,caCert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt,clientPrincipal=cn=system:master-proxy,useSslClientAuthentication=true,extraClientCheck=true,host=0.0.0.0,discoveryEnabled=false,user=jolokia,password=7ARcvvynVlbscd0
        Mounts:
    Volumes:
      hawkular-openshift-agent:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      hawkular-openshift-agent-kapua
        Optional:  false

Deployment #1 (latest):
  Name:         kapua-api-1
  Created:      2 days ago
  Status:       Complete
  Replicas:     1 current / 1 desired
  Selector:     app=kapua-api,deployment=kapua-api-1,deploymentconfig=kapua-api
  Labels:       app=kapua-api,demo=summit2017,hawkular-openshift-agent=jolokia-kapua,openshift.io/deployment-config.name=kapua-api
  Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed

Events:
```

```
Name:           kapua-broker
Namespace:      redhat-iot
Created:        2 days ago
Labels:         app=kapua-broker demo=summit2017
Annotations:    openshift.io/generated-by=OpenShiftNewApp
Latest Version: 1
Selector:       app=kapua-broker,deploymentconfig=kapua-broker
Replicas:       1
Triggers:       Config
Strategy:       Recreate
Template:
  Pod Template:
    Labels:       app=kapua-broker deploymentconfig=kapua-broker hawkular-openshift-agent=jolokia-kapua
    Annotations:  openshift.io/container.kapua-broker.image.entrypoint=["/maven/bin/activemq","console"]
                  openshift.io/generated-by=OpenShiftNewApp
    Containers:
      kapua-broker:
        Image:      redhatiot/kapua-broker:2017-04-08
        Ports:      8778/TCP, 1883/TCP, 61614/TCP
        Readiness:  tcp-socket :1883 delay=15s timeout=1s period=10s #success=1 #failure=3
        Environment:
          ACTIVEMQ_OPTS: -Dcommons.db.connection.host=$SQL_SERVICE_HOST -Dcommons.db.connection.port=$SQL_PORT_3306_TCP_PORT -Ddatastore.elasticsearch.nodes=$ELASTICSEARCH_PORT_9200_TCP_ADDR -javaagent:/jolokia-jvm-agent.jar=port=8778,protocol=https,caCert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt,clientPrincipal=cn=system:master-proxy,useSslClientAuthentication=true,extraClientCheck=true,host=0.0.0.0,discoveryEnabled=false,user=jolokia,password=7ARcvvynVlbscd0
        Mounts:
          /maven/data from kapua-broker-volume-1 (rw)
    Volumes:
      kapua-broker-volume-1:
        Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:

Deployment #1 (latest):
  Name:         kapua-broker-1
  Created:      2 days ago
  Status:       Complete
  Replicas:     1 current / 1 desired
  Selector:     app=kapua-broker,deployment=kapua-broker-1,deploymentconfig=kapua-broker
  Labels:       app=kapua-broker,demo=summit2017,openshift.io/deployment-config.name=kapua-broker
  Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed

Events:
```

Name: kapua-console Namespace: redhat-iot Created: 2 days ago Labels: app=kapua-console demo=summit2017 Annotations: openshift.io/generated-by=OpenShiftNewApp Latest Version: 1 Selector: app=kapua-console,deploymentconfig=kapua-console Replicas: 1 Triggers: Config Strategy: Rolling Template: Pod Template: Labels: app=kapua-console deploymentconfig=kapua-console hawkular-openshift-agent=jolokia-kapua Annotations: openshift.io/generated-by=OpenShiftNewApp Containers: kapua-console: Image: redhatiot/kapua-console-jetty:2017-04-08 Ports: 8778/TCP, 8080/TCP Readiness: http-get http://:8080/ delay=15s timeout=5s period=10s #success=1 #failure=3 Environment: JAVA_OPTS: -Ddatastore.elasticsearch.nodes=$ELASTICSEARCH_PORT_9200_TCP_ADDR -Dcommons.db.connection.host=$SQL_SERVICE_HOST -Dcommons.db.connection.port=$SQL_PORT_3306_TCP_PORT -Dbroker.host=$KAPUA_BROKER_SERVICE_HOST -javaagent:/jolokia-jvm-agent.jar=port=8778,protocol=https,caCert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt,clientPrincipal=cn=system:master-proxy,useSslClientAuthentication=true,extraClientCheck=true,host=0.0.0.0,discoveryEnabled=false,user=jolokia,password=7ARcvvynVlbscd0 Mounts: Volumes: hawkular-openshift-agent: Type: ConfigMap (a volume populated by a ConfigMap) Name: hawkular-openshift-agent-kapua Optional: false

Deployment #1 (latest): Name: kapua-console-1 Created: 2 days ago Status: Complete Replicas: 1 current / 1 desired Selector: app=kapua-console,deployment=kapua-console-1,deploymentconfig=kapua-console Labels: app=kapua-console,demo=summit2017,openshift.io/deployment-config.name=kapua-console Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed

Events:

Name: simulator Namespace: redhat-iot Created: 2 days ago Labels: demo=summit2017 Annotations: openshift.io/generated-by=OpenShiftNewApp Latest Version: 2 Selector: app=simulator,deploymentconfig=simulator Replicas: 1 Triggers: Config Strategy: Recreate Template: Pod Template: Labels: app=simulator deploymentconfig=simulator Annotations: openshift.io/generated-by=OpenShiftNewApp Containers: simulator: Image: redhatiot/kura-simulator:2017-04-08 Port:
Environment: KSIM_BROKER_PROTO: $(KAPUA_BROKER_PORT_1883_TCP_PROTO) KSIM_BROKER_HOST: $(KAPUA_BROKER_SERVICE_HOST) KSIM_BROKER_PORT: $(KAPUA_BROKER_PORT_1883_TCP_PORT) KSIM_BROKER_USER: demo-gw2 KSIM_BROKER_PASSWORD: RedHat123 KSIM_BASE_NAME: truck- KSIM_NAME_FACTORY:
KSIM_NUM_GATEWAYS: 2 KSIM_ACCOUNT_NAME: Red-Hat KSIM_SIMULATION_CONFIGURATION: <set to the key 'ksim.simulator.configuration' of config map 'data-simulator-config'> Optional: false Mounts: Volumes:

Deployment #2 (latest): Name: simulator-2 Created: 2 days ago Status: Complete Replicas: 1 current / 1 desired Selector: app=simulator,deployment=simulator-2,deploymentconfig=simulator Labels: demo=summit2017,openshift.io/deployment-config.name=simulator Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed Deployment #1: Created: 2 days ago Status: Complete Replicas: 0 current / 0 desired

Events:

Name: sql Namespace: redhat-iot Created: 2 days ago Labels: app=sql demo=summit2017 Annotations: openshift.io/generated-by=OpenShiftNewApp Latest Version: 1 Selector: app=sql,deploymentconfig=sql Replicas: 1 Triggers: Config Strategy: Recreate Template: Pod Template: Labels: app=sql deploymentconfig=sql hawkular-openshift-agent=jolokia-kapua Annotations: openshift.io/generated-by=OpenShiftNewApp Containers: sql: Image: redhatiot/kapua-sql:2017-04-08 Ports: 8778/TCP, 3306/TCP, 8181/TCP Readiness: tcp-socket :3306 delay=15s timeout=1s period=10s #success=1 #failure=3 Environment: H2_OPTS: -javaagent:/jolokia-jvm-agent.jar=port=8778,protocol=https,caCert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt,clientPrincipal=cn=system:master-proxy,useSslClientAuthentication=true,extraClientCheck=true,host=0.0.0.0,discoveryEnabled=false,user=jolokia,password=7ARcvvynVlbscd0 Mounts: /opt/h2-data from sql-data (rw) Volumes: sql-data: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: hawkular-openshift-agent: Type: ConfigMap (a volume populated by a ConfigMap) Name: hawkular-openshift-agent-kapua Optional: false

Deployment #1 (latest): Name: sql-1 Created: 2 days ago Status: Complete Replicas: 1 current / 1 desired Selector: app=sql,deployment=sql-1,deploymentconfig=sql Labels: app=sql,demo=summit2017,openshift.io/deployment-config.name=sql Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed

kamranIoTDeveloper commented 6 years ago

@jamesfalkner did you find the issue in the logs above?

jamesfalkner commented 6 years ago

Yep: all of your Routes use the hostname `api-redhat-iot.router.default.svc.cluster.local` (shown as `Requested Host`), which will never work because that hostname isn't resolvable externally. Your default router subdomain isn't set. You can change it by following the docs and restarting the master. I would expect it to be something like `apps.<ipaddr>.nip.io`, where `<ipaddr>` is your public IP address (or 127.0.0.1 if you're running it locally).
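
For reference, the router subdomain lives under `routingConfig` in `master-config.yaml`; a minimal sketch of the relevant fragment (surrounding keys omitted, and the exact layout may vary by Origin version):

```yaml
# master-config.yaml (fragment)
routingConfig:
  # Wildcard domain appended to route names; it must be externally
  # resolvable, e.g. apps.<public-ip>.nip.io
  subdomain: apps.127.0.0.1.nip.io
```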

kamranIoTDeveloper commented 6 years ago

Thanks @jamesfalkner, I followed the link, but I can't find /etc/openshift/master/master-config.yaml

jamesfalkner commented 6 years ago

OK, can you provide the exact steps you took to install OpenShift? For example, are you installing it on a CentOS/RHEL VM, or locally on a Windows or macOS box? That file does exist; it's just inside the container image or VM in which the master server runs.

kamranIoTDeveloper commented 6 years ago

I am installing it on Ubuntu, following these steps:

Installing and Running an All-in-One Server

Download the binary from the Releases page and untar it on your local system.

Add the directory you untarred the release into to your path:

$ export PATH="$(pwd)":$PATH

Launch the server:

$ sudo ./openshift start
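
A note on the `PATH` step: `export` only affects the current shell, and it is easy for a different binary to win the lookup. A self-contained sketch of how to confirm the prepended directory resolves first (the directory and stub binary below are hypothetical stand-ins for the untarred release):

```shell
#!/bin/sh
# Stub release directory and binary (stand-ins for the untarred Origin release).
mkdir -p /tmp/origin-release
printf '#!/bin/sh\necho openshift-stub\n' > /tmp/origin-release/openshift
chmod +x /tmp/origin-release/openshift
# Same prepend as in the install steps above:
cd /tmp/origin-release
export PATH="$(pwd)":$PATH
# The prepended directory should now win the lookup:
command -v openshift
# → /tmp/origin-release/openshift
```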
kamranIoTDeveloper commented 6 years ago

@jamesfalkner Can you send me instructions for installing OpenShift?

jamesfalkner commented 6 years ago

So I was able to configure the default by running these steps:

```
# write the default config files out to disk
$ ./openshift start --write-config=`pwd`

# replace the routingConfig.subdomain value "router.default.svc.cluster.local"
# with "apps.127.0.0.1.nip.io"
$ vi master/master-config.yaml

# now run origin with the specified config files
$ ./openshift start --master-config=master/master-config.yaml --node-config=node-*/node-config.yaml
```
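
The hand edit of `master-config.yaml` can also be scripted. A minimal sketch, assuming the file still contains the stock default subdomain (a stand-in config file is created in a scratch directory so the commands are runnable anywhere; point `sed` at your real `master/master-config.yaml` instead):

```shell
#!/bin/sh
# Scratch directory with a stand-in master-config.yaml fragment.
cd "$(mktemp -d)"
mkdir -p master
cat > master/master-config.yaml <<'EOF'
routingConfig:
  subdomain: router.default.svc.cluster.local
EOF
# Swap the unresolvable default for a nip.io wildcard domain:
sed -i 's/router\.default\.svc\.cluster\.local/apps.127.0.0.1.nip.io/' master/master-config.yaml
grep subdomain master/master-config.yaml
# →   subdomain: apps.127.0.0.1.nip.io
```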
kamranIoTDeveloper commented 6 years ago

@jamesfalkner Thanks, the subdomain issue is resolved. Now the following problem appears while creating a node:

```
oc adm create-node-config --node-dir=/openshift.local.config/node-tekfocal-precision-workstation-t3500 \
    --node=node-config.yaml --hostnames=127.0.0.1 \
    --certificate-authority=/openshift.local.config/master/ca.crt \
    --signer-cert=/openshift.local.config/master/ca.cert \
    --signer-key=/openshift.local.config/master/ca.key
Generating node credentials ...
error: --signer-cert, "/openshift.local.config/master/ca.cert" must be a valid certificate file
```
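
The error message carries the hint: `--signer-cert` was passed `ca.cert`, while the `--certificate-authority` flag in the same command uses `ca.crt`, which is the file `openshift start` actually writes. One way to sanity-check that a path really is a parseable PEM certificate, demonstrated on a throwaway self-signed CA so the commands are self-contained (substitute your real `/openshift.local.config/master/ca.crt`):

```shell
#!/bin/sh
# Generate a throwaway self-signed CA cert (stand-in for the real ca.crt).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
    -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
# A valid certificate file parses cleanly and prints its subject;
# a wrong or missing path fails with a nonzero exit status.
openssl x509 -in /tmp/demo-ca.crt -noout -subject
```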

kamranIoTDeveloper commented 6 years ago

@jamesfalkner the node was created successfully. Now I'm facing a problem in the builds of the dashboard and datastore-proxy:

```
Cloning "https://github.com/redhat-iot/summit2017" ...
WARNING: timed out waiting for git server, will wait 1m4s
WARNING: timed out waiting for git server, will wait 4m16s
error: build error: fatal: unable to access 'https://github.com/redhat-iot/summit2017/': Failed connect to github.com:443;
```

kamranIoTDeveloper commented 6 years ago

@jamesfalkner, there was an error in the deployment of the Docker registry; the problem was solved by redeploying the registry with the default image and parameters. Thanks!