eclipse-hono / hono

Eclipse Hono™ Project
https://eclipse.dev/hono
Eclipse Public License 2.0

(Minishift) failed to start containers: Crash Loop Back-off #566

Closed: mohabh88 closed this issue 6 years ago

mohabh88 commented 6 years ago

minishift version: minishift v1.14.0+1ec5877
OS: Ubuntu 17.10

Deploying Hono on top of Minishift leaves many pods in CrashLoopBackOff:

$ oc project hono
Already on project "hono" on server "https://192.168.42.24:8443".
$ oc get pod
NAME                                    READY     STATUS             RESTARTS   AGE
grafana-543742999-8vthz                 1/1       Running            1          23m
hono-adapter-http-vertx-1-deploy        1/1       Running            0          23m
hono-adapter-http-vertx-1-v5r2k         0/1       CrashLoopBackOff   9          23m
hono-adapter-kura-1-deploy              1/1       Running            0          23m
hono-adapter-kura-1-r7r8m               0/1       CrashLoopBackOff   9          23m
hono-adapter-mqtt-vertx-1-deploy        1/1       Running            0          23m
hono-adapter-mqtt-vertx-1-xw6w4         0/1       CrashLoopBackOff   9          23m
hono-artemis-1-bjvf4                    1/1       Running            0          23m
hono-dispatch-router-1-64z8n            0/1       CrashLoopBackOff   9          23m
hono-dispatch-router-1-deploy           1/1       Running            0          23m
hono-service-auth-1-deploy              1/1       Running            0          23m
hono-service-auth-1-wwhcg               0/1       CrashLoopBackOff   9          23m
hono-service-device-registry-1-deploy   1/1       Running            0          23m
hono-service-device-registry-1-wdq6l    0/1       CrashLoopBackOff   8          23m
hono-service-messaging-1-deploy         1/1       Running            0          23m
hono-service-messaging-1-l9gr2          0/1       CrashLoopBackOff   9          23m
influxdb-106268175-qr4w7                1/1       Running            0          23m

The above behavior happens every time I use Minishift. Besides the CrashLoopBackOff errors, ImagePullBackOff errors also appear sometimes and could not be resolved.

The above started to appear when I tried to deploy the Prometheus monitoring stack and the Minishift heapster addon for dashboard resource metrics.

ctron commented 6 years ago

Can you check in the logs what causes the crash?

sophokles73 commented 6 years ago

this might be the same problem as #558

mohabh88 commented 6 years ago

Those are the logs I could retrieve from Minishift: minishift logs.zip. Could you point me to how to find the logs related to the pods?
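For reference, pod-level logs are fetched with `oc logs`. The sketch below is illustrative; the pod name is just one example taken from the listings in this thread, and it assumes the `hono` project is the active one:

```shell
# List the names of pods currently stuck in CrashLoopBackOff
# (column 3 of 'oc get pod' is the STATUS column)
oc get pod | awk '$3 ~ /CrashLoopBackOff/ {print $1}'

# Log output of the current container instance of one failing pod
oc logs hono-adapter-http-vertx-1-v5r2k

# Log output of the previously crashed instance, which is usually
# the one that holds the actual error
oc logs --previous hono-adapter-http-vertx-1-v5r2k
```

Note that when the container never starts at all (as turned out to be the case here), `oc logs` may be empty and `oc describe pod` is the more useful command.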

mohabh88 commented 6 years ago

Yes, @sophokles73. The outcome of the deployment varies every time with Minishift; it is not stable at all (unlike minikube, for example!). Most of the time Minishift fails to get Hono deployed on top of it normally.

ctron commented 6 years ago

A crash can happen for multiple reasons. I never had any issues running Minishift. However, I am still running minishift v1.13.1+75352e5.

From your comment:

The above started to appear when I tried to deploy Prometheus monitoring stack and minishift heapster addon for dashboard resource metrics.

I could imagine you are running out of resources. After all, Minishift runs OpenShift inside a virtual machine.

How much RAM and how many CPUs did you assign to that virtual machine?

mohabh88 commented 6 years ago

I am running the command below to start the Minishift VM (without adding or removing any other parameters):

$ minishift start
-- Starting profile 'minishift'
-- Checking if requested OpenShift version 'v3.7.1' is valid ... OK
-- Checking if requested OpenShift version 'v3.7.1' is supported ... OK
-- Checking if requested hypervisor 'kvm' is supported on this platform ... OK
-- Checking if KVM driver is installed ... 
   Driver is available at /usr/local/bin/docker-machine-driver-kvm ... 
   Checking driver binary is executable ... OK
-- Checking if Libvirt is installed ... OK
-- Checking if Libvirt default network is present ... OK
-- Checking if Libvirt default network is active ... OK
-- Checking the ISO URL ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting local OpenShift cluster using 'kvm' hypervisor ...
-- Starting Minishift VM ............... OK
-- Checking for IP address ... OK
-- Checking if external host is reachable from the Minishift VM ... 
   Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ... 
   Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 18% used OK
-- OpenShift cluster will be configured with ...
   Version: v3.7.1
Starting OpenShift using openshift/origin:v3.7.1 ...
OpenShift server started.

The server is accessible via web console at:
    https://192.168.42.24:8443

As per the status, only 18% of the disk is being used:

$ minishift status
Minishift:  Running
Profile:    minishift
OpenShift:  Running (openshift v3.7.1+282e43f-42)
DiskUsage:  18% of 17.9G

Should anything else be included while starting minishift to avoid hitting such errors?

mohabh88 commented 6 years ago

After a bit of research, I learned that the default memory for the VM is set to 2GB, while the stack's workspace alone can reach 2GB. Hence I started Minishift with 5GB, as most of the discussions recommended:

$ minishift start --memory=5GB
-- Starting profile 'minishift'
-- Checking if requested OpenShift version 'v3.7.1' is valid ... OK
-- Checking if requested OpenShift version 'v3.7.1' is supported ... OK
-- Checking if requested hypervisor 'kvm' is supported on this platform ... OK
-- Checking if KVM driver is installed ... 
   Driver is available at /usr/local/bin/docker-machine-driver-kvm ... 
   Checking driver binary is executable ... OK
-- Checking if Libvirt is installed ... OK
-- Checking if Libvirt default network is present ... OK
-- Checking if Libvirt default network is active ... OK
-- Checking the ISO URL ... OK
-- Downloading OpenShift binary 'oc' version 'v3.7.1'
 38.51 MiB / 38.51 MiB [==============================================================] 100.00% 0s
-- Downloading OpenShift v3.7.1 checksums ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting local OpenShift cluster using 'kvm' hypervisor ...
-- Minishift VM will be configured with ...
   Memory:    5 GB
   vCPUs :    2
   Disk size: 20 GB

   Downloading ISO 'https://github.com/minishift/minishift-b2d-iso/releases/download/v1.2.0/minishift-b2d.iso'
 40.00 MiB / 40.00 MiB [==============================================================] 100.00% 0s
-- Starting Minishift VM ........................ OK
-- Checking for IP address ... OK
-- Checking if external host is reachable from the Minishift VM ... 
   Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ... 
   Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 0% used OK
   Importing 'openshift/origin:v3.7.1' . CACHE MISS
   Importing 'openshift/origin-docker-registry:v3.7.1'  CACHE MISS
   Importing 'openshift/origin-haproxy-router:v3.7.1'  CACHE MISS
-- OpenShift cluster will be configured with ...
   Version: v3.7.1
Starting OpenShift using openshift/origin:v3.7.1 ...
Pulling image openshift/origin:v3.7.1
Pulled 1/4 layers, 26% complete
Pulled 2/4 layers, 68% complete
Pulled 3/4 layers, 87% complete
Pulled 4/4 layers, 100% complete
Extracting
Image pull complete
OpenShift server started.

The server is accessible via web console at:
    https://192.168.42.152:8443

You are logged in as:
    User:     developer
    Password: <any value>

To login as administrator:
    oc login -u system:admin

-- Exporting of OpenShift images is occuring in background process with pid 18759.

Still no luck; the pods remain in CrashLoopBackOff:

$ oc get pod
NAME                                    READY     STATUS                  RESTARTS   AGE
grafana-543742999-r92zs                 1/1       Running                 1          11m
hono-adapter-http-vertx-1-22wzk         0/1       CrashLoopBackOff        6          11m
hono-adapter-http-vertx-1-deploy        1/1       Running                 0          11m
hono-adapter-kura-1-deploy              1/1       Running                 0          11m
hono-adapter-kura-1-rc74r               0/1       CrashLoopBackOff        6          11m
hono-adapter-mqtt-vertx-1-deploy        1/1       Running                 0          11m
hono-adapter-mqtt-vertx-1-wsc9t         0/1       CrashLoopBackOff        6          11m
hono-artemis-1-hjkdn                    1/1       Running                 0          11m
hono-dispatch-router-1-deploy           1/1       Running                 0          11m
hono-dispatch-router-1-vkx8b            0/1       CrashLoopBackOff        6          11m
hono-service-auth-1-4knqv               0/1       CrashLoopBackOff        7          11m
hono-service-auth-1-deploy              1/1       Running                 0          11m
hono-service-device-registry-1-58vlp    0/1       Init:CrashLoopBackOff   6          11m
hono-service-device-registry-1-deploy   1/1       Running                 0          11m
hono-service-messaging-1-deploy         1/1       Running                 0          11m
hono-service-messaging-1-mqm2p          0/1       CrashLoopBackOff        7          11m
influxdb-106268175-h52b7                1/1       Running                 0          11m
$ minishift status
Minishift:  Running
Profile:    minishift
OpenShift:  Running (openshift v3.7.1+282e43f-42)
DiskUsage:  18% of 17.9G

ctron commented 6 years ago

Well, I am typically running with 16GB RAM and 4 CPUs. The default number of CPUs is limited to 2 IIRC, which limits the number of pods you can run. Adding more services requires more pods, of course.

I am using the following command - https://github.com/ctron/hono-demo-1#create-a-minishift-instance

minishift start --cpus 4 --memory 16GB --disk-size 40GB
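One caveat worth noting (an assumption about the setup here, not something confirmed in this thread): the `minishift start` sizing flags only take effect when the VM is first created, so an already-provisioned VM keeps its old size. Resizing generally means recreating the VM, roughly:

```shell
# Discard the existing VM so new sizing actually applies
minishift delete

# Optionally persist the sizing so every future 'minishift start' uses it
minishift config set cpus 4
minishift config set memory 16GB
minishift config set disk-size 40GB

# Recreate the VM with the configured resources
minishift start
```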

ctron commented 6 years ago

To get more information about a failing pod:

oc describe pod/hono-abc

e.g.:

oc describe pod/hono-adapter-http-vertx-1-5gknq

This should bring up an "Events" section showing past errors.

mohabh88 commented 6 years ago

I tried to use your recommended resources, but unfortunately I am limited to 15GB of physical memory on my machine, and the VM throws "cannot set up guest memory 'pc.ram': Cannot allocate memory". So I used slightly lower values:

minishift start --cpus 4 --memory 10GB --disk-size 30GB

Still, that didn't help:

$ oc get pod
NAME                                    READY     STATUS                  RESTARTS   AGE
grafana-543742999-8f6zk                 1/1       Running                 1          10m
hono-adapter-http-vertx-1-deploy        1/1       Running                 0          9m
hono-adapter-http-vertx-1-jhtc2         0/1       CrashLoopBackOff        6          9m
hono-adapter-kura-1-deploy              1/1       Running                 0          9m
hono-adapter-kura-1-lwj6j               0/1       CrashLoopBackOff        6          9m
hono-adapter-mqtt-vertx-1-deploy        1/1       Running                 0          9m
hono-adapter-mqtt-vertx-1-j956g         0/1       CrashLoopBackOff        6          9m
hono-artemis-1-dl2bd                    1/1       Running                 0          9m
hono-dispatch-router-1-deploy           1/1       Running                 0          10m
hono-dispatch-router-1-wldb6            0/1       CrashLoopBackOff        6          9m
hono-service-auth-1-9t5vn               0/1       CrashLoopBackOff        6          9m
hono-service-auth-1-deploy              1/1       Running                 0          10m
hono-service-device-registry-1-deploy   1/1       Running                 0          9m
hono-service-device-registry-1-rbnwj    0/1       Init:CrashLoopBackOff   6          9m
hono-service-messaging-1-deploy         1/1       Running                 0          9m
hono-service-messaging-1-p4jk4          0/1       CrashLoopBackOff        6          9m
influxdb-106268175-5cx7m                1/1       Running                 0          10m

Logs related to the failed pods:

$ oc describe pod/hono-adapter-kura-1-lwj6j
Name:       hono-adapter-kura-1-lwj6j
Namespace:  hono
Node:       localhost/192.168.122.205
Start Time: Tue, 10 Apr 2018 10:53:21 -0400
Labels:     app=hono-adapter-kura
        deployment=hono-adapter-kura-1
        deploymentconfig=hono-adapter-kura
        group=org.eclipse.hono
        provider=fabric8
        version=0.6-M1-SNAPSHOT
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"hono","name":"hono-adapter-kura-1","uid":"d7bd761a-3cce-11e8-b6b0-4e3a...
        openshift.io/deployment-config.latest-version=1
        openshift.io/deployment-config.name=hono-adapter-kura
        openshift.io/deployment.name=hono-adapter-kura-1
        openshift.io/scc=restricted
Status:     Running
IP:     172.17.0.19
Created By: ReplicationController/hono-adapter-kura-1
Controlled By:  ReplicationController/hono-adapter-kura-1
Containers:
  eclipse-hono-adapter-kura:
    Container ID:   docker://dca6cc4a15df717fd7d6c5c42379a6607fd5ac1d053619592434612e69055e72
    Image:      eclipse/hono-adapter-kura:0.6-M1-SNAPSHOT
    Image ID:       docker://sha256:59dc6fd623dad743c26f922a0da5242b9bcba9b68077e19a7bb0a62f719180e9
    Ports:      8088/TCP, 8883/TCP, 1883/TCP
    State:      Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:359: container init caused \\\"rootfs_linux.go:53: mounting \\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e64c0098-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/aufs/mnt/63f831e912f5bc06731ff4ecf684d584f6433cac18a363090b3cd00eb18f33da\\\\\\\" at \\\\\\\"/var/lib/docker/aufs/mnt/63f831e912f5bc06731ff4ecf684d584f6433cac18a363090b3cd00eb18f33da/run/secrets/kubernetes.io/serviceaccount\\\\\\\" caused \\\\\\\"mkdir /var/lib/docker/aufs/mnt/63f831e912f5bc06731ff4ecf684d584f6433cac18a363090b3cd00eb18f33da/run/secrets/kubernetes.io: read-only file system\\\\\\\"\\\"\"\n"
      Exit Code:    128
      Started:      Tue, 10 Apr 2018 10:59:35 -0400
      Finished:     Tue, 10 Apr 2018 10:59:35 -0400
    Ready:      False
    Restart Count:  6
    Liveness:       http-get http://:8088/liveness delay=180s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8088/readiness delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SPRING_CONFIG_LOCATION:   file:///run/secrets/
      SPRING_PROFILES_ACTIVE:   dev
      LOGGING_CONFIG:       classpath:logback-spring.xml
      _JAVA_OPTIONS:        -Xmx128m
      KUBERNETES_NAMESPACE: hono (v1:metadata.namespace)
    Mounts:
      /run/secrets from conf (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bzq9s (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  conf:
    Type:   Secret (a volume populated by a Secret)
    SecretName: hono-adapter-kura-conf
    Optional:   false
  default-token-bzq9s:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-bzq9s
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath               Type        Reason          Message
  --------- --------    -----   ----            -------------               --------    ------          -------
  10m       10m     1   default-scheduler                       Normal      Scheduled       Successfully assigned hono-adapter-kura-1-lwj6j to localhost
  10m       10m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "default-token-bzq9s" 
  10m       10m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "conf" 
  10m       10m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-kura}  Warning     Failed          Error: failed to start container "eclipse-hono-adapter-kura": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e64c0098-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/100f93c13698e6bae75197620388ff4ff0c4bd8e883185657a2c56a0d3bf22a2\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/100f93c13698e6bae75197620388ff4ff0c4bd8e883185657a2c56a0d3bf22a2/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/100f93c13698e6bae75197620388ff4ff0c4bd8e883185657a2c56a0d3bf22a2/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  10m       10m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-kura}  Warning     Failed          Error: failed to start container "eclipse-hono-adapter-kura": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e64c0098-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/d72743d76ae942f92e12e71119941312a9a999fcb59f48d8f49c8be5418debc7\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/d72743d76ae942f92e12e71119941312a9a999fcb59f48d8f49c8be5418debc7/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/d72743d76ae942f92e12e71119941312a9a999fcb59f48d8f49c8be5418debc7/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  9m        9m      1   kubelet, localhost  spec.containers{eclipse-hono-adapter-kura}  Warning     Failed          Error: failed to start container "eclipse-hono-adapter-kura": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e64c0098-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/fbd5cf081311a0d7a1abb878ed2e74d384c8d2c5912e66e9eefcc656f65486ed\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/fbd5cf081311a0d7a1abb878ed2e74d384c8d2c5912e66e9eefcc656f65486ed/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/fbd5cf081311a0d7a1abb878ed2e74d384c8d2c5912e66e9eefcc656f65486ed/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  9m        9m      1   kubelet, localhost  spec.containers{eclipse-hono-adapter-kura}  Warning     Failed          Error: failed to start container "eclipse-hono-adapter-kura": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e64c0098-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/021d71498aff1e95f50d822097d62adac7db170f2c54d9d7553ca9b6686334ff\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/021d71498aff1e95f50d822097d62adac7db170f2c54d9d7553ca9b6686334ff/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/021d71498aff1e95f50d822097d62adac7db170f2c54d9d7553ca9b6686334ff/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  10m       8m      5   kubelet, localhost  spec.containers{eclipse-hono-adapter-kura}  Normal      Pulled          Container image "eclipse/hono-adapter-kura:0.6-M1-SNAPSHOT" already present on machine
  10m       8m      5   kubelet, localhost  spec.containers{eclipse-hono-adapter-kura}  Normal      Created         Created container
  8m        8m      1   kubelet, localhost  spec.containers{eclipse-hono-adapter-kura}  Warning     Failed          Error: failed to start container "eclipse-hono-adapter-kura": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e64c0098-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/165d08425f0394ddd7545bbe25932205de5c409632f84cabf5c2756f6146972f\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/165d08425f0394ddd7545bbe25932205de5c409632f84cabf5c2756f6146972f/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/165d08425f0394ddd7545bbe25932205de5c409632f84cabf5c2756f6146972f/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  9m        19s     48  kubelet, localhost  spec.containers{eclipse-hono-adapter-kura}  Warning     BackOff         Back-off restarting failed container

===============================================================================

$ oc describe pod/hono-adapter-http-vertx-1-jhtc2
Name:       hono-adapter-http-vertx-1-jhtc2
Namespace:  hono
Node:       localhost/192.168.122.205
Start Time: Tue, 10 Apr 2018 10:53:21 -0400
Labels:     app=hono-adapter-http-vertx
        deployment=hono-adapter-http-vertx-1
        deploymentconfig=hono-adapter-http-vertx
        group=org.eclipse.hono
        provider=fabric8
        version=0.6-M1-SNAPSHOT
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"hono","name":"hono-adapter-http-vertx-1","uid":"d2f8c04e-3cce-11e8-b6b...
        openshift.io/deployment-config.latest-version=1
        openshift.io/deployment-config.name=hono-adapter-http-vertx
        openshift.io/deployment.name=hono-adapter-http-vertx-1
        openshift.io/scc=restricted
Status:     Running
IP:     172.17.0.20
Created By: ReplicationController/hono-adapter-http-vertx-1
Controlled By:  ReplicationController/hono-adapter-http-vertx-1
Containers:
  eclipse-hono-adapter-http-vertx:
    Container ID:   docker://90cde13f982745469363a2bbf688c26aaa572aa2f967ae557e1d4b473b03ee1e
    Image:      eclipse/hono-adapter-http-vertx:0.6-M1-SNAPSHOT
    Image ID:       docker://sha256:6a0fc89d23114d5a6a2fdf2c711f9044d31565c5e86e02c0012cbd222aad234e
    Ports:      8088/TCP, 8080/TCP, 8443/TCP
    State:      Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:359: container init caused \\\"rootfs_linux.go:53: mounting \\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e64c36b4-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/aufs/mnt/bde441242dfb4b21036e3af6d5374d1c71fae74c2960bac7dcbe1582e7f0d360\\\\\\\" at \\\\\\\"/var/lib/docker/aufs/mnt/bde441242dfb4b21036e3af6d5374d1c71fae74c2960bac7dcbe1582e7f0d360/run/secrets/kubernetes.io/serviceaccount\\\\\\\" caused \\\\\\\"mkdir /var/lib/docker/aufs/mnt/bde441242dfb4b21036e3af6d5374d1c71fae74c2960bac7dcbe1582e7f0d360/run/secrets/kubernetes.io: read-only file system\\\\\\\"\\\"\"\n"
      Exit Code:    128
      Started:      Tue, 10 Apr 2018 10:59:40 -0400
      Finished:     Tue, 10 Apr 2018 10:59:40 -0400
    Ready:      False
    Restart Count:  6
    Liveness:       http-get http://:8088/liveness delay=180s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8088/readiness delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SPRING_CONFIG_LOCATION:   file:///run/secrets/
      SPRING_PROFILES_ACTIVE:   dev
      LOGGING_CONFIG:       classpath:logback-spring.xml
      _JAVA_OPTIONS:        -Xmx128m
      KUBERNETES_NAMESPACE: hono (v1:metadata.namespace)
    Mounts:
      /run/secrets from conf (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bzq9s (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  conf:
    Type:   Secret (a volume populated by a Secret)
    SecretName: hono-adapter-http-vertx-conf
    Optional:   false
  default-token-bzq9s:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-bzq9s
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath               Type        Reason          Message
  --------- --------    -----   ----            -------------               --------    ------          -------
  11m       11m     1   default-scheduler                       Normal      Scheduled       Successfully assigned hono-adapter-http-vertx-1-jhtc2 to localhost
  11m       11m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "conf" 
  11m       11m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "default-token-bzq9s" 
  11m       11m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-http-vertx}    Warning     Failed          Error: failed to start container "eclipse-hono-adapter-http-vertx": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e64c36b4-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/0b7a73360ca5c8b80789dd1f5d5f118be0b574f5f8d2b80ec9ae8cb050c99380\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/0b7a73360ca5c8b80789dd1f5d5f118be0b574f5f8d2b80ec9ae8cb050c99380/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/0b7a73360ca5c8b80789dd1f5d5f118be0b574f5f8d2b80ec9ae8cb050c99380/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  10m       10m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-http-vertx}    Warning     Failed          Error: failed to start container "eclipse-hono-adapter-http-vertx": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e64c36b4-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/79bbc3326c7b2776060fe0b58f55741d2e5a640dd07bd770954e0f53766243dc\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/79bbc3326c7b2776060fe0b58f55741d2e5a640dd07bd770954e0f53766243dc/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/79bbc3326c7b2776060fe0b58f55741d2e5a640dd07bd770954e0f53766243dc/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  10m       10m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-http-vertx}    Warning     Failed          Error: failed to start container "eclipse-hono-adapter-http-vertx": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e64c36b4-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/2a1562ce228fb8965d800f51018ff4f234c87336175797fdaabadcad93f2b8be\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/2a1562ce228fb8965d800f51018ff4f234c87336175797fdaabadcad93f2b8be/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/2a1562ce228fb8965d800f51018ff4f234c87336175797fdaabadcad93f2b8be/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  10m       10m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-http-vertx}    Warning     Failed          Error: failed to start container "eclipse-hono-adapter-http-vertx": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e64c36b4-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/5ee56fb4694df7d4d80cfedeb06afc38098def22cb175f2d2b3e68b34bb9b106\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/5ee56fb4694df7d4d80cfedeb06afc38098def22cb175f2d2b3e68b34bb9b106/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/5ee56fb4694df7d4d80cfedeb06afc38098def22cb175f2d2b3e68b34bb9b106/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  11m       9m      5   kubelet, localhost  spec.containers{eclipse-hono-adapter-http-vertx}    Normal      Pulled          Container image "eclipse/hono-adapter-http-vertx:0.6-M1-SNAPSHOT" already present on machine
  11m       9m      5   kubelet, localhost  spec.containers{eclipse-hono-adapter-http-vertx}    Normal      Created         Created container
  9m        9m      1   kubelet, localhost  spec.containers{eclipse-hono-adapter-http-vertx}    Warning     Failed          Error: failed to start container "eclipse-hono-adapter-http-vertx": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e64c36b4-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/5d3ac358bbc71ad0196846158b97a5974d78a4e822b89d7e7b138ee5668695a6\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/5d3ac358bbc71ad0196846158b97a5974d78a4e822b89d7e7b138ee5668695a6/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/5d3ac358bbc71ad0196846158b97a5974d78a4e822b89d7e7b138ee5668695a6/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  10m       1m      44  kubelet, localhost  spec.containers{eclipse-hono-adapter-http-vertx}    Warning     BackOff         Back-off restarting failed container

===============================================================================
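Every failing container above dies with the same underlying error: `mkdir /var/lib/docker/aufs/mnt/…/run/secrets/kubernetes.io: read-only file system` while mounting the `default-token` serviceaccount secret. That points at the Docker storage driver (aufs) inside the minishift VM rather than at Hono itself. A possible way to check, and a workaround to try (hedged sketch — `centos` as an `--iso-url` alias assumes a minishift version that supports ISO aliases; the alias selects a VM image whose Docker daemon uses overlay2 instead of aufs):

```shell
# Inspect which storage driver the Docker daemon inside the minishift VM uses.
# aufs is the suspect here: secret volume mounts fail with "read-only file system".
minishift ssh "docker info" | grep 'Storage Driver'

# If it reports aufs, recreating the VM from an ISO that uses overlay2
# (e.g. the CentOS-based ISO) may avoid the problem:
minishift delete
minishift start --iso-url centos
```

After the restart, redeploying Hono (`oc get pod -n hono`) should show whether the CrashLoopBackOff persists.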

$ oc describe pod/hono-adapter-mqtt-vertx-1-j956g
Name:       hono-adapter-mqtt-vertx-1-j956g
Namespace:  hono
Node:       localhost/192.168.122.205
Start Time: Tue, 10 Apr 2018 10:53:15 -0400
Labels:     app=hono-adapter-mqtt-vertx
        deployment=hono-adapter-mqtt-vertx-1
        deploymentconfig=hono-adapter-mqtt-vertx
        group=org.eclipse.hono
        provider=fabric8
        version=0.6-M1-SNAPSHOT
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"hono","name":"hono-adapter-mqtt-vertx-1","uid":"d5e47a23-3cce-11e8-b6b...
        openshift.io/deployment-config.latest-version=1
        openshift.io/deployment-config.name=hono-adapter-mqtt-vertx
        openshift.io/deployment.name=hono-adapter-mqtt-vertx-1
        openshift.io/scc=restricted
Status:     Running
IP:     172.17.0.18
Created By: ReplicationController/hono-adapter-mqtt-vertx-1
Controlled By:  ReplicationController/hono-adapter-mqtt-vertx-1
Containers:
  eclipse-hono-adapter-mqtt-vertx:
    Container ID:   docker://a974fbab135d56576fc547cac51c474ea9ab9ad4a40dcb30caca895633a74559
    Image:      eclipse/hono-adapter-mqtt-vertx:0.6-M1-SNAPSHOT
    Image ID:       docker://sha256:150f6fc15f94745682509ccb3aa9f8cb9d630b2567201774a3ef53fbb5b652af
    Ports:      8088/TCP, 8883/TCP, 1883/TCP
    State:      Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:359: container init caused \\\"rootfs_linux.go:53: mounting \\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e3efdf23-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/aufs/mnt/7fc3fedacfe26d75240eb8103744a29e06159445c16e3332cae47c52b2cccaa5\\\\\\\" at \\\\\\\"/var/lib/docker/aufs/mnt/7fc3fedacfe26d75240eb8103744a29e06159445c16e3332cae47c52b2cccaa5/run/secrets/kubernetes.io/serviceaccount\\\\\\\" caused \\\\\\\"mkdir /var/lib/docker/aufs/mnt/7fc3fedacfe26d75240eb8103744a29e06159445c16e3332cae47c52b2cccaa5/run/secrets/kubernetes.io: read-only file system\\\\\\\"\\\"\"\n"
      Exit Code:    128
      Started:      Tue, 10 Apr 2018 11:04:34 -0400
      Finished:     Tue, 10 Apr 2018 11:04:34 -0400
    Ready:      False
    Restart Count:  7
    Liveness:       http-get http://:8088/liveness delay=180s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8088/readiness delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SPRING_CONFIG_LOCATION:   file:///run/secrets/
      SPRING_PROFILES_ACTIVE:   dev
      LOGGING_CONFIG:       classpath:logback-spring.xml
      _JAVA_OPTIONS:        -Xmx128m
      KUBERNETES_NAMESPACE: hono (v1:metadata.namespace)
    Mounts:
      /run/secrets from conf (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bzq9s (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  conf:
    Type:   Secret (a volume populated by a Secret)
    SecretName: hono-adapter-mqtt-vertx-conf
    Optional:   false
  default-token-bzq9s:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-bzq9s
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath               Type        Reason          Message
  --------- --------    -----   ----            -------------               --------    ------          -------
  12m       12m     1   default-scheduler                       Normal      Scheduled       Successfully assigned hono-adapter-mqtt-vertx-1-j956g to localhost
  12m       12m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "conf" 
  12m       12m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "default-token-bzq9s" 
  12m       12m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-mqtt-vertx}    Warning     Failed          Error: failed to start container "eclipse-hono-adapter-mqtt-vertx": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e3efdf23-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/2cd1c71aa5fc318100d1c73442d6ec977a7949b2718b7e5e9696b609e6a99328\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/2cd1c71aa5fc318100d1c73442d6ec977a7949b2718b7e5e9696b609e6a99328/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/2cd1c71aa5fc318100d1c73442d6ec977a7949b2718b7e5e9696b609e6a99328/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  11m       11m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-mqtt-vertx}    Warning     Failed          Error: failed to start container "eclipse-hono-adapter-mqtt-vertx": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e3efdf23-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/c31bd4c5b704a7973f995ae6d7720bcce082015511db6664a3a935b650373058\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/c31bd4c5b704a7973f995ae6d7720bcce082015511db6664a3a935b650373058/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/c31bd4c5b704a7973f995ae6d7720bcce082015511db6664a3a935b650373058/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  11m       11m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-mqtt-vertx}    Warning     Failed          Error: failed to start container "eclipse-hono-adapter-mqtt-vertx": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e3efdf23-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/9f41a3f7c9eabd88f0618ea7038c8f135053f5a59d0a2770ca62566ea7a89f24\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/9f41a3f7c9eabd88f0618ea7038c8f135053f5a59d0a2770ca62566ea7a89f24/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/9f41a3f7c9eabd88f0618ea7038c8f135053f5a59d0a2770ca62566ea7a89f24/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  11m       11m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-mqtt-vertx}    Warning     Failed          Error: failed to start container "eclipse-hono-adapter-mqtt-vertx": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e3efdf23-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/7d4b0cebf6c072d90471d2fcbcb1014d094cce3fa04c93275a1c3052bfc31c0c\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/7d4b0cebf6c072d90471d2fcbcb1014d094cce3fa04c93275a1c3052bfc31c0c/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/7d4b0cebf6c072d90471d2fcbcb1014d094cce3fa04c93275a1c3052bfc31c0c/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  12m       10m     5   kubelet, localhost  spec.containers{eclipse-hono-adapter-mqtt-vertx}    Normal      Pulled          Container image "eclipse/hono-adapter-mqtt-vertx:0.6-M1-SNAPSHOT" already present on machine
  12m       10m     5   kubelet, localhost  spec.containers{eclipse-hono-adapter-mqtt-vertx}    Normal      Created         Created container
  10m       10m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-mqtt-vertx}    Warning     Failed          Error: failed to start container "eclipse-hono-adapter-mqtt-vertx": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/e3efdf23-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/3f2ad6cfe0b0b4f37e01da82d49add2fce13314e76b49d1a624fd83c933d085d\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/3f2ad6cfe0b0b4f37e01da82d49add2fce13314e76b49d1a624fd83c933d085d/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/3f2ad6cfe0b0b4f37e01da82d49add2fce13314e76b49d1a624fd83c933d085d/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  11m       2m      44  kubelet, localhost  spec.containers{eclipse-hono-adapter-mqtt-vertx}    Warning     BackOff         Back-off restarting failed container

===============================================================================

$ oc describe pod/hono-service-messaging-1-p4jk4
Name:       hono-service-messaging-1-p4jk4
Namespace:  hono
Node:       localhost/192.168.122.205
Start Time: Tue, 10 Apr 2018 10:53:11 -0400
Labels:     app=hono-service-messaging
        deployment=hono-service-messaging-1
        deploymentconfig=hono-service-messaging
        group=org.eclipse.hono
        provider=fabric8
        version=0.6-M1-SNAPSHOT
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"hono","name":"hono-service-messaging-1","uid":"d1e92a3d-3cce-11e8-b6b0...
        openshift.io/deployment-config.latest-version=1
        openshift.io/deployment-config.name=hono-service-messaging
        openshift.io/deployment.name=hono-service-messaging-1
        openshift.io/scc=restricted
Status:     Running
IP:     172.17.0.17
Created By: ReplicationController/hono-service-messaging-1
Controlled By:  ReplicationController/hono-service-messaging-1
Containers:
  eclipse-hono-service-messaging:
    Container ID:   docker://31564be1e23d245f7d526bf1823d6e1f246282caf06a8066a1db40a6ad798e41
    Image:      eclipse/hono-service-messaging:0.6-M1-SNAPSHOT
    Image ID:       docker://sha256:56154e28c7a6298598b3892d45bc389d6202d36c4f03377d97f25f711d78c948
    Ports:      8088/TCP, 5671/TCP, 5672/TCP
    State:      Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:359: container init caused \\\"rootfs_linux.go:53: mounting \\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d97d4379-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/aufs/mnt/431d399e09146d5c9f05eb5fc86203233caa7d7f50a76939f7afa3e5e4482853\\\\\\\" at \\\\\\\"/var/lib/docker/aufs/mnt/431d399e09146d5c9f05eb5fc86203233caa7d7f50a76939f7afa3e5e4482853/run/secrets/kubernetes.io/serviceaccount\\\\\\\" caused \\\\\\\"mkdir /var/lib/docker/aufs/mnt/431d399e09146d5c9f05eb5fc86203233caa7d7f50a76939f7afa3e5e4482853/run/secrets/kubernetes.io: read-only file system\\\\\\\"\\\"\"\n"
      Exit Code:    128
      Started:      Tue, 10 Apr 2018 11:04:18 -0400
      Finished:     Tue, 10 Apr 2018 11:04:18 -0400
    Ready:      False
    Restart Count:  7
    Liveness:       http-get http://:8088/liveness delay=180s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8088/readiness delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SPRING_CONFIG_LOCATION:   file:///run/secrets/
      SPRING_PROFILES_ACTIVE:   dev
      LOGGING_CONFIG:       classpath:logback-spring.xml
      _JAVA_OPTIONS:        -Xmx196m
      KUBERNETES_NAMESPACE: hono (v1:metadata.namespace)
    Mounts:
      /run/secrets from conf (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bzq9s (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  conf:
    Type:   Secret (a volume populated by a Secret)
    SecretName: hono-service-messaging-conf
    Optional:   false
  default-token-bzq9s:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-bzq9s
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath               Type        Reason          Message
  --------- --------    -----   ----            -------------               --------    ------          -------
  13m       13m     1   default-scheduler                       Normal      Scheduled       Successfully assigned hono-service-messaging-1-p4jk4 to localhost
  13m       13m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "default-token-bzq9s" 
  13m       13m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "conf" 
  13m       13m     1   kubelet, localhost  spec.containers{eclipse-hono-service-messaging} Warning     Failed          Error: failed to start container "eclipse-hono-service-messaging": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d97d4379-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/603862840e00552fc3345ab3d0260b674faf4a717698f3df82edd4a7133b7552\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/603862840e00552fc3345ab3d0260b674faf4a717698f3df82edd4a7133b7552/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/603862840e00552fc3345ab3d0260b674faf4a717698f3df82edd4a7133b7552/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  13m       13m     1   kubelet, localhost  spec.containers{eclipse-hono-service-messaging} Warning     Failed          Error: failed to start container "eclipse-hono-service-messaging": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d97d4379-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/ac10596b47476324079f4db88ddce025a44d5102281a19711239b357738385fc\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/ac10596b47476324079f4db88ddce025a44d5102281a19711239b357738385fc/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/ac10596b47476324079f4db88ddce025a44d5102281a19711239b357738385fc/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  12m       12m     1   kubelet, localhost  spec.containers{eclipse-hono-service-messaging} Warning     Failed          Error: failed to start container "eclipse-hono-service-messaging": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d97d4379-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/bcecd9779f15950d6f40526b156d1e8a4ced9e641fc2df8f4fd36319a24632e3\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/bcecd9779f15950d6f40526b156d1e8a4ced9e641fc2df8f4fd36319a24632e3/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/bcecd9779f15950d6f40526b156d1e8a4ced9e641fc2df8f4fd36319a24632e3/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  12m       12m     1   kubelet, localhost  spec.containers{eclipse-hono-service-messaging} Warning     Failed          Error: failed to start container "eclipse-hono-service-messaging": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d97d4379-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/9e7eb616292cbceb84b7575d1e6ea002977781ff1c0083d7d3d452c1f5ea586b\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/9e7eb616292cbceb84b7575d1e6ea002977781ff1c0083d7d3d452c1f5ea586b/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/9e7eb616292cbceb84b7575d1e6ea002977781ff1c0083d7d3d452c1f5ea586b/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  13m       11m     5   kubelet, localhost  spec.containers{eclipse-hono-service-messaging} Normal      Pulled          Container image "eclipse/hono-service-messaging:0.6-M1-SNAPSHOT" already present on machine
  13m       11m     5   kubelet, localhost  spec.containers{eclipse-hono-service-messaging} Normal      Created         Created container
  11m       11m     1   kubelet, localhost  spec.containers{eclipse-hono-service-messaging} Warning     Failed          Error: failed to start container "eclipse-hono-service-messaging": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d97d4379-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/30a9bbc140048f1e176937ec382e6af5c47f7895115086b0d7a1fd9eaa55112a\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/30a9bbc140048f1e176937ec382e6af5c47f7895115086b0d7a1fd9eaa55112a/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/30a9bbc140048f1e176937ec382e6af5c47f7895115086b0d7a1fd9eaa55112a/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  12m       3m      48  kubelet, localhost  spec.containers{eclipse-hono-service-messaging} Warning     BackOff         Back-off restarting failed container

===============================================================================

$ oc describe pod/hono-service-auth-1-9t5vn
Name:       hono-service-auth-1-9t5vn
Namespace:  hono
Node:       localhost/192.168.122.205
Start Time: Tue, 10 Apr 2018 10:52:51 -0400
Labels:     app=hono-service-auth
        deployment=hono-service-auth-1
        deploymentconfig=hono-service-auth
        group=org.eclipse.hono
        provider=fabric8
        version=0.6-M1-SNAPSHOT
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"hono","name":"hono-service-auth-1","uid":"cd2b0048-3cce-11e8-b6b0-4e3a...
        openshift.io/deployment-config.latest-version=1
        openshift.io/deployment-config.name=hono-service-auth
        openshift.io/deployment.name=hono-service-auth-1
        openshift.io/scc=restricted
Status:     Running
IP:     172.17.0.14
Created By: ReplicationController/hono-service-auth-1
Controlled By:  ReplicationController/hono-service-auth-1
Containers:
  eclipse-hono-service-auth:
    Container ID:   docker://11981b91e399935dd6827fc0a1b3e43f142790e8379c1725827ec462317f7c5a
    Image:      eclipse/hono-service-auth:0.6-M1-SNAPSHOT
    Image ID:       docker://sha256:bf528b938e9fa6827b72a1d174e16f408fad279ee38199f8f6f1c6e431fc743e
    Ports:      5671/TCP, 5672/TCP
    State:      Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:359: container init caused \\\"rootfs_linux.go:53: mounting \\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d32e2104-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/aufs/mnt/ab6425731b725b6882d44487128bc4a63388f4413f63cf48cfcbd709b39de0d1\\\\\\\" at \\\\\\\"/var/lib/docker/aufs/mnt/ab6425731b725b6882d44487128bc4a63388f4413f63cf48cfcbd709b39de0d1/run/secrets/kubernetes.io/serviceaccount\\\\\\\" caused \\\\\\\"mkdir /var/lib/docker/aufs/mnt/ab6425731b725b6882d44487128bc4a63388f4413f63cf48cfcbd709b39de0d1/run/secrets/kubernetes.io: read-only file system\\\\\\\"\\\"\"\n"
      Exit Code:    128
      Started:      Tue, 10 Apr 2018 11:04:24 -0400
      Finished:     Tue, 10 Apr 2018 11:04:24 -0400
    Ready:      False
    Restart Count:  7
    Liveness:       tcp-socket :5672 delay=25s timeout=1s period=9s #success=1 #failure=3
    Readiness:      tcp-socket :5672 delay=15s timeout=1s period=5s #success=1 #failure=3
    Environment:
      SPRING_CONFIG_LOCATION:   file:///run/secrets/
      SPRING_PROFILES_ACTIVE:   authentication-impl,dev
      LOGGING_CONFIG:       classpath:logback-spring.xml
      _JAVA_OPTIONS:        -Xmx32m
      KUBERNETES_NAMESPACE: hono (v1:metadata.namespace)
    Mounts:
      /run/secrets from conf (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bzq9s (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  conf:
    Type:   Secret (a volume populated by a Secret)
    SecretName: hono-service-auth-conf
    Optional:   false
  default-token-bzq9s:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-bzq9s
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath               Type        Reason          Message
  --------- --------    -----   ----            -------------               --------    ------          -------
  14m       14m     1   default-scheduler                       Normal      Scheduled       Successfully assigned hono-service-auth-1-9t5vn to localhost
  14m       14m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "conf" 
  14m       14m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "default-token-bzq9s" 
  14m       14m     1   kubelet, localhost  spec.containers{eclipse-hono-service-auth}  Warning     Failed          Error: failed to start container "eclipse-hono-service-auth": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d32e2104-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/dc957041f81695ba7d567071206fc25830b117405ab3e719f5b7af9ab996d349\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/dc957041f81695ba7d567071206fc25830b117405ab3e719f5b7af9ab996d349/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/dc957041f81695ba7d567071206fc25830b117405ab3e719f5b7af9ab996d349/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  14m       14m     1   kubelet, localhost  spec.containers{eclipse-hono-service-auth}  Warning     Failed          Error: failed to start container "eclipse-hono-service-auth": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d32e2104-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/06320e105857d50ea79766483991b4465977b705fca09bf8c7658edb0c009fed\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/06320e105857d50ea79766483991b4465977b705fca09bf8c7658edb0c009fed/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/06320e105857d50ea79766483991b4465977b705fca09bf8c7658edb0c009fed/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  13m       13m     1   kubelet, localhost  spec.containers{eclipse-hono-service-auth}  Warning     Failed          Error: failed to start container "eclipse-hono-service-auth": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d32e2104-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/b7b90d585468fe2997b3a290cc5ff156208fa487ed3ef94a67001f8dfe871d0e\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/b7b90d585468fe2997b3a290cc5ff156208fa487ed3ef94a67001f8dfe871d0e/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/b7b90d585468fe2997b3a290cc5ff156208fa487ed3ef94a67001f8dfe871d0e/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  13m       13m     1   kubelet, localhost  spec.containers{eclipse-hono-service-auth}  Warning     Failed          Error: failed to start container "eclipse-hono-service-auth": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d32e2104-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/243056cd92d0445713406dcade6a5a909b38f818436b34858c210a909f79b3a1\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/243056cd92d0445713406dcade6a5a909b38f818436b34858c210a909f79b3a1/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/243056cd92d0445713406dcade6a5a909b38f818436b34858c210a909f79b3a1/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  14m       12m     5   kubelet, localhost  spec.containers{eclipse-hono-service-auth}  Normal      Pulled          Container image "eclipse/hono-service-auth:0.6-M1-SNAPSHOT" already present on machine
  14m       12m     5   kubelet, localhost  spec.containers{eclipse-hono-service-auth}  Normal      Created         Created container
  12m       12m     1   kubelet, localhost  spec.containers{eclipse-hono-service-auth}  Warning     Failed          Error: failed to start container "eclipse-hono-service-auth": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d32e2104-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/9099afe92916ef97a2cab382fbab47e34dbfac5d06f2662705de7193ded27411\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/9099afe92916ef97a2cab382fbab47e34dbfac5d06f2662705de7193ded27411/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/9099afe92916ef97a2cab382fbab47e34dbfac5d06f2662705de7193ded27411/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  14m       4m      46  kubelet, localhost  spec.containers{eclipse-hono-service-auth}  Warning     BackOff         Back-off restarting failed container

===============================================================================

$ oc describe pod/hono-dispatch-router-1-wldb6
Name:       hono-dispatch-router-1-wldb6
Namespace:  hono
Node:       localhost/192.168.122.205
Start Time: Tue, 10 Apr 2018 10:52:43 -0400
Labels:     app=hono-dispatch-router
        deployment=hono-dispatch-router-1
        deploymentconfig=hono-dispatch-router
        group=org.eclipse.hono
        provider=fabric8
        version=0.6-M1-SNAPSHOT
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"hono","name":"hono-dispatch-router-1","uid":"cbe04c6d-3cce-11e8-b6b0-4...
        openshift.io/deployment-config.latest-version=1
        openshift.io/deployment-config.name=hono-dispatch-router
        openshift.io/deployment.name=hono-dispatch-router-1
        openshift.io/scc=restricted
Status:     Running
IP:     172.17.0.10
Created By: ReplicationController/hono-dispatch-router-1
Controlled By:  ReplicationController/hono-dispatch-router-1
Containers:
  eclipse-hono-dispatch-router:
    Container ID:   docker://174dce16e6a89b26a05aaf55a5cad32b404cb75f8f2fe3315df7225b4525dd76
    Image:      enmasseproject/qdrouterd-base:0.8.0-1
    Image ID:       docker-pullable://enmasseproject/qdrouterd-base@sha256:05421717cc00aa174ac3fd4ef282a49439d9cf1600ebc4f95980d5e7536d0ea8
    Port:       <none>
    Command:
      /sbin/qdrouterd
      -c
      /run/secrets/qdrouterd-with-broker.json
    State:      Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:359: container init caused \\\"rootfs_linux.go:53: mounting \\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d27f4bdc-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/aufs/mnt/6334685f63c676ffeef242f4a10af59a7203a32cdd26bff05e1e008725bce783\\\\\\\" at \\\\\\\"/var/lib/docker/aufs/mnt/6334685f63c676ffeef242f4a10af59a7203a32cdd26bff05e1e008725bce783/run/secrets/kubernetes.io/serviceaccount\\\\\\\" caused \\\\\\\"mkdir /var/lib/docker/aufs/mnt/6334685f63c676ffeef242f4a10af59a7203a32cdd26bff05e1e008725bce783/run/secrets/kubernetes.io: read-only file system\\\\\\\"\\\"\"\n"
      Exit Code:    128
      Started:      Tue, 10 Apr 2018 11:05:32 -0400
      Finished:     Tue, 10 Apr 2018 11:05:32 -0400
    Ready:      False
    Restart Count:  7
    Liveness:       tcp-socket :5672 delay=180s timeout=1s period=9s #success=1 #failure=3
    Readiness:      tcp-socket :5672 delay=10s timeout=1s period=5s #success=1 #failure=3
    Environment:
      KUBERNETES_NAMESPACE: hono (v1:metadata.namespace)
    Mounts:
      /run/secrets from config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bzq9s (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  config:
    Type:   Secret (a volume populated by a Secret)
    SecretName: hono-dispatch-router-conf
    Optional:   false
  default-token-bzq9s:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-bzq9s
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath               Type        Reason          Message
  --------- --------    -----   ----            -------------               --------    ------          -------
  15m       15m     1   default-scheduler                       Normal      Scheduled       Successfully assigned hono-dispatch-router-1-wldb6 to localhost
  15m       15m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "config" 
  15m       15m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "default-token-bzq9s" 
  15m       15m     1   kubelet, localhost  spec.containers{eclipse-hono-dispatch-router}   Normal      Pulling         pulling image "enmasseproject/qdrouterd-base:0.8.0-1"
  13m       13m     1   kubelet, localhost  spec.containers{eclipse-hono-dispatch-router}   Normal      Pulled          Successfully pulled image "enmasseproject/qdrouterd-base:0.8.0-1"
  13m       13m     1   kubelet, localhost  spec.containers{eclipse-hono-dispatch-router}   Warning     Failed          Error: failed to start container "eclipse-hono-dispatch-router": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d27f4bdc-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/df1b15436836c323e957e780dbd9f44d140e967c09c32bc098d224bbff1c47ef\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/df1b15436836c323e957e780dbd9f44d140e967c09c32bc098d224bbff1c47ef/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/df1b15436836c323e957e780dbd9f44d140e967c09c32bc098d224bbff1c47ef/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  13m       13m     1   kubelet, localhost  spec.containers{eclipse-hono-dispatch-router}   Warning     Failed          Error: failed to start container "eclipse-hono-dispatch-router": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d27f4bdc-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/7d86b36f077594a33a5cde35a79d5539632f597a9333fca95d2cad12d94d4677\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/7d86b36f077594a33a5cde35a79d5539632f597a9333fca95d2cad12d94d4677/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/7d86b36f077594a33a5cde35a79d5539632f597a9333fca95d2cad12d94d4677/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  12m       12m     1   kubelet, localhost  spec.containers{eclipse-hono-dispatch-router}   Warning     Failed          Error: failed to start container "eclipse-hono-dispatch-router": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d27f4bdc-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/2ed4ee9294d98580561953719bdbd16e33f0a367a04f7bb74d2a39a81d30a0e7\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/2ed4ee9294d98580561953719bdbd16e33f0a367a04f7bb74d2a39a81d30a0e7/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/2ed4ee9294d98580561953719bdbd16e33f0a367a04f7bb74d2a39a81d30a0e7/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  12m       12m     1   kubelet, localhost  spec.containers{eclipse-hono-dispatch-router}   Warning     Failed          Error: failed to start container "eclipse-hono-dispatch-router": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d27f4bdc-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/be2650a847866bb3e87b413da15965ed65a4e965bbffe7762934300763774040\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/be2650a847866bb3e87b413da15965ed65a4e965bbffe7762934300763774040/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/be2650a847866bb3e87b413da15965ed65a4e965bbffe7762934300763774040/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  13m       11m     5   kubelet, localhost  spec.containers{eclipse-hono-dispatch-router}   Normal      Created         Created container
  11m       11m     1   kubelet, localhost  spec.containers{eclipse-hono-dispatch-router}   Warning     Failed          Error: failed to start container "eclipse-hono-dispatch-router": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/var/lib/minishift/openshift.local.volumes/pods/d27f4bdc-3cce-11e8-b6b0-4e3ad1dda170/volumes/kubernetes.io~secret/default-token-bzq9s\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/e6a72ab9795a572758334b14f845638063160604c1e4a474d137f886f25f57d7\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/e6a72ab9795a572758334b14f845638063160604c1e4a474d137f886f25f57d7/run/secrets/kubernetes.io/serviceaccount\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"mkdir /var/lib/docker/aufs/mnt/e6a72ab9795a572758334b14f845638063160604c1e4a474d137f886f25f57d7/run/secrets/kubernetes.io: read-only file system\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
  13m       10m     5   kubelet, localhost  spec.containers{eclipse-hono-dispatch-router}   Normal      Pulled          Container image "enmasseproject/qdrouterd-base:0.8.0-1" already present on machine
  12m       5s      61  kubelet, localhost  spec.containers{eclipse-hono-dispatch-router}   Warning     BackOff         Back-off restarting failed container
ctron commented 6 years ago

That looks weird. The only thing I was able to find was: https://github.com/openshift/origin/issues/15038

But I guess you are not running OpenShift locally but on Minishift, which (at least for me) uses "aufs" and doesn't have those problems.

However, I have to admit that I am using my own tutorial rather than the Hono-provided images: https://github.com/ctron/hono-demo-1 But at least for EnMasse, it doesn't make any difference.

I will try an upgrade to Minishift 1.15.1 tomorrow and re-try your exact case. If you can point me to the tutorial and the settings you used, that would be helpful, just to be on the safe side.

mohabh88 commented 6 years ago

Yes, please do @ctron, that would be greatly appreciated. I have been running into this issue for a long time now.

Here is the tutorial I am using to deploy Hono on top of Minishift: https://www.eclipse.org/hono/deployment/openshift/

I am just following the tutorial as is, without specifying any additional settings. Please keep me posted.

sophokles73 commented 6 years ago

Again, FMPOV this is the same issue you ran into recently when deploying to the latest minikube. I believe I have already fixed this on master. Can you try building from master and deploying to Minishift?

mohabh88 commented 6 years ago

Yes, I re-downloaded master and the deployment looks okay now. Not all the pods are ready though, and hono-service-device-registry-1-r9r86 is still throwing the CrashLoopBackOff error.

The pods are not ready because they have been running for more than five minutes without passing their readiness checks.

$ oc get pod
NAME                                    READY     STATUS                  RESTARTS   AGE
grafana-4046359926-xkz6n                1/1       Running                 0          9m
hono-adapter-http-vertx-1-deploy        1/1       Running                 0          9m
hono-adapter-http-vertx-1-gd69b         0/1       Running                 0          9m
hono-adapter-kura-1-deploy              1/1       Running                 0          9m
hono-adapter-kura-1-lnvst               0/1       Running                 0          9m
hono-adapter-mqtt-vertx-1-deploy        1/1       Running                 0          9m
hono-adapter-mqtt-vertx-1-fkblg         0/1       Running                 0          9m
hono-service-auth-1-dsd9x               1/1       Running                 0          9m
hono-service-device-registry-1-deploy   1/1       Running                 0          9m
hono-service-device-registry-1-r9r86    0/1       Init:CrashLoopBackOff   6          9m
hono-service-messaging-1-deploy         1/1       Running                 0          9m
hono-service-messaging-1-w5vdc          0/1       Running                 0          9m
influxdb-3804478203-vqx59               1/1       Running                 0          9m
$ oc describe pod/hono-service-device-registry-1-r9r86
Name:       hono-service-device-registry-1-r9r86
Namespace:  hono
Node:       localhost/192.168.122.201
Start Time: Tue, 10 Apr 2018 14:28:15 -0400
Labels:     app=hono-service-device-registry
        deployment=hono-service-device-registry-1
        deploymentconfig=hono-service-device-registry
        group=org.eclipse.hono
        provider=fabric8
        version=0.6-SNAPSHOT
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"hono","name":"hono-service-device-registry-1","uid":"e58f7728-3cec-11e...
        openshift.io/deployment-config.latest-version=1
        openshift.io/deployment-config.name=hono-service-device-registry
        openshift.io/deployment.name=hono-service-device-registry-1
        openshift.io/scc=restricted
Status:     Pending
IP:     172.17.0.12
Created By: ReplicationController/hono-service-device-registry-1
Controlled By:  ReplicationController/hono-service-device-registry-1
Init Containers:
  copy-example-data:
    Container ID:   docker://2167ebc57eca495050c97bae7d4e5774838c32301c79ef50107357b19429c069
    Image:      busybox
    Image ID:       docker-pullable://busybox@sha256:58ac43b2cc92c687a32c8be6278e50a063579655fe3090125dcb2af0ff9e1a64
    Port:       <none>
    Command:
      sh
      -c
      cp -u /tmp/hono/example-credentials.json /var/lib/hono/device-registry/credentials.json; cp -u /tmp/hono/example-tenants.json /var/lib/hono/device-registry/tenants.json
    State:      Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 10 Apr 2018 14:39:55 -0400
      Finished:     Tue, 10 Apr 2018 14:39:55 -0400
    Ready:      False
    Restart Count:  7
    Environment:    <none>
    Mounts:
      /tmp/hono from conf (rw)
      /var/lib/hono/device-registry from registry (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4xzqz (ro)
Containers:
  eclipse-hono-service-device-registry:
    Container ID:   
    Image:      eclipse/hono-service-device-registry:0.6-SNAPSHOT
    Image ID:       
    Ports:      8080/TCP, 8443/TCP, 5671/TCP, 5672/TCP
    State:      Waiting
      Reason:       PodInitializing
    Ready:      False
    Restart Count:  0
    Liveness:       http-get http://:8088/liveness delay=180s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8088/readiness delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SPRING_CONFIG_LOCATION:   file:///etc/hono/
      SPRING_PROFILES_ACTIVE:   dev
      LOGGING_CONFIG:       classpath:logback-spring.xml
      _JAVA_OPTIONS:        -Xmx64m
      KUBERNETES_NAMESPACE: hono (v1:metadata.namespace)
    Mounts:
      /etc/hono from conf (ro)
      /var/lib/hono/device-registry from registry (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4xzqz (ro)
Conditions:
  Type      Status
  Initialized   False 
  Ready     False 
  PodScheduled  True 
Volumes:
  registry:
    Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  registry
    ReadOnly:   false
  conf:
    Type:   Secret (a volume populated by a Secret)
    SecretName: hono-service-device-registry-conf
    Optional:   false
  default-token-4xzqz:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-4xzqz
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath               Type        Reason          Message
  --------- --------    -----   ----            -------------               --------    ------          -------
  12m       12m     1   default-scheduler                       Normal      Scheduled       Successfully assigned hono-service-device-registry-1-r9r86 to localhost
  12m       12m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "hono" 
  12m       12m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "conf" 
  12m       12m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "default-token-4xzqz" 
  12m       12m     3   kubelet, localhost  spec.initContainers{copy-example-data}  Normal      Started         Started container
  12m       11m     4   kubelet, localhost  spec.initContainers{copy-example-data}  Normal      Pulling         pulling image "busybox"
  12m       11m     4   kubelet, localhost  spec.initContainers{copy-example-data}  Normal      Pulled          Successfully pulled image "busybox"
  12m       11m     4   kubelet, localhost  spec.initContainers{copy-example-data}  Normal      Created         Created container
  12m       2m      55  kubelet, localhost  spec.initContainers{copy-example-data}  Warning     BackOff         Back-off restarting failed container
mohabh88 commented 6 years ago

Now the status of most of the pods has changed to "Error" because they were not ready!

$ oc get pod
NAME                                    READY     STATUS    RESTARTS   AGE
grafana-4046359926-xkz6n                1/1       Running   0          2h
hono-adapter-http-vertx-1-deploy        0/1       Error     0          2h
hono-adapter-kura-1-deploy              0/1       Error     0          2h
hono-adapter-mqtt-vertx-1-deploy        0/1       Error     0          2h
hono-service-auth-1-dsd9x               1/1       Running   0          2h
hono-service-device-registry-1-deploy   0/1       Error     0          2h
hono-service-messaging-1-deploy         0/1       Error     0          2h
influxdb-3804478203-vqx59               1/1       Running   0          2h

===============================================================================

$ oc describe pod/hono-adapter-http-vertx-1-deploy
Name:       hono-adapter-http-vertx-1-deploy
Namespace:  hono
Node:       localhost/192.168.122.201
Start Time: Tue, 10 Apr 2018 14:28:07 -0400
Labels:     openshift.io/deployer-pod-for.name=hono-adapter-http-vertx-1
Annotations:    openshift.io/deployment.name=hono-adapter-http-vertx-1
        openshift.io/scc=restricted
Status:     Failed
IP:     172.17.0.10
Containers:
  deployment:
    Container ID:   docker://11bc0ef154327e05ca38872cb6c8464a6c04f87f01d0be1b3f1fe483aabe541a
    Image:      openshift/origin-deployer:v3.7.1
    Image ID:       docker-pullable://openshift/origin-deployer@sha256:2e39b45e1a68fd25647f0fd64b19d81b9dee04ee84ec49fefc2a28580dc9ab61
    Port:       <none>
    State:      Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 10 Apr 2018 14:28:17 -0400
      Finished:     Tue, 10 Apr 2018 15:28:20 -0400
    Ready:      False
    Restart Count:  0
    Environment:
      KUBERNETES_MASTER:    https://127.0.0.1:8443
      OPENSHIFT_MASTER:     https://127.0.0.1:8443
      BEARER_TOKEN_FILE:    /var/run/secrets/kubernetes.io/serviceaccount/token
      OPENSHIFT_CA_DATA:    -----BEGIN CERTIFICATE-----
MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
c2hpZnQtc2lnbmVyQDE1MjMzODQyNjMwHhcNMTgwNDEwMTgxNzQzWhcNMjMwNDA5
MTgxNzQ0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE1MjMzODQyNjMw
ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDoGRhmrtXQ8WDCidCmLRVV
sX0JxblK9qjI4JHXmdF2OMD51CeWY6OnCbcG/lU9bD8vD0bmk0ZTvbdfGAFpcQYg
Onso2ZHDbXSvGX3/Dt04RLXNQGGgfmzWlP6sxvmk8NKhzXIqqWhXbq+FDzWBADxG
+0bMLOSxThlZ1f+BjL/YWi+4TdHxselYaDTbCM3Q8fFbJ7YE/1qdz12G7eIBKJO1
bXN001KNFcWJ+71O8l2ZEoyoGXBLjvZfTln2Bm4yRUpQnZIrLpH+JKk7Kz1EN73R
tK6Kp+SEilbRkXkOurcOVsGfH17/mN4FKLWfyUBUpw2L2JAKMCdJocKzk7aHc+bb
AgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
SIb3DQEBCwUAA4IBAQBqlcV8rKcmgyn/hGgZBnWbiY2ciou+GRcccswKD5+k6nvg
HE/ehsFDPdW7ENdy/IQI+mewHmt4RvoUSr3c/1dYE5yAtD2+81bVlsXDxIHtj3Uh
VPruL1Lbg26sLJSjsGH3Y+bKrbtHjmMVRAi7nrYZww81c4OYalpi9Y2DW2wYP1ap
8MDrMNThtbHklLb0S5h2oBLGRd0KjJpAjzcMLF2fJNSDnsCyl3LVHqT1u3ql5wSj
Luwk2xvNi97J7s6/OUNmFgFwAIuJuoHl5uhS7gPEaaUGu985zE75nUsH1jWM3Yb9
t+65Q5DtjD6H3Poum70Ap9pH0NIQ+FAYJ+i9nv/H
-----END CERTIFICATE-----

      OPENSHIFT_DEPLOYMENT_NAME:    hono-adapter-http-vertx-1
      OPENSHIFT_DEPLOYMENT_NAMESPACE:   hono
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from deployer-token-n5pkp (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  deployer-token-n5pkp:
    Type:   Secret (a volume populated by a Secret)
    SecretName: deployer-token-n5pkp
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:     <none>

===============================================================================

$ oc describe pod/hono-adapter-kura-1-deploy
Name:       hono-adapter-kura-1-deploy
Namespace:  hono
Node:       localhost/192.168.122.201
Start Time: Tue, 10 Apr 2018 14:28:10 -0400
Labels:     openshift.io/deployer-pod-for.name=hono-adapter-kura-1
Annotations:    openshift.io/deployment.name=hono-adapter-kura-1
        openshift.io/scc=restricted
Status:     Failed
IP:     172.17.0.11
Containers:
  deployment:
    Container ID:   docker://5392dbb75ef0a42f63e6cdadd58fd8292b5c09043b67430a4663fcb7a7922b01
    Image:      openshift/origin-deployer:v3.7.1
    Image ID:       docker-pullable://openshift/origin-deployer@sha256:2e39b45e1a68fd25647f0fd64b19d81b9dee04ee84ec49fefc2a28580dc9ab61
    Port:       <none>
    State:      Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 10 Apr 2018 14:28:17 -0400
      Finished:     Tue, 10 Apr 2018 15:28:22 -0400
    Ready:      False
    Restart Count:  0
    Environment:
      KUBERNETES_MASTER:    https://127.0.0.1:8443
      OPENSHIFT_MASTER:     https://127.0.0.1:8443
      BEARER_TOKEN_FILE:    /var/run/secrets/kubernetes.io/serviceaccount/token
      OPENSHIFT_CA_DATA:    -----BEGIN CERTIFICATE-----
MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
c2hpZnQtc2lnbmVyQDE1MjMzODQyNjMwHhcNMTgwNDEwMTgxNzQzWhcNMjMwNDA5
MTgxNzQ0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE1MjMzODQyNjMw
ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDoGRhmrtXQ8WDCidCmLRVV
sX0JxblK9qjI4JHXmdF2OMD51CeWY6OnCbcG/lU9bD8vD0bmk0ZTvbdfGAFpcQYg
Onso2ZHDbXSvGX3/Dt04RLXNQGGgfmzWlP6sxvmk8NKhzXIqqWhXbq+FDzWBADxG
+0bMLOSxThlZ1f+BjL/YWi+4TdHxselYaDTbCM3Q8fFbJ7YE/1qdz12G7eIBKJO1
bXN001KNFcWJ+71O8l2ZEoyoGXBLjvZfTln2Bm4yRUpQnZIrLpH+JKk7Kz1EN73R
tK6Kp+SEilbRkXkOurcOVsGfH17/mN4FKLWfyUBUpw2L2JAKMCdJocKzk7aHc+bb
AgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
SIb3DQEBCwUAA4IBAQBqlcV8rKcmgyn/hGgZBnWbiY2ciou+GRcccswKD5+k6nvg
HE/ehsFDPdW7ENdy/IQI+mewHmt4RvoUSr3c/1dYE5yAtD2+81bVlsXDxIHtj3Uh
VPruL1Lbg26sLJSjsGH3Y+bKrbtHjmMVRAi7nrYZww81c4OYalpi9Y2DW2wYP1ap
8MDrMNThtbHklLb0S5h2oBLGRd0KjJpAjzcMLF2fJNSDnsCyl3LVHqT1u3ql5wSj
Luwk2xvNi97J7s6/OUNmFgFwAIuJuoHl5uhS7gPEaaUGu985zE75nUsH1jWM3Yb9
t+65Q5DtjD6H3Poum70Ap9pH0NIQ+FAYJ+i9nv/H
-----END CERTIFICATE-----

      OPENSHIFT_DEPLOYMENT_NAME:    hono-adapter-kura-1
      OPENSHIFT_DEPLOYMENT_NAMESPACE:   hono
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from deployer-token-n5pkp (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  deployer-token-n5pkp:
    Type:   Secret (a volume populated by a Secret)
    SecretName: deployer-token-n5pkp
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:     <none>

===============================================================================

$ oc describe pod/hono-adapter-mqtt-vertx-1-deploy 
Name:       hono-adapter-mqtt-vertx-1-deploy
Namespace:  hono
Node:       localhost/192.168.122.201
Start Time: Tue, 10 Apr 2018 14:28:07 -0400
Labels:     openshift.io/deployer-pod-for.name=hono-adapter-mqtt-vertx-1
Annotations:    openshift.io/deployment.name=hono-adapter-mqtt-vertx-1
        openshift.io/scc=restricted
Status:     Failed
IP:     172.17.0.9
Containers:
  deployment:
    Container ID:   docker://12b169261a60076369cce96621cbc2068aae73d7ba6cd7656b6824a549299a28
    Image:      openshift/origin-deployer:v3.7.1
    Image ID:       docker-pullable://openshift/origin-deployer@sha256:2e39b45e1a68fd25647f0fd64b19d81b9dee04ee84ec49fefc2a28580dc9ab61
    Port:       <none>
    State:      Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 10 Apr 2018 14:28:17 -0400
      Finished:     Tue, 10 Apr 2018 15:28:22 -0400
    Ready:      False
    Restart Count:  0
    Environment:
      KUBERNETES_MASTER:    https://127.0.0.1:8443
      OPENSHIFT_MASTER:     https://127.0.0.1:8443
      BEARER_TOKEN_FILE:    /var/run/secrets/kubernetes.io/serviceaccount/token
      OPENSHIFT_CA_DATA:    -----BEGIN CERTIFICATE-----
MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
c2hpZnQtc2lnbmVyQDE1MjMzODQyNjMwHhcNMTgwNDEwMTgxNzQzWhcNMjMwNDA5
MTgxNzQ0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE1MjMzODQyNjMw
ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDoGRhmrtXQ8WDCidCmLRVV
sX0JxblK9qjI4JHXmdF2OMD51CeWY6OnCbcG/lU9bD8vD0bmk0ZTvbdfGAFpcQYg
Onso2ZHDbXSvGX3/Dt04RLXNQGGgfmzWlP6sxvmk8NKhzXIqqWhXbq+FDzWBADxG
+0bMLOSxThlZ1f+BjL/YWi+4TdHxselYaDTbCM3Q8fFbJ7YE/1qdz12G7eIBKJO1
bXN001KNFcWJ+71O8l2ZEoyoGXBLjvZfTln2Bm4yRUpQnZIrLpH+JKk7Kz1EN73R
tK6Kp+SEilbRkXkOurcOVsGfH17/mN4FKLWfyUBUpw2L2JAKMCdJocKzk7aHc+bb
AgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
SIb3DQEBCwUAA4IBAQBqlcV8rKcmgyn/hGgZBnWbiY2ciou+GRcccswKD5+k6nvg
HE/ehsFDPdW7ENdy/IQI+mewHmt4RvoUSr3c/1dYE5yAtD2+81bVlsXDxIHtj3Uh
VPruL1Lbg26sLJSjsGH3Y+bKrbtHjmMVRAi7nrYZww81c4OYalpi9Y2DW2wYP1ap
8MDrMNThtbHklLb0S5h2oBLGRd0KjJpAjzcMLF2fJNSDnsCyl3LVHqT1u3ql5wSj
Luwk2xvNi97J7s6/OUNmFgFwAIuJuoHl5uhS7gPEaaUGu985zE75nUsH1jWM3Yb9
t+65Q5DtjD6H3Poum70Ap9pH0NIQ+FAYJ+i9nv/H
-----END CERTIFICATE-----

      OPENSHIFT_DEPLOYMENT_NAME:    hono-adapter-mqtt-vertx-1
      OPENSHIFT_DEPLOYMENT_NAMESPACE:   hono
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from deployer-token-n5pkp (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  deployer-token-n5pkp:
    Type:   Secret (a volume populated by a Secret)
    SecretName: deployer-token-n5pkp
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:     <none>

===============================================================================

$ oc describe pod/hono-service-device-registry-1-deploy
Name:       hono-service-device-registry-1-deploy
Namespace:  hono
Node:       localhost/192.168.122.201
Start Time: Tue, 10 Apr 2018 14:28:00 -0400
Labels:     openshift.io/deployer-pod-for.name=hono-service-device-registry-1
Annotations:    openshift.io/deployment.name=hono-service-device-registry-1
        openshift.io/scc=restricted
Status:     Failed
IP:     172.17.0.6
Containers:
  deployment:
    Container ID:   docker://d1455378b6da4646db954a559406c82b8e5328f88883a32070ed53b0c3d99107
    Image:      openshift/origin-deployer:v3.7.1
    Image ID:       docker-pullable://openshift/origin-deployer@sha256:2e39b45e1a68fd25647f0fd64b19d81b9dee04ee84ec49fefc2a28580dc9ab61
    Port:       <none>
    State:      Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 10 Apr 2018 14:28:08 -0400
      Finished:     Tue, 10 Apr 2018 15:28:13 -0400
    Ready:      False
    Restart Count:  0
    Environment:
      KUBERNETES_MASTER:    https://127.0.0.1:8443
      OPENSHIFT_MASTER:     https://127.0.0.1:8443
      BEARER_TOKEN_FILE:    /var/run/secrets/kubernetes.io/serviceaccount/token
      OPENSHIFT_CA_DATA:    -----BEGIN CERTIFICATE-----
MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
c2hpZnQtc2lnbmVyQDE1MjMzODQyNjMwHhcNMTgwNDEwMTgxNzQzWhcNMjMwNDA5
MTgxNzQ0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE1MjMzODQyNjMw
ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDoGRhmrtXQ8WDCidCmLRVV
sX0JxblK9qjI4JHXmdF2OMD51CeWY6OnCbcG/lU9bD8vD0bmk0ZTvbdfGAFpcQYg
Onso2ZHDbXSvGX3/Dt04RLXNQGGgfmzWlP6sxvmk8NKhzXIqqWhXbq+FDzWBADxG
+0bMLOSxThlZ1f+BjL/YWi+4TdHxselYaDTbCM3Q8fFbJ7YE/1qdz12G7eIBKJO1
bXN001KNFcWJ+71O8l2ZEoyoGXBLjvZfTln2Bm4yRUpQnZIrLpH+JKk7Kz1EN73R
tK6Kp+SEilbRkXkOurcOVsGfH17/mN4FKLWfyUBUpw2L2JAKMCdJocKzk7aHc+bb
AgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
SIb3DQEBCwUAA4IBAQBqlcV8rKcmgyn/hGgZBnWbiY2ciou+GRcccswKD5+k6nvg
HE/ehsFDPdW7ENdy/IQI+mewHmt4RvoUSr3c/1dYE5yAtD2+81bVlsXDxIHtj3Uh
VPruL1Lbg26sLJSjsGH3Y+bKrbtHjmMVRAi7nrYZww81c4OYalpi9Y2DW2wYP1ap
8MDrMNThtbHklLb0S5h2oBLGRd0KjJpAjzcMLF2fJNSDnsCyl3LVHqT1u3ql5wSj
Luwk2xvNi97J7s6/OUNmFgFwAIuJuoHl5uhS7gPEaaUGu985zE75nUsH1jWM3Yb9
t+65Q5DtjD6H3Poum70Ap9pH0NIQ+FAYJ+i9nv/H
-----END CERTIFICATE-----

      OPENSHIFT_DEPLOYMENT_NAME:    hono-service-device-registry-1
      OPENSHIFT_DEPLOYMENT_NAMESPACE:   hono
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from deployer-token-n5pkp (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  deployer-token-n5pkp:
    Type:   Secret (a volume populated by a Secret)
    SecretName: deployer-token-n5pkp
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:     <none>

===============================================================================

$ oc describe pod/hono-service-messaging-1-deploy
Name:       hono-service-messaging-1-deploy
Namespace:  hono
Node:       localhost/192.168.122.201
Start Time: Tue, 10 Apr 2018 14:28:02 -0400
Labels:     openshift.io/deployer-pod-for.name=hono-service-messaging-1
Annotations:    openshift.io/deployment.name=hono-service-messaging-1
        openshift.io/scc=restricted
Status:     Failed
IP:     172.17.0.7
Containers:
  deployment:
    Container ID:   docker://71febd3768354044dffdead7a0004031775e9d8df1ba4852c3e0596b28b0c643
    Image:      openshift/origin-deployer:v3.7.1
    Image ID:       docker-pullable://openshift/origin-deployer@sha256:2e39b45e1a68fd25647f0fd64b19d81b9dee04ee84ec49fefc2a28580dc9ab61
    Port:       <none>
    State:      Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 10 Apr 2018 14:28:09 -0400
      Finished:     Tue, 10 Apr 2018 15:28:12 -0400
    Ready:      False
    Restart Count:  0
    Environment:
      KUBERNETES_MASTER:    https://127.0.0.1:8443
      OPENSHIFT_MASTER:     https://127.0.0.1:8443
      BEARER_TOKEN_FILE:    /var/run/secrets/kubernetes.io/serviceaccount/token
      OPENSHIFT_CA_DATA:    -----BEGIN CERTIFICATE-----
MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
c2hpZnQtc2lnbmVyQDE1MjMzODQyNjMwHhcNMTgwNDEwMTgxNzQzWhcNMjMwNDA5
MTgxNzQ0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE1MjMzODQyNjMw
ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDoGRhmrtXQ8WDCidCmLRVV
sX0JxblK9qjI4JHXmdF2OMD51CeWY6OnCbcG/lU9bD8vD0bmk0ZTvbdfGAFpcQYg
Onso2ZHDbXSvGX3/Dt04RLXNQGGgfmzWlP6sxvmk8NKhzXIqqWhXbq+FDzWBADxG
+0bMLOSxThlZ1f+BjL/YWi+4TdHxselYaDTbCM3Q8fFbJ7YE/1qdz12G7eIBKJO1
bXN001KNFcWJ+71O8l2ZEoyoGXBLjvZfTln2Bm4yRUpQnZIrLpH+JKk7Kz1EN73R
tK6Kp+SEilbRkXkOurcOVsGfH17/mN4FKLWfyUBUpw2L2JAKMCdJocKzk7aHc+bb
AgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
SIb3DQEBCwUAA4IBAQBqlcV8rKcmgyn/hGgZBnWbiY2ciou+GRcccswKD5+k6nvg
HE/ehsFDPdW7ENdy/IQI+mewHmt4RvoUSr3c/1dYE5yAtD2+81bVlsXDxIHtj3Uh
VPruL1Lbg26sLJSjsGH3Y+bKrbtHjmMVRAi7nrYZww81c4OYalpi9Y2DW2wYP1ap
8MDrMNThtbHklLb0S5h2oBLGRd0KjJpAjzcMLF2fJNSDnsCyl3LVHqT1u3ql5wSj
Luwk2xvNi97J7s6/OUNmFgFwAIuJuoHl5uhS7gPEaaUGu985zE75nUsH1jWM3Yb9
t+65Q5DtjD6H3Poum70Ap9pH0NIQ+FAYJ+i9nv/H
-----END CERTIFICATE-----

      OPENSHIFT_DEPLOYMENT_NAME:    hono-service-messaging-1
      OPENSHIFT_DEPLOYMENT_NAMESPACE:   hono
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from deployer-token-n5pkp (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  deployer-token-n5pkp:
    Type:   Secret (a volume populated by a Secret)
    SecretName: deployer-token-n5pkp
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:     <none>
ctron commented 6 years ago

@sophokles73 Thanks for the reminder. I initially interpreted #558 as a problem of Minikube, but it looks like this was actually fixed in Hono.

The final error seems to be related to the init container copy-example-data, not to the main container itself. Maybe you can check the logs of that container and see what is going on. Checking the logs of a container can be done with `oc logs -c container-name pod/pod-name`.

sysexcontrol commented 6 years ago

@ctron @sophokles73: I ran into problems exactly here when I introduced the tenant example data: for unclear reasons it was not possible to copy the credentials AND the tenant example data in one init container via a combined sh command. After a while of trying to get it working, the compromise was to introduce two init containers, one for each example data file. Not nice, but it worked.

Now the copy is done in one step again - maybe it produces just the same problems as before?

I am sure it must be possible with one init container, but maybe there is still a problem with the way it is done now.

Quick idea:

Change - 'cp -u /tmp/hono/example-credentials.json /var/lib/hono/device-registry/credentials.json; cp -u /tmp/hono/example-tenants.json /var/lib/hono/device-registry/tenants.json'

to

- 'cp -u /tmp/hono/example-credentials.json /tmp/hono/example-tenants.json /var/lib/hono/device-registry'

This reduces the copy back to a single command.
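For what it's worth, `cp -u` with multiple sources and a directory target does copy both files in one step; a quick local sketch (all paths below are invented for the demo; the deployment uses the /tmp/hono/*.json files and /var/lib/hono/device-registry):

```shell
# Sketch: 'cp -u src1 src2 destdir' copies both files in a single command.
# Demo paths are made up; the real ones are /tmp/hono/*.json and
# /var/lib/hono/device-registry.
mkdir -p /tmp/demo-src /tmp/demo-registry
echo '{"credentials": []}' > /tmp/demo-src/example-credentials.json
echo '{"tenants": []}'     > /tmp/demo-src/example-tenants.json

cp -u /tmp/demo-src/example-credentials.json /tmp/demo-src/example-tenants.json /tmp/demo-registry

ls /tmp/demo-registry   # example-credentials.json and example-tenants.json
```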

mohabh88 commented 6 years ago

I tried to check the logs of the containers, but possibly I am interpreting the command in the wrong way, @ctron.

I used `$ oc logs -c container-name pod/hono-adapter-kura-1-deploy`, for example, and it returned `Error from server (NotFound): pods "hono-adapter-kura-1-deploy" not found`. I also tried `$ oc logs -c grafana pod/grafana-4046359926-xkz6n`, which likewise threw `Error from server (NotFound): pods "grafana-4046359926-xkz6n" not found`.

I also looked up how to get container logs in Minishift, but could not find helpful information; could you please point me to the right command to use?

The status has been the same since yesterday:

$ oc get pod --namespace hono
NAME                                    READY     STATUS    RESTARTS   AGE
grafana-4046359926-xkz6n                1/1       Running   2          16h
hono-adapter-http-vertx-1-deploy        0/1       Error     0          16h
hono-adapter-kura-1-deploy              0/1       Error     0          16h
hono-adapter-mqtt-vertx-1-deploy        0/1       Error     0          16h
hono-service-auth-1-dsd9x               1/1       Running   2          16h
hono-service-device-registry-1-deploy   0/1       Error     0          16h
hono-service-messaging-1-deploy         0/1       Error     0          16h
influxdb-3804478203-vqx59               1/1       Running   2          16h
mohabh88 commented 6 years ago

I undeployed and re-deployed everything on OpenShift, and now all the pods are running, but four of them are not ready.

$ oc get pod
NAME                                   READY     STATUS    RESTARTS   AGE
grafana-4046359926-t6lgf               1/1       Running   0          10m
hono-adapter-http-vertx-1-deploy       1/1       Running   0          10m
hono-adapter-http-vertx-1-rktcw        0/1       Running   0          10m
hono-adapter-kura-1-deploy             1/1       Running   0          10m
hono-adapter-kura-1-k2nnf              0/1       Running   0          10m
hono-adapter-mqtt-vertx-1-deploy       1/1       Running   0          10m
hono-adapter-mqtt-vertx-1-pb7tm        0/1       Running   0          10m
hono-service-auth-1-j4pmx              1/1       Running   0          10m
hono-service-device-registry-1-ktkpn   1/1       Running   0          10m
hono-service-messaging-1-c8dj2         0/1       Running   0          10m
hono-service-messaging-1-deploy        1/1       Running   0          10m
influxdb-3804478203-qwwc9              1/1       Running   0          10m

The not-ready pods have been running for more than five minutes without passing their readiness checks; they will possibly fail after some time with an "Error" state, like before.

Checking the pods' behavior (I am still not able to retrieve the containers' logs!):

$ oc describe pod/hono-adapter-http-vertx-1-rktcw
Name:       hono-adapter-http-vertx-1-rktcw
Namespace:  hono
Node:       localhost/192.168.122.30
Start Time: Wed, 11 Apr 2018 08:44:28 -0400
Labels:     app=hono-adapter-http-vertx
        deployment=hono-adapter-http-vertx-1
        deploymentconfig=hono-adapter-http-vertx
        group=org.eclipse.hono
        provider=fabric8
        version=0.6-SNAPSHOT
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"hono","name":"hono-adapter-http-vertx-1","uid":"098a03a3-3d86-11e8-a06...
        openshift.io/deployment-config.latest-version=1
        openshift.io/deployment-config.name=hono-adapter-http-vertx
        openshift.io/deployment.name=hono-adapter-http-vertx-1
        openshift.io/scc=restricted
Status:     Running
IP:     172.17.0.15
Created By: ReplicationController/hono-adapter-http-vertx-1
Controlled By:  ReplicationController/hono-adapter-http-vertx-1
Containers:
  eclipse-hono-adapter-http-vertx:
    Container ID:   docker://53e1afd173e709292e30dfeae780c33e8aec6c831bbddf4c21f1c9eec71bac04
    Image:      eclipse/hono-adapter-http-vertx:0.6-SNAPSHOT
    Image ID:       docker://sha256:e75bab818fb4648f89abe6829af1835019c638b3b1f70192cb497f9231666bd4
    Ports:      8088/TCP, 8080/TCP, 8443/TCP
    State:      Running
      Started:      Wed, 11 Apr 2018 08:44:37 -0400
    Ready:      False
    Restart Count:  0
    Liveness:       http-get http://:8088/liveness delay=180s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8088/readiness delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SPRING_CONFIG_LOCATION:   file:///etc/hono/
      SPRING_PROFILES_ACTIVE:   dev
      LOGGING_CONFIG:       classpath:logback-spring.xml
      _JAVA_OPTIONS:        -Xmx128m
      KUBERNETES_NAMESPACE: hono (v1:metadata.namespace)
    Mounts:
      /etc/hono from conf (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9ppzq (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  conf:
    Type:   Secret (a volume populated by a Secret)
    SecretName: hono-adapter-http-vertx-conf
    Optional:   false
  default-token-9ppzq:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-9ppzq
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath               Type        Reason          Message
  --------- --------    -----   ----            -------------               --------    ------          -------
  13m       13m     1   default-scheduler                       Normal      Scheduled       Successfully assigned hono-adapter-http-vertx-1-rktcw to localhost
  13m       13m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "default-token-9ppzq" 
  13m       13m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "conf" 
  13m       13m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-http-vertx}    Normal      Pulled          Container image "eclipse/hono-adapter-http-vertx:0.6-SNAPSHOT" already present on machine
  13m       13m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-http-vertx}    Normal      Created         Created container
  13m       13m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-http-vertx}    Normal      Started         Started container
  12m       12m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-http-vertx}    Warning     Unhealthy       Readiness probe failed: Get http://172.17.0.15:8088/readiness: dial tcp 172.17.0.15:8088: getsockopt: connection refused
  12m       3m      59  kubelet, localhost  spec.containers{eclipse-hono-adapter-http-vertx}    Warning     Unhealthy       Readiness probe failed: HTTP probe failed with statuscode: 503

==============================================================================================================================================================

$ oc describe pod/hono-adapter-kura-1-k2nnf
Name:       hono-adapter-kura-1-k2nnf
Namespace:  hono
Node:       localhost/192.168.122.30
Start Time: Wed, 11 Apr 2018 08:44:35 -0400
Labels:     app=hono-adapter-kura
        deployment=hono-adapter-kura-1
        deploymentconfig=hono-adapter-kura
        group=org.eclipse.hono
        provider=fabric8
        version=0.6-SNAPSHOT
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"hono","name":"hono-adapter-kura-1","uid":"0d7ca26f-3d86-11e8-a06b-9aac...
        openshift.io/deployment-config.latest-version=1
        openshift.io/deployment-config.name=hono-adapter-kura
        openshift.io/deployment.name=hono-adapter-kura-1
        openshift.io/scc=restricted
Status:     Running
IP:     172.17.0.16
Created By: ReplicationController/hono-adapter-kura-1
Controlled By:  ReplicationController/hono-adapter-kura-1
Containers:
  eclipse-hono-adapter-kura:
    Container ID:   docker://c0276137ce9ee2059061826966d3a23f9a28beee6c8642175f180f326515e86d
    Image:      eclipse/hono-adapter-kura:0.6-SNAPSHOT
    Image ID:       docker://sha256:585fb197273203e04edd984329b95e73d935e200af37dfd36a4a7718520039f0
    Ports:      8088/TCP, 8883/TCP, 1883/TCP
    State:      Running
      Started:      Wed, 11 Apr 2018 08:44:44 -0400
    Ready:      False
    Restart Count:  0
    Liveness:       http-get http://:8088/liveness delay=180s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8088/readiness delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SPRING_CONFIG_LOCATION:   file:///etc/hono/
      SPRING_PROFILES_ACTIVE:   dev
      LOGGING_CONFIG:       classpath:logback-spring.xml
      _JAVA_OPTIONS:        -Xmx128m
      KUBERNETES_NAMESPACE: hono (v1:metadata.namespace)
    Mounts:
      /etc/hono from conf (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9ppzq (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  conf:
    Type:   Secret (a volume populated by a Secret)
    SecretName: hono-adapter-kura-conf
    Optional:   false
  default-token-9ppzq:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-9ppzq
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath               Type        Reason          Message
  --------- --------    -----   ----            -------------               --------    ------          -------
  15m       15m     1   default-scheduler                       Normal      Scheduled       Successfully assigned hono-adapter-kura-1-k2nnf to localhost
  15m       15m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "default-token-9ppzq" 
  15m       15m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "conf" 
  15m       15m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-kura}  Normal      Pulled          Container image "eclipse/hono-adapter-kura:0.6-SNAPSHOT" already present on machine
  15m       15m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-kura}  Normal      Created         Created container
  15m       15m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-kura}  Normal      Started         Started container
  14m       5s      90  kubelet, localhost  spec.containers{eclipse-hono-adapter-kura}  Warning     Unhealthy       Readiness probe failed: HTTP probe failed with statuscode: 503

==============================================================================================================================================================

$ oc describe pod/hono-adapter-mqtt-vertx-1-pb7tm
Name:       hono-adapter-mqtt-vertx-1-pb7tm
Namespace:  hono
Node:       localhost/192.168.122.30
Start Time: Wed, 11 Apr 2018 08:44:28 -0400
Labels:     app=hono-adapter-mqtt-vertx
        deployment=hono-adapter-mqtt-vertx-1
        deploymentconfig=hono-adapter-mqtt-vertx
        group=org.eclipse.hono
        provider=fabric8
        version=0.6-SNAPSHOT
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"hono","name":"hono-adapter-mqtt-vertx-1","uid":"0b8de7ba-3d86-11e8-a06...
        openshift.io/deployment-config.latest-version=1
        openshift.io/deployment-config.name=hono-adapter-mqtt-vertx
        openshift.io/deployment.name=hono-adapter-mqtt-vertx-1
        openshift.io/scc=restricted
Status:     Running
IP:     172.17.0.14
Created By: ReplicationController/hono-adapter-mqtt-vertx-1
Controlled By:  ReplicationController/hono-adapter-mqtt-vertx-1
Containers:
  eclipse-hono-adapter-mqtt-vertx:
    Container ID:   docker://f14483b2a86fc635d649d6716604465606c4719ed3a92e6cd4501fc79116a377
    Image:      eclipse/hono-adapter-mqtt-vertx:0.6-SNAPSHOT
    Image ID:       docker://sha256:f020d90bf4fb1dd7c7201ed9dd60bcb203a27f5f3ba017d3850177af27739a72
    Ports:      8088/TCP, 8883/TCP, 1883/TCP
    State:      Running
      Started:      Wed, 11 Apr 2018 08:44:37 -0400
    Ready:      False
    Restart Count:  0
    Liveness:       http-get http://:8088/liveness delay=180s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8088/readiness delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SPRING_CONFIG_LOCATION:   file:///etc/hono/
      SPRING_PROFILES_ACTIVE:   dev
      LOGGING_CONFIG:       classpath:logback-spring.xml
      _JAVA_OPTIONS:        -Xmx128m
      KUBERNETES_NAMESPACE: hono (v1:metadata.namespace)
    Mounts:
      /etc/hono from conf (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9ppzq (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  conf:
    Type:   Secret (a volume populated by a Secret)
    SecretName: hono-adapter-mqtt-vertx-conf
    Optional:   false
  default-token-9ppzq:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-9ppzq
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath               Type        Reason          Message
  --------- --------    -----   ----            -------------               --------    ------          -------
  16m       16m     1   default-scheduler                       Normal      Scheduled       Successfully assigned hono-adapter-mqtt-vertx-1-pb7tm to localhost
  16m       16m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "default-token-9ppzq" 
  16m       16m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "conf" 
  16m       16m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-mqtt-vertx}    Normal      Pulled          Container image "eclipse/hono-adapter-mqtt-vertx:0.6-SNAPSHOT" already present on machine
  16m       16m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-mqtt-vertx}    Normal      Created         Created container
  15m       15m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-mqtt-vertx}    Normal      Started         Started container
  15m       15m     1   kubelet, localhost  spec.containers{eclipse-hono-adapter-mqtt-vertx}    Warning     Unhealthy       Readiness probe failed: Get http://172.17.0.14:8088/readiness: dial tcp 172.17.0.14:8088: getsockopt: connection refused
  15m       59s     89  kubelet, localhost  spec.containers{eclipse-hono-adapter-mqtt-vertx}    Warning     Unhealthy       Readiness probe failed: HTTP probe failed with statuscode: 503

==============================================================================================================================================================

$ oc describe pod/hono-service-messaging-1-c8dj2
Name:       hono-service-messaging-1-c8dj2
Namespace:  hono
Node:       localhost/192.168.122.30
Start Time: Wed, 11 Apr 2018 08:44:26 -0400
Labels:     app=hono-service-messaging
        deployment=hono-service-messaging-1
        deploymentconfig=hono-service-messaging
        group=org.eclipse.hono
        provider=fabric8
        version=0.6-SNAPSHOT
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"hono","name":"hono-service-messaging-1","uid":"07c42841-3d86-11e8-a06b...
        openshift.io/deployment-config.latest-version=1
        openshift.io/deployment-config.name=hono-service-messaging
        openshift.io/deployment.name=hono-service-messaging-1
        openshift.io/scc=restricted
Status:     Running
IP:     172.17.0.13
Created By: ReplicationController/hono-service-messaging-1
Controlled By:  ReplicationController/hono-service-messaging-1
Containers:
  eclipse-hono-service-messaging:
    Container ID:   docker://b42a6ed4ce64132aadf894ca8fff5007b2cfe79fc5ffb2402037273c1bfaae16
    Image:      eclipse/hono-service-messaging:0.6-SNAPSHOT
    Image ID:       docker://sha256:2d443669b9a316c214a0f4bbc429b38aacc6a2628f15ebf7635b91f7ae80bff5
    Ports:      8088/TCP, 5671/TCP, 5672/TCP
    State:      Running
      Started:      Wed, 11 Apr 2018 08:44:34 -0400
    Ready:      False
    Restart Count:  0
    Liveness:       http-get http://:8088/liveness delay=180s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8088/readiness delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SPRING_CONFIG_LOCATION:   file:///etc/hono/
      SPRING_PROFILES_ACTIVE:   dev
      LOGGING_CONFIG:       classpath:logback-spring.xml
      _JAVA_OPTIONS:        -Xmx196m
      KUBERNETES_NAMESPACE: hono (v1:metadata.namespace)
    Mounts:
      /etc/hono from conf (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9ppzq (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  conf:
    Type:   Secret (a volume populated by a Secret)
    SecretName: hono-service-messaging-conf
    Optional:   false
  default-token-9ppzq:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-9ppzq
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath               Type        Reason          Message
  --------- --------    -----   ----            -------------               --------    ------          -------
  17m       17m     1   default-scheduler                       Normal      Scheduled       Successfully assigned hono-service-messaging-1-c8dj2 to localhost
  17m       17m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "default-token-9ppzq" 
  17m       17m     1   kubelet, localhost                      Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "conf" 
  16m       16m     1   kubelet, localhost  spec.containers{eclipse-hono-service-messaging} Normal      Pulled          Container image "eclipse/hono-service-messaging:0.6-SNAPSHOT" already present on machine
  16m       16m     1   kubelet, localhost  spec.containers{eclipse-hono-service-messaging} Normal      Created         Created container
  16m       16m     1   kubelet, localhost  spec.containers{eclipse-hono-service-messaging} Normal      Started         Started container
  16m       1m      90  kubelet, localhost  spec.containers{eclipse-hono-service-messaging} Warning     Unhealthy       Readiness probe failed: HTTP probe failed with statuscode: 503
ctron commented 6 years ago

I can only suggest having a look at `oc logs --help`, which explains how to access the logs of a specific container. You should also be able to use the Web UI to do the same.

Maybe it helps to use the following example instead: https://github.com/ctron/hono-demo-1 … see the readme on the same page. You should start with a newly created Minishift instance, though.

I have played through this tutorial a few times; it uses a released version of Hono (0.5), so maybe that helps for the moment.

mohabh88 commented 6 years ago

containers logs.zip

I got the logs from the Web UI, since `$ oc logs -c eclipse-hono-adapter-kura pod/hono-adapter-kura-1-deploy`, for example, still throws
`Error from server (BadRequest): container eclipse-hono-adapter-kura is not valid for pod hono-adapter-kura-1-deploy`. I searched a lot for the containers' names and finally found them somewhere!

Yes, the tutorial says that it uses Hono 0.5, but master itself (on GitHub) is the newer version, Hono 0.6, and it should work as intended.

Please let me know if you need more information about the logs and I'd be delighted to provide it.

On a side note: thank you for the tutorial/example you shared, but I'd prefer to stick with this example, as I am working on some simulations and comparisons for my studies, have already used the Kubernetes one, and would like to continue in that flow. Also, I tried your example yesterday and wasn't able to continue working on it due to resource limitations on my side. Thank you for your understanding and co-operation.

ctron commented 6 years ago

@mohabh88 Have you looked at the logs you provided? They don't seem to relate to the copy-example-data container. Or did I overlook something?

mohabh88 commented 6 years ago

All the logs provided are the ones already shown in the Web UI; the copy-example-data container doesn't show up there.

available deployments

monitoring

If you mean something else, please guide me to it; that would be greatly appreciated.

Also, right after providing the previous logs, all the failed pods disappeared.

ctron commented 6 years ago

If you dig into "pods", you will see "containers". One pod may consist of multiple containers. You need to check there.

mohabh88 commented 6 years ago

@ctron
Thank you for your guidance. I found that container in the "hono-service-device-registry-1-ktkpn" pod.

Here are the logs for that pod/container:

INIT CONTAINER: COPY-EXAMPLE-DATA.zip

and the screenshot for its details:

init container copy-example-data

ctron commented 6 years ago

Something like that:

➜  openshift git:(master) ✗ oc logs hono-service-device-registry-1-7bttt -c copy-example-data
cp: can't create '/var/lib/hono/device-registry/credentials.json': Permission denied
cp: can't create '/var/lib/hono/device-registry/tenants.json': Permission denied
mohabh88 commented 6 years ago

It shows nothing:

$ oc logs hono-service-device-registry-1-ktkpn -c copy-example-data --namespace hono
moaly@m4122-01:~/Documents$ 

Double-checking /var/lib/ ... hono doesn't exist there either:

$ ls /var/lib/
AccountsService/         geoclue/                 sgml-base/
acpi-support/            ghostscript/             snapd/
alsa/                    git/                     snmp/
apparmor/                grafana/                 sudo/
app-info/                hp/                      systemd/
apt/                     initramfs-tools/         tex-common/
aspell/                  ispell/                  texmf/
avahi-autoipd/           libreoffice/             ubiquity/
binfmts/                 libvirt/                 ubuntu-drivers-common/
bluetooth/               libxml-sax-perl/         ubuntu-fan/
colord/                  locales/                 ubuntu-release-upgrader/
dbus/                    lockdown/                ucf/
dhcp/                    logrotate/               udisks2/
dictionaries-common/     man-db/                  update-manager/
dkms/                    misc/                    update-notifier/
doc-base/                mlocate/                 upower/
docker/                  NetworkManager/          ureadahead/
dpkg/                    os-prober/               usb_modeswitch/
emacsen-common/          PackageKit/              usbutils/
fwupd/                   pam/                     vim/
gconf/                   plymouth/                whoopsie/
gdm3/                    polkit-1/                xfonts/
gems/                    python/                  xkb/

I also tried to copy the logs to a txt file, but the created file is empty.
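Note that an empty log is exactly what a successful run of such an init container looks like: cp prints nothing on success and only writes to stderr on failure, so a container that only runs `cp -u ...` leaves no log output at all. A quick local check (demo paths invented):

```shell
# cp is silent on success, so an init container that only runs 'cp -u ...'
# produces an empty log when it works. Demo paths are made up.
mkdir -p /tmp/demo-init-src /tmp/demo-init-dst
echo '{}' > /tmp/demo-init-src/example-credentials.json

output=$(cp -u /tmp/demo-init-src/example-credentials.json /tmp/demo-init-dst 2>&1)
status=$?

echo "exit status: $status"        # exit status: 0
echo "output length: ${#output}"   # output length: 0
```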

mohabh88 commented 6 years ago

Reading more about that command in the help:

Note that logs
  # from older deployments may not exist either **because the deployment was successful**
  # or due to deployment pruning or manual deletion of the deployment.

"hono-service-device-registry-1-ktkpn 1/1" is deployed successfully (running), so apparently there are no logs for it, right?

$ oc get pod --namespace hono
NAME                                   READY     STATUS    RESTARTS   AGE
grafana-4046359926-t6lgf               1/1       Running   0          3h
hono-adapter-http-vertx-1-deploy       0/1       Error     0          3h
hono-adapter-kura-1-deploy             0/1       Error     0          3h
hono-adapter-kura-2-deploy             0/1       Error     0          2h
hono-adapter-mqtt-vertx-1-deploy       0/1       Error     0          3h
hono-service-auth-1-j4pmx              1/1       Running   0          3h
hono-service-device-registry-1-ktkpn   1/1       Running   0          3h
hono-service-messaging-1-deploy        0/1       Error     0          3h
influxdb-3804478203-qwwc9              1/1       Running   0          3h
mohabh88 commented 6 years ago

I have several observations here (if you don't mind); I hope the information presented below is organized clearly enough:

I undeployed/deleted everything and started from the beginning to find out what may be causing the issues I am experiencing.

When trying to start Minishift with the recommended settings using `minishift start --metrics --cpus 3 --memory 10GB --disk-size 30GB`, I got:

-- Checking available disk space ... 0% used OK
   Importing 'openshift/origin:v3.7.1'  CACHE MISS
   Importing 'openshift/origin-docker-registry:v3.7.1'  CACHE MISS
   Importing 'openshift/origin-haproxy-router:v3.7.1'  CACHE MISS

minishift_start_command_logs.zip

Deploying Apache ActiveMQ Artemis Broker ...
secret "hono-artemis-conf" created
Error from server (BadRequest): error when creating "/home/moaly/Documents/hono-master-2/example/target/deploy/openshift/../artemis-deployment.yml": Deployment in version "v1beta2" cannot be handled as a Deployment: no kind "Deployment" is registered for version "apps/v1beta2"
service "hono-artemis" created
... done
Deploying Qpid Dispatch Router ...
secret "hono-dispatch-router-conf" created
Error from server (BadRequest): error when creating "/home/moaly/Documents/hono-master-2/example/target/deploy/openshift/../dispatch-router-deployment.yml": Deployment in version "v1beta2" cannot be handled as a Deployment: no kind "Deployment" is registered for version "apps/v1beta2"
service "hono-dispatch-router" created
service "hono-dispatch-router-ext" created
route "hono-dispatch-router" created
... done

openshift_deploy_sh_command_logs.zip

Server https://192.168.42.188:8443
openshift v3.7.1+282e43f-42
kubernetes v1.7.6+a08f5eeb62


- A number of pods are not responding every time a new deployment starts:

$ oc get pod
NAME                                   READY     STATUS    RESTARTS   AGE
grafana-4046359926-vjh4j               1/1       Running   0          19m
hono-adapter-http-vertx-1-9cgb9        0/1       Running   0          19m
hono-adapter-http-vertx-1-deploy       1/1       Running   0          19m
hono-adapter-kura-1-deploy             1/1       Running   0          19m
hono-adapter-kura-1-lbvv9              0/1       Running   0          19m
hono-adapter-mqtt-vertx-1-deploy       1/1       Running   0          19m
hono-adapter-mqtt-vertx-1-x6c4z        0/1       Running   0          19m
hono-service-auth-1-m67hj              1/1       Running   0          19m
hono-service-device-registry-1-67jt9   1/1       Running   0          19m
hono-service-messaging-1-c9cf4         0/1       Running   0          19m
hono-service-messaging-1-deploy        1/1       Running   0          19m
influxdb-3804478203-wthtn              1/1       Running   0          19m


and then

$ oc get pod
NAME                                   READY     STATUS    RESTARTS   AGE
grafana-4046359926-vjh4j               1/1       Running   0          1h
hono-adapter-http-vertx-1-deploy       0/1       Error     0          1h
hono-adapter-kura-1-deploy             0/1       Error     0          1h
hono-adapter-mqtt-vertx-1-deploy       0/1       Error     0          1h
hono-service-auth-1-m67hj              1/1       Running   0          1h
hono-service-device-registry-1-67jt9   1/1       Running   0          1h
hono-service-messaging-1-deploy        0/1       Error     0          1h
influxdb-3804478203-wthtn              1/1       Running   0          1h


- The container `copy-example-data` doesn't generate any logs, for any of the deployments, while other containers do.

moaly@m4122-01:~/Documents$ oc logs hono-service-device-registry-1-67jt9 -c copy-example-data
moaly@m4122-01:~/Documents$
moaly@m4122-01:~/Documents$


OR

$ oc logs hono-service-messaging-1-deploy -c copy-example-data
Error from server (BadRequest): container copy-example-data is not valid for pod hono-service-messaging-1-deploy



[oc logs hono-service-messaging-1-c9cf4 -c eclipse-hono-service-messaging.zip](https://github.com/eclipse/hono/files/1900309/oc.logs.hono-service-messaging-1-c9cf4.-c.eclipse-hono-service-messaging.zip)
[oc logs hono-adapter-http-vertx-1-9cgb9 -c eclipse-hono-adapter-http-vertx.zip](https://github.com/eclipse/hono/files/1900310/oc.logs.hono-adapter-http-vertx-1-9cgb9.-c.eclipse-hono-adapter-http-vertx.zip)
[oc logs hono-adapter-mqtt-vertx-1-x6c4z -c eclipse-hono-adapter-mqtt-vertx.zip](https://github.com/eclipse/hono/files/1900311/oc.logs.hono-adapter-mqtt-vertx-1-x6c4z.-c.eclipse-hono-adapter-mqtt-vertx.zip)
[oc logs hono-adapter-kura-1-lbvv9 -c eclipse-hono-adapter-kura.zip](https://github.com/eclipse/hono/files/1900312/oc.logs.hono-adapter-kura-1-lbvv9.-c.eclipse-hono-adapter-kura.zip)

I tried to focus on the failed ones only.

- On the other hand, I thought that it might be an issue with the OpenShift version being used (as previously occurred for Minikube in #558); that's why I tried OpenShift v3.6.1 (instead of the current 3.7.1), but the same behavior occurred there as well, both when starting Minishift and when deploying Hono to OpenShift. I can forward you all the details if requested.

Maybe the slight difference between the two was that with 3.6.1 most of the pods completed and were active, not throwing warnings/errors (but never turning into running), their metrics were shown, and starting the consumer threw errors after the 5 attempts; whereas with 3.7.1 the readiness warnings/errors are thrown, and the metrics are not available due to errors that occurred getting metrics for containers from `https://hawkular-metrics-openshift-infra.192.168.42.188.nip.io/hawkular/metrics`.

Note: I am keeping the current configuration as is on my machine until I hear back with your suggestion(s). Thank you for your continued assistance :).
ctron commented 6 years ago

@mohabh88 I got a preliminary workaround in #572

mohabh88 commented 6 years ago

@sophokles73

Related to #573: I got the output below when grepping for the registry and for exited containers:

$ docker ps -a | grep registry
ba1c2eead1f1        3df36b477b98                                                                                               "java -Dvertx.cach..."   16 hours ago        Up 16 hours                                   k8s_eclipse-hono-service-device-registry_hono-service-device-registry-1-67jt9_hono_aac2e101-3dbd-11e8-812b-82401bd09fd7_0
c0fea99d6e08        busybox@sha256:58ac43b2cc92c687a32c8be6278e50a063579655fe3090125dcb2af0ff9e1a64                            "sh -c 'cp -u /tmp..."   16 hours ago        Exited (0) 16 hours ago                       k8s_copy-example-data_hono-service-device-registry-1-67jt9_hono_aac2e101-3dbd-11e8-812b-82401bd09fd7_0
b8bc295e511c        openshift/origin-pod:v3.7.1                                                                                "/usr/bin/pod"           16 hours ago        Up 16 hours                                   k8s_POD_hono-service-device-registry-1-67jt9_hono_aac2e101-3dbd-11e8-812b-82401bd09fd7_0
e62f61e3b8de        openshift/origin-docker-registry@sha256:e4d37f8fc2b990468c12b3ff45d8f931c8084ba0d48a8b4426b9418edbdf1ef5   "/bin/sh -c '/usr/..."   16 hours ago        Up 16 hours                                   k8s_registry_docker-registry-1-wh5sl_default_0481a27b-3dbd-11e8-812b-82401bd09fd7_0
16043bec651b        openshift/origin-pod:v3.7.1                                                                                "/usr/bin/pod"           16 hours ago        Up 16 hours                                   k8s_POD_docker-registry-1-wh5sl_default_0481a27b-3dbd-11e8-812b-82401bd09fd7_0
$ docker ps -a | grep Exited
c0fea99d6e08        busybox@sha256:58ac43b2cc92c687a32c8be6278e50a063579655fe3090125dcb2af0ff9e1a64                            "sh -c 'cp -u /tmp..."   16 hours ago        Exited (0) 16 hours ago                       k8s_copy-example-data_hono-service-device-registry-1-67jt9_hono_aac2e101-3dbd-11e8-812b-82401bd09fd7_0
c814509a6895        openshift/origin-deployer@sha256:2e39b45e1a68fd25647f0fd64b19d81b9dee04ee84ec49fefc2a28580dc9ab61          "/usr/bin/openshif..."   16 hours ago        Exited (1) 15 hours ago                       k8s_deployment_hono-adapter-kura-1-deploy_hono_ab8e2bc1-3dbd-11e8-812b-82401bd09fd7_0
16da501e4b42        openshift/origin-pod:v3.7.1                                                                                "/usr/bin/pod"           16 hours ago        Exited (0) 15 hours ago                       k8s_POD_hono-adapter-kura-1-deploy_hono_ab8e2bc1-3dbd-11e8-812b-82401bd09fd7_0
29acb861c33b        openshift/origin-deployer@sha256:2e39b45e1a68fd25647f0fd64b19d81b9dee04ee84ec49fefc2a28580dc9ab61          "/usr/bin/openshif..."   16 hours ago        Exited (1) 15 hours ago                       k8s_deployment_hono-adapter-mqtt-vertx-1-deploy_hono_a9ddea73-3dbd-11e8-812b-82401bd09fd7_0
e7e41ca682c6        openshift/origin-deployer@sha256:2e39b45e1a68fd25647f0fd64b19d81b9dee04ee84ec49fefc2a28580dc9ab61          "/usr/bin/openshif..."   16 hours ago        Exited (1) 15 hours ago                       k8s_deployment_hono-adapter-http-vertx-1-deploy_hono_a896948f-3dbd-11e8-812b-82401bd09fd7_0
b9c7d806f2de        openshift/origin-pod:v3.7.1                                                                                "/usr/bin/pod"           16 hours ago        Exited (0) 15 hours ago                       k8s_POD_hono-adapter-mqtt-vertx-1-deploy_hono_a9ddea73-3dbd-11e8-812b-82401bd09fd7_0
28d0e7fcc50f        openshift/origin-pod:v3.7.1                                                                                "/usr/bin/pod"           16 hours ago        Exited (0) 15 hours ago                       k8s_POD_hono-adapter-http-vertx-1-deploy_hono_a896948f-3dbd-11e8-812b-82401bd09fd7_0
9688c1b7bb4a        openshift/origin-deployer@sha256:2e39b45e1a68fd25647f0fd64b19d81b9dee04ee84ec49fefc2a28580dc9ab61          "/usr/bin/openshif..."   16 hours ago        Exited (1) 15 hours ago                       k8s_deployment_hono-service-messaging-1-deploy_hono_a7103ae4-3dbd-11e8-812b-82401bd09fd7_0
8ae53cbb2636        openshift/origin-pod:v3.7.1                                                                                "/usr/bin/pod"           16 hours ago        Exited (0) 15 hours ago                       k8s_POD_hono-service-messaging-1-deploy_hono_a7103ae4-3dbd-11e8-812b-82401bd09fd7_0
99aa65a1cd53        openshift/origin@sha256:cfe5f34ed5d3374cd679ab376e84116d0568a58df2777948def7a425c7a087ca                   "/bin/bash -c '#/b..."   16 hours ago        Exited (0) 16 hours ago                       k8s_storage-setup-job_persistent-volume-setup-tnb7c_default_f525cda7-3dbc-11e8-812b-82401bd09fd7_0
ed41c6cee4e3        openshift/origin-pod:v3.7.1                                                                                "/usr/bin/pod"           16 hours ago        Exited (0) 16 hours ago                       k8s_POD_persistent-volume-setup-tnb7c_default_f525cda7-3dbc-11e8-812b-82401bd09fd7_0

Could you provide an explanation, please? Also, I see that in #572 you have pushed a change to use an apps/v1beta1 Deployment for the artemis and the dispatch router; should this fix my issue? Would re-pulling master help the containers start?
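For reference, an apps/v1beta1 Deployment (as opposed to an OpenShift-specific DeploymentConfig) looks roughly like the sketch below; the names, labels and image here are illustrative assumptions only, not the actual Hono descriptors:

```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hono-dispatch-router          # illustrative name
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hono-dispatch-router
    spec:
      containers:
      - name: router
        image: dispatch-router-image:latest   # placeholder image
        ports:
        - containerPort: 5672                 # standard AMQP port
```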

sophokles73 commented 6 years ago

should this fix my issue? Would re-pulling master help the containers start?

Based on the output of your `oc get pod` command above, I would say: yes. The problem seems to be that the dispatch router is not running (because it couldn't be deployed), and consequently its dependent services (Hono Messaging and the protocol adapters) didn't come up either ...

So, please pull from master and try to re-deploy ...

mohabh88 commented 6 years ago

@ctron

Just out of curiosity: when configuring Grafana in your example, creating the two new datasources doesn't work properly, e.g., ds_hono.json or dashboard_hono.json cannot be opened.

$ curl -X POST -T src/grafana/ds_hono.json -H "content-type: application/json" "http://admin:admin@grafana-grafana.192.168.42.193.nip.io/api/datasources"
curl: Can't open 'src/grafana/ds_hono.json'!
curl: try 'curl --help' or 'curl --manual' for more information
$curl -X POST -T src/grafana/dashboard_hono.json -H "content-type: application/json" "http://admin:admin@grafana-grafana.192.168.42.193.nip.io/api/dashboards/db"
curl: Can't open 'src/grafana/dashboard_hono.json'!
curl: try 'curl --help' or 'curl --manual' for more information

(I also tried using the root account (sudo), and the $GRAFANA_URL without "grafana-grafana" after the @ sign, but the same error is still thrown.)
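For what it's worth, curl's `-T` option uploads a local file, and a relative path is resolved against the current working directory, so `curl: Can't open ...` usually just means the command was not run from the Hono repository root. A minimal illustration (all paths below are made up for the demo):

```shell
# Recreate the repository layout somewhere temporary (hypothetical paths)
mkdir -p /tmp/hono-demo/src/grafana
echo '{}' > /tmp/hono-demo/src/grafana/ds_hono.json

# From an unrelated directory the relative path does not resolve ...
cd /tmp
test -f src/grafana/ds_hono.json || echo "not found from /tmp"

# ... but from the "repository root" it does, and curl -T would find it
cd /tmp/hono-demo
test -f src/grafana/ds_hono.json && echo "found from the repo root"
```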

Could you please provide a bit more insight into this?

steps: project grafana.zip

ctron commented 6 years ago

I think you are in the wrong directory, could that be? The files are there: https://github.com/ctron/hono-demo-1/tree/master/src/grafana

sophokles73 commented 6 years ago

@mohabh88 is your issue with deploying to minishift solved? If so, please close this issue and do not hijack it for another problem. Thank you.

mohabh88 commented 6 years ago

No, it is not fully resolved yet; the device registry pod still goes into Init:CrashLoopBackOff, and the (http, mqtt, kura) adapters are in a not-ready status.

$ oc get pod
NAME                                    READY     STATUS                  RESTARTS   AGE
grafana-4046359926-jf4g8                1/1       Running                 1          11m
hono-adapter-http-vertx-1-887vj         0/1       Running                 0          10m
hono-adapter-http-vertx-1-deploy        1/1       Running                 0          10m
hono-adapter-kura-1-deploy              1/1       Running                 0          10m
hono-adapter-kura-1-fq5vp               0/1       Running                 0          10m
hono-adapter-mqtt-vertx-1-74p6f         0/1       Running                 0          10m
hono-adapter-mqtt-vertx-1-deploy        1/1       Running                 0          10m
hono-artemis-2105025420-mjqff           1/1       Running                 0          11m
hono-dispatch-router-1366534701-2w7tz   1/1       Running                 0          11m
hono-service-auth-1-xg9zn               1/1       Running                 0          10m
hono-service-device-registry-1-7zjmb    0/1       Init:CrashLoopBackOff   6          10m
hono-service-device-registry-1-deploy   1/1       Running                 0          11m
hono-service-messaging-1-8ttjx          1/1       Running                 0          10m
influxdb-3804478203-rns8w               1/1       Running                 0          11m
$ oc logs hono-service-device-registry-1-7zjmb -c copy-example-data
cp: can't create '/var/lib/hono/device-registry/credentials.json': Permission denied
cp: can't create '/var/lib/hono/device-registry/tenants.json': Permission denied
$ oc get pod
NAME                                    READY     STATUS    RESTARTS   AGE
grafana-4046359926-jf4g8                1/1       Running   1          1h
hono-adapter-http-vertx-1-deploy        0/1       Error     0          1h
hono-adapter-kura-1-deploy              0/1       Error     0          1h
hono-adapter-mqtt-vertx-1-deploy        0/1       Error     0          1h
hono-artemis-2105025420-mjqff           1/1       Running   0          1h
hono-dispatch-router-1366534701-2w7tz   1/1       Running   0          1h
hono-service-auth-1-xg9zn               1/1       Running   0          1h
hono-service-device-registry-1-deploy   0/1       Error     0          1h
hono-service-messaging-1-8ttjx          1/1       Running   0          1h
influxdb-3804478203-rns8w               1/1       Running   0          1h
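The `Permission denied` above comes from the `copy-example-data` init container, which (per the `docker ps` output earlier) does roughly `sh -c 'cp -u /tmp/... /var/lib/hono/device-registry/'` into a mounted volume that the container user cannot write to. The failure mode can be sketched with a plain writability check; the directory and file names below are hypothetical, for illustration only:

```shell
# Destination stands in for the mounted /var/lib/hono/device-registry volume
DEST=/tmp/hono-registry-demo
mkdir -p "$DEST"

# Example data standing in for the credentials file shipped in the image
echo '{}' > /tmp/demo-credentials.json

# Check writability before copying -- when the volume is not writable by the
# container user, cp fails exactly like the init container log above
if [ -w "$DEST" ]; then
    cp -u /tmp/demo-credentials.json "$DEST/credentials.json"
    echo "copied example data"
else
    echo "cp: can't create '$DEST/credentials.json': Permission denied" >&2
fi
```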
mohabh88 commented 6 years ago

Is the functionality affected each time Hono is deployed/undeployed on minishift? It seems I have to pull master every time to get all the pods up and running!

Thank you for your assistance though :), re-pulling master twice helped. Closing this issue.