openhab / openhab-cloud

Cloud companion for openHAB instances
Eclipse Public License 2.0

Kubernetes deployment #295

Open jeepman32 opened 4 years ago

jeepman32 commented 4 years ago

G'day!

I've tried deploying to Kubernetes using the instructions at ./deployment/kubernetes/readme.MD, but I've had a few issues relating to config.json. I followed the deployment instructions, providing the context of my remote Kubernetes cluster on DigitalOcean via the --context [...] flag; I edited the deployment and undeployment scripts to pass this flag to every command they run. Deployment was successful, but on container start for app-1 I get this error:

module.js:472
    throw err;
    ^
Error: Cannot find module './config.json'
    at Function.Module._resolveFilename (module.js:470:15)
    at Function.Module._load (module.js:418:25)
    at Module.require (module.js:498:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/opt/openhabcloud/logger.js:11:25)
    at Module._compile (module.js:571:32)
    at Object.Module._extensions..js (module.js:580:10)
    at Module.load (module.js:488:32)
    at tryModuleLoad (module.js:447:12)
    at Function.Module._load (module.js:439:3)

I've tried to exec into the crashing pod, but it appears the pod gets stuck in a loop and disallows any exec connections, so I can't check whether config.json actually exists. My guess is that none of the config-[type].json files are being renamed to config.json inside the Docker image, so the Node app can't find it!
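One way to inspect the image despite the crash loop is to temporarily override the container's command so the pod stays alive; this is a sketch of a pod-spec fragment, with the container name and image taken from later in this thread:

```yaml
# Sketch: keep the container running so we can exec in and look for config.json
# (fragment of the app-1 Deployment's pod spec; names assumed from the thread)
containers:
- name: app-1
  image: docker.io/openhab/openhabcloud-app:latest
  command: ["sleep", "3600"]   # replaces the crashing entrypoint
```

With that in place, `kubectl exec -it <app-1-pod> -- ls /opt/openhabcloud` shows whether any config*.json files exist in the image.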

This was on Kubernetes v1.15.9, and on v1.16.6 after updating the config from extensions/v1beta1 to apps/v1 and adding the required selector to spec. Please let me know if you require more information!

jeepman32 commented 4 years ago

Additional notes!

I've managed to mount a copy of config-production.json as a secret volume, encoded as an opaque JSON string. Now I've got a new error:

module.js:472
    throw err;
    ^
Error: Cannot find module '/opt/openhabcloud/app.js'
    at Function.Module._resolveFilename (module.js:470:15)
    at Function.Module._load (module.js:418:25)
    at Module.runMain (module.js:605:10)
    at run (bootstrap_node.js:427:7)
    at startup (bootstrap_node.js:151:9)
    at bootstrap_node.js:542:3
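A likely cause of this second error (an assumption based on how Kubernetes volume mounts behave, not something confirmed in the thread): mounting a volume at a path inside /opt/openhabcloud without `subPath` shadows the image's directory contents, which would hide app.js. Mounting only the single file avoids that:

```yaml
# Sketch (assumed secret name): mount just config.json from the secret via subPath,
# so the rest of /opt/openhabcloud (app.js etc.) from the image stays visible
volumeMounts:
- name: config
  mountPath: /opt/openhabcloud/config.json
  subPath: config.json
volumes:
- name: config
  secret:
    secretName: openhabcloud-config   # assumed name
```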
jeepman32 commented 4 years ago

I've since discovered that the latest Docker image has the same missing config.json issue.

marziman commented 4 years ago

Hello, thanks for checking the k8s deployment. It is pretty old and will be changed; we're currently planning to update it. I will get back soon with an update.

marziman commented 4 years ago

You can definitely be involved here, if you're willing. We're just trying to figure out what the new k8s deployment should look like. It will most likely be Helm-based.

jeepman32 commented 4 years ago

Hey marziman, I'd love to be involved! I'm no expert on k8s in particular, but after reading some of the documentation it doesn't seem too hard. I'm keen to start soon, as I've still not managed to get my instance of openhab-cloud running, and I'd prefer to do it once, the right way!

digitaldan commented 4 years ago

Hi @jeepman32, thanks for the offer to help! We have an initial buildout working and are just cleaning it up into Helm charts to start; hopefully we can push something to the repo this week to test with.

joalmjoalm commented 4 years ago

Can you also update the versions in package.json? When building my own Docker image I get plenty of warnings about outdated versions.

joalmjoalm commented 4 years ago

Also prepare for k8s v1.16+, which moved Deployments from extensions/v1beta1 to apps/v1. I got as far as the error above on k8s v1.17.4 by changing the Deployments to apps/v1.
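For reference, apps/v1 makes spec.selector mandatory, and it must match the pod template's labels; a minimal sketch using the label convention from the manifests in this thread:

```yaml
# Sketch: minimal apps/v1 Deployment shape; apps/v1 requires an explicit
# spec.selector whose matchLabels match the pod template's labels
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-1
spec:
  replicas: 1
  selector:
    matchLabels:
      org.openhab.cloud.service: app-1
  template:
    metadata:
      labels:
        org.openhab.cloud.service: app-1
    # ...container spec unchanged from the extensions/v1beta1 version
```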

andrejsstepanovs commented 4 years ago

How much longer? This issue is still present in the latest image, and there is no tag available on Docker Hub other than :latest.

EDIT

This does the trick for now:

FROM docker.io/openhab/openhabcloud-app
USER root
# Turn on self-registration in the shipped Docker config
RUN sed -i "s/\"registration_enabled\": false/\"registration_enabled\": true/g" config-docker.json
# Work around the missing config.json by copying the Docker config into place
RUN cp config-docker.json config.json

https://hub.docker.com/repository/docker/wormhit/openhab
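To sanity-check the sed expression before baking it into an image, the same substitution can be tried on a local file (the file contents here are a stand-in, not the real config-docker.json, and the `sed -i` form assumes GNU sed):

```shell
# Stand-in config file; the real config-docker.json has many more keys
printf '{\n  "registration_enabled": false\n}\n' > config-docker.json
# Same substitution the Dockerfile runs (GNU sed syntax)
sed -i 's/"registration_enabled": false/"registration_enabled": true/g' config-docker.json
# Same copy step, producing the config.json the app expects
cp config-docker.json config.json
cat config.json
```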

mmastrorilli commented 4 years ago

Hi, any news or workaround for this issue?

marziman commented 4 years ago

Hello all, I will take a look into this and try to update the images; from now on this will also be done automatically. @joalmjoalm yes, it will be compatible with k8s 1.18+. The version you used is very old.

I am planning to bring it up to the newest k8s version and share it with you. I hope to get it done by the weekend.

mmastrorilli commented 4 years ago

Please also consider that the latest MongoDB 4.2 deprecates the --smallfiles argument; you should remove it from the YAML. I'm still struggling with the missing config.json issue.
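Concretely, that means dropping the flag from the mongod container's args, e.g. (image and remaining args as used later in this thread):

```yaml
# Sketch: mongod args without the deprecated --smallfiles flag
containers:
- name: mongodb
  image: bitnami/mongodb:latest
  args:
  - mongod
  - --bind_ip_all
```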

mmastrorilli commented 4 years ago

Hi all, I was able to deploy successfully on minikube (Kubernetes v1.19.2) with the following manifest. To solve the missing config.json issue I had to mount it as a volume; in the case of the minikube VM, I mounted a host directory into minikube with

minikube mount "<host local path with config.json.template>:/data/openhabcloud-config"

I changed all extensions/v1beta1 to apps/v1, adding the required selector to each Deployment. The environment variables must also be declared in the manifest for app-1.

I hope this helps close the issue.

apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
    creationTimestamp: null
    labels:
      org.openhab.cloud.service: app-1
    name: app-1
  spec:
    type: NodePort
    ports:
    - name: "3000"
      port: 3000
      targetPort: 3000
    selector:
      org.openhab.cloud.service: app-1
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
    creationTimestamp: null
    labels:
      org.openhab.cloud.service: mongodb
    name: mongodb
  spec:
    ports:
    - name: "27017"
      port: 27017
      targetPort: 27017
    selector:
      org.openhab.cloud.service: mongodb
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
    creationTimestamp: null
    labels:
      org.openhab.cloud.service: nginx
    name: nginx
  spec:
    ports:
    - name: "80"
      port: 80
      targetPort: 8081
    - name: "443"
      port: 443
      targetPort: 8443
    selector:
      org.openhab.cloud.service: nginx
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
    creationTimestamp: null
    labels:
      org.openhab.cloud.service: redis
    name: redis
  spec:
    ports:
    - name: "6379"
      port: 6379
      targetPort: 6379
    selector:
      org.openhab.cloud.service: redis
  status:
    loadBalancer: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
    creationTimestamp: null
    labels:
      org.openhab.cloud.service: app-1
    name: app-1
  spec:
    replicas: 1
    strategy: {}
    template:
      metadata:
        creationTimestamp: null
        labels:
          org.openhab.cloud.service: app-1
      spec:
        containers:
        - image: docker.io/openhab/openhabcloud-app:latest
          name: app-1
          ports:
          - containerPort: 3000
          resources: {}
          env:
          - name: COMPOSE_PROJECT_NAME
            value: "openhab-cloud"
          - name: DOMAIN_NAME
            value: "localhost"
          - name: EMAIL
            value: "noreply@localost.com"
          - name: EXPRESS_KEY
            value: "123456"
          workingDir: /opt/openhabcloud
          command: ["./run-app.sh"]
          volumeMounts:
            - mountPath: /opt/openhabcloud/config.json.template
              name: config-storage
        restartPolicy: Always
        volumes:
          - name: config-storage
            hostPath:
              path: "/data/openhabcloud-config/config.json.template"
    selector:
      matchLabels:
        org.openhab.cloud.service: app-1
  status: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
    creationTimestamp: null
    labels:
      org.openhab.cloud.service: mongodb
    name: mongodb
  spec:
    replicas: 1
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          org.openhab.cloud.service: mongodb
      spec:
        containers:
        - args:
          - mongod
          - --bind_ip_all
          image: bitnami/mongodb:latest
          name: mongodb
          ports:
          - containerPort: 27017
          resources: {}
          volumeMounts:
          - mountPath: /data/db
            name: mongodb-empty0
          - mountPath: /data/configdb
            name: mongodb-empty1
        restartPolicy: Always
        volumes:
        - emptyDir: {}
          name: mongodb-empty0
        - emptyDir: {}
          name: mongodb-empty1
    selector:
      matchLabels:
        org.openhab.cloud.service: mongodb
  status: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
    creationTimestamp: null
    labels:
      org.openhab.cloud.service: nginx
    name: nginx
  spec:
    replicas: 1
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          org.openhab.cloud.service: nginx
      spec:
        containers:
        - image: docker.io/openhab/openhabcloud-nginx
          name: nginx
          ports:
          - containerPort: 8081
          - containerPort: 8443
          resources: {}
          volumeMounts:
          - mountPath: /opt/openhabcloud
            name: app-1
        restartPolicy: Always
        volumes:
        - emptyDir: {}
          name: app-1
    selector:
      matchLabels:
        org.openhab.cloud.service: nginx
  status: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
    creationTimestamp: null
    labels:
      org.openhab.cloud.service: redis
    name: redis
  spec:
    replicas: 1
    strategy: {}
    template:
      metadata:
        creationTimestamp: null
        labels:
          org.openhab.cloud.service: redis
      spec:
        containers:
        - env:
          - name: REDIS_PASSWORD
            value: 123_openHAB
          image: bitnami/redis:latest
          name: redis
          ports:
          - containerPort: 6379
          resources: {}
        restartPolicy: Always
    selector:
      matchLabels:
        org.openhab.cloud.service: redis
  status: {}
kind: List
metadata: {}

In order to test the deployments I set the NodePort type on the app-1 service, but this is not a production configuration. Some more instructions on configuring nginx as an ingress load balancer would be appreciated.
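As a starting point for the requested ingress setup (the hostname and resource name below are assumptions, and this presumes an ingress controller such as ingress-nginx is already installed in the cluster):

```yaml
# Sketch: route external traffic to the app-1 Service via an Ingress
# instead of a NodePort; host and resource name are assumptions
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: openhab-cloud
spec:
  rules:
  - host: openhab.example.com   # assumed domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-1
            port:
              number: 3000
```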

marziman commented 2 years ago

I want to release a Helm chart and provide an automatable way of deploying to k8s. Writing an Operator would be too time-consuming. I will see on the weekend if I can find the time.

andibraeu commented 7 months ago

Over the last weekend, I wrote a Helm chart: https://artifacthub.io/packages/helm/andibraeu/openhab-cloud/