Koenkk / zigbee2mqtt-chart

Helm Chart for Zigbee2MQTT

Zigbee2MQTT doesn't like the fact that configuration.yaml is read only #4

Open raedkit opened 2 months ago

raedkit commented 2 months ago

Hi,

First of all, thank you for providing this helm chart. It's quite useful. When trying to use it, I hit an issue: the ConfigMap that is generated from values.yaml and later mounted as the configuration.yaml file is read-only by design. Zigbee2MQTT doesn't like configuration.yaml being read-only, since it can be updated via the UI. Please find the logs related to this issue below:

Using '/app/data' as data directory
[2024-06-06 22:56:30] info:     z2m: Logging to console
[2024-06-06 22:56:30] info:     z2m: Starting Zigbee2MQTT version 1.37.1 (commit #ea39d86)
[2024-06-06 22:56:30] info:     z2m: Starting zigbee-herdsman (0.46.6)
[2024-06-06 22:56:30] error:    z2m: Failed to start zigbee
[2024-06-06 22:56:30] error:    z2m: Check https://www.zigbee2mqtt.io/guide/installation/20_zigbee2mqtt-fails-to-start.html for possible solutions
[2024-06-06 22:56:30] error:    z2m: Exiting...
[2024-06-06 22:56:30] error:    z2m: Error: EROFS: read-only file system, open '/app/data/configuration.yaml'
    at Object.openSync (node:fs:596:3)
    at Object.writeFileSync (node:fs:2322:35)
    at Object.writeIfChanged (/app/lib/util/yaml.ts:25:12)
    at write (/app/lib/util/settings.ts:255:10)
    at Object.set (/app/lib/util/settings.ts:471:5)
    at Zigbee.generateNetworkKey (/app/lib/zigbee.ts:179:18)
    at Zigbee.start (/app/lib/zigbee.ts:37:26)
    at Controller.start (/app/lib/controller.ts:109:27)
    at start (/app/index.js:107:5)
Koenkk commented 2 months ago

Z2M requires the configuration.yaml to be read-only; if I understand correctly, this cannot be done with K8S. Any way to work around this, @jlpedrosa?

raedkit commented 2 months ago

In my opinion the configuration file should be stored on a persistent volume claim. Also, all the configuration keys in the helm chart starting with zigbee2mqtt.* shouldn't be part of the chart/values, but managed internally by the zigbee2mqtt UI instead. Alternatively, if we want to go further with the current approach, we could use a sidecar container with an API server client: the sidecar container would be responsible for updating the ConfigMap object through the API server, for example using Kubernetes' client-go library. But from my point of view it would be easier, and closer to the zigbee2mqtt philosophy, to start by initializing a minimal configuration.yaml file in the PVC and then let the user tune all the parameters in the zigbee2mqtt UI.
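The "initialize a minimal configuration.yaml in the PVC" idea could be sketched as an initContainer that seeds the writable volume from the chart-rendered ConfigMap on first start. This is only an illustration, not the chart's actual manifests; the resource names (`zigbee2mqtt-config`, `zigbee2mqtt-data`) and volume names are assumptions.

```yaml
# Sketch only: seed the PVC with the rendered config on first start.
# ConfigMap/PVC names below are hypothetical, not the chart's real names.
spec:
  initContainers:
    - name: seed-config
      image: busybox:1.36
      command:
        - sh
        - -c
        # Copy the read-only ConfigMap rendering into the writable PVC,
        # but only if no configuration.yaml exists there yet.
        - |
          if [ ! -f /app/data/configuration.yaml ]; then
            cp /config-seed/configuration.yaml /app/data/configuration.yaml
          fi
      volumeMounts:
        - name: config-seed        # ConfigMap mount (read-only by design)
          mountPath: /config-seed
        - name: data               # PVC mounted read/write at /app/data
          mountPath: /app/data
  volumes:
    - name: config-seed
      configMap:
        name: zigbee2mqtt-config   # hypothetical name
    - name: data
      persistentVolumeClaim:
        claimName: zigbee2mqtt-data  # hypothetical name
```

After the seed copy, z2m only ever touches the PVC copy, so UI edits persist across restarts and the ConfigMap becomes a one-time template.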

pmarques commented 2 months ago

Z2M requires the configuration.yaml to be read-only; if I understand correctly, this cannot be done with K8S. Any way to work around this, @jlpedrosa?

@Koenkk I believe you mean Z2M requires the configuration.yaml to be read/write.

For me the configuration file should be added as persistent volume claim. Also all the configuration keys in the helm starting with zigbee2mqtt.* shouldn't be part of the helm chart/values but only managed internally in the zigbee2mqtt UI.

@raedkit I would like to keep both since I prefer to manage them via the ConfigMap and Secrets (as per my PR #3)

Alternatively, if we want to go further with the current approach, we could use a sidecar container with an API server client: the sidecar container would be responsible for updating the ConfigMap object using the API server client, such as Kubernetes' client-go library.

@raedkit No strong opinion, I just think it's unnecessary complexity which can clash with deployment workflows.

But in my point of view it would be easier to keep in the zigbee2mqtt philosophy to start by initializing a minimal configuration.yaml file in the pvc and then the user has to optimize all the parameters in the zigbee2mqtt UI.

We can have an option to not use the ConfigMap, or just copy the initial values to disk on first start to allow writes, as you suggested.

aleksarias commented 1 month ago

Hello everyone, do you know if there is a fix for this issue? Is there something I should modify in the values.yaml file to get around this? I'm unable to add any new devices when using this chart.

jlpedrosa commented 1 month ago

Hey guys! I'm coming back from some time off; I'll get to it during this week. In any case, I'm a bit puzzled, as this was working in my own cluster.

aleksarias commented 1 month ago

@jlpedrosa, I used the latest (and, I believe, only) release in this repo with the latest released version of the Z2M docker image on Docker Hub, version 1.39. With this setup, any time a setting is modified in the frontend or a new device is added, the resulting error is:

error:  z2m: Request 'zigbee2mqtt/bridge/request/options' failed with error: 'EROFS: read-only file system, open '/app/data/configuration.yaml''

I tried using the persistence settings section and the statefulset storage settings, but neither solved the problem.

Here's my values.yaml:

# -- override the release name
nameOverride: null
# -- override the name of the objects generated
fullnameOverride: null
customLabels: {}
image:
  # -- Image repository for the `zigbee2mqtt` container.
  repository: koenkk/zigbee2mqtt
  # -- Version for the `zigbee2mqtt` container.
  tag: "latest"
  # -- Container pull policy
  pullPolicy: Always
  # -- Container additional secrets to pull image
  imagePullSecrets: {}
service:
  # -- annotations for the service created
  annotations: {}
  # -- type of Service to be created
  type: LoadBalancer
  # -- port in which the service will be listening
  port: 8080
persistence:
  data:
    enabled: true
    mountPath: /app/data
    accessMode: ReadWriteOnce
    size: 1Gi
statefulset:
  storage:
    enabled: false
    size: 1Gi
    # -- the name for the storage class to be used in the persistent volume claim
    storageClassName: local-path
    accessMode: ReadWriteOnce
    existingVolume: ""
    ## Persistent Volume selectors
    ## https://kubernetes.io/docs/concepts/storage/persistent-volumes/#selector
    matchLabels: {}
    matchExpressions: {}
  # -- pod dns policy
  dnsPolicy: ClusterFirst
  # -- CPU/Memory configuration for the pods
  resources:
    requests:
      memory: 600Mi
      cpu: 200m
    limits:
      memory: 1000Mi
      cpu: 1000m
  # -- Node taint tolerations for the pods
  tolerations: {}
  # -- Select specific kube node, this will allow enforcing zigbee2mqtt running
  # only on the node with the USB adapter connected
  nodeSelector: {}
zigbee2mqtt:
  homeassistant:
    enabled: true
    discovery_topic: 'homeassistant'
    status_topic: 'hass/status'
    legacy_entity_attributes: true
    legacy_triggers: false
  # -- Optional: allow new devices to join.
  permit_join: true
  availability:
    active:
      # -- Time after which an active device will be marked as offline in
      # minutes (default = 10 minutes)
      timeout: 10
    passive:
      # -- Time after which a passive device will be marked as offline in
      # minutes (default = 1500 minutes aka 25 hours)
      timeout: 1500
  timezone: UTC
  external_converters: []
  mqtt:
    # -- Required: MQTT server URL (use mqtts:// for SSL/TLS connection)
    server: "mqtt://mqtt-mosquitto"
  serial:
    port: "/dev/ttyUSB0"
    # -- Optional: disable LED of the adapter if supported (default: false)
    disable_led: false
    # -- Optional: adapter type, not needed unless you are experiencing problems (default: shown below, options: zstack, deconz, ezsp)
    # adapter: null
    # -- Optional: Baud rate speed for serial port, this can be anything firmware support but default is 115200 for Z-Stack and EZSP, 38400 for Deconz, however note that some EZSP firmware need 57600.
    baudrate: 115200
    # -- Optional: RTS / CTS Hardware Flow Control for serial port (default: false)
    rtscts: false
  # -- Optional: OTA update settings
  # See https://www.zigbee2mqtt.io/guide/usage/ota_updates.html for more info
  ota:
    # -- Optional: use IKEA TRADFRI OTA test server, see OTA updates documentation (default: false)
    ikea_ota_use_test_url: false
    # -- Minimum time between OTA update checks
    update_check_interval: 1440
    # -- Disable automatic update checks
    disable_automatic_update_check: false
  frontend:
    # -- Mandatory, default 8080
    port: 8080
    # -- Optional, empty by default to listen on both IPv4 and IPv6. Opens a unix socket when given a path instead of an address (e.g. '/run/zigbee2mqtt/zigbee2mqtt.sock')
    # Don't set this if you use Docker or the Home Assistant add-on unless you're sure the chosen IP is available inside the container
    host: 0.0.0.0
    # -- Optional, enables authentication, disabled by default, cleartext (no hashing required)
    auth_token: null
    # -- Optional, url on which the frontend can be reached, currently only used for the Home Assistant device configuration page
    url: z2m.iot
  advanced:
    channel: 11
    log_output:
      - console
    log_level: debug
    timestamp_format: 'YYYY-MM-DD HH:mm:ss'
    cache_state: true
    # -- Optional: persist cached state, only used when cache_state: true (default: true)
    cache_state_persistent: true
    # -- Optional: send cached state on startup, only used when cache_state_persistent: true (default: true)
    cache_state_send_on_startup: true
    # -- Optional: Add a last_seen attribute to MQTT messages, contains date/time of last Zigbee message
    # possible values are: disable (default), ISO_8601, ISO_8601_local, epoch (default: disable)
    last_seen: 'ISO_8601'
    # -- Optional: Add an elapsed attribute to MQTT messages, contains milliseconds since the previous msg (default: false)
    elapsed: true
    # -- Optional: Enables report feature, this feature is DEPRECATED since reporting is now setup by default
    # when binding devices. Docs can still be found here: https://github.com/Koenkk/zigbee2mqtt.io/blob/master/docs/information/report.md
    report: true
    # -- Optional: disables the legacy api (default: shown below)
    legacy_api: false
    # -- Optional: MQTT output type: json, attribute or attribute_and_json (default: shown below)
    # Examples when 'state' of a device is published
    # json: topic: 'zigbee2mqtt/my_bulb' payload '{"state": "ON"}'
    # attribute: topic 'zigbee2mqtt/my_bulb/state' payload 'ON'
    # attribute_and_json:
    # -- Optional: configure adapter concurrency (e.g. 2 for CC2531 or 16 for CC26X2R1) (default: null, uses recommended value)
    adapter_concurrent: null
    transmit_power: 5
    adapter_delay: 0
# -- Ingress configuration. Zigbee2mqtt uses websockets, which are not part of the standard Ingress settings.
# Most of the popular ingresses support them through annotations. Please check https://www.zigbee2mqtt.io/guide/installation/08_kubernetes.html
# for examples.
ingress:
  # -- When enabled a new Ingress will be created
  enabled: true
  # -- The ingress class name for the ingress
  ingressClassName: traefik
  # -- Additional labels for the ingress
  labels: {}
  # -- Ingress implementation specific (potentially) for most use cases Prefix should be ok
  pathType: Prefix
  # Additional annotations for the ingress. ExternalDNS and CertManager are typically integrated here
  annotations: { }
  # -- list of hosts that should be allowed for the zigbee2mqtt service
  hosts:
    - host: z2m.iot
      paths:
        - path: /
          pathType: ImplementationSpecific
        - path: /api
          pathType: ImplementationSpecific
  # -- configuration for tls service (if any)
  #tls:
  #  - secretName: some-tls-secret
  #    hosts:
aleksarias commented 3 weeks ago

I submitted a pull request that fixes this issue and increments the version. I didn't update the documentation to describe the changes.

jlpedrosa commented 3 weeks ago

I don't think having an init container is the right solution; I left comments in the PR.
I am still unable to reproduce this locally; z2m is booting correctly for me. Instead of pinning the image to "latest", can you tell us which version you are using exactly?

aleksarias commented 3 weeks ago

@jlpedrosa I'm using version 1.39.0. I mentioned the version in my comment where I pasted in my values.yaml for the chart.

I read your comments on the PR. I can update the chart with an option to create a persistent volume where the configuration file gets stored. This would mean any settings changed through the UI will not get lost across container restarts. The file will be created in the persistent volume only if it doesn't already exist.

It's surprising to me that you're unable to reproduce the issue. How are you getting the z2m container to write back to a ConfigMap that represents the configuration.yaml file?

pmarques commented 2 weeks ago

With this setup anytime a setting is modified in the frontend or a new device is being added the resulting error is

@aleksarias can you give an example of a setting/config you are trying to modify? I'm using the ConfigMap as well and so far I've had no issues (although I only use a small portion of the frontend for configuration, mainly device-related).

---

About the initial setup for the config, another way which crossed my mind is to use chart hooks. I haven't spent too much time thinking about this, but it seems like a pre-install hook could be used to create an initial config, or even copy a base config (configuration.yaml) from the docker image.

aleksarias commented 2 weeks ago

@pmarques, an example setting that results in the error is the "allow join" setting. However, even adding new devices fails, so if that's succeeding for you then this is very mysterious, because:

  1. configuration.yaml (which is mounted as a ConfigMap) is used to store z2m settings.
  2. ConfigMaps are read-only when mounted in pods: https://kubernetes.io/docs/concepts/configuration/configmap/
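The two points above can be illustrated with a minimal pod spec (names are illustrative, not the chart's actual manifests): because the volume backing /app/data is a ConfigMap, the kubelet mounts it read-only, and any write from z2m fails with EROFS.

```yaml
# Illustrative only: a ConfigMap-backed volume at /app/data is mounted
# read-only by the kubelet, so z2m's writes to configuration.yaml get EROFS.
apiVersion: v1
kind: Pod
metadata:
  name: z2m-demo               # hypothetical
spec:
  containers:
    - name: zigbee2mqtt
      image: koenkk/zigbee2mqtt:1.39.0
      volumeMounts:
        - name: config
          mountPath: /app/data  # writes here fail: read-only file system
  volumes:
    - name: config
      configMap:
        name: zigbee2mqtt       # hypothetical ConfigMap name
```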
thaynes43 commented 1 week ago

@pmarques I am experiencing the same error when clicking submit after checking permit join:

[screenshot]

Oddly the first startup of the pod also crashed due to this, but subsequent ones did not:

Error: EROFS: read-only file system, open '/app/data/configuration.yaml'
    at Object.openSync (node:fs:596:3)
    at Object.writeFileSync (node:fs:2322:35)
    at Object.writeIfChanged (/app/lib/util/yaml.ts:25:12)
    at write (/app/lib/util/settings.ts:272:10)
    at Object.set (/app/lib/util/settings.ts:497:5)
    at Controller.start (/app/lib/controller.ts:151:22)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at runNextTicks (node:internal/process/task_queues:64:3)
    at processImmediate (node:internal/timers:447:9)
    at start (/app/index.js:154:5)
pmarques commented 1 week ago

We know the ConfigMap can't be edited/written, so any change which requires that capability will fail. I personally use the temporary Permit Join button in the menu bar (as per the screenshot).

[screenshot]

Unfortunately I don't have the time right now to make any changes and run the needed tests. I reviewed the change proposed by @aleksarias in https://github.com/Koenkk/zigbee2mqtt-chart/pull/5, but that specific one introduces other issues mentioned by @jlpedrosa. That said, if you want to make changes while using the current helm chart (which uses the ConfigMap), the best way is to deploy/update the ConfigMap via the helm chart.

To the best of my knowledge:

Last but not least, I don't think the helm chart should introduce such UX pain, but we don't have a great solution at this point in time.
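As an illustration of the "update via the helm chart" route: instead of toggling a setting in the UI, the change goes through the chart values. The file name and release/chart names below are placeholders, not part of this repo.

```yaml
# values-permit-join.yaml -- hypothetical override file. Applied with e.g.:
#   helm upgrade <release> <chart> -f values-permit-join.yaml
# Helm re-renders the ConfigMap and the pod picks up the new configuration.yaml.
zigbee2mqtt:
  permit_join: true
```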

thaynes43 commented 1 week ago

@pmarques thanks! I didn't think to try the temporary button. It makes sense now, given the other option is trying to update the permit_join: false configuration.

Configuring everything in the chart works fine for me. I am migrating 71 devices off a pi4 with a dongle to my k8s cluster with a Tube ZB tcp coordinator. I can just copy the config that's on the pi4 into the chart values and not worry much about changing anything.

jlpedrosa commented 1 week ago

I think @pmarques' comment makes sense. IMO, if we were to allow "unmanaged" config, it should be through the introduction of a volume that is read/write. In that scenario you're "on your own", i.e. the volume needs to contain the config already and the helm chart won't provision any ConfigMap. If anyone wants to create this change, I'd be cool with that.
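If such an "unmanaged" option were added, the values.yaml surface might look roughly like this. These keys (`config.managed`, `config.existingVolume`) are hypothetical, proposed for discussion; they do not exist in the chart today.

```yaml
# Hypothetical sketch of an "unmanaged config" option; not current chart keys.
config:
  # When false, no ConfigMap is rendered and the chart manages no
  # configuration.yaml; the user is "on their own".
  managed: false
  # A pre-provisioned read/write volume that must already contain
  # configuration.yaml. PVC name is a placeholder.
  existingVolume: my-z2m-config-pvc
```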

@thaynes43 makes sense?

thaynes43 commented 1 week ago

@jlpedrosa I think that makes sense. I assume the initial helm install would provision the volume with a config based on what you have in the chart, and from then on you can modify it through the UI.

The only thing I wouldn't be sure about is what happens if you botched the initial config and couldn't get to the UI, but then you could just uninstall and retry. I was missing adapter: ember, which threw it into a crash loop, but pushing an update to my values sorted it out without destroying any volumes.

jlpedrosa commented 1 week ago

I think the only sensible way to support that scenario is to manually connect to the volume and edit the files.