8gears / n8n-helm-chart

A Kubernetes Helm chart for n8n, a workflow automation tool. Easily automate tasks across different services.
https://artifacthub.io/packages/helm/open-8gears/n8n
Apache License 2.0

Readiness probe failed: Get "[...] connect: connection refused #56

Closed. Brandl closed this issue 11 months ago.

Brandl commented 1 year ago

Hey,

I'm running into the following problem and am out of ideas on how to fix it. My cluster is running k3s and I'm using Flux CD; other charts have worked just fine so far.

19m         Normal    Scheduled              pod/n8n-784b44868b-v98r4     Successfully assigned n8n/n8n-784b44868b-v98r4 to mymachine
19m         Normal    Pulled                 pod/n8n-784b44868b-v98r4     Container image "n8nio/n8n:1.7.1" already present on machine
19m         Normal    Created                pod/n8n-784b44868b-v98r4     Created container n8n
19m         Normal    Started                pod/n8n-784b44868b-v98r4     Started container n8n
19m         Warning   Unhealthy              pod/n8n-784b44868b-v98r4     Readiness probe failed: Get "http://10.42.0.36:5678/healthz": dial tcp 10.42.0.36:5678: connect: connection refused
17m         Normal    WaitForFirstConsumer   persistentvolumeclaim/n8n    waiting for first consumer to be created before binding

So from my perspective, the PVC doesn't get created because of the failed readiness probe? There is also no Ingress to be found. I tried different variations, both with my custom values and with no values supplied at all.
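A side note on the last event: "WaitForFirstConsumer" is usually not an error. It comes from the StorageClass's volumeBindingMode (the k3s default local-path provisioner ships with WaitForFirstConsumer), so the PVC only binds once a pod that mounts it is actually scheduled. A minimal sketch of such a StorageClass, assuming the k3s local-path defaults; verify against your cluster with kubectl get storageclass:

# Sketch of a StorageClass with delayed binding, roughly what k3s's
# local-path provisioner installs (names are illustrative).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer   # PVC binds only after a consuming pod is scheduled
reclaimPolicy: Delete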

---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: n8n
  namespace: n8n
spec:
  chart:
    spec:
      chart: n8n
      reconcileStrategy: ChartVersion
      sourceRef:
        kind: HelmRepository
        name: open-8gears
      version: 0.13.0
  interval: 10m0s
  values:
    config:
      #executions:
      #  process:                  # In what process workflows should be executed - possible values [main, own] - default: own
      #  timeout:                  # Max run time (seconds) before stopping the workflow execution - default: -1
      #  maxTimeout:               # Max execution time (seconds) that can be set for a workflow individually - default: 3600
      #  saveDataOnError:          # What workflow execution data to save on error - possible values [all, none] - default: all
      #  saveDataOnSuccess:        # What workflow execution data to save on success - possible values [all, none] - default: all
      #  saveDataManualExecutions: # Save data of executions when started manually via editor - default: false
      #  pruneData:                # Delete data of past executions on a rolling basis - default: false
      #  pruneDataMaxAge:          # How old (hours) the execution data has to be to get deleted - default: 336
      #  pruneDataTimeout:         # Timeout (seconds) after execution data has been pruned - default: 3600
      generic:
        timezone: Europe/Vienna    # The timezone to use - default: America/New_York
      #path:                       # Path n8n is deployed to - default: "/"
      host: n8n.custom.domain      # Host name n8n can be reached on - default: localhost
      port: 80                     # HTTP port n8n can be reached on - default: 5678
    ingress:
      enabled: true
      className: traefik
      tls:
       - hosts:
           - n8n.custom.domain
         secretName: custom-tls
      hosts:
        - host: n8n.custom.domain
          paths:
            - path: /
    persistence:
      enabled: true
      accessModes:
        - ReadWriteOnce
      size: 2Gi
    readinessProbe:
      httpGet:
        path: /healthz
        port: http
      initialDelaySeconds: 120
      # periodSeconds: 10
      timeoutSeconds: 30
      # failureThreshold: 6
      # successThreshold: 1
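Note on the probe block: in the Kubernetes Probe API, path and port belong under httpGet, while initialDelaySeconds, periodSeconds, timeoutSeconds, failureThreshold, and successThreshold sit one level up on the probe itself. A minimal sketch of how such a probe looks in a rendered container spec (standard Kubernetes shape, not copied from this chart's templates):

# Illustrative container snippet; field placement follows the upstream
# Kubernetes Probe/HTTPGetAction API, not this chart's own template.
containers:
  - name: n8n
    image: n8nio/n8n:1.7.1
    ports:
      - name: http              # the probe's `port: http` refers to this name
        containerPort: 5678
    readinessProbe:
      httpGet:
        path: /healthz          # n8n's health endpoint
        port: http
      initialDelaySeconds: 120  # give startup and migrations time to finish
      timeoutSeconds: 30
      failureThreshold: 6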
Brandl commented 1 year ago

It's especially frustrating, since the pod seems to be running just fine:

# kubectl get pods -n n8n                 
NAME                   READY   STATUS    RESTARTS   AGE
n8n-6ffb8d685d-9ggjh   1/1     Running   0          11m

# kubectl logs n8n-6ffb8d685d-9ggjh -n n8n
Loading config overwrites [ '/n8n-config/config.json' ]
UserSettings were generated and saved to: /home/node/.n8n/config
n8n ready on 0.0.0.0, port 5678
Migrations in progress, please do NOT stop the process.
Initializing n8n process
Version: 1.7.1

Editor is now accessible via:
http://localhost:5678/
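Worth noting: the pod showing 1/1 means the readiness probe did eventually pass; the events above only captured early failures from the window before the HTTP server was accepting connections. If that startup window is long (for example during migrations), a startupProbe is the usual way to hold readiness and liveness checks off until the app first responds; whether this chart exposes one through its values is an assumption to check against its values.yaml. A minimal container-spec sketch:

# Standard Kubernetes startupProbe shape (container spec level); whether the
# chart lets you set this via values is not confirmed here.
startupProbe:
  httpGet:
    path: /healthz
    port: http
  periodSeconds: 10
  failureThreshold: 30   # tolerate up to ~5 minutes of startup/migrations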
Brandl commented 11 months ago

I solved this by realising that the syntax for ingress in this Helm chart is different from the one I'm used to:

  hosts:
    - host: n8n.custom.domain
      paths:
        - "/"