8gears / n8n-helm-chart

A Kubernetes Helm chart for n8n, a workflow automation tool. Easily automate tasks across different services.
https://artifacthub.io/packages/helm/open-8gears/n8n
Apache License 2.0

Error installing helm chart: deployment.yaml wrong type for value #6

Closed. davinerd closed this issue 1 year ago.

davinerd commented 3 years ago

Hello, I was trying to install the helm chart and got the following error:

$  helm -f values.yaml install n8n 8gears/n8n
Error: template: n8n/templates/deployment.yaml:43:35: executing "n8n/templates/deployment.yaml" at <.Values.config>: wrong type for value; expected map[string]interface {}; got interface {}

The error refers to this line of code: https://github.com/8gears/n8n-helm-chart/blob/master/templates/deployment.yaml#L43

I'm not setting the port in my values.yaml, so it's left commented out.

While I investigate a fix (and perhaps submit a PR), I thought I'd report it here first.

Happy to provide additional info if needed.
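
For reference, the chart line the error points at presumably contains something like the following (my reconstruction based on this thread, not the actual source):

# templates/deployment.yaml, around line 43 (reconstructed):
ports:
  - name: http
    containerPort: {{ get .Values.config "port" | default 5678 }}
    protocol: TCP

When config: is absent or fully commented out, .Values.config evaluates to nil rather than a map, and sprig's get then fails with exactly this "wrong type for value; expected map[string]interface {}" message.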

Vad1mo commented 3 years ago

What does your values file look like?

davinerd commented 3 years ago

Here we go:

security:
#  excludeEndpoints: # Additional endpoints to exclude auth checks. Multiple endpoints can be separated by colon - default: ''
  basicAuth:
    active: true    # If basic auth should be activated for editor and REST-API - default: false
    user: ABC      # The name of the basic auth user - default: ''
    password: XXXXX   # The password of the basic auth user - default: ''
    hash: true      # If password for basic auth is hashed - default: false
extraEnv: {}
# Set this if running behind a reverse proxy and the external port is different from the port n8n runs on
#   WEBHOOK_TUNNEL_URL: "https://n8n.myhost.com/"

persistence:
  ## If true, use a Persistent Volume Claim, If false, use emptyDir
  ##
  enabled: true
  type: dynamic # volume type; possible options are [existing, emptyDir, dynamic]: dynamic for Dynamic Volume Provisioning, existing for using an existing claim
  storageClass: "standard"
  ## PVC annotations
  ##
  annotations:
    helm.sh/resource-policy: keep
  ## Persistent Volume Access Mode
  ##
  accessModes:
    - ReadWriteOnce
  ## Persistent Volume size
  ##
  size: 10Gi
  ## Use an existing PVC
  ##
  # existingClaim:

replicaCount: 1

image:
  repository: n8nio/n8n
  pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext:
  {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  #hosts:
  #  - host: chart-example.local
  #    paths: []
  #tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources:
  {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}

Vad1mo commented 3 years ago

What is your Helm version?

davinerd commented 3 years ago

➜  ~ helm version
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}

davinerd commented 3 years ago

I played around a bit and found that the workaround is to hardcode the port and containerPort values. Not even casting the value to an int helped.

I'm sure there is a more elegant way to solve this, isn't there?
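
One option that looks cleaner (an untested sketch, assuming the template reads the port via sprig's get on .Values.config) would be to fall back to an empty dict so get never receives nil:

# templates/deployment.yaml (sketch): guard against config being unset or commented out
{{- $config := .Values.config | default dict }}
ports:
  - name: http
    containerPort: {{ get $config "port" | default 5678 }}
    protocol: TCP

With that guard, a missing config block renders the default 5678 instead of aborting the whole template.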

danicano10 commented 3 years ago

Hello @davinerd!

I have tried to implement the solution that you mention, but it didn't work for me. Can you send me your deployment.yaml file?

davinerd commented 3 years ago

Hi @danicano10, here is my deployment.yaml. Take into account that I've edited it to add more features not present in this repo:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "n8n.fullname" . }}
  labels:
    {{- include "n8n.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "n8n.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      annotations:
        checksum/config: {{ print .Values | sha256sum }}
        {{- with .Values.podAnnotations }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
      labels:
        {{- include "n8n.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "n8n.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            {{- range $key, $value := .Values.extraEnv }}
            - name: {{ $key }}
              value: {{ $value | quote }}
            {{- end }}
            - name: "N8N_PORT" #! we better set the port once again as ENV Var, see: https://community.n8n.io/t/default-config-is-not-set-or-the-port-to-be-more-precise/3158/3?u=vad1mo
              value: {{ .Values.port | default "5678" | quote }}
            - name: N8N_BASIC_AUTH_ACTIVE
              value: {{ .Values.security.basicAuth.active | default "false" | quote }}
            - name: N8N_BASIC_AUTH_USER
              value: {{ .Values.security.basicAuth.user | default "n8n" | quote }}
            - name: N8N_BASIC_AUTH_PASSWORD
              value: {{ .Values.security.basicAuth.password | default "n8n" | quote }}
            {{- if or .Values.secret .Values.n8n.encryption_key }}
            - name: "N8N_ENCRYPTION_KEY"
              valueFrom:
                secretKeyRef:
                  key: N8N_ENCRYPTION_KEY
                  name: {{ include "n8n.fullname" . }}
            {{- end }}
            {{- if or .Values.config .Values.secret }}
            - name: "N8N_CONFIG_FILES"
              value: {{ include "n8n.configFiles" . | quote }}
            {{- end }}
          ports:
            - name: http
              containerPort: 5678
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: http
          readinessProbe:
            httpGet:
              path: /healthz
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: data
              mountPath: /root/.n8n
            {{- if .Values.config }}
            - name: config-volume
              mountPath: /n8n-config
            {{- end }}
            {{- if .Values.secret }}
            - name: secret-volume
              mountPath: /n8n-secret
            {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      volumes:
        - name: "data"
          {{ include "n8n.pvc" . }}
        {{- if .Values.config }}
        - name: config-volume
          configMap:
            name: {{ include "n8n.fullname" . }}
        {{- end }}
        {{- if .Values.secret }}
        - name: secret-volume
          secret:
            secretName: {{ include "n8n.fullname" . }}
            items:
              - key: "secret.json"
                path: "secret.json"
        {{- end }}

Igor992 commented 3 years ago

@davinerd Did you manage to fix this problem?

Maybe consider creating something like the following (picture below).

[screenshot: values snippet with the port set under the config key]

You pointed at the deployment.yaml file, which works fine but expects to find that port under config, i.e. via a get .Values.config "port" lookup. I'm using the helmfile way of deployment, but it's the same if you configure the standard Helm values.yaml file.
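
In plain values.yaml terms that is roughly the following (the key layout is assumed from the get .Values.config "port" lookup, so treat it as a sketch):

config:
  port: 5678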

Use the template rendering that Helm provides to see the resulting manifest.yml:

[screenshot: helm template output showing the rendered manifest]
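
For example (the release name and values file here are just placeholders):

helm template n8n 8gears/n8n -f values.yaml > manifest.yml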

You can also change this port if you want, and it will be propagated properly into the k8s deployment manifest.

[screenshot: the changed port propagated into the rendered deployment manifest]

I hope this helps a bit.

Sincerely, Igor