Closed · davinerd closed this issue 1 year ago
What does your values file look like?
Here we go:
```yaml
security:
  # excludeEndpoints: # Additional endpoints to exclude auth checks. Multiple endpoints can be separated by colon - default: ''
  basicAuth:
    active: true # If basic auth should be activated for editor and REST-API - default: false
    user: ABC # The name of the basic auth user - default: ''
    password: XXXXX # The password of the basic auth user - default: ''
    hash: true # If password for basic auth is hashed - default: false

extraEnv: {}
  # Set this if running behind a reverse proxy and the external port is different from the port n8n runs on
  # WEBHOOK_TUNNEL_URL: "https://n8n.myhost.com/"

persistence:
  ## If true, use a Persistent Volume Claim; if false, use emptyDir
  ##
  enabled: true
  type: dynamic # volume type; possible options are [existing, emptyDir, dynamic] - dynamic for Dynamic Volume Provisioning, existing for using an existing claim
  storageClass: "standard"
  ## PVC annotations
  ##
  annotations:
    helm.sh/resource-policy: keep
  ## Persistent Volume Access Mode
  ##
  accessModes:
    - ReadWriteOnce
  ## Persistent Volume size
  ##
  size: 10Gi
  ## Use an existing PVC
  ##
  # existingClaim:

replicaCount: 1

image:
  repository: n8n/n8n:latest
  pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #     - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  # hosts:
  #   - host: chart-example.local
  #     paths: []
  # tls: []
  #   - secretName: chart-example-tls
  #     hosts:
  #       - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}
```
What is your Helm version?

```shell
➜ ~ helm version
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}
```
I played around a bit and found that the workaround is to hardcode the `port` and `containerPort` values. Not even casting to an int helped.
I'm sure there is a more elegant way to solve this, isn't there?
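For context, the hardcoded workaround would look roughly like this in `templates/deployment.yaml` (a sketch; I'm assuming the failing line looked up a values key such as `.Values.config.port`, since that's what the discussion below points at):

```yaml
# templates/deployment.yaml -- sketch of the workaround:
# the templated port lookup is replaced with the literal n8n default.
ports:
  - name: http
    containerPort: 5678   # was something like {{ .Values.config.port }}, which fails when the key is unset
    protocol: TCP
```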
Hello @davinerd!
I have tried to implement the solution you mention, but it didn't work for me. Can you send me your deployment.yaml file?
Hi @danicano10,
Here is my deployment.yaml. Take into account that I've edited it to add features not present in this repo:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "n8n.fullname" . }}
  labels:
    {{- include "n8n.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "n8n.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      annotations:
        checksum/config: {{ print .Values | sha256sum }}
        {{- with .Values.podAnnotations }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
      labels:
        {{- include "n8n.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "n8n.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            {{- range $key, $value := .Values.extraEnv }}
            - name: {{ $key }}
              value: {{ $value | quote }}
            {{- end }}
            - name: "N8N_PORT" #! we better set the port once again as ENV Var, see: https://community.n8n.io/t/default-config-is-not-set-or-the-port-to-be-more-precise/3158/3?u=vad1mo
              value: {{ .Values.port | default "5678" | quote }}
            - name: N8N_BASIC_AUTH_ACTIVE
              value: {{ .Values.security.basicAuth.active | default "false" | quote }}
            - name: N8N_BASIC_AUTH_USER
              value: {{ .Values.security.basicAuth.user | default "n8n" | quote }}
            - name: N8N_BASIC_AUTH_PASSWORD
              value: {{ .Values.security.basicAuth.password | default "n8n" | quote }}
            {{- if or .Values.secret .Values.n8n.encryption_key }}
            - name: "N8N_ENCRYPTION_KEY"
              valueFrom:
                secretKeyRef:
                  key: N8N_ENCRYPTION_KEY
                  name: {{ include "n8n.fullname" . }}
            {{- end }}
            {{- if or .Values.config .Values.secret }}
            - name: "N8N_CONFIG_FILES"
              value: {{ include "n8n.configFiles" . | quote }}
            {{- end }}
          ports:
            - name: http
              containerPort: 5678
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: http
          readinessProbe:
            httpGet:
              path: /healthz
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: data
              mountPath: /root/.n8n
            {{- if .Values.config }}
            - name: config-volume
              mountPath: /n8n-config
            {{- end }}
            {{- if .Values.secret }}
            - name: secret-volume
              mountPath: /n8n-secret
            {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      volumes:
        - name: "data"
          {{ include "n8n.pvc" . }}
        {{- if .Values.config }}
        - name: config-volume
          configMap:
            name: {{ include "n8n.fullname" . }}
        {{- end }}
        {{- if .Values.secret }}
        - name: secret-volume
          secret:
            secretName: {{ include "n8n.fullname" . }}
            items:
              - key: "secret.json"
                path: "secret.json"
        {{- end }}
```
@davinerd Did you manage to fix this problem?
Maybe consider creating something like the following (picture below; screenshot not reproduced here).
You pointed at the deployment.yaml file, which works fine and expects to find the port under `config`, i.e. something like `get .Values.config "port"`. I'm using the helmfile way of deployment, but it's the same if you configure the standard Helm values.yaml file.
Use the template rendering that Helm provides to see the resulting manifest.yaml.
You can also change this port if you want, and it will be propagated properly into the k8s deployment manifest file.
I hope this helps a bit.
Sincerely, Igor
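Concretely, the values shape Igor describes would look something like this (a sketch; the key name is inferred from the chart's `get .Values.config "port"` lookup, not from the missing screenshot):

```yaml
# values.yaml (or a helmfile `values:` block) -- illustrative sketch
config:
  port: 5678   # picked up by the template's `get .Values.config "port"` lookup
```

Rendering the chart with `helm template`, as suggested above, then shows the resulting `containerPort` in the generated deployment manifest.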
Hello, I was trying to install the Helm chart and got the following error:
The error refers to this line of code: https://github.com/8gears/n8n-helm-chart/blob/master/templates/deployment.yaml#L43
I'm not setting the `port` in my values.yaml, so it's left commented out. While I investigate a fix (and perhaps open a PR), I thought I'd report here first.
Happy to provide additional info if needed.
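One possible fix, sketched under the assumption that the failing template line dereferences `.Values.config.port` directly (which errors in Helm when `config` is unset): default the map before looking up the key.

```yaml
# templates/deployment.yaml -- hypothetical guarded version of the failing line.
# `default dict` substitutes an empty map when .Values.config is unset,
# so the .port lookup can no longer hit a nil pointer.
{{- $config := .Values.config | default dict }}
ports:
  - name: http
    containerPort: {{ $config.port | default 5678 }}
    protocol: TCP
```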