djjudas21 / charts

Collection of Helm charts

Add samples? #32

Open zorbathut opened 1 year ago

zorbathut commented 1 year ago

Is your feature request related to a problem?

I recently tried to get the Joplin Server Helm chart working. I was not successful; I managed to get a whole bunch of 404 errors and not a lot more, despite gradually maneuvering my way through tricky k8s-at-home errors and doing a whole bunch of tweaking.

I eventually gave up and scrapped the Helm chart and just did it in straight k8s.

Describe the solution you'd like.

I don't know if this is intended for end users or if it's just personal. If it's just personal, rock on, you do you :) but if it's intended for end users, a working example deployment would be nice! For Joplin Server specifically, there are a few settings that absolutely need to be changed, and it's unclear how to get the ingress working correctly.

Describe alternatives you've considered.

[none]

Additional context.

Here's the Helmfile I ended up with, which didn't work:

- name: joplin-server
  namespace: joplin-server
  chart: djjudas21/joplin-server
  values:
  - env:
      APP_BASE_URL: https://joplin.myserver.com
  - ingress:
      main:
        enabled: true
        hosts:
        - host: joplin.myserver.com
          paths:
          - path: /
      annotations:
        kubernetes.io/ingress.class: nginx
        cert-manager.io/cluster-issuer: "letsencrypt-prod"
      tls:
        secretName: joplin-server-kubernetes-tls
        hosts:
        - host: joplin.myserver.com
  - postgresql:
      enabled: true

I have no idea how close I was.

djjudas21 commented 1 year ago

Hey, thanks for reporting and sorry to hear you didn't get this working. These charts are supposed to be good enough for other people to use, although it's only me who maintains them.

Let me share with you the exact values that I use to deploy with this chart:

env:
  # -- Set the container timezone
  TZ: Europe/London
  # -- joplin-server base URL
  APP_BASE_URL: https://joplin.myserver.com
  # -- joplin-server listening port (same as Service port)
  APP_PORT: 22300
  # -- Use pg for postgres
  DB_CLIENT: pg
  # -- Postgres DB Host
  POSTGRES_HOST: joplin-server-postgresql
  # -- Postgres DB port
  POSTGRES_PORT:  # 5432
  # -- Postgres DB name
  POSTGRES_DATABASE: joplin
  # -- Postgres DB Username
  POSTGRES_USER: joplin
  # -- Postgres DB password
  POSTGRES_PASSWORD: joplin-pass

controller:
  replicas: 2
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app.kubernetes.io
            operator: In
            values:
            - joplin-server
        topologyKey: kubernetes.io/hostname

resources:
  requests:
    cpu: 10m
    memory: 192Mi

ingress:
  main:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
    ingressClassName: "public"
    hosts:
      - host: joplin.myserver.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: ingress-tls
        hosts:
          - joplin.myserver.com

# -- Enable and configure postgresql database subchart under this key.
#    For more options see [postgresql chart documentation](https://github.com/bitnami/charts/tree/master/bitnami/postgresql)
postgresql:
  enabled: true
  auth:
    postgresPassword: joplin-admin-pass
    username: joplin
    password: joplin-pass
    database: joplin
  primary:
    persistence:
      enabled: true
      retain: true
      storageClass: cstor
      size: 2Gi
    resources:
      limits: {}
      requests:
        memory: 64Mi
        cpu: 10m
  priorityClassName: database

There's a lot of stuff in my example that isn't strictly necessary (like running 2 replicas on different nodes), but I think the key to your problem is probably the environment variables and the Postgres credentials. Can you retry, explicitly setting the POSTGRES_ env vars and the postgresql.auth config?

In the default values.yaml for this chart, those env vars are empty. It's tricky: you do need to set the POSTGRES_ variables manually, so ideally they would have sensible defaults, but you can't assume that an end user wants to deploy Postgres as part of the chart. So it's kind of broken both ways unless you explicitly set some options.
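
For reference, the bare minimum on top of the chart defaults would look something like this (an untested sketch; the hostname and passwords are placeholders, and joplin-server-postgresql assumes your release is named joplin-server):

env:
  APP_BASE_URL: https://joplin.myserver.com  # your public URL
  DB_CLIENT: pg
  # must point at the Service created by the postgresql subchart,
  # which is named <release-name>-postgresql
  POSTGRES_HOST: joplin-server-postgresql
  POSTGRES_DATABASE: joplin
  POSTGRES_USER: joplin
  POSTGRES_PASSWORD: joplin-pass  # placeholder, change me

postgresql:
  enabled: true
  auth:
    # these must match the POSTGRES_ env vars above
    username: joplin
    password: joplin-pass
    database: joplin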

Let me know if you get this working with the extra options, and let me have a think about making this chart work out of the box so it's a better experience for everyone.

zorbathut commented 1 year ago

I pasted that in verbatim, with these changes:

Unfortunately, I still get 404 Not Found.

The Postgres server seems mostly happy, and in fact it looks like it's receiving queries from something, presumably Joplin (and then complaining about them):

[a bunch of generally cheerful first-time startup stuff here]
postgresql 22:53:36.97 INFO  ==> ** PostgreSQL setup finished! **
postgresql 22:53:37.01 INFO  ==> ** Starting PostgreSQL **
2023-02-11 22:53:37.060 GMT [1] LOG:  pgaudit extension initialized
2023-02-11 22:53:37.069 GMT [1] LOG:  starting PostgreSQL 14.4 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2023-02-11 22:53:37.071 GMT [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2023-02-11 22:53:37.071 GMT [1] LOG:  listening on IPv6 address "::", port 5432
2023-02-11 22:53:37.086 GMT [1] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2023-02-11 22:53:37.102 GMT [154] LOG:  database system was shut down at 2023-02-11 22:53:36 GMT
2023-02-11 22:53:37.143 GMT [1] LOG:  database system is ready to accept connections
2023-02-11 22:59:34.566 GMT [717] ERROR:  relation "knex_migrations" does not exist at character 20
2023-02-11 22:59:34.566 GMT [717] STATEMENT:  select "name" from "knex_migrations" order by "id" desc limit $1
2023-02-11 22:59:34.870 GMT [718] ERROR:  relation "knex_migrations" does not exist at character 20
2023-02-11 22:59:34.870 GMT [718] STATEMENT:  select "name" from "knex_migrations" order by "id" desc limit $1

The Joplin servers seem happy:

2023-02-11 22:59:36: App: Call this for testing: `curl https://[redacted]/api/ping`
2023-02-11 22:59:36: ShareService: Maintenance completed in 41ms
2023-02-11 23:00:00: TaskService: Running #2 (Update total sizes) (scheduled)...
2023-02-11 23:00:00: TaskService: Completed #2 (Update total sizes) in 32ms

But shucks if I don't still have a generic 404 Not Found error.

I admit this is out of my knowledge - any idea how to diagnose this? The other services on this cluster, including the working Joplin server that isn't going through Helm, are using a rather bog-standard service/ingress setup, at least as far as I know. I'm happy to keep testing stuff out but I'm a k8s novice; if debugging is worth your time, lemme know what to do :)

For what it's worth, here's the doubtless-extremely-messy k8s file I'm using for my functional server:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: joplin-server
  labels:
    app: joplin-server
spec:
  selector:
    matchLabels:
      app: joplin-server
  template:
    metadata:
      labels:
        app: joplin-server
    spec:
      containers:
      - name: joplin-server
        image: joplin/server:2.10.8-beta
        env:
        - name: APP_BASE_URL
          value: https://joplin.example.com
        - name: APP_PORT
          value: '22300'
        - name: DB_CLIENT
          value: pg
        - name: POSTGRES_USER
          value: joplin
        - name: POSTGRES_PASSWORD
          value: nope
        - name: POSTGRES_DATABASE
          value: joplin
        - name: POSTGRES_PORT
          value: '5432'
        - name: POSTGRES_HOST
          value: joplin-server-postgres
        ports:
        - containerPort: 22300
---
apiVersion: v1
kind: Service
metadata:
  name: joplin-server
spec:
  selector:
    app: joplin-server
  ports:
  - protocol: "TCP"
    port: 22300
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: joplin-server-postgres
  labels:
    app: joplin-server-postgres
spec:
  selector:
    matchLabels:
      app: joplin-server-postgres
  template:
    metadata:
      labels:
        app: joplin-server-postgres
    spec:
      containers:
      - name: joplin-server-postgres
        image: postgres:15.1
        env:
        - name: POSTGRES_USER
          value: joplin
        - name: POSTGRES_PASSWORD
          value: nope
        - name: POSTGRES_DB
          value: joplin
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: joplin-server-mount
          mountPath: /var/lib/postgresql/data
          subPath: postgres
      volumes:
      - name: joplin-server-mount
        persistentVolumeClaim:
          claimName: joplin-server-claim
---
apiVersion: v1
kind: Service
metadata:
  name: joplin-server-postgres
spec:
  selector:
    app: joplin-server-postgres
  ports:
  - protocol: "TCP"
    port: 5432
---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: joplin-server-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - joplin.example.com
    secretName: joplin-server-kubernetes-tls
  rules:
  - host: "joplin.example.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: joplin-server
            port:
              number: 22300
---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: joplin-server-claim
  labels:
    app: joplin-server
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 2Gi

(the volume claim is technically in a different file but I don't think this matters)

djjudas21 commented 1 year ago

Hey, sorry I've taken a few days to come back to you. If you're getting a 404 then one of two things is happening:

  1. The config of the Joplin server app itself is wrong, or you're requesting a path that doesn't exist.
  2. Something is wrong with the Ingress.

Judging by do-block-storage, you're running this on DigitalOcean, yes? As far as I can tell, that's using a standard NGINX Ingress controller (some public clouds do their own funky thing for ingress). Can you please run kubectl get ingressclass and check that the name of your ingress class is the same as you've set in the Ingress resource above? Sometimes it gets called public or something else. If the ingress class says it is the default one (it should be) then you can just omit kubernetes.io/ingress.class: nginx from your config.
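
If the class matches and you still get a 404, something like this should show where the request is getting lost (the resource names are a guess based on your release name):

# does the Ingress have the right host, path and backend?
kubectl describe ingress -n joplin-server

# does the Service actually have endpoints behind it?
kubectl get endpoints -n joplin-server

# bypass the Ingress entirely and talk straight to the Service
kubectl -n joplin-server port-forward svc/joplin-server 22300:22300
curl http://localhost:22300/api/ping

If the port-forward works but going via the hostname doesn't, the problem is in the Ingress; if neither works, it's the app or the Service config.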

Without seeing your actual cluster, it's hard to be more helpful than this ☹️

zorbathut commented 1 year ago

My turn to apologize for the delay; I'm going through Employment Adventures (tm) and nobody enjoys those.

Yep, it's on DigitalOcean. I tried kubectl get ingressclass and it returned nginx, so I guess that line was redundant. Unfortunately it still doesn't work.

This is probably not worth spending a bunch of time on; as mentioned, I do have Joplin working, just via manual Kubernetes descriptors instead of Helm. Helm is cool! I like Helm! But boy howdy is it hard to debug :V

I'd be happy to keep working on it because I'm learning useful stuff about Kubernetes, but I also know "debugging via some guy who doesn't know what he's doing" is not a great experience. So don't worry about it :) Maybe in a few months someone will come along with the same problem and the stuff you did here will prove useful!

Your call on what to do with the issue; if you want to leave it open as a reference, go for it, but I won't be offended if you close it.