abctaylor opened 6 months ago
Remember to start/deploy Postgres first.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: linkwarden
  namespace: linkwarden
  labels:
    app.kubernetes.io/name: linkwarden
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linkwarden
  template:
    metadata:
      labels:
        app: linkwarden
      annotations: # only needed when deploying with diun
        diun.enable: "true"
        diun.notify_on: "new;update"
    spec:
      containers:
        - name: linkwarden
          image: ghcr.io/linkwarden/linkwarden:latest
          imagePullPolicy: Always
          resources:
            limits:
              memory: "1Gi" # without a limit it will consume memory until your node runs out, since Kubernetes advises against swap
          env:
            - name: DATABASE_URL
              value: "postgresql://postgres:postgres-password@<your-ip>:5432/linkwarden" # point to your Postgres instance
            - name: TZ
              value: Europe/Berlin # your own timezone, of course
            - name: NEXTAUTH_SECRET
              value: super-secret-placeholder
          volumeMounts:
            - name: linkwarden-data-pv
              mountPath: /data/data
          ports:
            - containerPort: 3000
              name: http
      volumes:
        - name: linkwarden-data-pv
          persistentVolumeClaim:
            claimName: linkwarden-data-pvc
```
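Note that `DATABASE_URL` and `NEXTAUTH_SECRET` sit in plaintext in the pod spec. A Kubernetes Secret could hold them instead (a sketch; the Secret name `linkwarden-secrets` is my own, not from the manifest above):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: linkwarden-secrets # hypothetical name
  namespace: linkwarden
type: Opaque
stringData: # stringData lets you write plain values; the API server base64-encodes them
  DATABASE_URL: "postgresql://postgres:postgres-password@<your-ip>:5432/linkwarden"
  NEXTAUTH_SECRET: super-secret-placeholder
```

The plain `value:` entries in the Deployment would then become `valueFrom.secretKeyRef` entries, like the split web/worker manifests further down already use.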
I'm using Longhorn, so change the storage class accordingly:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: linkwarden-data-pvc
  namespace: linkwarden
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: longhorn-durable
  resources:
    requests:
      storage: 5Gi
```
A Service for the Deployment:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: linkwarden
  namespace: linkwarden
spec:
  type: ClusterIP
  ports:
    - port: 3000
      targetPort: 3000
  selector:
    app: linkwarden
```
An IngressRoute with Traefik:
```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: linkwarden
  namespace: linkwarden
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`linkwarden.yourdomain.com`)
      services:
        - name: linkwarden
          port: 3000
  tls:
    certResolver: ionos # use your own cert resolver
```
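If you don't run Traefik, a standard `networking.k8s.io/v1` Ingress pointing at the same Service should work as well (a sketch; the ingress class and TLS secret name are assumptions for your own setup):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: linkwarden
  namespace: linkwarden
spec:
  ingressClassName: nginx # assumption: whatever ingress controller you run
  tls:
    - hosts:
        - linkwarden.yourdomain.com
      secretName: linkwarden-tls # assumption: e.g. issued by cert-manager
  rules:
    - host: linkwarden.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: linkwarden # the ClusterIP Service defined above
                port:
                  number: 3000
```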
Postgres: a simple Deployment with a default database called linkwarden.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: postgres
  labels:
    app.kubernetes.io/name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
      annotations: # also only needed when diun is used
        diun.enable: "true"
        diun.notify_on: "new;update"
    spec:
      containers:
        - name: postgres
          image: postgres:16.4-bookworm
          imagePullPolicy: Always
          env:
            - name: TZ
              value: Europe/Berlin
            - name: POSTGRES_DB
              value: linkwarden
            - name: POSTGRES_PASSWORD
              value: "postgres-password" # change this here and in linkwarden
          volumeMounts:
            - name: postgres-data-pv
              mountPath: /var/lib/postgresql
          ports:
            - containerPort: 5432
              name: psql
          readinessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432
            periodSeconds: 30
            initialDelaySeconds: 5
            failureThreshold: 6
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432
            periodSeconds: 30
      volumes:
        - name: postgres-data-pv
          persistentVolumeClaim:
            claimName: postgres-data-pvc
```
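One thing to watch with the volume mount: the official postgres image keeps its data under `PGDATA` (default `/var/lib/postgresql/data`). Mounting at `/var/lib/postgresql` as above puts the data directory inside the volume, but if you ever mount a block volume directly at `/var/lib/postgresql/data`, initdb can refuse to run because the filesystem's `lost+found` folder makes the directory non-empty. A common workaround (a sketch, not part of the manifest above) is to point `PGDATA` at a subdirectory:

```yaml
env:
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata # subdirectory keeps initdb away from lost+found
```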
Also backed by Longhorn storage:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data-pvc
  namespace: postgres
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: longhorn-durable
  resources:
    requests:
      storage: 5Gi
```
A Service with an external IP (an address on your host network, within the range of your MetalLB deployment):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: postgres
spec:
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    app: postgres
  type: LoadBalancer
  loadBalancerIP: XXX.XXX.XXX.XXX
```
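Putting the above together, the only ordering that matters is the one already noted at the top: Postgres should be up before Linkwarden starts. A sketch of the apply sequence (the filenames are my own assumptions):

```shell
kubectl create namespace postgres
kubectl create namespace linkwarden
# Postgres first, as noted above
kubectl apply -f postgres-pvc.yaml -f postgres-deployment.yaml -f postgres-service.yaml
kubectl wait --for=condition=Available deployment/postgres -n postgres --timeout=120s
# then Linkwarden
kubectl apply -f linkwarden-pvc.yaml -f linkwarden-deployment.yaml \
  -f linkwarden-service.yaml -f linkwarden-ingressroute.yaml
```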
Looking at the code for the worker, I'm thinking having multiple replicas for high availability may be a bad idea?
Agreed. That'll probably want a ReadWriteMany storageClass which will be a pain for most Linkwarden users (homelabbers not corps).
I was more worried about the workers conflicting with each other. Looks like you can override the container's command to only run the worker and run the web service separately; then the web service can be replicated. Still would need RWX support on the volume, though.
I've got a setup where the web service is separate from the worker. It worked out nicely too: the worker kept crashing with less than 4 GB of RAM, but since they are separate, the web service stayed up.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: linkwarden-web
  namespace: linkwarden
  labels:
    app.kubernetes.io/name: linkwarden-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: linkwarden-web
  template:
    metadata:
      labels:
        app: linkwarden-web
    spec:
      containers:
        - name: linkwarden
          image: ghcr.io/linkwarden/linkwarden:v2.7.1
          imagePullPolicy: IfNotPresent
          command: ["/bin/bash", "-c"]
          args:
            - |
              yarn run next start
          resources:
            limits:
              memory: "1Gi"
            requests:
              memory: "100Mi"
          env:
            - name: NEXT_PUBLIC_DISABLE_REGISTRATION
              value: "true"
            - name: NEXT_PUBLIC_CREDENTIALS_ENABLED
              value: "false"
            - name: NEXTAUTH_URL
              value: "https://linkwarden.spgrn.com/api/v1/auth"
            - name: NEXTAUTH_SECRET
              valueFrom:
                secretKeyRef:
                  name: next-secret
                  key: secret
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: postgres-app
                  key: uri
            - name: NEXT_PUBLIC_KEYCLOAK_ENABLED
              value: "true"
            - name: KEYCLOAK_ISSUER
              valueFrom:
                secretKeyRef:
                  name: oauth-secret
                  key: issuer
            - name: KEYCLOAK_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: oauth-secret
                  key: clientId
            - name: KEYCLOAK_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: oauth-secret
                  key: clientSecret
            - name: TZ
              value: America/New_York
          volumeMounts:
            - name: linkwarden-data
              mountPath: /data/data
          ports:
            - containerPort: 3000
              name: http
      volumes:
        - name: linkwarden-data
          persistentVolumeClaim:
            claimName: linkwarden-data
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: linkwarden-worker
  namespace: linkwarden
  labels:
    app.kubernetes.io/name: linkwarden-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linkwarden-worker
  template:
    metadata:
      labels:
        app: linkwarden-worker
    spec:
      containers:
        - name: linkwarden
          image: ghcr.io/linkwarden/linkwarden:v2.7.1
          imagePullPolicy: IfNotPresent
          command: ["/bin/bash", "-c"]
          args:
            - |
              yarn prisma migrate deploy
              yarn run worker:prod
          resources:
            limits:
              memory: "4Gi"
            requests:
              memory: "100Mi"
          env:
            - name: NEXT_PUBLIC_DISABLE_REGISTRATION
              value: "true"
            - name: NEXT_PUBLIC_CREDENTIALS_ENABLED
              value: "false"
            - name: NEXTAUTH_URL
              value: "https://linkwarden.spgrn.com/api/v1/auth"
            - name: NEXTAUTH_SECRET
              valueFrom:
                secretKeyRef:
                  name: next-secret
                  key: secret
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: postgres-app
                  key: uri
            - name: NEXT_PUBLIC_KEYCLOAK_ENABLED
              value: "true"
            - name: KEYCLOAK_ISSUER
              valueFrom:
                secretKeyRef:
                  name: oauth-secret
                  key: issuer
            - name: KEYCLOAK_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: oauth-secret
                  key: clientId
            - name: KEYCLOAK_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: oauth-secret
                  key: clientSecret
            - name: TZ
              value: America/New_York
          volumeMounts:
            - name: linkwarden-data
              mountPath: /data/data
      volumes:
        - name: linkwarden-data
          persistentVolumeClaim:
            claimName: linkwarden-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: linkwarden-data
  namespace: linkwarden
spec:
  storageClassName: ceph-filesystem
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```
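The Secrets referenced via secretKeyRef (`next-secret`, `postgres-app`, `oauth-secret`) have to exist before these Deployments start. As a sketch, `next-secret` could be created like this; `postgres-app` (key `uri`) and `oauth-secret` (the Keycloak values) depend on your own database and Keycloak setup:

```shell
# Create the NextAuth secret referenced above, with a random 32-byte value
kubectl -n linkwarden create secret generic next-secret \
  --from-literal=secret="$(openssl rand -base64 32)"
```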
Hi - could I propose that a Helm chart/simple config for standing this up in Kubernetes be created?