brujoand closed this issue 3 years ago
Here is what my current PV/PVC looks like for passing to `extraVolumes`. I'm not sure how I would use the persistence section of the Helm chart to achieve the same result. I want my media volume to map to an external disk. How are others doing this?
```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: frigate
spec:
  capacity:
    storage: 900Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/data/frigate
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k3os-7155
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: frigate
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 900Gi
  volumeName: "frigate"
```

```yaml
extraVolumes:
  - name: frigate
    persistentVolumeClaim:
      claimName: frigate
extraVolumeMounts:
  - mountPath: /media/frigate
    name: frigate
```
From what I understand, all you would need is this:

```yaml
persistence:
  media:
    enabled: true
    existingClaim: frigate
```

given that you keep the existing PV and PVC as is.
Edit: Ah, but you're mounting to `/media/frigate`, though. Not sure if that could be fixed without copy/pasting in a live container.
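One possible workaround, assuming the chart mounts the media claim at `/media`: keep the existing PV/PVC and `extraVolumes`, but change the `mountPath` so the claim lands where the chart expects it (this is a sketch, not something from the chart itself; note Frigate would then write to the root of the volume rather than a `frigate/` subdirectory):

```yaml
extraVolumes:
  - name: frigate
    persistentVolumeClaim:
      claimName: frigate
extraVolumeMounts:
  - mountPath: /media   # was /media/frigate; existing files shift up one level
    name: frigate
```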
I'm fine with keeping `extraVolumes` and disabling persistence. I'm wondering if that part of the chart is useful for others.
Personally, my media mount is an existing claim on an NFS share with a ton of storage:

```yaml
extraVolumes:
  - name: media
    persistentVolumeClaim:
      claimName: nfs-media-pvc
extraVolumeMounts:
  - mountPath: /media
    name: media
    subPath: Videos
```
I leverage the `data` persistent storage only for the database, because it can't run properly on NFS (which is where my media lives). This is set via the Frigate configuration:

```yaml
database:
  path: /data/frigate.db
```
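Putting those pieces together, a values sketch for this kind of setup might look like the following (assuming the chart's `persistence.data` block works alongside `extraVolumes`; `nfs-media-pvc` is the claim name from my snippet above):

```yaml
persistence:
  data:
    enabled: true              # small local PVC, holds only the SQLite database

extraVolumes:
  - name: media
    persistentVolumeClaim:
      claimName: nfs-media-pvc # large NFS-backed claim for recordings
extraVolumeMounts:
  - mountPath: /media
    name: media
    subPath: Videos
```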
Complete chart config is here: https://github.com/billimek/k8s-gitops/blob/master/default/frigate/frigate.yaml
I do think that we need to address the missing `/media` mountpoint, and I think this change is necessary and good. There will be a gap for anyone using NFS due to the SQLite database, but maybe that can be addressed differently.
I'm also curious if we need to do anything special for the `/tmp/cache` directory. I currently handle this by adding a volume and volume mount that map it to a memory-backed emptyDir instead of ephemeral disk on the host:

```yaml
extraVolumes:
  - name: cache
    emptyDir:
      medium: Memory
extraVolumeMounts:
  - mountPath: /tmp/cache
    name: cache
```
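One caveat with `medium: Memory` is that the tmpfs counts against the container's memory usage, so an unbounded cache can push the pod toward its memory limit. Kubernetes supports a `sizeLimit` field on emptyDir to cap this; the `1Gi` below is just an illustrative number, not a recommendation from this thread:

```yaml
extraVolumes:
  - name: cache
    emptyDir:
      medium: Memory
      sizeLimit: 1Gi   # illustrative cap; tune to your cache needs
```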
Yeah, I agree with @billimek on this, I think. An actual volume for the DB is good, and in many cases necessary, but for storing media an existing volume claim might be the way to go, at least if you plan to store a lot and for longer periods. I think the optimal approach here is to keep `/data` but also allow users to specify where the media output should go. That way you get some of the convenience of adding a volume, but you're not restricted as to where the data is stored. WDYT?
Personally, I would lean towards directing people to use `extraVolumes` in the readme, since there are many different use cases and external storage setups.
Yeah that also works. In any case, I'll close this as it doesn't really solve anything.
I still think it would be nice to have a flag to enable media for the persistence block. I'll PR it if you want to reopen.
This changes the data PVC and mountpoint to media, ref #6. I've bumped the major version of the chart as this would be a breaking change.
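For anyone upgrading across that major version, the values change would presumably look something like this (key names inferred from the discussion above; check the released chart for the exact schema):

```yaml
# before the rename
persistence:
  data:
    enabled: true
    existingClaim: frigate

# after the rename
persistence:
  media:
    enabled: true
    existingClaim: frigate
```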