Open AlejandroUPC opened 1 year ago
What do you mean with "all the data is lost"? I essentially used a configuration with all users and public keys in my values.yaml, which gets injected into a ConfigMap and then mounted into the pod. The state has to be rebuilt at every restart, but besides that we're good.
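A setup like the one described above (users and public keys rendered from values.yaml into a ConfigMap, then mounted into the pod) might look roughly like this; the ConfigMap name, key names, and mount path are illustrative assumptions, not taken from the actual chart:

```yaml
# Hypothetical sketch: public-key data kept in a ConfigMap and
# mounted read-only into the SFTPGo pod. Names and paths here
# are assumptions for illustration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sftpgo-users
data:
  alice_authorized_keys: |
    ssh-ed25519 AAAA... alice@example.com
---
# Corresponding fragment of the pod spec:
# volumes:
#   - name: users
#     configMap:
#       name: sftpgo-users
# volumeMounts:
#   - name: users
#     mountPath: /etc/sftpgo/users
#     readOnly: true
```

Since the ConfigMap is re-applied on every deploy, this keeps the configuration reproducible, but anything created at runtime (and not in values.yaml) is still lost on restart.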
Correct me if I'm wrong, but the server SSH keys and server config (admins, users, groups, keys, etc.) are maintained in its database: /var/lib/sftpgo/sftpgo.db
With every pod restart this server config is now lost, because sftpgo.db is not preserved on a PersistentVolume. Putting everything in a ConfigMap is a static, unwanted solution...
Proposal: add a PersistentVolumeClaim to the sftpgo Helm chart and mount it on the /var/lib/sftpgo directory. With every pod restart the state is then preserved!
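A minimal sketch of what the proposed chart change could render, assuming a PVC named `sftpgo-data` and the default ReadWriteOnce access mode; the names and the 1Gi size are placeholders, not chart defaults:

```yaml
# Hypothetical sketch of the proposal: a PVC backing the SFTPGo
# data directory so sftpgo.db survives pod restarts. Names and
# storage size are illustrative assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sftpgo-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
# Corresponding fragment of the Deployment's pod template:
# volumes:
#   - name: data
#     persistentVolumeClaim:
#       claimName: sftpgo-data
# containers:
#   - name: sftpgo
#     volumeMounts:
#       - name: data
#         mountPath: /var/lib/sftpgo
```

Note that with a ReadWriteOnce PVC the deployment can only run a single replica per volume; a StatefulSet or a ReadWriteMany storage class would be needed for more.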
A workaround for the time being would probably be to create a PVC before deploying the Helm chart and mount it to /var/lib/sftpgo via the values.yaml options `volumes: []` and `volumeMounts: []`.
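Assuming the chart passes `volumes` and `volumeMounts` straight through to the pod spec, the workaround could look like this in values.yaml; the PVC name `sftpgo-data` is an assumption and must match the pre-created claim:

```yaml
# Hypothetical values.yaml fragment for the workaround: reference a
# pre-created PVC via the chart's volumes / volumeMounts options.
# The claim name "sftpgo-data" is an assumption.
volumes:
  - name: data
    persistentVolumeClaim:
      claimName: sftpgo-data
volumeMounts:
  - name: data
    mountPath: /var/lib/sftpgo
```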
Same issue here, any news on this?
Hi,
I am wondering what approach is being used around here, since when deploying this to a Kubernetes cluster, all the data is lost on pod restart unless additional steps are taken.
Other than sftpgo.db and sftpgo.json, are there any other stateful files to be kept?
I'm also interested in the community's approaches: are you using a PVC, or storing the data in a database on another pod? To me the latter would make more sense, using a proper database to capture the state within the Helm deployment.