meilisearch / meilisearch-kubernetes

Meilisearch on Kubernetes Helm charts and manifests
https://www.meilisearch.com

PVC gets deleted even when persistence is enabled #24

Open · deshetti opened 4 years ago

deshetti commented 4 years ago

I installed the helm chart with persistence.enabled: true, and when I uninstalled the helm chart I saw that the PVC was deleted. I would assume that the PVC would not be deleted when I uninstall the chart in persistence mode; it should only be deleted manually.

eskombro commented 4 years ago

Interesting, this deserves a little further investigation, as I see developers here reporting quite the opposite issue:

> When Helm installs a chart that includes a StatefulSet which uses volumeClaimTemplates to generate new PVCs for each replica created, Helm loses control of those PVCs.

Any insights, @renehernandez? :)

renehernandez commented 4 years ago

I think the issue is that we are defining a PersistentVolumeClaim object at https://github.com/meilisearch/meilisearch-kubernetes/blob/master/charts/meilisearch/templates/pvc.yaml and mounting it in the StatefulSet definition, instead of relying on volumeClaimTemplates to define the PVCs automatically.

My suspicion is that since we define the PVC object ourselves, it is managed by Helm and therefore gets deleted during delete/uninstall operations. If the PVC were instead declared through volumeClaimTemplates in the StatefulSet, Helm wouldn't track the generated PVC as a release resource, and it would follow the usual Kubernetes behavior.
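
For illustration, a minimal sketch of the volumeClaimTemplates approach (a generic StatefulSet outline, not the chart's actual template; the mount path follows the one discussed later in this thread):

```yaml
# Sketch: storage declared via volumeClaimTemplates instead of a chart-managed PVC.
# Kubernetes creates one PVC per replica and does not delete them with the
# StatefulSet, so Helm never tracks them as release resources.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: meilisearch
spec:
  serviceName: meilisearch
  replicas: 1
  selector:
    matchLabels:
      app: meilisearch
  template:
    metadata:
      labels:
        app: meilisearch
    spec:
      containers:
        - name: meilisearch
          image: getmeili/meilisearch
          volumeMounts:
            - name: data
              mountPath: /meili_data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```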

renehernandez commented 4 years ago

We could either move the PVC declaration into a volumeClaimTemplates section, or add the helm.sh/resource-policy: keep annotation to the PVC template so it isn't deleted.
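
The annotation route would be a small change to the chart's pvc.yaml; a sketch of what the rendered object could look like (metadata values are illustrative):

```yaml
# Sketch: the PVC the chart renders, with a resource policy that tells Helm
# to leave the object in place when the release is deleted or uninstalled.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: meilisearch
  annotations:
    helm.sh/resource-policy: keep
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```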

veneliniliev commented 2 years ago

I have the same problem. After updating the cluster, the PVC is deleted and re-created.

alallema commented 2 years ago

Hi @veneliniliev, thanks for raising this issue. Indeed, it seems this hasn't been fixed. Can I ask how you proceeded to update your cluster? Did you uninstall and reinstall the Helm chart?

alallema commented 2 years ago

I'm closing this issue due to inactivity; the problem no longer seems to be present. Feel free to reopen it if it is still there.

veneliniliev commented 2 years ago

> I'm closing this issue due to inactivity; the problem no longer seems to be present. Feel free to reopen it if it is still there.

I still have the problem. I did a clean install of the latest version of the chart, and after updating the cluster, everything that was indexed disappeared.

NishaSharma14 commented 2 years ago

We are also facing the same issue. We have persistence.enabled: true, but helm uninstall still deletes the PVC.

vincentri commented 2 years ago

Any update?

deshetti commented 2 years ago

We had to create our own Helm chart to support this and a few other features the current chart doesn't, such as setting the master key from a secret: https://github.com/factly/helm-charts/tree/main/charts/meilisearch

The mount path in the helm chart is also wrong as of today.

Another major issue for which I am currently having to use this workaround: https://github.com/meilisearch/meilisearch/issues/2503#issuecomment-1152218396

alallema commented 2 years ago

Hi @deshetti, thank you very much for your feedback and for sharing your solution. I'm sorry I haven't taken care of these issues yet; I'm trying to address them as soon as possible, but this isn't one of our priorities right now.

veneliniliev commented 2 years ago

> Hi @deshetti, thank you very much for your feedback and for sharing your solution. I'm sorry I haven't taken care of these issues yet; I'm trying to address them as soon as possible, but this isn't one of our priorities right now.

@alallema, today I hit this problem again. For 5 hours, customers had problems with our application because of it. With GKE's constant updates, this is unusable; obviously, we will have to consider other search solutions for our application.

alallema commented 2 years ago

Hi @veneliniliev, I'm really sorry to hear that, but unfortunately we can't work on this problem until we finish v0.28. To be honest, I don't know why it hasn't been fixed yet, but we are open source and we accept PRs if someone finds a solution. In the meantime, you can check this workaround.

miguelmoreba commented 2 years ago

I had the same issue described here. This is how I avoided it:

I stopped using the helm chart and created a kustomization file, using the manifests provided in this repo and tweaking them to my needs. I think the issue has to do with the PVC becoming orphaned whenever the helm chart is uninstalled.
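
For reference, a minimal sketch of such a kustomize setup (file names are illustrative; the manifests would be copies from this repo, edited locally):

```yaml
# kustomization.yaml — sketch. The PVC is applied outside any Helm release,
# so no uninstall operation ever deletes it behind your back.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: meilisearch
resources:
  - pvc.yaml
  - statefulset.yaml
  - service.yaml
```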

NishaSharma14 commented 2 years ago

Hi @alallema, the workaround does not work for me. I changed the mount path to /meili_data, but the PVC still gets deleted on helm uninstall. I am trying this on GKE.

NishaSharma14 commented 2 years ago

We had to create our own Helm chart and apply changes to support persistence. Thanks @deshetti, I used your repo as a reference.

alallema commented 2 years ago

Hi @NishaSharma14, I'm sorry to hear that and I understand your frustration. But as I said, it's not one of our priorities right now. I will try to fix it as soon as possible.

veneliniliev commented 2 years ago

Any update?

churdstheword commented 2 years ago

@veneliniliev I am not here often and I am guessing that you've probably moved on from this, but I will tell you how we danced around this issue.

We used a feature of helm that prevents k8s resources from being deleted when uninstalling. https://helm.sh/docs/howto/charts_tips_and_tricks/#tell-helm-not-to-uninstall-a-resource

(A snippet of our values file, showing how we passed in the annotation; the original screenshot is not preserved.)
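
A reconstruction of that snippet, under the assumption that the chart forwards a persistence.annotations map onto the PVC it renders:

```yaml
# values.yaml sketch — assumes the chart copies persistence.annotations
# onto the PVC object it creates.
persistence:
  enabled: true
  annotations:
    helm.sh/resource-policy: keep
```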

Since the helm chart attempts to create a PVC with the exact same name/configuration each time, adding the annotation prevents it from being deleted when you uninstall; when you reinstall, Helm discovers there is already a PVC with that name in the given namespace with the same configuration and won't attempt to recreate it. (Or at least that is my best understanding of how things were working here... 😅)
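
One way to sanity-check that behavior (release name, namespace, and chart reference below are placeholders):

```sh
# The PVC annotated with helm.sh/resource-policy: keep should survive the uninstall.
helm uninstall meilisearch -n search
kubectl get pvc -n search        # the kept PVC is still listed

# Reinstalling should find the existing PVC and not recreate it
# (per the behavior described above).
helm install meilisearch meilisearch/meilisearch -n search -f values.yaml
```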

Hope this helps!

alallema commented 2 years ago

Hi @churdstheword, Thank you so much for this work around and for taking the time to share it here ❤️

veneliniliev commented 1 year ago

> We used a feature of helm that prevents k8s resources from being deleted when uninstalling. [...] Hope this helps!

I still have a problem when a pod is moved from node to node or when GKE is updated: everything starts, but the data is not restored :(

jensschulze commented 1 year ago

If you do not want Helm to delete your PVC, you must not let Helm manage the PVC.

Just create a PVC by applying a Kubernetes resource directly, and reference it via the persistence.existingClaim key in your values.yaml.
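
A minimal sketch of that approach (names and size are illustrative):

```yaml
# pvc.yaml — applied directly with kubectl, so no Helm release ever owns it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: meilisearch-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

and in values.yaml:

```yaml
persistence:
  enabled: true
  existingClaim: meilisearch-data
```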