Closed · mohammedimrankasab closed this issue 1 year ago
Won't fix. Firstly, I don't think there is an uninstall script hook. Secondly, I wouldn't know which nodes in your cluster the pods ended up deployed to.
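In the meantime, leftover hostPath data has to be removed by hand on whichever nodes the pods ran. A minimal sketch, assuming the default `/mnt` hostPath layout from this issue; `<edgex-volume-dirs>` is a hypothetical placeholder, so inspect before deleting:

```sh
# Manual cleanup sketch after removing the release. The exact
# directories under /mnt depend on your values; <edgex-volume-dirs>
# is a placeholder -- list the contents first and substitute the
# directories the chart actually created on this node.
helm uninstall edgex-minnesota -n edgex
ls /mnt                                # inspect what was left behind
sudo rm -rf /mnt/<edgex-volume-dirs>   # repeat on every node that ran pods
```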
If you don't want to use /mnt volumes, customize the parameters in the storage section (https://github.com/edgexfoundry/edgex-helm/blob/main/values.yaml#L649).
You will want to set `edgex.storage.useHostPath=false` and point the storageClass parameters at your local-path provisioner, e.g.:
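A minimal sketch of such an install, assuming a local clone of edgex-helm and K3s's bundled `local-path` provisioner. `edgex.storage.useHostPath` comes from the chart, but the storage-class key below is a hypothetical placeholder — look up the real key names in the storage section of values.yaml:

```sh
# Sketch: install with a dynamic provisioner instead of hostPath.
# edgex.storage.storageClassName is a hypothetical key name -- use
# the actual storageClass parameters from the chart's storage section.
helm install edgex-minnesota . -n edgex --create-namespace \
  --set edgex.storage.useHostPath=false \
  --set edgex.storage.storageClassName=local-path
```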
If you have a way to detect local-path pre-installed and dynamically adapt I'd be interested in knowing how to do it. Today, the installer scripts assume you have no storage provisioners at all.
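One rough way to do that from a shell wrapper, rather than inside the chart itself — a sketch, assuming K3s's `local-path` StorageClass is the provisioner being probed for:

```sh
# Sketch: choose storage values based on whether a local-path
# StorageClass already exists in the cluster.
if kubectl get storageclass local-path >/dev/null 2>&1; then
  EXTRA_ARGS="--set edgex.storage.useHostPath=false"
else
  EXTRA_ARGS=""   # no provisioner found: keep the hostPath defaults
fi
# EXTRA_ARGS is left unquoted on purpose so the flag splits into words.
helm install edgex-minnesota . -n edgex $EXTRA_ARGS
```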
## 🐞 Bug Report
### Affected Services [**REQUIRED**]
The Metadata service is affected by this stale data. The issue is located in the `helm uninstall` command.

### Is this a regression?

No

### Description and Minimal Reproduction [**REQUIRED**]

The command `helm uninstall edgex-minnesota -n edgex` does not clear the mounted data from the `/mnt` directory. Maybe we can have a script that deletes the residual data volumes after a helm uninstall, so that everything is fresh when we install again. Because of this old data we get inconsistencies in the metadata. Tested with the Minnesota release (3.0.0) helm chart.

## 🔥 Exception or Error

## 🌍 Your Environment
Deployment Environment: Ubuntu single-node cluster created with K3s

Distributor ID: Ubuntu
Description: Ubuntu 22.04.2 LTS
Release: 22.04
Codename: jammy

Kubernetes version:
Client Version: v1.27.4+k3s1
Kustomize Version: v5.0.1
Server Version: v1.27.4+k3s1

EdgeX Version [**REQUIRED**]: v3.0.0 (Minnesota)
Anything else relevant?