Closed: arupdevops closed this 9 months ago
I'm not sure what caused this. It seems you didn't uninstall a previous version before attempting to install the one that is struggling now.
@datamattsson: Thanks for the reply. It is a new installation. The OpenShift cluster is healthy, verified with Red Hat via must-gather logs. Not sure where the issue is. The team is stuck here as we are unable to use the backend HPE Primera storage for PVCs. Help... any lead will be much appreciated.
It could be permissions, or it could be a multiple-chefs-in-the-kitchen situation where someone installed the driver with a Helm chart and is now trying to instantiate a new driver with the Operator. If you undo what you just did, what does oc get csidrivers,csinodes say?
@datamattsson ,
[root@ocp-svc ~]# oc get csidrivers
NAME ATTACHREQUIRED PODINFOONMOUNT STORAGECAPACITY TOKENREQUESTS REQUIRESREPUBLISH MODES AGE
csi.hpe.com true true false
It looks like there's some residual installation. What does oc get pods -A | grep hpe say?
@datamattsson :+1:
[root@ocp-svc ~]# oc get pods -A | grep hpe
hpe-storage                     hpe-csi-driver-operator-754c755bcd-5hsdg           1/1   Running   0   9h
openshift-cluster-csi-drivers   hpe-ezmeral-csi-driver-operator-849c9cd9d8-c55sh   1/1   Running   0   36d
[root@ocp-svc ~]#
Are you using the Ezmeral CSI driver? I wonder if there's a conflict in play here as the install dates sort of add up.
@datamattsson : Okay, we are not using Ezmeral CSI driver....let me try this
@datamattsson : We tried to uninstall the Ezmeral Operator, but it is managed by the cluster lifecycle manager, so the operator keeps getting reinstalled automatically. The Ezmeral driver itself is not installed, though.
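For context on why an operator "keeps coming back": operators installed through OperatorHub are reconciled by the Operator Lifecycle Manager, so deleting only the operator's Deployment lets OLM re-create it. Removing it for good means deleting the Subscription and its ClusterServiceVersion. The sketch below just prints the relevant commands; the namespace and subscription name are assumptions, so verify the real names first.

```shell
#!/bin/sh
# Sketch: commands to remove an OLM-managed operator so it stops being
# reinstalled. NAMESPACE and SUBSCRIPTION are assumptions -- verify the
# real names with `oc get subscriptions,csv -n <namespace>` first.
NAMESPACE="openshift-cluster-csi-drivers"
SUBSCRIPTION="hpe-ezmeral-csi-operator"   # hypothetical subscription name

# Print rather than execute, so nothing is deleted by accident.
cleanup=$(cat <<EOF
oc get subscriptions,csv -n $NAMESPACE
oc delete subscription $SUBSCRIPTION -n $NAMESPACE
oc delete csv <csv-from-subscription-status> -n $NAMESPACE
EOF
)
echo "$cleanup"
```

The `<csv-from-subscription-status>` placeholder must be read from the Subscription's `status.installedCSV` field before deleting.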
What installed csi.hpe.com then? Does oc get csidrivers/csi.hpe.com -o yaml give any clues about that?
[root@ocp-svc ~]# oc get csidrivers/csi.hpe.com -o yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  annotations:
    meta.helm.sh/release-name: hpe-csi-driver
    meta.helm.sh/release-namespace: helm
  creationTimestamp: "2023-11-08T15:11:56Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: csi.hpe.com
  resourceVersion: "3162030"
  uid: 8c9801c4-13f4-407c-89e2-d61665cdaff1
spec:
  attachRequired: true
  fsGroupPolicy: ReadWriteOnceWithFSType
  podInfoOnMount: true
  requiresRepublish: false
  storageCapacity: false
  volumeLifecycleModes:
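This YAML explains the failure: Helm refuses to manage an existing object whose meta.helm.sh/release-name and meta.helm.sh/release-namespace annotations point at a different release. A minimal local sketch of that ownership check, using the annotation values above and the expected values from the error in the original issue (it does not talk to the cluster):

```shell
#!/bin/sh
# Sketch of the ownership check Helm performs before managing an existing
# object. Actual values come from the CSIDriver YAML above; expected
# values come from the error message in the original issue.
actual_release="hpe-csi-driver"   # meta.helm.sh/release-name on the live object
actual_ns="helm"                  # meta.helm.sh/release-namespace on the live object
expected_release="csi-driver"     # release name the Operator's chart expects
expected_ns="hpe-storage"         # release namespace the Operator's chart expects

ownership_ok=true
[ "$actual_release" = "$expected_release" ] || ownership_ok=false
[ "$actual_ns" = "$expected_ns" ] || ownership_ok=false

if [ "$ownership_ok" = "false" ]; then
  echo "invalid ownership metadata: release-name=$actual_release (want $expected_release), release-namespace=$actual_ns (want $expected_ns)"
fi
```

Both annotations mismatch here, which is why the install aborts instead of overwriting the object.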
Can you run oc delete csidriver/csi.hpe.com and try creating the instance again?
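Deleting the CSIDriver object is safe here because the new install re-creates it. An alternative, if deleting is undesirable, is resource adoption: Helm 3.2+ will take over an existing object whose ownership metadata matches the incoming release. A hedged sketch that prints (rather than runs) the adoption commands, with the release name and namespace taken from the error in the original issue:

```shell
#!/bin/sh
# Sketch: rewrite the ownership metadata so the new release can adopt the
# existing CSIDriver instead of it being deleted (Helm >= 3.2 adoption
# convention). Commands are printed, not executed; run them manually.
adopt=$(cat <<'EOF'
oc annotate csidriver/csi.hpe.com meta.helm.sh/release-name=csi-driver --overwrite
oc annotate csidriver/csi.hpe.com meta.helm.sh/release-namespace=hpe-storage --overwrite
oc label csidriver/csi.hpe.com app.kubernetes.io/managed-by=Helm --overwrite
EOF
)
echo "$adopt"
```

Whether adoption or deletion is preferable depends on whether anything else still references the old Helm release; in this thread, deletion was the simpler fix.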
@datamattsson :100: :+1: ....Fantastic ...it is working now
Closing this as the issue got resolved.
Hi Team, we tried to install the HPE CSI Driver 2.40 in a Red Hat OpenShift cluster, version 4.13.17. The operator installed successfully, but creating the instance to deploy the HPE CSI Driver fails with the error below. Namespace used: hpe-storage. Deployment method: from the Red Hat OpenShift console.
failed to install release: rendered manifests contain a resource that already exists. Unable to continue with install: CSIDriver "csi.hpe.com" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "csi-driver": current value is "hpe-csi-driver"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "hpe-storage": current value is "helm"
Regards, Arup