hpe-storage / csi-driver

A Container Storage Interface (CSI) driver from HPE
https://scod.hpedev.io
Apache License 2.0

rpc error: code = Internal desc = Error creating device for volume pvc-xxxx, err: device not found with serial 60002xxxx or target #372

Closed. rafapiltrafa closed this issue 4 months ago.

rafapiltrafa commented 6 months ago

Hi !

We have installed HPE CSI Driver v2.4 on OCP 4.14 (3 masters + 3 workers) to obtain persistent storage from an HPE 3PAR 8440. The installation was done through the OCP OperatorHub and has worked perfectly, and we have created a secret with the backend info and a StorageClass.

The problem we are seeing is that we can create PVCs with correctly attached PVs, but when using them in pods we cannot mount them. We get "rpc error: code = Internal desc = Error creating device for volume pvc-xxxx, err: device not found with serial 60002xxxx or target" errors.

// Sample StorageClass used:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: hpe-standard-test
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  description: "Volume created by the HPE CSI Driver for Kubernetes"
  fsMode: "0777"
  accessProtocol: fc
reclaimPolicy: Delete
allowVolumeExpansion: true

We create a PVC, which provisions and attaches a PV:

// PVC
// oc apply -f test_pvc-test.yaml
persistentvolumeclaim/demo-pvc-file-system-test created
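// The applied manifest is not pasted here; a minimal sketch of what test_pvc-test.yaml
// would contain, assuming only the values visible in the output below (name, namespace,
// 1Gi, RWO, hpe-standard-test), is:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc-file-system-test
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: hpe-standard-test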

NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
demo-pvc-file-system-test   Bound    pvc-67143914-df21-4af3-a3bc-5b7b4731836d   1Gi        RWO            hpe-standard-test   7s

// PV
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS        REASON   AGE
pvc-67143914-df21-4af3-a3bc-5b7b4731836d   1Gi        RWO            Delete           Bound    test/demo-pvc-file-system-test   hpe-standard-test            52s
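// The workload that consumes this PVC is a PostgreSQL Deployment. For illustration only
// (not the exact manifest we use), a minimal pod that mounts the same claim would be:
apiVersion: v1
kind: Pod
metadata:
  name: pvc-mount-test
  namespace: test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-pvc-file-system-test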

When creating a pod that uses that PV, we get an error:

// oc get pods -n test
NAME                          READY   STATUS              RESTARTS   AGE
postgresql-5998df5877-98dkn   0/1     ContainerCreating   0          76s

Events:
  Type     Reason                  Age   From                      Message
  Normal   Scheduled               42s   default-scheduler         Successfully assigned test/postgresql-5998df5877-98dkn to worker01.clusterk8s01.test.es
  Normal   SuccessfulAttachVolume  36s   attachdetach-controller   AttachVolume.Attach succeeded for volume "pvc-67143914-df21-4af3-a3bc-5b7b4731836d"
  Warning  FailedMount             4s    kubelet                   MountVolume.MountDevice failed for volume "pvc-67143914-df21-4af3-a3bc-5b7b4731836d" : rpc error: code = Internal desc = Failed to stage volume pvc-67143914-df21-4af3-a3bc-5b7b4731836d, err: rpc error: code = Internal desc = Error creating device for volume pvc-67143914-df21-4af3-a3bc-5b7b4731836d, err: device not found with serial 60002ac00000000000000257000222ea or target

// oc logs hpe-csi-node-2nc2q -c hpe-csi-driver

time="2023-12-28T20:19:34Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69" time="2023-12-28T20:19:34Z" level=info msg="GRPC request: {}" file="utils.go:70" time="2023-12-28T20:19:34Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75" time="2023-12-28T20:19:34Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69" time="2023-12-28T20:19:34Z" level=info msg="GRPC request: {}" file="utils.go:70" time="2023-12-28T20:19:34Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75" time="2023-12-28T20:19:34Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69" time="2023-12-28T20:19:34Z" level=info msg="GRPC request: {}" file="utils.go:70" time="2023-12-28T20:19:34Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75" time="2023-12-28T20:19:34Z" level=info msg="GRPC call: /csi.v1.Node/NodeStageVolume" file="utils.go:69" time="2023-12-28T20:19:34Z" level=info msg="GRPC request: {\"publish_context\":{\"accessProtocol\":\"fc\",\"fsCreateOptions\":\"\",\"fsMode\":\"0777\",\"fsOwner\":\"\",\"fsType\":\"xfs\",\"lunId\":\"0\",\"readOnly\":\"false\",\"serialNumber\":\"60002ac00000000000000257000222ea\",\"targetNames\":\"\",\"targetScope\":\"group\",\"volumeAccessMode\":\"mount\"},\"secrets\":\"stripped\",\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.hpe.com/858120b3b146d49ad0a9ea09fdfa4ee182be04dc92daa964051c3ecf2ca01853/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"fc\",\"compression\":\"false\",\"cpg\":\"\",\"csi.storage.k8s.io/pv/name\":\"pvc-67143914-df21-4af3-a3bc-5b7b4731836d\",\"csi.storage.k8s.io/pvc/name\":\"demo-pvc-file-system-test\",\"csi.storage.k8s.io/pvc/namespace\":\"test\",\"description\":\"Volume created by the HPE CSI Driver for Kubernetes\",\"fsMode\":\"0777\",\"fsType\":\"xfs\",\"hostEncryption\":\"false\",\"provisioningType\":\"tpvv\",\"snapCpg\":\"\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1703782340721-8001-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-67143914-df21-4af3-a3bc-5b7b4731836d\"}" file="utils.go:70" time="2023-12-28T20:19:34Z" level=info msg="NodeStageVolume requested volume pvc-67143914-df21-4af3-a3bc-5b7b4731836d with access type mount, targetPath /var/lib/kubelet/plugins/kubernetes.io/csi/csi.hpe.com/858120b3b146d49ad0a9ea09fdfa4ee182be04dc92daa964051c3ecf2ca01853/globalmount, capability mount:<fs_type:\"xfs\" > access_mode: , publishContext map[accessProtocol:fc fsCreateOptions: fsMode:0777 fsOwner: fsType:xfs lunId:0 readOnly:false serialNumber:60002ac00000000000000257000222ea targetNames: targetScope:group volumeAccessMode:mount] and volumeContext map[accessProtocol:fc compression:false cpg: csi.storage.k8s.io/pv/name:pvc-67143914-df21-4af3-a3bc-5b7b4731836d csi.storage.k8s.io/pvc/name:demo-pvc-file-system-test csi.storage.k8s.io/pvc/namespace:test description:Volume created by the HPE CSI Driver for Kubernetes fsMode:0777 fsType:xfs hostEncryption:false provisioningType:tpvv snapCpg: 
storage.kubernetes.io/csiProvisionerIdentity:1703782340721-8001-csi.hpe.com volumeAccessMode:mount]" file="node_server.go:222" time="2023-12-28T20:19:34Z" level=error msg="\n Error in GetSecondaryBackends unexpected end of JSON input" file="volume.go:87" time="2023-12-28T20:19:34Z" level=error msg="\n Passed details " file="volume.go:88" time="2023-12-28T20:19:35Z" level=info msg="vendor: Generic-" file="device.go:1023" time="2023-12-28T20:19:35Z" level=error msg="open /sys/class/scsi_device/0:0:0:0/device/vpd_pg80: no such file or directory" file="file.go:41" time="2023-12-28T20:19:35Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:19:35Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:19:35Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:19:35Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29" time="2023-12-28T20:19:35Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29" time="2023-12-28T20:19:40Z" level=info msg="vendor: Generic-" file="device.go:1023" time="2023-12-28T20:19:40Z" level=error msg="open /sys/class/scsi_device/0:0:0:0/device/vpd_pg80: no such file or directory" file="file.go:41" time="2023-12-28T20:19:40Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:19:40Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:19:40Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:19:40Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29" time="2023-12-28T20:19:40Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29" time="2023-12-28T20:19:45Z" level=info msg="vendor: Generic-" file="device.go:1023" time="2023-12-28T20:19:45Z" level=error msg="open /sys/class/scsi_device/0:0:0:0/device/vpd_pg80: no such file or directory" file="file.go:41" time="2023-12-28T20:19:45Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:19:45Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:19:45Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:19:45Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29" time="2023-12-28T20:19:45Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29" time="2023-12-28T20:19:50Z" level=info msg="vendor: Generic-" file="device.go:1023" time="2023-12-28T20:19:50Z" level=error msg="open /sys/class/scsi_device/0:0:0:0/device/vpd_pg80: no such file or directory" file="file.go:41" time="2023-12-28T20:19:50Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:19:50Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:19:50Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:19:50Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29" time="2023-12-28T20:19:50Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29" time="2023-12-28T20:19:55Z" level=info msg="vendor: Generic-" file="device.go:1023" time="2023-12-28T20:19:55Z" level=error msg="open /sys/class/scsi_device/0:0:0:0/device/vpd_pg80: no such file or directory" file="file.go:41" time="2023-12-28T20:19:55Z" level=info msg="vendor: HPE " 
file="device.go:1023" time="2023-12-28T20:19:55Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:19:55Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:19:55Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29" time="2023-12-28T20:19:55Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29" time="2023-12-28T20:20:00Z" level=info msg="vendor: Generic-" file="device.go:1023" time="2023-12-28T20:20:00Z" level=error msg="open /sys/class/scsi_device/0:0:0:0/device/vpd_pg80: no such file or directory" file="file.go:41" time="2023-12-28T20:20:00Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:20:00Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:20:00Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:20:00Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29" time="2023-12-28T20:20:00Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29" time="2023-12-28T20:20:05Z" level=info msg="vendor: Generic-" file="device.go:1023" time="2023-12-28T20:20:05Z" level=error msg="open /sys/class/scsi_device/0:0:0:0/device/vpd_pg80: no such file or directory" file="file.go:41" time="2023-12-28T20:20:05Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:20:05Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:20:05Z" level=info msg="vendor: HPE " file="device.go:1023" time="2023-12-28T20:20:05Z" level=error msg="unable to create device for volume with IQN " file="device.go:1116" time="2023-12-28T20:20:05Z" level=error msg="Failed to create device from publish info. Error: device not found with serial 60002ac00000000000000257000222ea or target " file="node_server.go:543" time="2023-12-28T20:20:05Z" level=error msg="GRPC error: rpc error: code = Internal desc = Failed to stage volume pvc-67143914-df21-4af3-a3bc-5b7b4731836d, err: rpc error: code = Internal desc = Error creating device for volume pvc-67143914-df21-4af3-a3bc-5b7b4731836d, err: device not found with serial 60002ac00000000000000257000222ea or target " file="utils.go:73"

kubectl get volumeattachment
NAME                                                                   ATTACHER      PV                                         NODE                            ATTACHED   AGE
csi-809cf06e1f55e459dea190d61a913be5693a80e8cd946b36ca45acae0422e574   csi.hpe.com   pvc-67143914-df21-4af3-a3bc-5b7b4731836d   worker01.clusterk8s01.test.es   true       20m

kubectl get hpenodeinfo worker01.clusterk8s01.test.es -o json
{
  "apiVersion": "storage.hpe.com/v1",
  "kind": "HPENodeInfo",
  "metadata": {
    "creationTimestamp": "2023-12-21T11:48:50Z",
    "generation": 1,
    "name": "worker01.clusterk8s01.test.es",
    "resourceVersion": "467344",
    "uid": "551b3fdd-dc64-4f6b-941a-109d7c9576ac"
  },
  "spec": {
    "iqns": ["iqn.1994-05.com.redhat:e18eadf1b25b"],
    "networks": ["172.28.6.2/23", "10.100.20.20/25", "169.254.169.2/29"],
    "uuid": "ce49a88f-d5c6-a43a-6662-a225105e7bda",
    "wwpns": ["10008eff0320019e", "10008eff032001a0"]
  }
}

kubectl get hpenodeinfo worker02.clusterk8s01.test.es -o json
{
  "apiVersion": "storage.hpe.com/v1",
  "kind": "HPENodeInfo",
  "metadata": {
    "creationTimestamp": "2023-12-21T11:48:50Z",
    "generation": 1,
    "name": "worker02.clusterk8s01.test.es",
    "resourceVersion": "467338",
    "uid": "834548b4-e250-4a61-8702-2e96302df56f"
  },
  "spec": {
    "iqns": ["iqn.1994-05.com.redhat:9baf6a1c428b"],
    "networks": ["172.28.8.2/23", "10.100.20.21/25", "169.254.169.2/29"],
    "uuid": "d9b981dd-04c1-ab42-8c6a-899efa933981",
    "wwpns": ["10008eff032001a2", "10008eff032001a4"]
  }
}

kubectl get hpenodeinfo worker03.clusterk8s01.test.es -o json
{
  "apiVersion": "storage.hpe.com/v1",
  "kind": "HPENodeInfo",
  "metadata": {
    "creationTimestamp": "2023-12-21T11:48:50Z",
    "generation": 1,
    "name": "worker03.clusterk8s01.test.es",
    "resourceVersion": "467336",
    "uid": "dbbb809f-6c70-489d-a6db-592eca663115"
  },
  "spec": {
    "iqns": ["iqn.1994-05.com.redhat:6ba146ac3380"],
    "networks": ["172.28.10.2/23", "10.100.20.22/25", "169.254.169.2/29"],
    "uuid": "6e74fc78-97bb-4f46-50a1-1dcb2f251833",
    "wwpns": ["10008eff032001a6", "10008eff032001a8"]
  }
}

I can provide additional info from the 3PAR side although I'm not a storage admin.

Thank you very much for your help, Best Regards, rafa

datamattsson commented 6 months ago

This error

Warning FailedMount 4s kubelet MountVolume.MountDevice failed for volume "pvc-67143914-df21-4af3-a3bc-5b7b4731836d" : rpc error: code = Internal desc = Failed to stage volume pvc-67143914-df21-4af3-a3bc-5b7b4731836d, err: rpc error: code = Internal desc = Error creating device for volume pvc-67143914-df21-4af3-a3bc-5b7b4731836d, err: device not found with serial 60002ac00000000000000257000222ea or target

means that the device is not appearing on the host as expected. How are the worker nodes connected to the array? If they are virtual machines, it won't work when using FC.
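To see whether the LUN ever reaches the node, a quick check on the worker helps (generic Linux commands, nothing driver-specific; the serial is the one from the error above):

# FC HBAs should be online, and their WWPNs should match the wwpns in HPENodeInfo
cat /sys/class/fc_host/host*/port_state
cat /sys/class/fc_host/host*/port_name

# Look for the volume by its array serial number
multipath -ll | grep -i 60002ac00000000000000257000222ea
ls -l /dev/disk/by-id/ | grep -i 60002ac00000000000000257000222ea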

rafapiltrafa commented 6 months ago

Hi Michael, thank you very much. These are bare-metal HPE servers. We have been able to mount the volumes once we manually provisioned the host ports in the 3PAR admin console. But if we delete the PVCs, the hosts get deleted from the 3PAR, and for the initial PVC it is always necessary to re-provision the ports manually. We have worked around the issue by permanently creating and presenting a disk to all the worker nodes. I don't know if this is the expected behaviour (3PAR-side or CSI-driver-side configuration?).

Thank you very much ! Best Regards

datamattsson commented 6 months ago

The host initiators are managed by the CSI driver and this is expected behavior.

Have you tried specifying which ports are being presented to the Kubernetes nodes?

Switching to hostSeesVLUN templates helps with a lot of discovery problems too.
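For example, something along these lines in the StorageClass parameters (a sketch only; the fcPortsList values below are placeholders, use the actual N:S:P ports zoned to your nodes):

parameters:
  accessProtocol: fc
  hostSeesVLUN: "true"
  # placeholder ports, replace with the real node:slot:port values on your array
  fcPortsList: "0:2:1,1:2:1"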

rafapiltrafa commented 6 months ago

Hi Michael,

Thank you very much for the info. We tested specifying FC ports in the StorageClass, with the same result: devices were not automatically discovered in multipath until the ports were provisioned manually in the 3PAR web console before the first PV.

In a week or so I'll run some more tests with the current SC (default parameters except for the FC access protocol) and come back with the results. My fear is that after rebooting a node it won't be able to discover the PVs if the host has been automatically deleted from the 3PAR.

best regards, rafa

datamattsson commented 6 months ago

I'm curious if you're using a virtual domain on the 3PAR?

rafapiltrafa commented 6 months ago

I really don't know. I'll come back to you with the answer.

I'll ask the storage admins.

Best Regards

rafapiltrafa commented 5 months ago

Hi Michael,

sorry about my late response. I've been out for some days.

// I'm curious if you're using a virtual domain on the 3PAR?

The customer is not using a virtual domain.

We have created a new SC:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
  name: hpe-standard-hostseesvlun
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  description: "Volume created by the HPE CSI Driver for Kubernetes"
  fsMode: "0770"
  accessProtocol: fc
  hostSeesVLUN: "true"
reclaimPolicy: Delete
allowVolumeExpansion: true

and recreated all the PVCs with the new SC, and the problem remains. When the node is restarted, the initiator does not discover any disks (multipath -ll is empty).

When the 3PAR administrator adds the ports manually, all the disks are seen again from the node.

We don't know why the node (initiator) behaves that way. What is missing in our StorageClass for the disks to be re-discovered once the node is rebooted?
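One thing we can try after a reboot, before touching the 3PAR console, is a manual rescan on the node to rule out a purely host-side discovery problem (a generic Linux sketch; if nothing shows up even after this, the node most likely has no path to the array at that moment):

# Rescan every SCSI/FC host on the worker
for h in /sys/class/scsi_host/host*/scan; do echo "- - -" > "$h"; done
# (rescan-scsi-bus.sh -a from sg3_utils does the same, where installed)

# Then check again
multipath -ll
lsscsi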

About fcPortsList, we understand that the default value should be correct ("Use this parameter to specify a subset of Fibre Channel (FC) ports on the array to create VLUNs. By default, the HPE CSI Driver uses all available FC ports.").
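To double-check what "all available FC ports" resolves to on our array, the storage admin can list the ports from the 3PAR CLI (a sketch; exact output and filtering options depend on the array OS version):

# On the 3PAR CLI: the host-facing FC ports listed here are the candidates for fcPortsList
showport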

Thank you very much for your help, Best Regards, rafa

rafapiltrafa commented 5 months ago

Hi Michael !

We have run a couple of tests to reproduce the problem and captured the logs.

Test 1 (time 08:11Z): No disk is presented to the node. It boots and no discovery is made: multipath -ll is empty and all the pods with storage are stuck trying to mount their volumes.

Test 2 (time 08:17Z): We stop the node, then present a disk to it via OneView. We start the node and everything works fine.

I'm uploading the complete log files with all the info.

The main difference seems to be that in the first case the node is not known to the 3PAR, and the logic that should discover the disks does not work.

time="2024-01-18T08:10:31Z" level=info msg="=== GET HOST RESP FROM ARRAY: ===\n &{404 Not Found 404 HTTP/1.1 1 1 map[Content-Type:[application/json] Date:[Thu, 18 Jan 2024 08:10:31 GMT] Server:[hp3par-wsapi] Strict-Transport-Security:[max-age=31536000; includeSubDomains]] 0xc0005a6480 -1 [] true false map[] 0xc000a70200 0xc00045c2c0}" file="managed_client.go:557" time="2024-01-18T08:10:31Z" level=info msg="[ REQUEST-ID 100202 ] -- CSI passed host name worker03 does not exist on Array" file="create_vhost_cmd.go:51" time="2024-01-18T08:10:31Z" level=info msg="[ REQUEST-ID 100202 ] -- Actual host to modify currently is worker03 " file="create_vhost_cmd.go:59" time="2024-01-18T08:10:31Z" level=info msg="[ REQUEST-ID 100202 ] -- Iqns and FC WWpns received from CSI are [iqn.1994-05.com.redhat:6ba146ac3380] and [10008eff032001a6 10008eff032001a8] " file="create_vhost_cmd.go:60" time="2024-01-18T08:10:31Z" level=info msg="Get session returning existing session for user k8s3par on array 10.100.176.10" file="managed_client.go:169"

When a disk is already presented to the node, the node is known to the 3PAR.

What could be the reason? We think it should work in the first scenario.

Thanks a lot! Rafa
Attachment: 3par_problem_logs_and_tests.txt

datamattsson commented 5 months ago

I think we need to get support involved to understand what is going on here. Do you have a support case?

rafapiltrafa commented 5 months ago

I have asked the customer to open a support case. I'll send you the case ID as soon as I get it.

From a quick look at the CSP log, it seems that the problem resides in the host creation (I see several requests doing the same thing at the same time... a race condition?). But the logic is complex and, as you say, we need to get support involved.

Thanks ! Best Regards rafa

rafapiltrafa commented 5 months ago

Hi Michael,

The support case number is 5379353393.

Best Regards, rafa

rafapiltrafa commented 5 months ago

Info attached to the support case. Thanks!

rafapiltrafa commented 4 months ago

The problem is that the zoning is created automatically via OneView.

If the PVCs are created with the driver, then when the worker node is restarted OneView automatically deletes the zones, and the worker node is not able to see the 3PAR until the ports are manually provisioned again.

Solutions:

a) Present a permanent disk from OneView to the worker nodes. That way the zoning is never deleted when the nodes restart.
b) Manually create zoning for these worker nodes on the switches.

Thanks for your help ! Regards, Rafa.

rafapiltrafa commented 4 months ago

Thanks for your support !