Read CSI_LOG_FORMAT from log configuration file, format: json
Read CSI_LOG_LEVEL from log configuration file, level: debug
{"level":"debug","msg":"set podmon API port to :8083","time":"2024-09-25T11:18:18.039812155Z"}
{"level":"info","msg":"updating array info","time":"2024-09-25T11:18:18.039869427Z"}
2024/09/25 11:18:18 Session management is enabled.
{"level":"info","msg":"https://x.x.x.38/api/rest,PS<id1>,admin,openshift-nas,true,true,NVMETCP,x.x.x.38","time":"2024-09-25T11:18:18.127194234Z"}
START: ISCSIConnector.GetInitiatorName
{"level":"info","msg":"get initiator name","time":"2024-09-25T11:18:18.127648797Z"}
END: ISCSIConnector.GetInitiatorName
START: FCConnector.GetInitiatorPorts
START: FCConnector.getFCHBASInfo
{"level":"info","msg":"initiator name is: [iqn.1994-05.com.redhat:<iqn>]","time":"2024-09-25T11:18:18.128518907Z"}
{"level":"debug","msg":"iscsi initiators found on node","time":"2024-09-25T11:18:18.128547995Z"}
{"level":"info","msg":"get initiator name","time":"2024-09-25T11:18:18.128559745Z"}
{"level":"info","msg":"get FC hbas info","time":"2024-09-25T11:18:18.128568074Z"}
{"level":"info","msg":"check is FC supported","time":"2024-09-25T11:18:18.128575495Z"}
START: FCConnector.isFCSupported
{"level":"info","msg":"FC is not supported for this host","time":"2024-09-25T11:18:18.128610295Z"}
END: FCConnector.isFCSupported
END: FCConnector.getFCHBASInfo
END: FCConnector.GetInitiatorPorts
START: NVMeConnector.GetInitiatorName
{"level":"error","msg":"failed to read initiator ports names: FC is not supported for this host","time":"2024-09-25T11:18:18.128622615Z"}
{"level":"info","msg":"initiator FC ports names are: []","time":"2024-09-25T11:18:18.128628778Z"}
{"level":"info","msg":"FC initiators found: []","time":"2024-09-25T11:18:18.128637586Z"}
{"level":"error","msg":"FC was not found or filtered with FCPortsFilterFile","time":"2024-09-25T11:18:18.128643084Z"}
{"level":"info","msg":"get initiator name","time":"2024-09-25T11:18:18.128650803Z"}
END: NVMeConnector.GetInitiatorName
{"level":"info","msg":"initiator name is: [nqn.2014-08.org.nvmexpress:uuid:<uuid>]","time":"2024-09-25T11:18:18.129039519Z"}
{"level":"debug","msg":"NVMe initiators found on node","time":"2024-09-25T11:18:18.129059522Z"}
{"level":"info","msg":"NVMeTCP Protocol is requested","time":"2024-09-25T11:18:18.129068518Z"}
{"level":"info","msg":"setting up host on x.x.x.38","time":"2024-09-25T11:18:18.129073761Z"}
{"level":"debug","msg":"REQUEST: GET /api/rest/host?name=eq.csi-node-<machine-id>-x.x.x.141\u0026select=%2A HTTP/1.1 Host: x.x.x.38 Application-Type: CSI Driver for Dell EMC PowerStore/2.11.0+dirty ","time":"2024-09-25T11:18:18.129178091Z"}
{"level":"debug","msg":"acquire a lock","time":"2024-09-25T11:18:18.129200894Z"}
{"level":"debug","msg":"RESPONSE: HTTP/1.1 200 OK Content-Length: 2"
{"level":"debug","msg":"release a lock","time":"2024-09-25T11:18:18.134814116Z"}
{"level":"debug","msg":"REQUEST: GET /api/rest/host?limit=1000\u0026offset=0\u0026order=name\u0026select=%2A HTTP/1.1 Host: x.x.x.38 Application-Type: CSI Driver for Dell EMC PowerStore/2.11.0+dirty ","time":"2024-09-25T11:18:18.134880597Z"}
{"level":"debug","msg":"acquire a lock","time":"2024-09-25T11:18:18.134955951Z"}
{"level":"debug","msg":"RESPONSE: HTTP/1.1 200 OK Content-Length: 910"
{"level":"debug","msg":"release a lock","time":"2024-09-25T11:18:18.151832486Z"}
{"level":"debug","msg":"REQUEST: GET /api/rest/software_installed?limit=1000\u0026offset=0\u0026order=id\u0026select=id%2Cis_cluster%2Crelease_version%2Cbuild_version%2Cbuild_id HTTP/1.1 Host: x.x.x.38 Application-Type: CSI Driver for Dell EMC PowerStore/2.11.0+dirty ","time":"2024-09-25T11:18:18.151986204Z"}
{"level":"debug","msg":"acquire a lock","time":"2024-09-25T11:18:18.152012881Z"}
{"level":"debug","msg":"RESPONSE: HTTP/1.1 200 OK Content-Length: 283"
{"level":"debug","msg":"release a lock","time":"2024-09-25T11:18:18.156187961Z"}
{"level":"debug","msg":"REQUEST: POST /api/rest/host HTTP/1.1 Host: x.x.x.38 Application-Type: CSI Driver for Dell EMC PowerStore/2.11.0+dirty ..."
{"level":"debug","msg":"acquire a lock","time":"2024-09-25T11:18:18.156364942Z"}
{"level":"debug","msg":"RESPONSE: HTTP/1.1 201 Created"
{"level":"debug","msg":"release a lock","time":"2024-09-25T11:18:18.600804469Z"}
{"level":"info","msg":"finished setting up host on x.x.x.38","time":"2024-09-25T11:18:18.600818588Z"}
Bug Description
When adding a new appliance configuration to an existing secret, host definitions are not created on the new appliance, although the documentation at https://dell.github.io/csm-docs/docs/deployment/helm/drivers/installation/powerstore/#dynamically-update-the-powerstore-secrets states that the PowerStore secret can be updated dynamically.
Logs
Single array in secret
After updating the secret
Screenshots
No response
Additional Environment Information
No response
Steps to Reproduce
Environment preparation: remove host definitions for the worker nodes on both appliances.

1. Update the secret with only one PowerStore appliance and the NVMeTCP protocol (13:18 CET):
   oc create secret generic powerstore-config -n csi-powerstore --from-file=config=config.single.yaml -o yaml --dry-run=client | oc replace -f -
2. Delete the csi-powerstore node pods so that host definitions are created on the appliance (13:18 CET):
   mypods=$(oc get pod -n csi-powerstore | grep node | cut -f 1 -d " ")
   for pod in $mypods; do oc delete pod $pod -n csi-powerstore; done
3. Wait 60 seconds and verify that the host definitions have been created.
4. Create a workload which uses the NVMeTCP storage class (13:19 CET):
   oc create -f workload_nvmetcp.yaml
5. Verify that the pods, PVCs and PVs are created:
   oc get pod,pvc,pv -n fio
6. Remove the pods and PVCs (all PVs should be removed automatically; no separate command is required):
   oc delete -f workload_nvmetcp.yaml
   oc delete pvc fio-data-fio-0 fio-data-fio-1 fio-data-fio-2 -n fio
7. Update the secret with the second appliance, using the iSCSI protocol and setting it as the default (13:22 CET):
   oc create secret generic powerstore-config -n csi-powerstore --from-file=config=config.two.yaml -o yaml --dry-run=client | oc replace -f -
8. Verify that a host definition has been created on the second appliance.
   Result: the host definition is not created.
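The two config files referenced in the steps are not attached to this report. For context, a minimal sketch of what they would look like, assuming the standard csi-powerstore secret format; the global IDs, credentials, and the second appliance's address are placeholders (the first endpoint is taken from the logs above):

```yaml
# config.single.yaml -- one appliance, NVMeTCP (sketch; IDs and credentials are placeholders)
arrays:
  - endpoint: "https://x.x.x.38/api/rest"
    globalID: "PS<id1>"
    username: "admin"
    password: "<password>"
    skipCertificateValidation: true
    isDefault: true
    blockProtocol: "NVMeTCP"

# config.two.yaml -- same first appliance, plus a second appliance with iSCSI as the new default
arrays:
  - endpoint: "https://x.x.x.38/api/rest"
    globalID: "PS<id1>"
    username: "admin"
    password: "<password>"
    skipCertificateValidation: true
    isDefault: false
    blockProtocol: "NVMeTCP"
  - endpoint: "https://<second-appliance>/api/rest"
    globalID: "PS<id2>"
    username: "admin"
    password: "<password>"
    skipCertificateValidation: true
    isDefault: true
    blockProtocol: "ISCSI"
```

In step 7 the secret is replaced with the two-array variant; the bug is that the driver never creates a host definition on the newly added array until the node pods are restarted.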
Expected Behavior
Expected hosts to be created in PowerStore array without restarting the driver.
CSM Driver(s)
csi-powerstore v2.11.1
Installation Type
No response
Container Storage Modules Enabled
No response
Container Orchestrator
OpenShift 4.15.25
Operating System
unknown