hpe-storage / truenas-csp

TrueNAS Container Storage Provider for HPE CSI Driver for Kubernetes
https://scod.hpedev.io
MIT License

Unable to attach iscsi volumes #66

Open santimar opened 1 month ago

santimar commented 1 month ago

I am using a single virtualized TrueNAS SCALE Dragonfish-24.04.2.2 to provide storage to various k3s clusters. Each cluster has its own dataset and uses truenas-csp version 2.5.1 to create volumes.

I followed the install instructions and everything seems to work: PVCs can be provisioned and mounted.

However, once 7-8 nodes from different clusters have connected to TrueNAS, volumes struggle to mount. I start to see the following events:

AttachVolume.Attach failed for volume "pvc-6cbede1c-c5e1-46dd-b6eb-8203c1e9badc" : rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:my-data_first.dataset_pvc-6cbede1c-c5e1-46dd-b6eb-8203c1e9badc:1d1d92c3-5fff-4b13-4f87-dba44e3067c6
AttachVolume.Attach failed for volume "pvc-6cbede1c-c5e1-46dd-b6eb-8203c1e9badc" : rpc error: code = DeadlineExceeded desc = context deadline exceeded

Sometimes they are still mounted successfully after a while (anywhere from 10 seconds to an hour), but other times they stay in this error state. And sometimes volumes mount without any problem at all.

Volumes are always provisioned correctly, since I can see them listed in the TrueNAS web UI; they just fail to mount. I don't see any obvious error or timeout in the hpe-csi-controller, hpe-csi-node, or truenas-csp logs, even after enabling debug logging. It just seems that the mount request stays pending indefinitely (and sometimes eventually succeeds).

TrueNAS doesn't seem to show any resource problems such as high CPU usage, and I wasn't able to find any error in its logs either.

Any ideas?
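
For reference, the stuck attaches can be inspected with something like the following (a sketch; it assumes the driver runs in the hpe-storage namespace, as in my setup, and the placeholders are illustrative):

# VolumeAttachment objects for stuck volumes stay at ATTACHED=false
kubectl get volumeattachments

# The Aborted / DeadlineExceeded events show up on the PVC and pod
kubectl describe pvc <pvc-name> -n <namespace>

# Controller-side CSI log (pod hpe-csi-controller, container hpe-csi-driver)
kubectl logs -n hpe-storage deploy/hpe-csi-controller -c hpe-csi-driver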

datamattsson commented 1 month ago

Thanks for reporting this. I'm out of pocket for the next couple of weeks. Can you describe more about the cluster sizes, PVC counts, etc., and I'll try to reproduce this.

Historically, the CSI driver has struggled with Flannel as a CNI, especially on K3s. Is that what you're using?
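
If you're not sure, a quick way to check is something like this (a sketch; the paths are the k3s defaults, so adjust if your install differs):

# How is k3s started? Look for a --flannel-backend flag
systemctl cat k3s | grep -i flannel

# Flannel config that the k3s agent writes out by default
cat /var/lib/rancher/k3s/agent/etc/flannel/net-conf.json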

santimar commented 1 month ago

Yes, I am using k3s with Flannel as the CNI:

k3s --version
k3s version v1.28.10+k3s1 (a4c5612e)
go version go1.21.9

and k3s is started with --flannel-backend=wireguard-native.
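
For completeness, the equivalent in /etc/rancher/k3s/config.yaml would be roughly the following (a sketch; k3s accepts CLI flags as config keys without the leading dashes):

flannel-backend: wireguard-native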

Each cluster has 3 nodes running Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-131-generic x86_64) and 8 PVCs (between 2 and 40 GB each) at the moment. A total of 8 clusters are connected to the TrueNAS instance, and on TrueNAS there are ~70 targets.

Each cluster has a different root parameter in its StorageClass, so that they can't share volumes by accident, and hostEncryption is enabled. For example:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: 'true'
  name: hpe-storageclass
parameters:
  allowOverrides: >-
    sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace
  csi.storage.k8s.io/controller-expand-secret-name: truenas-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: truenas-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/node-publish-secret-name: truenas-secret
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: truenas-secret
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/provisioner-secret-name: truenas-secret
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  description: Volume for PVC {pvc}
  hostEncryption: 'true'
  hostEncryptionSecretName: storage-encryption-passphrase
  hostEncryptionSecretNamespace: hpe-storage
  root: my-data/first.dataset
provisioner: csi.hpe.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
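
For context, a PVC that lands on this class looks roughly like this (a sketch; the claim name is illustrative, not from an actual workload):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data          # illustrative name
  namespace: mynamespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: hpe-storageclass   # the class above (also the cluster default)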

I've enabled trace log level in the CSI driver. This is the log of the hpe-csi-controller pod, container hpe-csi-driver. I've added some comments in between, but I'm not even sure I'm looking in the correct place.
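
(For reference, a sketch of how the trace level can be set, assuming the driver containers honor a LOG_LEVEL environment variable:

kubectl -n hpe-storage set env deployment/hpe-csi-controller -c hpe-csi-driver LOG_LEVEL=trace

The log follows.)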

+ for arg in "$@"
+ '[' --endpoint=unix:///var/lib/csi/sockets/pluginproxy/csi.sock = --node-service ']'
+ '[' --endpoint=unix:///var/lib/csi/sockets/pluginproxy/csi.sock = --node-init ']'
+ for arg in "$@"
+ '[' --flavor=kubernetes = --node-service ']'
+ '[' --flavor=kubernetes = --node-init ']'
+ for arg in "$@"
+ '[' --pod-monitor = --node-service ']'
+ '[' --pod-monitor = --node-init ']'
+ for arg in "$@"
+ '[' --pod-monitor-interval=30 = --node-service ']'
+ '[' --pod-monitor-interval=30 = --node-init ']'
+ disableNodeConformance=
+ disableNodeConfiguration=
+ '[' '' = true ']'
+ '[' '' = true ']'
+ '[' '' = true ']'
+ echo 'Starting CSI plugin...'
+ exec /bin/csi-driver --endpoint=unix:///var/lib/csi/sockets/pluginproxy/csi.sock --flavor=kubernetes --pod-monitor --pod-monitor-interval=30
Starting CSI plugin...
time="2024-09-29T08:04:24Z" level=info msg="Initialized logging." alsoLogToStderr=true logFileLocation=/var/log/hpe-csi-controller.log logLevel=trace
time="2024-09-29T08:04:24Z" level=info msg="**********************************************" file="csi-driver.go:56"
time="2024-09-29T08:04:24Z" level=info msg="*************** HPE CSI DRIVER ***************" file="csi-driver.go:57"
time="2024-09-29T08:04:24Z" level=info msg="**********************************************" file="csi-driver.go:58"
time="2024-09-29T08:04:24Z" level=info msg=">>>>> CMDLINE Exec, args: []" file="csi-driver.go:60"
time="2024-09-29T08:04:24Z" level=trace msg=">>>>> csiCliHandler" file="csi-driver.go:89"
time="2024-09-29T08:04:24Z" level=trace msg=">>>>> FileExists for path /var/lib/csi/sockets/pluginproxy" file="file.go:62"
time="2024-09-29T08:04:24Z" level=trace msg="<<<<< FileExists" file="file.go:72"
W0929 08:04:24.277753       1 reflector.go:424] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
E0929 08:04:24.277820       1 reflector.go:140] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: Failed to watch *v1.VolumeSnapshot: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
time="2024-09-29T08:04:24Z" level=info msg="Enabling controller service capability: CREATE_DELETE_VOLUME" file="driver.go:250"
time="2024-09-29T08:04:24Z" level=info msg="Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME" file="driver.go:250"
time="2024-09-29T08:04:24Z" level=info msg="Enabling controller service capability: LIST_VOLUMES" file="driver.go:250"
time="2024-09-29T08:04:24Z" level=info msg="Enabling controller service capability: CREATE_DELETE_SNAPSHOT" file="driver.go:250"
time="2024-09-29T08:04:24Z" level=info msg="Enabling controller service capability: LIST_SNAPSHOTS" file="driver.go:250"
time="2024-09-29T08:04:24Z" level=info msg="Enabling controller service capability: CLONE_VOLUME" file="driver.go:250"
time="2024-09-29T08:04:24Z" level=info msg="Enabling controller service capability: PUBLISH_READONLY" file="driver.go:250"
time="2024-09-29T08:04:24Z" level=info msg="Enabling controller service capability: EXPAND_VOLUME" file="driver.go:250"
time="2024-09-29T08:04:24Z" level=info msg="Enabling node service capability: STAGE_UNSTAGE_VOLUME" file="driver.go:267"
time="2024-09-29T08:04:24Z" level=info msg="Enabling node service capability: EXPAND_VOLUME" file="driver.go:267"
time="2024-09-29T08:04:24Z" level=info msg="Enabling node service capability: GET_VOLUME_STATS" file="driver.go:267"
time="2024-09-29T08:04:24Z" level=info msg="Enabling volume expansion type: ONLINE" file="driver.go:281"
time="2024-09-29T08:04:24Z" level=info msg="Enabling volume access mode: SINGLE_NODE_WRITER" file="driver.go:293"
time="2024-09-29T08:04:24Z" level=info msg="Enabling volume access mode: SINGLE_NODE_READER_ONLY" file="driver.go:293"
time="2024-09-29T08:04:24Z" level=info msg="Enabling volume access mode: MULTI_NODE_READER_ONLY" file="driver.go:293"
time="2024-09-29T08:04:24Z" level=info msg="Enabling volume access mode: MULTI_NODE_SINGLE_WRITER" file="driver.go:293"
time="2024-09-29T08:04:24Z" level=info msg="Enabling volume access mode: MULTI_NODE_MULTI_WRITER" file="driver.go:293"
time="2024-09-29T08:04:24Z" level=info msg="DB service disabled!!!" file="driver.go:145"
time="2024-09-29T08:04:24Z" level=info msg="About to start the CSI driver 'csi.hpe.com with KubeletRootDirectory /var/lib/kubelet/'" file="csi-driver.go:192"
time="2024-09-29T08:04:24Z" level=info msg="[1] reply  : [/bin/csi-driver --endpoint=unix:///var/lib/csi/sockets/pluginproxy/csi.sock --flavor=kubernetes --pod-monitor --pod-monitor-interval=30]" file="csi-driver.go:195"
time="2024-09-29T08:04:24Z" level=trace msg=">>>>> StartMonitor" file="monitor.go:48"
time="2024-09-29T08:04:24Z" level=trace msg=">>>>> monitorPod" file="monitor.go:96"
time="2024-09-29T08:04:24Z" level=trace msg="<<<<< monitorPod" file="monitor.go:115"
time="2024-09-29T08:04:24Z" level=trace msg="<<<<< StartMonitor" file="monitor.go:73"
time="2024-09-29T08:04:24Z" level=info msg="Listening for connections on address: &net.UnixAddr{Name:\"//var/lib/csi/sockets/pluginproxy/csi.sock\", Net:\"unix\"}" file="server.go:86"
time="2024-09-29T08:04:24Z" level=info msg="GRPC call: /csi.v1.Identity/Probe" file="utils.go:69"
time="2024-09-29T08:04:24Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-09-29T08:04:24Z" level=info msg=">>>>> Probe" file="identity_server.go:39"
time="2024-09-29T08:04:24Z" level=info msg="<<<<< Probe" file="identity_server.go:42"
time="2024-09-29T08:04:24Z" level=info msg="GRPC response: {}" file="utils.go:75"
time="2024-09-29T08:04:24Z" level=info msg="GRPC call: /csi.v1.Identity/GetPluginInfo" file="utils.go:69"
time="2024-09-29T08:04:24Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-09-29T08:04:24Z" level=info msg=">>>>> GetPluginInfo" file="identity_server.go:16"
time="2024-09-29T08:04:24Z" level=info msg="<<<<< GetPluginInfo" file="identity_server.go:19"
time="2024-09-29T08:04:24Z" level=info msg="GRPC response: {\"name\":\"csi.hpe.com\",\"vendor_version\":\"1.3\"}" file="utils.go:75"
time="2024-09-29T08:04:24Z" level=info msg="GRPC call: /csi.v1.Identity/GetPluginCapabilities" file="utils.go:69"
time="2024-09-29T08:04:24Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-09-29T08:04:24Z" level=info msg=">>>>> GetPluginCapabilities" file="identity_server.go:52"
time="2024-09-29T08:04:24Z" level=info msg="<<<<< GetPluginCapabilities" file="identity_server.go:55"
time="2024-09-29T08:04:24Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Service\":{\"type\":1}}},{\"Type\":{\"Service\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-09-29T08:04:24Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerGetCapabilities" file="utils.go:69"
time="2024-09-29T08:04:24Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-09-29T08:04:24Z" level=trace msg=">>>>> ControllerGetCapabilities" file="controller_server.go:1177"
time="2024-09-29T08:04:24Z" level=trace msg="<<<<< ControllerGetCapabilities" file="controller_server.go:1180"
time="2024-09-29T08:04:24Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":2}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":5}}},{\"Type\":{\"Rpc\":{\"type\":6}}},{\"Type\":{\"Rpc\":{\"type\":7}}},{\"Type\":{\"Rpc\":{\"type\":8}}},{\"Type\":{\"Rpc\":{\"type\":9}}}]}" file="utils.go:75"
time="2024-09-29T08:04:24Z" level=info msg="GRPC call: /csi.v1.Identity/Probe" file="utils.go:69"
time="2024-09-29T08:04:24Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-09-29T08:04:24Z" level=info msg=">>>>> Probe" file="identity_server.go:39"
time="2024-09-29T08:04:24Z" level=info msg="<<<<< Probe" file="identity_server.go:42"
time="2024-09-29T08:04:24Z" level=info msg="GRPC response: {}" file="utils.go:75"
time="2024-09-29T08:04:24Z" level=info msg="GRPC call: /csi.v1.Identity/GetPluginInfo" file="utils.go:69"
time="2024-09-29T08:04:24Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-09-29T08:04:24Z" level=info msg=">>>>> GetPluginInfo" file="identity_server.go:16"
time="2024-09-29T08:04:24Z" level=info msg="<<<<< GetPluginInfo" file="identity_server.go:19"
time="2024-09-29T08:04:24Z" level=info msg="GRPC response: {\"name\":\"csi.hpe.com\",\"vendor_version\":\"1.3\"}" file="utils.go:75"
time="2024-09-29T08:04:24Z" level=info msg="GRPC call: /csi.v1.Identity/GetPluginCapabilities" file="utils.go:69"
time="2024-09-29T08:04:24Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-09-29T08:04:24Z" level=info msg=">>>>> GetPluginCapabilities" file="identity_server.go:52"
time="2024-09-29T08:04:24Z" level=info msg="<<<<< GetPluginCapabilities" file="identity_server.go:55"
time="2024-09-29T08:04:24Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Service\":{\"type\":1}}},{\"Type\":{\"Service\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-09-29T08:04:24Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerGetCapabilities" file="utils.go:69"
time="2024-09-29T08:04:24Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-09-29T08:04:24Z" level=trace msg=">>>>> ControllerGetCapabilities" file="controller_server.go:1177"
time="2024-09-29T08:04:24Z" level=trace msg="<<<<< ControllerGetCapabilities" file="controller_server.go:1180"
time="2024-09-29T08:04:24Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":2}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":5}}},{\"Type\":{\"Rpc\":{\"type\":6}}},{\"Type\":{\"Rpc\":{\"type\":7}}},{\"Type\":{\"Rpc\":{\"type\":8}}},{\"Type\":{\"Rpc\":{\"type\":9}}}]}" file="utils.go:75"
time="2024-09-29T08:04:24Z" level=info msg="GRPC call: /csi.v1.Identity/GetPluginInfo" file="utils.go:69"
time="2024-09-29T08:04:24Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-09-29T08:04:24Z" level=info msg=">>>>> GetPluginInfo" file="identity_server.go:16"
time="2024-09-29T08:04:24Z" level=info msg="<<<<< GetPluginInfo" file="identity_server.go:19"
time="2024-09-29T08:04:24Z" level=info msg="GRPC response: {\"name\":\"csi.hpe.com\",\"vendor_version\":\"1.3\"}" file="utils.go:75"
time="2024-09-29T08:04:24Z" level=info msg="GRPC call: /csi.v1.Identity/Probe" file="utils.go:69"
time="2024-09-29T08:04:24Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-09-29T08:04:24Z" level=info msg=">>>>> Probe" file="identity_server.go:39"
time="2024-09-29T08:04:24Z" level=info msg="<<<<< Probe" file="identity_server.go:42"
time="2024-09-29T08:04:24Z" level=info msg="GRPC response: {}" file="utils.go:75"
time="2024-09-29T08:04:24Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerGetCapabilities" file="utils.go:69"
time="2024-09-29T08:04:24Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-09-29T08:04:24Z" level=trace msg=">>>>> ControllerGetCapabilities" file="controller_server.go:1177"
time="2024-09-29T08:04:24Z" level=trace msg="<<<<< ControllerGetCapabilities" file="controller_server.go:1180"
time="2024-09-29T08:04:24Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":2}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":5}}},{\"Type\":{\"Rpc\":{\"type\":6}}},{\"Type\":{\"Rpc\":{\"type\":7}}},{\"Type\":{\"Rpc\":{\"type\":8}}},{\"Type\":{\"Rpc\":{\"type\":9}}}]}" file="utils.go:75"
time="2024-09-29T08:04:25Z" level=info msg="GRPC call: /csi.v1.Identity/Probe" file="utils.go:69"
time="2024-09-29T08:04:25Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-09-29T08:04:25Z" level=info msg=">>>>> Probe" file="identity_server.go:39"
time="2024-09-29T08:04:25Z" level=info msg="<<<<< Probe" file="identity_server.go:42"
time="2024-09-29T08:04:25Z" level=info msg="GRPC response: {}" file="utils.go:75"
time="2024-09-29T08:04:25Z" level=info msg="GRPC call: /csi.v1.Identity/GetPluginInfo" file="utils.go:69"
time="2024-09-29T08:04:25Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-09-29T08:04:25Z" level=info msg=">>>>> GetPluginInfo" file="identity_server.go:16"
time="2024-09-29T08:04:25Z" level=info msg="<<<<< GetPluginInfo" file="identity_server.go:19"
time="2024-09-29T08:04:25Z" level=info msg="GRPC response: {\"name\":\"csi.hpe.com\",\"vendor_version\":\"1.3\"}" file="utils.go:75"
time="2024-09-29T08:04:25Z" level=info msg="GRPC call: /csi.v1.Identity/GetPluginCapabilities" file="utils.go:69"
time="2024-09-29T08:04:25Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-09-29T08:04:25Z" level=info msg=">>>>> GetPluginCapabilities" file="identity_server.go:52"
time="2024-09-29T08:04:25Z" level=info msg="<<<<< GetPluginCapabilities" file="identity_server.go:55"
time="2024-09-29T08:04:25Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Service\":{\"type\":1}}},{\"Type\":{\"Service\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-09-29T08:04:25Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerGetCapabilities" file="utils.go:69"
time="2024-09-29T08:04:25Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-09-29T08:04:25Z" level=trace msg=">>>>> ControllerGetCapabilities" file="controller_server.go:1177"
time="2024-09-29T08:04:25Z" level=trace msg="<<<<< ControllerGetCapabilities" file="controller_server.go:1180"
time="2024-09-29T08:04:25Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":2}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":5}}},{\"Type\":{\"Rpc\":{\"type\":6}}},{\"Type\":{\"Rpc\":{\"type\":7}}},{\"Type\":{\"Rpc\":{\"type\":8}}},{\"Type\":{\"Rpc\":{\"type\":9}}}]}" file="utils.go:75"
time="2024-09-29T08:04:25Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerGetCapabilities" file="utils.go:69"
time="2024-09-29T08:04:25Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-09-29T08:04:25Z" level=trace msg=">>>>> ControllerGetCapabilities" file="controller_server.go:1177"
time="2024-09-29T08:04:25Z" level=trace msg="<<<<< ControllerGetCapabilities" file="controller_server.go:1180"
time="2024-09-29T08:04:25Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":2}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":5}}},{\"Type\":{\"Rpc\":{\"type\":6}}},{\"Type\":{\"Rpc\":{\"type\":7}}},{\"Type\":{\"Rpc\":{\"type\":8}}},{\"Type\":{\"Rpc\":{\"type\":9}}}]}" file="utils.go:75"
time="2024-09-29T08:04:25Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerGetCapabilities" file="utils.go:69"
time="2024-09-29T08:04:25Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-09-29T08:04:25Z" level=trace msg=">>>>> ControllerGetCapabilities" file="controller_server.go:1177"
time="2024-09-29T08:04:25Z" level=trace msg="<<<<< ControllerGetCapabilities" file="controller_server.go:1180"
time="2024-09-29T08:04:25Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":2}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":5}}},{\"Type\":{\"Rpc\":{\"type\":6}}},{\"Type\":{\"Rpc\":{\"type\":7}}},{\"Type\":{\"Rpc\":{\"type\":8}}},{\"Type\":{\"Rpc\":{\"type\":9}}}]}" file="utils.go:75"
W0929 08:04:25.282013       1 reflector.go:424] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
E0929 08:04:25.282411       1 reflector.go:140] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: Failed to watch *v1.VolumeSnapshot: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
W0929 08:04:27.764783       1 reflector.go:424] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
E0929 08:04:27.764858       1 reflector.go:140] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: Failed to watch *v1.VolumeSnapshot: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
W0929 08:04:33.987072       1 reflector.go:424] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
E0929 08:04:33.987141       1 reflector.go:140] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: Failed to watch *v1.VolumeSnapshot: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
W0929 08:04:44.838877       1 reflector.go:424] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
E0929 08:04:44.838919       1 reflector.go:140] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: Failed to watch *v1.VolumeSnapshot: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
time="2024-09-29T08:04:54Z" level=trace msg=">>>>> MonitorPod with label monitored-by=hpe-csi" file="flavor.go:838"
time="2024-09-29T08:04:54Z" level=trace msg="cannot find any nfs pod with label monitored-by=hpe-csi" file="flavor.go:852"
time="2024-09-29T08:04:54Z" level=trace msg="<<<<< MonitorPod" file="flavor.go:853"
W0929 08:05:08.304894       1 reflector.go:424] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
E0929 08:05:08.304955       1 reflector.go:140] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: Failed to watch *v1.VolumeSnapshot: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
time="2024-09-29T08:05:12Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2024-09-29T08:05:12Z" level=info msg="GRPC request: {\"node_id\":\"b1312db0-c69d-a568-6698-c1f50490d1f8\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"allowOverrides\":\"sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace\",\"compression\":\"LZ4\",\"csi.storage.k8s.io/pv/name\":\"pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\",\"csi.storage.k8s.io/pvc/name\":\"data-kafka-controller-2\",\"csi.storage.k8s.io/pvc/namespace\":\"mynamespace\",\"deduplication\":\"OFF\",\"description\":\"Volume for PVC {pvc}\",\"fsType\":\"xfs\",\"hostEncryption\":\"true\",\"hostEncryptionSecretName\":\"storage-encryption-passphrase\",\"hostEncryptionSecretNamespace\":\"hpe-storage\",\"root\":\"my-data/first.dataset\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1727529462858-4320-csi.hpe.com\",\"sync\":\"STANDARD\",\"targetScope\":\"volume\",\"volblocksize\":\"8K\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\"}" file="utils.go:70"

PUBLISH STARTS HERE

time="2024-09-29T08:05:12Z" level=trace msg=">>>>> ControllerPublishVolume" file="controller_server.go:704"
time="2024-09-29T08:05:12Z" level=trace msg=">>>>> HandleDuplicateRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:522"
time="2024-09-29T08:05:12Z" level=trace msg=">>>>> GetRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:579"
time="2024-09-29T08:05:12Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:05:12Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:05:12Z" level=trace msg="<<<<< GetRequest" file="driver.go:586"
time="2024-09-29T08:05:12Z" level=trace msg=">>>>> AddRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8, value: true" file="driver.go:591"
time="2024-09-29T08:05:12Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:05:12Z" level=trace msg="Print RequestCache: map[ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8:true]" file="driver.go:599"
time="2024-09-29T08:05:12Z" level=trace msg="Successfully inserted an entry with key ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8 to the cache map" file="driver.go:600"
time="2024-09-29T08:05:12Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:05:12Z" level=trace msg="<<<<< AddRequest" file="driver.go:601"
time="2024-09-29T08:05:12Z" level=trace msg="<<<<< HandleDuplicateRequest" file="driver.go:539"
time="2024-09-29T08:05:12Z" level=trace msg=">>>>> controllerPublishVolume with volumeID: my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1, nodeID: b1312db0-c69d-a568-6698-c1f50490d1f8, volumeCapability: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > , readOnlyFlag: false, volumeContext: map[allowOverrides:sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace compression:LZ4 csi.storage.k8s.io/pv/name:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 csi.storage.k8s.io/pvc/name:data-kafka-controller-2 csi.storage.k8s.io/pvc/namespace:mynamespace deduplication:OFF description:Volume for PVC {pvc} fsType:xfs hostEncryption:true hostEncryptionSecretName:storage-encryption-passphrase hostEncryptionSecretNamespace:hpe-storage root:my-data/first.dataset storage.kubernetes.io/csiProvisionerIdentity:1727529462858-4320-csi.hpe.com sync:STANDARD targetScope:volume volblocksize:8K volumeAccessMode:mount]" file="controller_server.go:754"
time="2024-09-29T08:05:12Z" level=trace msg=">>>>> IsValidVolumeCapability, volCapability: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > " file="controller_server.go:86"
time="2024-09-29T08:05:12Z" level=trace msg="Found access_mode: SINGLE_NODE_WRITER" file="controller_server.go:95"
time="2024-09-29T08:05:12Z" level=trace msg="Found Mount access_type fs_type:\"xfs\" " file="controller_server.go:119"
time="2024-09-29T08:05:12Z" level=trace msg="Found Mount access_type, FileSystem: xfs" file="controller_server.go:122"
time="2024-09-29T08:05:12Z" level=trace msg="Total length of mount flags: 0" file="controller_server.go:136"
time="2024-09-29T08:05:12Z" level=trace msg="<<<<< IsValidVolumeCapability" file="controller_server.go:150"
time="2024-09-29T08:05:12Z" level=trace msg=">>>>> getVolumeAccessType, volCap: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > " file="controller_server.go:26"
time="2024-09-29T08:05:12Z" level=trace msg="Found mount access type fs_type:\"xfs\" " file="controller_server.go:39"
time="2024-09-29T08:05:12Z" level=trace msg="<<<<< getVolumeAccessType" file="controller_server.go:40"
time="2024-09-29T08:05:12Z" level=trace msg=">>>>> GetVolume, ID: my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1" file="driver.go:431"
time="2024-09-29T08:05:12Z" level=trace msg="Secrets are provided. Checking with this particular storage provider." file="driver.go:438"
time="2024-09-29T08:05:12Z" level=trace msg=">>>>> GetStorageProvider" file="driver.go:360"
time="2024-09-29T08:05:12Z" level=trace msg=">>>>> CreateCredentials" file="storage_provider.go:65"
time="2024-09-29T08:05:12Z" level=trace msg="<<<<< CreateCredentials" file="storage_provider.go:125"
time="2024-09-29T08:05:12Z" level=trace msg=">>>>> AddStorageProvider" file="driver.go:331"
time="2024-09-29T08:05:12Z" level=info msg="Adding connection to CSP at IP truenas.internaldomain.com, port 8080, context path , with username ignored-when-using-api-key and serviceName truenas-csp-svc" file="driver.go:334"
time="2024-09-29T08:05:12Z" level=trace msg=">>>>> NewContainerStorageProvider" file="container_storage_provider.go:66"
time="2024-09-29T08:05:12Z" level=trace msg=">>>>> getCspClient (service) using URI http://truenas-csp-svc:8080 and username ignored-when-using-api-key with timeout set to 60 seconds" file="container_storage_provider.go:942"
time="2024-09-29T08:05:12Z" level=trace msg="<<<<< getCspClient" file="container_storage_provider.go:948"
time="2024-09-29T08:05:12Z" level=trace msg="Attempting initial login to CSP" file="container_storage_provider.go:81"
time="2024-09-29T08:05:12Z" level=info msg="About to attempt login to CSP for backend truenas.internaldomain.com" file="container_storage_provider.go:108"
time="2024-09-29T08:05:12Z" level=trace msg="Acquiring mutex lock for truenas.internaldomain.com" file="concurrent.go:44"
time="2024-09-29T08:05:12Z" level=trace msg="Request: action=POST path=http://truenas-csp-svc:8080/containers/v1/tokens" file="client.go:173"
time="2024-09-29T08:05:12Z" level=trace msg="response: 200 OK, length=232" file="client.go:224"
time="2024-09-29T08:05:12Z" level=debug msg="About to decode the error response &{0x4ff020 0xc0003ecf00 0x843500} into destination interface" file="client.go:231"
time="2024-09-29T08:05:12Z" level=trace msg="Releasing mutex lock for truenas.internaldomain.com" file="concurrent.go:67"
time="2024-09-29T08:05:12Z" level=trace msg="<<<<< NewContainerStorageProvider" file="container_storage_provider.go:92"
time="2024-09-29T08:05:12Z" level=trace msg="Number of cached/known storage providers: 1" file="driver.go:345"
time="2024-09-29T08:05:12Z" level=trace msg="<<<<< AddStorageProvider" file="driver.go:346"
time="2024-09-29T08:05:12Z" level=trace msg="<<<<< GetStorageProvider" file="driver.go:389"
time="2024-09-29T08:05:12Z" level=trace msg="About to invoke CSP request for backend truenas.internaldomain.com" file="container_storage_provider.go:153"
time="2024-09-29T08:05:12Z" level=trace msg="Header: {x-auth-token : *****}\n" file="client.go:166"
time="2024-09-29T08:05:12Z" level=trace msg="Header: {x-array-ip : truenas.internaldomain.com}\n" file="client.go:168"
time="2024-09-29T08:05:12Z" level=trace msg="Request: action=GET path=http://truenas-csp-svc:8080/containers/v1/volumes/my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1" file="client.go:173"
time="2024-09-29T08:05:14Z" level=trace msg="response: 200 OK, length=405" file="client.go:224"
time="2024-09-29T08:05:14Z" level=debug msg="About to decode the error response &{0x4ff020 0xc0004d9e40 0x843500} into destination interface" file="client.go:231"
time="2024-09-29T08:05:14Z" level=trace msg="Found Volume &{ID:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 Name:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 Size:2147483648 Description:Volume for PVC data-kafka-controller-2 InUse:false Published:false BaseSnapID: ParentVolID: Clone:false Config:map[compression:LZ4 deduplication:OFF sync:STANDARD target_scope:volume volblocksize:8K] Metadata:[] SerialNumber: AccessProtocol: Iqn: Iqns:[] DiscoveryIP: DiscoveryIPs:[] MountPoint: Status:map[] Chap:<nil> Networks:[] ConnectionMode: LunID: TargetScope: IscsiSessions:[] FcSessions:[] VolumeGroupId: SecondaryArrayDetails: UsedBytes:0 FreeBytes:0 EncryptionKey:}" file="driver.go:469"
time="2024-09-29T08:05:14Z" level=trace msg="<<<<< GetVolume" file="driver.go:470"
time="2024-09-29T08:05:14Z" level=trace msg="volume config is map[compression:LZ4 deduplication:OFF sync:STANDARD targetScope:volume volblocksize:8K]" file="controller_server.go:823"
time="2024-09-29T08:05:14Z" level=trace msg=">>>>>> GetNodeInfo from node ID b1312db0-c69d-a568-6698-c1f50490d1f8" file="flavor.go:331"
time="2024-09-29T08:05:14Z" level=trace msg="Found the following HPE Node Info objects: &{{ } { 751184  <nil>} [{{HPENodeInfo storage.hpe.com/v1} {c1    7f95b111-3eb1-4115-bc54-59ac2442ea4c 495583 1 2024-09-28 20:27:12 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:12 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {1d1d92c3-5fff-4b13-4f87-dba44e3067c6 [iqn.1993-08.org.debian:01:4efdaa484444] [192.168.150.115/24 10.42.0.0/32 10.42.0.1/24] []  }} {{HPENodeInfo storage.hpe.com/v1} {c2    fac777c1-03fe-43d2-998f-8c45c2b04642 495394 1 2024-09-28 20:27:02 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:02 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {b1312db0-c69d-a568-6698-c1f50490d1f8 [iqn.1993-08.org.debian:01:4efdaa484445] [192.168.150.116/24 10.42.1.0/32 10.42.1.1/24] []  }} {{HPENodeInfo storage.hpe.com/v1} {c3    5ba76957-0085-49f4-8761-04abc656873f 495465 1 2024-09-28 20:27:06 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:06 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {df97b3bc-a8d5-2492-cbdc-197130594330 [iqn.1993-08.org.debian:01:4efdaa484446] [192.168.150.117/24 10.42.2.0/32 10.42.2.1/24] []  }}]}" file="flavor.go:338"
time="2024-09-29T08:05:14Z" level=trace msg="Processing node info {{HPENodeInfo storage.hpe.com/v1} {c1    7f95b111-3eb1-4115-bc54-59ac2442ea4c 495583 1 2024-09-28 20:27:12 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:12 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {1d1d92c3-5fff-4b13-4f87-dba44e3067c6 [iqn.1993-08.org.debian:01:4efdaa484444] [192.168.150.115/24 10.42.0.0/32 10.42.0.1/24] []  }}" file="flavor.go:341"
time="2024-09-29T08:05:14Z" level=trace msg="Processing node info {{HPENodeInfo storage.hpe.com/v1} {c2    fac777c1-03fe-43d2-998f-8c45c2b04642 495394 1 2024-09-28 20:27:02 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:02 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {b1312db0-c69d-a568-6698-c1f50490d1f8 [iqn.1993-08.org.debian:01:4efdaa484445] [192.168.150.116/24 10.42.1.0/32 10.42.1.1/24] []  }}" file="flavor.go:341"
time="2024-09-29T08:05:14Z" level=trace msg="<<<<<< GetNodeInfo" file="flavor.go:364"
time="2024-09-29T08:05:14Z" level=info msg=GetChapCredentialsFromVolumeContext file="flavor.go:1009"
time="2024-09-29T08:05:14Z" level=info msg="CHAP secret name and namespace are not provided in the storage class parameters." file="flavor.go:1015"
time="2024-09-29T08:05:14Z" level=info msg=GetChapCredentialsFromEnvironment file="flavor.go:995"
time="2024-09-29T08:05:14Z" level=info msg="CHAP secret name and namespace are not provided as environment variables." file="flavor.go:1001"
time="2024-09-29T08:05:14Z" level=trace msg=">>>>> GetStorageProvider" file="driver.go:360"
time="2024-09-29T08:05:14Z" level=trace msg=">>>>> CreateCredentials" file="storage_provider.go:65"
time="2024-09-29T08:05:14Z" level=trace msg="<<<<< CreateCredentials" file="storage_provider.go:125"
time="2024-09-29T08:05:14Z" level=trace msg="Storage provider already exists. Returning it." file="driver.go:380"
time="2024-09-29T08:05:14Z" level=trace msg="<<<<< GetStorageProvider" file="driver.go:381"
time="2024-09-29T08:05:14Z" level=trace msg="Notifying CSP about Node with ID  and UUID b1312db0-c69d-a568-6698-c1f50490d1f8" file="controller_server.go:861"
time="2024-09-29T08:05:14Z" level=trace msg="About to invoke CSP request for backend truenas.internaldomain.com" file="container_storage_provider.go:153"
time="2024-09-29T08:05:14Z" level=trace msg="Header: {x-array-ip : truenas.internaldomain.com}\n" file="client.go:168"
time="2024-09-29T08:05:14Z" level=trace msg="Header: {x-auth-token : *****}\n" file="client.go:166"
time="2024-09-29T08:05:14Z" level=trace msg="Request: action=POST path=http://truenas-csp-svc:8080/containers/v1/hosts" file="client.go:173"
time="2024-09-29T08:05:18Z" level=trace msg="response: 200 OK, length=278" file="client.go:224"
time="2024-09-29T08:05:18Z" level=debug msg="Received a null reader. That is not expected." file="client.go:245"
time="2024-09-29T08:05:18Z" level=trace msg="Defaulting to access protocol iscsi" file="controller_server.go:877"
time="2024-09-29T08:05:18Z" level=trace msg="About to invoke CSP request for backend truenas.internaldomain.com" file="container_storage_provider.go:153"
time="2024-09-29T08:05:18Z" level=trace msg="Header: {x-array-ip : truenas.internaldomain.com}\n" file="client.go:168"
time="2024-09-29T08:05:18Z" level=trace msg="Header: {x-auth-token : *****}\n" file="client.go:166"
time="2024-09-29T08:05:18Z" level=trace msg="Request: action=PUT path=http://truenas-csp-svc:8080/containers/v1/volumes/my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1/actions/publish" file="client.go:173"
time="2024-09-29T08:05:24Z" level=trace msg=">>>>> MonitorPod with label monitored-by=hpe-csi" file="flavor.go:838"
time="2024-09-29T08:05:24Z" level=trace msg="cannot find any nfs pod with label monitored-by=hpe-csi" file="flavor.go:852"
time="2024-09-29T08:05:24Z" level=trace msg="<<<<< MonitorPod" file="flavor.go:853"
time="2024-09-29T08:05:27Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2024-09-29T08:05:27Z" level=info msg="GRPC request: {\"node_id\":\"b1312db0-c69d-a568-6698-c1f50490d1f8\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"allowOverrides\":\"sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace\",\"compression\":\"LZ4\",\"csi.storage.k8s.io/pv/name\":\"pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\",\"csi.storage.k8s.io/pvc/name\":\"data-kafka-controller-2\",\"csi.storage.k8s.io/pvc/namespace\":\"mynamespace\",\"deduplication\":\"OFF\",\"description\":\"Volume for PVC {pvc}\",\"fsType\":\"xfs\",\"hostEncryption\":\"true\",\"hostEncryptionSecretName\":\"storage-encryption-passphrase\",\"hostEncryptionSecretNamespace\":\"hpe-storage\",\"root\":\"my-data/first.dataset\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1727529462858-4320-csi.hpe.com\",\"sync\":\"STANDARD\",\"targetScope\":\"volume\",\"volblocksize\":\"8K\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\"}" file="utils.go:70"
time="2024-09-29T08:05:27Z" level=trace msg=">>>>> ControllerPublishVolume" file="controller_server.go:704"
time="2024-09-29T08:05:27Z" level=trace msg=">>>>> HandleDuplicateRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:522"
time="2024-09-29T08:05:27Z" level=trace msg=">>>>> GetRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:579"
time="2024-09-29T08:05:27Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:05:27Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:05:27Z" level=trace msg="<<<<< GetRequest" file="driver.go:586"
time="2024-09-29T08:05:27Z" level=trace msg="<<<<< HandleDuplicateRequest" file="driver.go:532"
time="2024-09-29T08:05:27Z" level=trace msg="<<<<< ControllerPublishVolume" file="controller_server.go:722"
time="2024-09-29T08:05:27Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="utils.go:73"
time="2024-09-29T08:05:28Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2024-09-29T08:05:28Z" level=info msg="GRPC request: {\"node_id\":\"b1312db0-c69d-a568-6698-c1f50490d1f8\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"allowOverrides\":\"sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace\",\"compression\":\"LZ4\",\"csi.storage.k8s.io/pv/name\":\"pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\",\"csi.storage.k8s.io/pvc/name\":\"data-kafka-controller-2\",\"csi.storage.k8s.io/pvc/namespace\":\"mynamespace\",\"deduplication\":\"OFF\",\"description\":\"Volume for PVC {pvc}\",\"fsType\":\"xfs\",\"hostEncryption\":\"true\",\"hostEncryptionSecretName\":\"storage-encryption-passphrase\",\"hostEncryptionSecretNamespace\":\"hpe-storage\",\"root\":\"my-data/first.dataset\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1727529462858-4320-csi.hpe.com\",\"sync\":\"STANDARD\",\"targetScope\":\"volume\",\"volblocksize\":\"8K\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\"}" file="utils.go:70"
time="2024-09-29T08:05:28Z" level=trace msg=">>>>> ControllerPublishVolume" file="controller_server.go:704"
time="2024-09-29T08:05:28Z" level=trace msg=">>>>> HandleDuplicateRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:522"
time="2024-09-29T08:05:28Z" level=trace msg=">>>>> GetRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:579"
time="2024-09-29T08:05:28Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:05:28Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:05:28Z" level=trace msg="<<<<< GetRequest" file="driver.go:586"
time="2024-09-29T08:05:28Z" level=trace msg="<<<<< HandleDuplicateRequest" file="driver.go:532"
time="2024-09-29T08:05:28Z" level=trace msg="<<<<< ControllerPublishVolume" file="controller_server.go:722"
time="2024-09-29T08:05:28Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="utils.go:73"
time="2024-09-29T08:05:32Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2024-09-29T08:05:32Z" level=info msg="GRPC request: {\"node_id\":\"b1312db0-c69d-a568-6698-c1f50490d1f8\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"allowOverrides\":\"sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace\",\"compression\":\"LZ4\",\"csi.storage.k8s.io/pv/name\":\"pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\",\"csi.storage.k8s.io/pvc/name\":\"data-kafka-controller-2\",\"csi.storage.k8s.io/pvc/namespace\":\"mynamespace\",\"deduplication\":\"OFF\",\"description\":\"Volume for PVC {pvc}\",\"fsType\":\"xfs\",\"hostEncryption\":\"true\",\"hostEncryptionSecretName\":\"storage-encryption-passphrase\",\"hostEncryptionSecretNamespace\":\"hpe-storage\",\"root\":\"my-data/first.dataset\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1727529462858-4320-csi.hpe.com\",\"sync\":\"STANDARD\",\"targetScope\":\"volume\",\"volblocksize\":\"8K\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\"}" file="utils.go:70"
time="2024-09-29T08:05:32Z" level=trace msg=">>>>> ControllerPublishVolume" file="controller_server.go:704"
time="2024-09-29T08:05:32Z" level=trace msg=">>>>> HandleDuplicateRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:522"
time="2024-09-29T08:05:32Z" level=trace msg=">>>>> GetRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:579"
time="2024-09-29T08:05:32Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:05:32Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:05:32Z" level=trace msg="<<<<< GetRequest" file="driver.go:586"
time="2024-09-29T08:05:32Z" level=trace msg="<<<<< HandleDuplicateRequest" file="driver.go:532"
time="2024-09-29T08:05:32Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="utils.go:73"
time="2024-09-29T08:05:32Z" level=trace msg="<<<<< ControllerPublishVolume" file="controller_server.go:722"
time="2024-09-29T08:05:32Z" level=trace msg="response: 200 OK, length=260" file="client.go:224"
time="2024-09-29T08:05:32Z" level=debug msg="About to decode the error response &{0x4ff020 0xc0003ec040 0x843500} into destination interface" file="client.go:231"
time="2024-09-29T08:05:32Z" level=trace msg="PublishInfo response from CSP: &{SerialNumber:6589cfc000000a69472b2e921fd54e29 AccessInfo:{BlockDeviceAccessInfo:{AccessProtocol:iscsi TargetNames:[iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1] LunID:0 SecondaryBackendDetails:{PeerArrayDetails:[]} IscsiAccessInfo:{DiscoveryIPs:[192.168.150.241] ChapUser: ChapPassword:}} VirtualDeviceAccessInfo:{}}}" file="controller_server.go:887"
time="2024-09-29T08:05:32Z" level=trace msg="Adding filesystem details to the publish context" file="controller_server.go:930"
time="2024-09-29T08:05:32Z" level=trace msg="Volume pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 with ID my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 published with the following details: [serialNumber 6589cfc000000a69472b2e921fd54e29 accessProtocol iscsi targetScope volume fsType xfs fsMode  fsCreateOptions  targetNames iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 lunId 0 discoveryIps 192.168.150.241 readOnly false volumeAccessMode mount fsOwner ]" file="controller_server.go:942"
time="2024-09-29T08:05:32Z" level=trace msg="<<<<< controllerPublishVolume" file="controller_server.go:944"

VOLUME SHOULD BE PUBLISHED AFTER THIS POINT

time="2024-09-29T08:05:32Z" level=trace msg=">>>>> ClearRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:605"
time="2024-09-29T08:05:32Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:05:32Z" level=trace msg="Print RequestCache: map[]" file="driver.go:625"
time="2024-09-29T08:05:32Z" level=trace msg="Successfully removed an entry with key ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8 from the cache map" file="driver.go:626"
time="2024-09-29T08:05:32Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:05:32Z" level=trace msg="<<<<< ClearRequest" file="driver.go:627"
time="2024-09-29T08:05:32Z" level=trace msg="<<<<< ControllerPublishVolume" file="controller_server.go:740"
time="2024-09-29T08:05:32Z" level=info msg="GRPC response: {\"publish_context\":{\"accessProtocol\":\"iscsi\",\"discoveryIps\":\"192.168.150.241\",\"fsCreateOptions\":\"\",\"fsMode\":\"\",\"fsOwner\":\"\",\"fsType\":\"xfs\",\"lunId\":\"0\",\"readOnly\":\"false\",\"serialNumber\":\"6589cfc000000a69472b2e921fd54e29\",\"targetNames\":\"iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\",\"targetScope\":\"volume\",\"volumeAccessMode\":\"mount\"}}" file="utils.go:75"
time="2024-09-29T08:05:40Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2024-09-29T08:05:40Z" level=info msg="GRPC request: {\"node_id\":\"b1312db0-c69d-a568-6698-c1f50490d1f8\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"allowOverrides\":\"sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace\",\"compression\":\"LZ4\",\"csi.storage.k8s.io/pv/name\":\"pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\",\"csi.storage.k8s.io/pvc/name\":\"data-kafka-controller-2\",\"csi.storage.k8s.io/pvc/namespace\":\"mynamespace\",\"deduplication\":\"OFF\",\"description\":\"Volume for PVC {pvc}\",\"fsType\":\"xfs\",\"hostEncryption\":\"true\",\"hostEncryptionSecretName\":\"storage-encryption-passphrase\",\"hostEncryptionSecretNamespace\":\"hpe-storage\",\"root\":\"my-data/first.dataset\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1727529462858-4320-csi.hpe.com\",\"sync\":\"STANDARD\",\"targetScope\":\"volume\",\"volblocksize\":\"8K\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\"}" file="utils.go:70"
time="2024-09-29T08:05:40Z" level=trace msg=">>>>> ControllerPublishVolume" file="controller_server.go:704"
time="2024-09-29T08:05:40Z" level=trace msg=">>>>> HandleDuplicateRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:522"
time="2024-09-29T08:05:40Z" level=trace msg=">>>>> GetRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:579"
time="2024-09-29T08:05:40Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:05:40Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:05:40Z" level=trace msg="<<<<< GetRequest" file="driver.go:586"
time="2024-09-29T08:05:40Z" level=trace msg=">>>>> AddRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8, value: true" file="driver.go:591"
time="2024-09-29T08:05:40Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:05:40Z" level=trace msg="Print RequestCache: map[ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8:true]" file="driver.go:599"
time="2024-09-29T08:05:40Z" level=trace msg="Successfully inserted an entry with key ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8 to the cache map" file="driver.go:600"
time="2024-09-29T08:05:40Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:05:40Z" level=trace msg="<<<<< AddRequest" file="driver.go:601"
time="2024-09-29T08:05:40Z" level=trace msg="<<<<< HandleDuplicateRequest" file="driver.go:539"

A SECOND PUBLISH FOR THE SAME VOLUME STARTS HERE

time="2024-09-29T08:05:40Z" level=trace msg=">>>>> controllerPublishVolume with volumeID: my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1, nodeID: b1312db0-c69d-a568-6698-c1f50490d1f8, volumeCapability: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > , readOnlyFlag: false, volumeContext: map[allowOverrides:sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace compression:LZ4 csi.storage.k8s.io/pv/name:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 csi.storage.k8s.io/pvc/name:data-kafka-controller-2 csi.storage.k8s.io/pvc/namespace:mynamespace deduplication:OFF description:Volume for PVC {pvc} fsType:xfs hostEncryption:true hostEncryptionSecretName:storage-encryption-passphrase hostEncryptionSecretNamespace:hpe-storage root:my-data/first.dataset storage.kubernetes.io/csiProvisionerIdentity:1727529462858-4320-csi.hpe.com sync:STANDARD targetScope:volume volblocksize:8K volumeAccessMode:mount]" file="controller_server.go:754"
time="2024-09-29T08:05:40Z" level=trace msg=">>>>> IsValidVolumeCapability, volCapability: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > " file="controller_server.go:86"
time="2024-09-29T08:05:40Z" level=trace msg="Found access_mode: SINGLE_NODE_WRITER" file="controller_server.go:95"
time="2024-09-29T08:05:40Z" level=trace msg="Found Mount access_type fs_type:\"xfs\" " file="controller_server.go:119"
time="2024-09-29T08:05:40Z" level=trace msg="Found Mount access_type, FileSystem: xfs" file="controller_server.go:122"
time="2024-09-29T08:05:40Z" level=trace msg="Total length of mount flags: 0" file="controller_server.go:136"
time="2024-09-29T08:05:40Z" level=trace msg="<<<<< IsValidVolumeCapability" file="controller_server.go:150"
time="2024-09-29T08:05:40Z" level=trace msg=">>>>> getVolumeAccessType, volCap: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > " file="controller_server.go:26"
time="2024-09-29T08:05:40Z" level=trace msg="Found mount access type fs_type:\"xfs\" " file="controller_server.go:39"
time="2024-09-29T08:05:40Z" level=trace msg="<<<<< getVolumeAccessType" file="controller_server.go:40"
time="2024-09-29T08:05:40Z" level=trace msg=">>>>> GetVolume, ID: my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1" file="driver.go:431"
time="2024-09-29T08:05:40Z" level=trace msg="Secrets are provided. Checking with this particular storage provider." file="driver.go:438"
time="2024-09-29T08:05:40Z" level=trace msg=">>>>> GetStorageProvider" file="driver.go:360"
time="2024-09-29T08:05:40Z" level=trace msg=">>>>> CreateCredentials" file="storage_provider.go:65"
time="2024-09-29T08:05:40Z" level=trace msg="<<<<< CreateCredentials" file="storage_provider.go:125"
time="2024-09-29T08:05:40Z" level=trace msg="Storage provider already exists. Returning it." file="driver.go:380"
time="2024-09-29T08:05:40Z" level=trace msg="<<<<< GetStorageProvider" file="driver.go:381"
time="2024-09-29T08:05:40Z" level=trace msg="About to invoke CSP request for backend truenas.internaldomain.com" file="container_storage_provider.go:153"
time="2024-09-29T08:05:40Z" level=trace msg="Header: {x-auth-token : *****}\n" file="client.go:166"
time="2024-09-29T08:05:40Z" level=trace msg="Header: {x-array-ip : truenas.internaldomain.com}\n" file="client.go:168"
time="2024-09-29T08:05:40Z" level=trace msg="Request: action=GET path=http://truenas-csp-svc:8080/containers/v1/volumes/my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1" file="client.go:173"
time="2024-09-29T08:05:42Z" level=trace msg="response: 200 OK, length=404" file="client.go:224"
time="2024-09-29T08:05:42Z" level=debug msg="About to decode the error response &{0x4ff020 0xc0000ae280 0x843500} into destination interface" file="client.go:231"
time="2024-09-29T08:05:42Z" level=trace msg="Found Volume &{ID:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 Name:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 Size:2147483648 Description:Volume for PVC data-kafka-controller-2 InUse:false Published:true BaseSnapID: ParentVolID: Clone:false Config:map[compression:LZ4 deduplication:OFF sync:STANDARD target_scope:volume volblocksize:8K] Metadata:[] SerialNumber: AccessProtocol: Iqn: Iqns:[] DiscoveryIP: DiscoveryIPs:[] MountPoint: Status:map[] Chap:<nil> Networks:[] ConnectionMode: LunID: TargetScope: IscsiSessions:[] FcSessions:[] VolumeGroupId: SecondaryArrayDetails: UsedBytes:0 FreeBytes:0 EncryptionKey:}" file="driver.go:469"
time="2024-09-29T08:05:42Z" level=trace msg="<<<<< GetVolume" file="driver.go:470"
time="2024-09-29T08:05:42Z" level=trace msg="volume config is map[compression:LZ4 deduplication:OFF sync:STANDARD targetScope:volume volblocksize:8K]" file="controller_server.go:823"
time="2024-09-29T08:05:42Z" level=trace msg=">>>>>> GetNodeInfo from node ID b1312db0-c69d-a568-6698-c1f50490d1f8" file="flavor.go:331"
time="2024-09-29T08:05:42Z" level=trace msg="Found the following HPE Node Info objects: &{{ } { 751366  <nil>} [{{HPENodeInfo storage.hpe.com/v1} {c1    7f95b111-3eb1-4115-bc54-59ac2442ea4c 495583 1 2024-09-28 20:27:12 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:12 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {1d1d92c3-5fff-4b13-4f87-dba44e3067c6 [iqn.1993-08.org.debian:01:4efdaa484444] [192.168.150.115/24 10.42.0.0/32 10.42.0.1/24] []  }} {{HPENodeInfo storage.hpe.com/v1} {c2    fac777c1-03fe-43d2-998f-8c45c2b04642 495394 1 2024-09-28 20:27:02 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:02 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {b1312db0-c69d-a568-6698-c1f50490d1f8 [iqn.1993-08.org.debian:01:4efdaa484445] [192.168.150.116/24 10.42.1.0/32 10.42.1.1/24] []  }} {{HPENodeInfo storage.hpe.com/v1} {c3    5ba76957-0085-49f4-8761-04abc656873f 495465 1 2024-09-28 20:27:06 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:06 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {df97b3bc-a8d5-2492-cbdc-197130594330 [iqn.1993-08.org.debian:01:4efdaa484446] [192.168.150.117/24 10.42.2.0/32 10.42.2.1/24] []  }}]}" file="flavor.go:338"
time="2024-09-29T08:05:42Z" level=trace msg="Processing node info {{HPENodeInfo storage.hpe.com/v1} {c1    7f95b111-3eb1-4115-bc54-59ac2442ea4c 495583 1 2024-09-28 20:27:12 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:12 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {1d1d92c3-5fff-4b13-4f87-dba44e3067c6 [iqn.1993-08.org.debian:01:4efdaa484444] [192.168.150.115/24 10.42.0.0/32 10.42.0.1/24] []  }}" file="flavor.go:341"
time="2024-09-29T08:05:42Z" level=trace msg="Processing node info {{HPENodeInfo storage.hpe.com/v1} {c2    fac777c1-03fe-43d2-998f-8c45c2b04642 495394 1 2024-09-28 20:27:02 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:02 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {b1312db0-c69d-a568-6698-c1f50490d1f8 [iqn.1993-08.org.debian:01:4efdaa484445] [192.168.150.116/24 10.42.1.0/32 10.42.1.1/24] []  }}" file="flavor.go:341"
time="2024-09-29T08:05:42Z" level=trace msg="<<<<<< GetNodeInfo" file="flavor.go:364"
time="2024-09-29T08:05:42Z" level=info msg=GetChapCredentialsFromVolumeContext file="flavor.go:1009"
time="2024-09-29T08:05:42Z" level=info msg="CHAP secret name and namespace are not provided in the storage class parameters." file="flavor.go:1015"
time="2024-09-29T08:05:42Z" level=info msg=GetChapCredentialsFromEnvironment file="flavor.go:995"
time="2024-09-29T08:05:42Z" level=info msg="CHAP secret name and namespace are not provided as environment variables." file="flavor.go:1001"
time="2024-09-29T08:05:42Z" level=trace msg=">>>>> GetStorageProvider" file="driver.go:360"
time="2024-09-29T08:05:42Z" level=trace msg=">>>>> CreateCredentials" file="storage_provider.go:65"
time="2024-09-29T08:05:42Z" level=trace msg="<<<<< CreateCredentials" file="storage_provider.go:125"
time="2024-09-29T08:05:42Z" level=trace msg="Storage provider already exists. Returning it." file="driver.go:380"
time="2024-09-29T08:05:42Z" level=trace msg="<<<<< GetStorageProvider" file="driver.go:381"
time="2024-09-29T08:05:42Z" level=trace msg="Notifying CSP about Node with ID  and UUID b1312db0-c69d-a568-6698-c1f50490d1f8" file="controller_server.go:861"
time="2024-09-29T08:05:42Z" level=trace msg="About to invoke CSP request for backend truenas.internaldomain.com" file="container_storage_provider.go:153"
time="2024-09-29T08:05:42Z" level=trace msg="Header: {x-array-ip : truenas.internaldomain.com}\n" file="client.go:168"
time="2024-09-29T08:05:42Z" level=trace msg="Header: {x-auth-token : *****}\n" file="client.go:166"
time="2024-09-29T08:05:42Z" level=trace msg="Request: action=POST path=http://truenas-csp-svc:8080/containers/v1/hosts" file="client.go:173"
W0929 08:05:43.304023       1 reflector.go:424] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
E0929 08:05:43.304089       1 reflector.go:140] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: Failed to watch *v1.VolumeSnapshot: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
time="2024-09-29T08:05:46Z" level=trace msg="response: 200 OK, length=278" file="client.go:224"
time="2024-09-29T08:05:46Z" level=debug msg="Received a null reader. That is not expected." file="client.go:245"
time="2024-09-29T08:05:46Z" level=trace msg="Defaulting to access protocol iscsi" file="controller_server.go:877"
time="2024-09-29T08:05:46Z" level=trace msg="About to invoke CSP request for backend truenas.internaldomain.com" file="container_storage_provider.go:153"
time="2024-09-29T08:05:46Z" level=trace msg="Header: {x-auth-token : *****}\n" file="client.go:166"
time="2024-09-29T08:05:46Z" level=trace msg="Header: {x-array-ip : truenas.internaldomain.com}\n" file="client.go:168"
time="2024-09-29T08:05:46Z" level=trace msg="Request: action=PUT path=http://truenas-csp-svc:8080/containers/v1/volumes/my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1/actions/publish" file="client.go:173"
time="2024-09-29T08:05:54Z" level=trace msg=">>>>> MonitorPod with label monitored-by=hpe-csi" file="flavor.go:838"
time="2024-09-29T08:05:54Z" level=trace msg="cannot find any nfs pod with label monitored-by=hpe-csi" file="flavor.go:852"
time="2024-09-29T08:05:54Z" level=trace msg="<<<<< MonitorPod" file="flavor.go:853"
time="2024-09-29T08:05:57Z" level=trace msg="response: 200 OK, length=260" file="client.go:224"
time="2024-09-29T08:05:57Z" level=debug msg="About to decode the error response &{0x4ff020 0xc0003ecb80 0x843500} into destination interface" file="client.go:231"
time="2024-09-29T08:05:57Z" level=trace msg="PublishInfo response from CSP: &{SerialNumber:6589cfc000000a69472b2e921fd54e29 AccessInfo:{BlockDeviceAccessInfo:{AccessProtocol:iscsi TargetNames:[iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1] LunID:0 SecondaryBackendDetails:{PeerArrayDetails:[]} IscsiAccessInfo:{DiscoveryIPs:[192.168.150.241] ChapUser: ChapPassword:}} VirtualDeviceAccessInfo:{}}}" file="controller_server.go:887"
time="2024-09-29T08:05:57Z" level=trace msg="Adding filesystem details to the publish context" file="controller_server.go:930"
time="2024-09-29T08:05:57Z" level=trace msg="Volume pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 with ID my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 published with the following details: [volumeAccessMode mount fsType xfs fsOwner  accessProtocol iscsi targetScope volume lunId 0 discoveryIps 192.168.150.241 readOnly false fsMode  serialNumber 6589cfc000000a69472b2e921fd54e29 targetNames iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 fsCreateOptions ]" file="controller_server.go:942"
time="2024-09-29T08:05:57Z" level=trace msg="<<<<< controllerPublishVolume" file="controller_server.go:944"
time="2024-09-29T08:05:57Z" level=trace msg=">>>>> ClearRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:605"
time="2024-09-29T08:05:57Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:05:57Z" level=trace msg="Print RequestCache: map[]" file="driver.go:625"
time="2024-09-29T08:05:57Z" level=trace msg="Successfully removed an entry with key ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8 from the cache map" file="driver.go:626"
time="2024-09-29T08:05:57Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:05:57Z" level=trace msg="<<<<< ClearRequest" file="driver.go:627"
time="2024-09-29T08:05:57Z" level=trace msg="<<<<< ControllerPublishVolume" file="controller_server.go:740"
time="2024-09-29T08:05:57Z" level=info msg="GRPC response: {\"publish_context\":{\"accessProtocol\":\"iscsi\",\"discoveryIps\":\"192.168.150.241\",\"fsCreateOptions\":\"\",\"fsMode\":\"\",\"fsOwner\":\"\",\"fsType\":\"xfs\",\"lunId\":\"0\",\"readOnly\":\"false\",\"serialNumber\":\"6589cfc000000a69472b2e921fd54e29\",\"targetNames\":\"iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\",\"targetScope\":\"volume\",\"volumeAccessMode\":\"mount\"}}" file="utils.go:75"
time="2024-09-29T08:06:11Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2024-09-29T08:06:11Z" level=info msg="GRPC request: {\"node_id\":\"b1312db0-c69d-a568-6698-c1f50490d1f8\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"allowOverrides\":\"sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace\",\"compression\":\"LZ4\",\"csi.storage.k8s.io/pv/name\":\"pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\",\"csi.storage.k8s.io/pvc/name\":\"data-kafka-controller-2\",\"csi.storage.k8s.io/pvc/namespace\":\"mynamespace\",\"deduplication\":\"OFF\",\"description\":\"Volume for PVC {pvc}\",\"fsType\":\"xfs\",\"hostEncryption\":\"true\",\"hostEncryptionSecretName\":\"storage-encryption-passphrase\",\"hostEncryptionSecretNamespace\":\"hpe-storage\",\"root\":\"my-data/first.dataset\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1727529462858-4320-csi.hpe.com\",\"sync\":\"STANDARD\",\"targetScope\":\"volume\",\"volblocksize\":\"8K\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\"}" file="utils.go:70"
time="2024-09-29T08:06:11Z" level=trace msg=">>>>> ControllerPublishVolume" file="controller_server.go:704"
time="2024-09-29T08:06:11Z" level=trace msg=">>>>> HandleDuplicateRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:522"
time="2024-09-29T08:06:11Z" level=trace msg=">>>>> GetRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:579"
time="2024-09-29T08:06:11Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:06:11Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:06:11Z" level=trace msg="<<<<< GetRequest" file="driver.go:586"
time="2024-09-29T08:06:11Z" level=trace msg=">>>>> AddRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8, value: true" file="driver.go:591"
time="2024-09-29T08:06:11Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:06:11Z" level=trace msg="Print RequestCache: map[ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8:true]" file="driver.go:599"
time="2024-09-29T08:06:11Z" level=trace msg="Successfully inserted an entry with key ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8 to the cache map" file="driver.go:600"
time="2024-09-29T08:06:11Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:06:11Z" level=trace msg="<<<<< AddRequest" file="driver.go:601"
time="2024-09-29T08:06:11Z" level=trace msg="<<<<< HandleDuplicateRequest" file="driver.go:539"
time="2024-09-29T08:06:11Z" level=trace msg=">>>>> controllerPublishVolume with volumeID: my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1, nodeID: b1312db0-c69d-a568-6698-c1f50490d1f8, volumeCapability: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > , readOnlyFlag: false, volumeContext: map[allowOverrides:sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace compression:LZ4 csi.storage.k8s.io/pv/name:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 csi.storage.k8s.io/pvc/name:data-kafka-controller-2 csi.storage.k8s.io/pvc/namespace:mynamespace deduplication:OFF description:Volume for PVC {pvc} fsType:xfs hostEncryption:true hostEncryptionSecretName:storage-encryption-passphrase hostEncryptionSecretNamespace:hpe-storage root:my-data/first.dataset storage.kubernetes.io/csiProvisionerIdentity:1727529462858-4320-csi.hpe.com sync:STANDARD targetScope:volume volblocksize:8K volumeAccessMode:mount]" file="controller_server.go:754"
time="2024-09-29T08:06:11Z" level=trace msg=">>>>> IsValidVolumeCapability, volCapability: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > " file="controller_server.go:86"
time="2024-09-29T08:06:11Z" level=trace msg="Found access_mode: SINGLE_NODE_WRITER" file="controller_server.go:95"
time="2024-09-29T08:06:11Z" level=trace msg="Found Mount access_type fs_type:\"xfs\" " file="controller_server.go:119"
time="2024-09-29T08:06:11Z" level=trace msg="Found Mount access_type, FileSystem: xfs" file="controller_server.go:122"
time="2024-09-29T08:06:11Z" level=trace msg="Total length of mount flags: 0" file="controller_server.go:136"
time="2024-09-29T08:06:11Z" level=trace msg="<<<<< IsValidVolumeCapability" file="controller_server.go:150"
time="2024-09-29T08:06:11Z" level=trace msg=">>>>> getVolumeAccessType, volCap: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > " file="controller_server.go:26"
time="2024-09-29T08:06:11Z" level=trace msg="Found mount access type fs_type:\"xfs\" " file="controller_server.go:39"
time="2024-09-29T08:06:11Z" level=trace msg="<<<<< getVolumeAccessType" file="controller_server.go:40"
time="2024-09-29T08:06:11Z" level=trace msg=">>>>> GetVolume, ID: my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1" file="driver.go:431"
time="2024-09-29T08:06:11Z" level=trace msg="Secrets are provided. Checking with this particular storage provider." file="driver.go:438"
time="2024-09-29T08:06:11Z" level=trace msg=">>>>> GetStorageProvider" file="driver.go:360"
time="2024-09-29T08:06:11Z" level=trace msg=">>>>> CreateCredentials" file="storage_provider.go:65"
time="2024-09-29T08:06:11Z" level=trace msg="<<<<< CreateCredentials" file="storage_provider.go:125"
time="2024-09-29T08:06:11Z" level=trace msg="Storage provider already exists. Returning it." file="driver.go:380"
time="2024-09-29T08:06:11Z" level=trace msg="<<<<< GetStorageProvider" file="driver.go:381"
time="2024-09-29T08:06:11Z" level=trace msg="About to invoke CSP request for backend truenas.internaldomain.com" file="container_storage_provider.go:153"
time="2024-09-29T08:06:11Z" level=trace msg="Header: {x-auth-token : *****}\n" file="client.go:166"
time="2024-09-29T08:06:11Z" level=trace msg="Header: {x-array-ip : truenas.internaldomain.com}\n" file="client.go:168"
time="2024-09-29T08:06:11Z" level=trace msg="Request: action=GET path=http://truenas-csp-svc:8080/containers/v1/volumes/my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1" file="client.go:173"
time="2024-09-29T08:06:13Z" level=trace msg="response: 200 OK, length=404" file="client.go:224"
time="2024-09-29T08:06:13Z" level=debug msg="About to decode the error response &{0x4ff020 0xc000564a40 0x843500} into destination interface" file="client.go:231"
time="2024-09-29T08:06:13Z" level=trace msg="Found Volume &{ID:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 Name:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 Size:2147483648 Description:Volume for PVC data-kafka-controller-2 InUse:false Published:true BaseSnapID: ParentVolID: Clone:false Config:map[compression:LZ4 deduplication:OFF sync:STANDARD target_scope:volume volblocksize:8K] Metadata:[] SerialNumber: AccessProtocol: Iqn: Iqns:[] DiscoveryIP: DiscoveryIPs:[] MountPoint: Status:map[] Chap:<nil> Networks:[] ConnectionMode: LunID: TargetScope: IscsiSessions:[] FcSessions:[] VolumeGroupId: SecondaryArrayDetails: UsedBytes:0 FreeBytes:0 EncryptionKey:}" file="driver.go:469"
time="2024-09-29T08:06:13Z" level=trace msg="<<<<< GetVolume" file="driver.go:470"
time="2024-09-29T08:06:13Z" level=trace msg="volume config is map[compression:LZ4 deduplication:OFF sync:STANDARD targetScope:volume volblocksize:8K]" file="controller_server.go:823"
time="2024-09-29T08:06:13Z" level=trace msg=">>>>>> GetNodeInfo from node ID b1312db0-c69d-a568-6698-c1f50490d1f8" file="flavor.go:331"
time="2024-09-29T08:06:13Z" level=trace msg="Found the following HPE Node Info objects: &{{ } { 751556  <nil>} [{{HPENodeInfo storage.hpe.com/v1} {c1    7f95b111-3eb1-4115-bc54-59ac2442ea4c 495583 1 2024-09-28 20:27:12 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:12 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {1d1d92c3-5fff-4b13-4f87-dba44e3067c6 [iqn.1993-08.org.debian:01:4efdaa484444] [192.168.150.115/24 10.42.0.0/32 10.42.0.1/24] []  }} {{HPENodeInfo storage.hpe.com/v1} {c2    fac777c1-03fe-43d2-998f-8c45c2b04642 495394 1 2024-09-28 20:27:02 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:02 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {b1312db0-c69d-a568-6698-c1f50490d1f8 [iqn.1993-08.org.debian:01:4efdaa484445] [192.168.150.116/24 10.42.1.0/32 10.42.1.1/24] []  }} {{HPENodeInfo storage.hpe.com/v1} {c3    5ba76957-0085-49f4-8761-04abc656873f 495465 1 2024-09-28 20:27:06 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:06 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {df97b3bc-a8d5-2492-cbdc-197130594330 [iqn.1993-08.org.debian:01:4efdaa484446] [192.168.150.117/24 10.42.2.0/32 10.42.2.1/24] []  }}]}" file="flavor.go:338"
time="2024-09-29T08:06:13Z" level=trace msg="Processing node info {{HPENodeInfo storage.hpe.com/v1} {c1    7f95b111-3eb1-4115-bc54-59ac2442ea4c 495583 1 2024-09-28 20:27:12 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:12 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {1d1d92c3-5fff-4b13-4f87-dba44e3067c6 [iqn.1993-08.org.debian:01:4efdaa484444] [192.168.150.115/24 10.42.0.0/32 10.42.0.1/24] []  }}" file="flavor.go:341"
time="2024-09-29T08:06:13Z" level=trace msg="Processing node info {{HPENodeInfo storage.hpe.com/v1} {c2    fac777c1-03fe-43d2-998f-8c45c2b04642 495394 1 2024-09-28 20:27:02 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:02 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {b1312db0-c69d-a568-6698-c1f50490d1f8 [iqn.1993-08.org.debian:01:4efdaa484445] [192.168.150.116/24 10.42.1.0/32 10.42.1.1/24] []  }}" file="flavor.go:341"
time="2024-09-29T08:06:13Z" level=trace msg="<<<<<< GetNodeInfo" file="flavor.go:364"
time="2024-09-29T08:06:13Z" level=info msg=GetChapCredentialsFromVolumeContext file="flavor.go:1009"
time="2024-09-29T08:06:13Z" level=info msg="CHAP secret name and namespace are not provided in the storage class parameters." file="flavor.go:1015"
time="2024-09-29T08:06:13Z" level=info msg=GetChapCredentialsFromEnvironment file="flavor.go:995"
time="2024-09-29T08:06:13Z" level=info msg="CHAP secret name and namespace are not provided as environment variables." file="flavor.go:1001"
time="2024-09-29T08:06:13Z" level=trace msg=">>>>> GetStorageProvider" file="driver.go:360"
time="2024-09-29T08:06:13Z" level=trace msg=">>>>> CreateCredentials" file="storage_provider.go:65"
time="2024-09-29T08:06:13Z" level=trace msg="<<<<< CreateCredentials" file="storage_provider.go:125"
time="2024-09-29T08:06:13Z" level=trace msg="Storage provider already exists. Returning it." file="driver.go:380"
time="2024-09-29T08:06:13Z" level=trace msg="<<<<< GetStorageProvider" file="driver.go:381"
time="2024-09-29T08:06:13Z" level=trace msg="Notifying CSP about Node with ID  and UUID b1312db0-c69d-a568-6698-c1f50490d1f8" file="controller_server.go:861"
time="2024-09-29T08:06:13Z" level=trace msg="About to invoke CSP request for backend truenas.internaldomain.com" file="container_storage_provider.go:153"
time="2024-09-29T08:06:13Z" level=trace msg="Header: {x-auth-token : *****}\n" file="client.go:166"
time="2024-09-29T08:06:13Z" level=trace msg="Header: {x-array-ip : truenas.internaldomain.com}\n" file="client.go:168"
time="2024-09-29T08:06:13Z" level=trace msg="Request: action=POST path=http://truenas-csp-svc:8080/containers/v1/hosts" file="client.go:173"
time="2024-09-29T08:06:17Z" level=trace msg="response: 200 OK, length=278" file="client.go:224"
time="2024-09-29T08:06:17Z" level=debug msg="Received a null reader. That is not expected." file="client.go:245"
time="2024-09-29T08:06:17Z" level=trace msg="Defaulting to access protocol iscsi" file="controller_server.go:877"
time="2024-09-29T08:06:17Z" level=trace msg="About to invoke CSP request for backend truenas.internaldomain.com" file="container_storage_provider.go:153"
time="2024-09-29T08:06:17Z" level=trace msg="Header: {x-auth-token : *****}\n" file="client.go:166"
time="2024-09-29T08:06:17Z" level=trace msg="Header: {x-array-ip : truenas.internaldomain.com}\n" file="client.go:168"
time="2024-09-29T08:06:17Z" level=trace msg="Request: action=PUT path=http://truenas-csp-svc:8080/containers/v1/volumes/my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1/actions/publish" file="client.go:173"
time="2024-09-29T08:06:24Z" level=trace msg=">>>>> MonitorPod with label monitored-by=hpe-csi" file="flavor.go:838"
time="2024-09-29T08:06:24Z" level=trace msg="cannot find any nfs pod with label monitored-by=hpe-csi" file="flavor.go:852"
time="2024-09-29T08:06:24Z" level=trace msg="<<<<< MonitorPod" file="flavor.go:853"
W0929 08:06:24.469429       1 reflector.go:424] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
E0929 08:06:24.469464       1 reflector.go:140] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: Failed to watch *v1.VolumeSnapshot: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
time="2024-09-29T08:06:28Z" level=trace msg="response: 200 OK, length=260" file="client.go:224"
time="2024-09-29T08:06:28Z" level=debug msg="About to decode the error response &{0x4ff020 0xc0003ed440 0x843500} into destination interface" file="client.go:231"
time="2024-09-29T08:06:28Z" level=trace msg="PublishInfo response from CSP: &{SerialNumber:6589cfc000000a69472b2e921fd54e29 AccessInfo:{BlockDeviceAccessInfo:{AccessProtocol:iscsi TargetNames:[iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1] LunID:0 SecondaryBackendDetails:{PeerArrayDetails:[]} IscsiAccessInfo:{DiscoveryIPs:[192.168.150.241] ChapUser: ChapPassword:}} VirtualDeviceAccessInfo:{}}}" file="controller_server.go:887"
time="2024-09-29T08:06:28Z" level=trace msg="Adding filesystem details to the publish context" file="controller_server.go:930"
time="2024-09-29T08:06:28Z" level=trace msg="Volume pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 with ID my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 published with the following details: [targetNames iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 readOnly false fsType xfs fsOwner  fsMode  volumeAccessMode mount fsCreateOptions  serialNumber 6589cfc000000a69472b2e921fd54e29 accessProtocol iscsi targetScope volume lunId 0 discoveryIps 192.168.150.241]" file="controller_server.go:942"
time="2024-09-29T08:06:28Z" level=trace msg="<<<<< controllerPublishVolume" file="controller_server.go:944"

THE SECOND PUBLISH ALSO ENDS WITHOUT ERROR
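
Side note on the trace above: the HandleDuplicateRequest / GetRequest / AddRequest / ClearRequest lines show the controller keeping a mutex-protected cache of in-flight requests, keyed as ControllerPublishVolume:<volumeID>:<nodeID>. The sketch below is only my illustration of that kind of guard (the names, signatures and rejection message are assumptions, not the actual hpe-csi-driver code): a retry that lands while the same key is still in the cache would presumably be rejected as a pending operation instead of starting a second, overlapping publish. In this log every retry happens to arrive only after the previous call has already cleared its key, so each one runs to completion.

package main

import (
	"fmt"
	"sync"
	"time"
)

// requestCache mimics an in-flight request guard: one entry per
// "<rpc>:<volumeID>:<nodeID>" key, protected by a mutex.
type requestCache struct {
	mu      sync.Mutex
	pending map[string]bool
}

func newRequestCache() *requestCache {
	return &requestCache{pending: map[string]bool{}}
}

// add registers a key and fails if the same operation is already pending.
func (c *requestCache) add(key string) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.pending[key] {
		// An overlapping retry for the same volume/node would be turned away here.
		return fmt.Errorf("operation already pending for %s", key)
	}
	c.pending[key] = true
	return nil
}

// clear removes the key once the operation finishes, success or failure.
func (c *requestCache) clear(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.pending, key)
}

func main() {
	cache := newRequestCache()
	key := "ControllerPublishVolume:vol-123:node-abc" // hypothetical key

	publish := func(id int) {
		if err := cache.add(key); err != nil {
			fmt.Printf("call %d rejected: %v\n", id, err)
			return
		}
		defer cache.clear(key)
		fmt.Printf("call %d publishing...\n", id)
		time.Sleep(100 * time.Millisecond) // stand-in for the slow CSP round trips
		fmt.Printf("call %d done\n", id)
	}

	var wg sync.WaitGroup
	for i := 1; i <= 2; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			publish(id)
		}(i)
	}
	wg.Wait()
}

Running the sketch, the second concurrent call is rejected while the first is still in flight, which is the situation an overlapping retry would hit.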

time="2024-09-29T08:06:28Z" level=trace msg=">>>>> ClearRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:605"
time="2024-09-29T08:06:28Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:06:28Z" level=trace msg="Print RequestCache: map[]" file="driver.go:625"
time="2024-09-29T08:06:28Z" level=trace msg="Successfully removed an entry with key ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8 from the cache map" file="driver.go:626"
time="2024-09-29T08:06:28Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:06:28Z" level=trace msg="<<<<< ClearRequest" file="driver.go:627"
time="2024-09-29T08:06:28Z" level=trace msg="<<<<< ControllerPublishVolume" file="controller_server.go:740"
time="2024-09-29T08:06:28Z" level=info msg="GRPC response: {\"publish_context\":{\"accessProtocol\":\"iscsi\",\"discoveryIps\":\"192.168.150.241\",\"fsCreateOptions\":\"\",\"fsMode\":\"\",\"fsOwner\":\"\",\"fsType\":\"xfs\",\"lunId\":\"0\",\"readOnly\":\"false\",\"serialNumber\":\"6589cfc000000a69472b2e921fd54e29\",\"targetNames\":\"iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\",\"targetScope\":\"volume\",\"volumeAccessMode\":\"mount\"}}" file="utils.go:75"
time="2024-09-29T08:06:54Z" level=trace msg=">>>>> MonitorPod with label monitored-by=hpe-csi" file="flavor.go:838"
time="2024-09-29T08:06:54Z" level=trace msg="cannot find any nfs pod with label monitored-by=hpe-csi" file="flavor.go:852"
time="2024-09-29T08:06:54Z" level=trace msg="<<<<< MonitorPod" file="flavor.go:853"
time="2024-09-29T08:06:58Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2024-09-29T08:06:58Z" level=info msg="GRPC request: {\"node_id\":\"b1312db0-c69d-a568-6698-c1f50490d1f8\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"allowOverrides\":\"sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace\",\"compression\":\"LZ4\",\"csi.storage.k8s.io/pv/name\":\"pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\",\"csi.storage.k8s.io/pvc/name\":\"data-kafka-controller-2\",\"csi.storage.k8s.io/pvc/namespace\":\"mynamespace\",\"deduplication\":\"OFF\",\"description\":\"Volume for PVC {pvc}\",\"fsType\":\"xfs\",\"hostEncryption\":\"true\",\"hostEncryptionSecretName\":\"storage-encryption-passphrase\",\"hostEncryptionSecretNamespace\":\"hpe-storage\",\"root\":\"my-data/first.dataset\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1727529462858-4320-csi.hpe.com\",\"sync\":\"STANDARD\",\"targetScope\":\"volume\",\"volblocksize\":\"8K\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\"}" file="utils.go:70"
time="2024-09-29T08:06:58Z" level=trace msg=">>>>> ControllerPublishVolume" file="controller_server.go:704"
time="2024-09-29T08:06:58Z" level=trace msg=">>>>> HandleDuplicateRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:522"
time="2024-09-29T08:06:58Z" level=trace msg=">>>>> GetRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:579"
time="2024-09-29T08:06:58Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:06:58Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:06:58Z" level=trace msg="<<<<< GetRequest" file="driver.go:586"
time="2024-09-29T08:06:58Z" level=trace msg=">>>>> AddRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8, value: true" file="driver.go:591"
time="2024-09-29T08:06:58Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:06:58Z" level=trace msg="Print RequestCache: map[ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8:true]" file="driver.go:599"
time="2024-09-29T08:06:58Z" level=trace msg="Successfully inserted an entry with key ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8 to the cache map" file="driver.go:600"
time="2024-09-29T08:06:58Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:06:58Z" level=trace msg="<<<<< AddRequest" file="driver.go:601"
time="2024-09-29T08:06:58Z" level=trace msg="<<<<< HandleDuplicateRequest" file="driver.go:539"
time="2024-09-29T08:06:58Z" level=trace msg=">>>>> controllerPublishVolume with volumeID: my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1, nodeID: b1312db0-c69d-a568-6698-c1f50490d1f8, volumeCapability: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > , readOnlyFlag: false, volumeContext: map[allowOverrides:sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace compression:LZ4 csi.storage.k8s.io/pv/name:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 csi.storage.k8s.io/pvc/name:data-kafka-controller-2 csi.storage.k8s.io/pvc/namespace:mynamespace deduplication:OFF description:Volume for PVC {pvc} fsType:xfs hostEncryption:true hostEncryptionSecretName:storage-encryption-passphrase hostEncryptionSecretNamespace:hpe-storage root:my-data/first.dataset storage.kubernetes.io/csiProvisionerIdentity:1727529462858-4320-csi.hpe.com sync:STANDARD targetScope:volume volblocksize:8K volumeAccessMode:mount]" file="controller_server.go:754"
time="2024-09-29T08:06:58Z" level=trace msg=">>>>> IsValidVolumeCapability, volCapability: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > " file="controller_server.go:86"
time="2024-09-29T08:06:58Z" level=trace msg="Found access_mode: SINGLE_NODE_WRITER" file="controller_server.go:95"
time="2024-09-29T08:06:58Z" level=trace msg="Found Mount access_type fs_type:\"xfs\" " file="controller_server.go:119"
time="2024-09-29T08:06:58Z" level=trace msg="Found Mount access_type, FileSystem: xfs" file="controller_server.go:122"
time="2024-09-29T08:06:58Z" level=trace msg="Total length of mount flags: 0" file="controller_server.go:136"
time="2024-09-29T08:06:58Z" level=trace msg="<<<<< IsValidVolumeCapability" file="controller_server.go:150"
time="2024-09-29T08:06:58Z" level=trace msg=">>>>> getVolumeAccessType, volCap: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > " file="controller_server.go:26"
time="2024-09-29T08:06:58Z" level=trace msg="Found mount access type fs_type:\"xfs\" " file="controller_server.go:39"
time="2024-09-29T08:06:58Z" level=trace msg="<<<<< getVolumeAccessType" file="controller_server.go:40"
time="2024-09-29T08:06:58Z" level=trace msg=">>>>> GetVolume, ID: my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1" file="driver.go:431"
time="2024-09-29T08:06:58Z" level=trace msg="Secrets are provided. Checking with this particular storage provider." file="driver.go:438"
time="2024-09-29T08:06:58Z" level=trace msg=">>>>> GetStorageProvider" file="driver.go:360"
time="2024-09-29T08:06:58Z" level=trace msg=">>>>> CreateCredentials" file="storage_provider.go:65"
time="2024-09-29T08:06:58Z" level=trace msg="<<<<< CreateCredentials" file="storage_provider.go:125"
time="2024-09-29T08:06:58Z" level=trace msg="Storage provider already exists. Returning it." file="driver.go:380"
time="2024-09-29T08:06:58Z" level=trace msg="<<<<< GetStorageProvider" file="driver.go:381"
time="2024-09-29T08:06:58Z" level=trace msg="About to invoke CSP request for backend truenas.internaldomain.com" file="container_storage_provider.go:153"
time="2024-09-29T08:06:58Z" level=trace msg="Header: {x-auth-token : *****}\n" file="client.go:166"
time="2024-09-29T08:06:58Z" level=trace msg="Header: {x-array-ip : truenas.internaldomain.com}\n" file="client.go:168"
time="2024-09-29T08:06:58Z" level=trace msg="Request: action=GET path=http://truenas-csp-svc:8080/containers/v1/volumes/my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1" file="client.go:173"
time="2024-09-29T08:07:00Z" level=trace msg="response: 200 OK, length=404" file="client.go:224"
time="2024-09-29T08:07:00Z" level=debug msg="About to decode the error response &{0x4ff020 0xc0004d9a40 0x843500} into destination interface" file="client.go:231"
time="2024-09-29T08:07:00Z" level=trace msg="Found Volume &{ID:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 Name:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 Size:2147483648 Description:Volume for PVC data-kafka-controller-2 InUse:false Published:true BaseSnapID: ParentVolID: Clone:false Config:map[compression:LZ4 deduplication:OFF sync:STANDARD target_scope:volume volblocksize:8K] Metadata:[] SerialNumber: AccessProtocol: Iqn: Iqns:[] DiscoveryIP: DiscoveryIPs:[] MountPoint: Status:map[] Chap:<nil> Networks:[] ConnectionMode: LunID: TargetScope: IscsiSessions:[] FcSessions:[] VolumeGroupId: SecondaryArrayDetails: UsedBytes:0 FreeBytes:0 EncryptionKey:}" file="driver.go:469"
time="2024-09-29T08:07:00Z" level=trace msg="<<<<< GetVolume" file="driver.go:470"
time="2024-09-29T08:07:00Z" level=trace msg="volume config is map[compression:LZ4 deduplication:OFF sync:STANDARD targetScope:volume volblocksize:8K]" file="controller_server.go:823"
time="2024-09-29T08:07:00Z" level=trace msg=">>>>>> GetNodeInfo from node ID b1312db0-c69d-a568-6698-c1f50490d1f8" file="flavor.go:331"
time="2024-09-29T08:07:00Z" level=trace msg="Found the following HPE Node Info objects: &{{ } { 751841  <nil>} [{{HPENodeInfo storage.hpe.com/v1} {c1    7f95b111-3eb1-4115-bc54-59ac2442ea4c 495583 1 2024-09-28 20:27:12 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:12 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {1d1d92c3-5fff-4b13-4f87-dba44e3067c6 [iqn.1993-08.org.debian:01:4efdaa484444] [192.168.150.115/24 10.42.0.0/32 10.42.0.1/24] []  }} {{HPENodeInfo storage.hpe.com/v1} {c2    fac777c1-03fe-43d2-998f-8c45c2b04642 495394 1 2024-09-28 20:27:02 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:02 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {b1312db0-c69d-a568-6698-c1f50490d1f8 [iqn.1993-08.org.debian:01:4efdaa484445] [192.168.150.116/24 10.42.1.0/32 10.42.1.1/24] []  }} {{HPENodeInfo storage.hpe.com/v1} {c3    5ba76957-0085-49f4-8761-04abc656873f 495465 1 2024-09-28 20:27:06 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:06 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {df97b3bc-a8d5-2492-cbdc-197130594330 [iqn.1993-08.org.debian:01:4efdaa484446] [192.168.150.117/24 10.42.2.0/32 10.42.2.1/24] []  }}]}" file="flavor.go:338"
time="2024-09-29T08:07:00Z" level=trace msg="Processing node info {{HPENodeInfo storage.hpe.com/v1} {c1    7f95b111-3eb1-4115-bc54-59ac2442ea4c 495583 1 2024-09-28 20:27:12 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:12 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {1d1d92c3-5fff-4b13-4f87-dba44e3067c6 [iqn.1993-08.org.debian:01:4efdaa484444] [192.168.150.115/24 10.42.0.0/32 10.42.0.1/24] []  }}" file="flavor.go:341"
time="2024-09-29T08:07:00Z" level=trace msg="Processing node info {{HPENodeInfo storage.hpe.com/v1} {c2    fac777c1-03fe-43d2-998f-8c45c2b04642 495394 1 2024-09-28 20:27:02 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:02 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {b1312db0-c69d-a568-6698-c1f50490d1f8 [iqn.1993-08.org.debian:01:4efdaa484445] [192.168.150.116/24 10.42.1.0/32 10.42.1.1/24] []  }}" file="flavor.go:341"
time="2024-09-29T08:07:00Z" level=trace msg="<<<<<< GetNodeInfo" file="flavor.go:364"
time="2024-09-29T08:07:00Z" level=info msg=GetChapCredentialsFromVolumeContext file="flavor.go:1009"
time="2024-09-29T08:07:00Z" level=info msg="CHAP secret name and namespace are not provided in the storage class parameters." file="flavor.go:1015"
time="2024-09-29T08:07:00Z" level=info msg=GetChapCredentialsFromEnvironment file="flavor.go:995"
time="2024-09-29T08:07:00Z" level=info msg="CHAP secret name and namespace are not provided as environment variables." file="flavor.go:1001"
time="2024-09-29T08:07:00Z" level=trace msg=">>>>> GetStorageProvider" file="driver.go:360"
time="2024-09-29T08:07:00Z" level=trace msg=">>>>> CreateCredentials" file="storage_provider.go:65"
time="2024-09-29T08:07:00Z" level=trace msg="<<<<< CreateCredentials" file="storage_provider.go:125"
time="2024-09-29T08:07:00Z" level=trace msg="Storage provider already exists. Returning it." file="driver.go:380"
time="2024-09-29T08:07:00Z" level=trace msg="<<<<< GetStorageProvider" file="driver.go:381"
time="2024-09-29T08:07:00Z" level=trace msg="Notifying CSP about Node with ID  and UUID b1312db0-c69d-a568-6698-c1f50490d1f8" file="controller_server.go:861"
time="2024-09-29T08:07:00Z" level=trace msg="About to invoke CSP request for backend truenas.internaldomain.com" file="container_storage_provider.go:153"
time="2024-09-29T08:07:00Z" level=trace msg="Header: {x-array-ip : truenas.internaldomain.com}\n" file="client.go:168"
time="2024-09-29T08:07:00Z" level=trace msg="Header: {x-auth-token : *****}\n" file="client.go:166"
time="2024-09-29T08:07:00Z" level=trace msg="Request: action=POST path=http://truenas-csp-svc:8080/containers/v1/hosts" file="client.go:173"
time="2024-09-29T08:07:04Z" level=trace msg="response: 200 OK, length=278" file="client.go:224"
time="2024-09-29T08:07:04Z" level=debug msg="Received a null reader. That is not expected." file="client.go:245"
time="2024-09-29T08:07:04Z" level=trace msg="Defaulting to access protocol iscsi" file="controller_server.go:877"
time="2024-09-29T08:07:04Z" level=trace msg="About to invoke CSP request for backend truenas.internaldomain.com" file="container_storage_provider.go:153"
time="2024-09-29T08:07:04Z" level=trace msg="Header: {x-auth-token : *****}\n" file="client.go:166"
time="2024-09-29T08:07:04Z" level=trace msg="Header: {x-array-ip : truenas.internaldomain.com}\n" file="client.go:168"
time="2024-09-29T08:07:04Z" level=trace msg="Request: action=PUT path=http://truenas-csp-svc:8080/containers/v1/volumes/my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1/actions/publish" file="client.go:173"
W0929 08:07:04.562554       1 reflector.go:424] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
E0929 08:07:04.562619       1 reflector.go:140] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: Failed to watch *v1.VolumeSnapshot: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
time="2024-09-29T08:07:15Z" level=trace msg="response: 200 OK, length=260" file="client.go:224"
time="2024-09-29T08:07:15Z" level=debug msg="About to decode the error response &{0x4ff020 0xc0002ee000 0x843500} into destination interface" file="client.go:231"
time="2024-09-29T08:07:15Z" level=trace msg="PublishInfo response from CSP: &{SerialNumber:6589cfc000000a69472b2e921fd54e29 AccessInfo:{BlockDeviceAccessInfo:{AccessProtocol:iscsi TargetNames:[iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1] LunID:0 SecondaryBackendDetails:{PeerArrayDetails:[]} IscsiAccessInfo:{DiscoveryIPs:[192.168.150.241] ChapUser: ChapPassword:}} VirtualDeviceAccessInfo:{}}}" file="controller_server.go:887"
time="2024-09-29T08:07:15Z" level=trace msg="Adding filesystem details to the publish context" file="controller_server.go:930"
time="2024-09-29T08:07:15Z" level=trace msg="Volume pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 with ID my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 published with the following details: [fsOwner  serialNumber 6589cfc000000a69472b2e921fd54e29 targetScope volume lunId 0 discoveryIps 192.168.150.241 readOnly false fsType xfs accessProtocol iscsi targetNames iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 volumeAccessMode mount fsMode  fsCreateOptions ]" file="controller_server.go:942"
time="2024-09-29T08:07:15Z" level=trace msg="<<<<< controllerPublishVolume" file="controller_server.go:944"
time="2024-09-29T08:07:15Z" level=trace msg=">>>>> ClearRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:605"
time="2024-09-29T08:07:15Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:07:15Z" level=trace msg="Print RequestCache: map[]" file="driver.go:625"
time="2024-09-29T08:07:15Z" level=trace msg="Successfully removed an entry with key ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8 from the cache map" file="driver.go:626"
time="2024-09-29T08:07:15Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:07:15Z" level=trace msg="<<<<< ClearRequest" file="driver.go:627"
time="2024-09-29T08:07:15Z" level=trace msg="<<<<< ControllerPublishVolume" file="controller_server.go:740"
time="2024-09-29T08:07:15Z" level=info msg="GRPC response: {\"publish_context\":{\"accessProtocol\":\"iscsi\",\"discoveryIps\":\"192.168.150.241\",\"fsCreateOptions\":\"\",\"fsMode\":\"\",\"fsOwner\":\"\",\"fsType\":\"xfs\",\"lunId\":\"0\",\"readOnly\":\"false\",\"serialNumber\":\"6589cfc000000a69472b2e921fd54e29\",\"targetNames\":\"iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\",\"targetScope\":\"volume\",\"volumeAccessMode\":\"mount\"}}" file="utils.go:75"
time="2024-09-29T08:07:24Z" level=trace msg=">>>>> MonitorPod with label monitored-by=hpe-csi" file="flavor.go:838"
time="2024-09-29T08:07:24Z" level=trace msg="cannot find any nfs pod with label monitored-by=hpe-csi" file="flavor.go:852"
time="2024-09-29T08:07:24Z" level=trace msg="<<<<< MonitorPod" file="flavor.go:853"
W0929 08:07:37.449606       1 reflector.go:424] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
E0929 08:07:37.449669       1 reflector.go:140] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: Failed to watch *v1.VolumeSnapshot: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
time="2024-09-29T08:07:54Z" level=trace msg=">>>>> MonitorPod with label monitored-by=hpe-csi" file="flavor.go:838"
time="2024-09-29T08:07:54Z" level=trace msg="cannot find any nfs pod with label monitored-by=hpe-csi" file="flavor.go:852"
time="2024-09-29T08:07:54Z" level=trace msg="<<<<< MonitorPod" file="flavor.go:853"
W0929 08:08:13.921832       1 reflector.go:424] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
E0929 08:08:13.921872       1 reflector.go:140] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: Failed to watch *v1.VolumeSnapshot: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
time="2024-09-29T08:08:17Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2024-09-29T08:08:17Z" level=info msg="GRPC request: {\"node_id\":\"b1312db0-c69d-a568-6698-c1f50490d1f8\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"allowOverrides\":\"sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace\",\"compression\":\"LZ4\",\"csi.storage.k8s.io/pv/name\":\"pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\",\"csi.storage.k8s.io/pvc/name\":\"data-kafka-controller-2\",\"csi.storage.k8s.io/pvc/namespace\":\"mynamespace\",\"deduplication\":\"OFF\",\"description\":\"Volume for PVC {pvc}\",\"fsType\":\"xfs\",\"hostEncryption\":\"true\",\"hostEncryptionSecretName\":\"storage-encryption-passphrase\",\"hostEncryptionSecretNamespace\":\"hpe-storage\",\"root\":\"my-data/first.dataset\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1727529462858-4320-csi.hpe.com\",\"sync\":\"STANDARD\",\"targetScope\":\"volume\",\"volblocksize\":\"8K\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\"}" file="utils.go:70"
time="2024-09-29T08:08:17Z" level=trace msg=">>>>> ControllerPublishVolume" file="controller_server.go:704"
time="2024-09-29T08:08:17Z" level=trace msg=">>>>> HandleDuplicateRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:522"
time="2024-09-29T08:08:17Z" level=trace msg=">>>>> GetRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:579"
time="2024-09-29T08:08:17Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:08:17Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:08:17Z" level=trace msg="<<<<< GetRequest" file="driver.go:586"
time="2024-09-29T08:08:17Z" level=trace msg=">>>>> AddRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8, value: true" file="driver.go:591"
time="2024-09-29T08:08:17Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:08:17Z" level=trace msg="Print RequestCache: map[ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8:true]" file="driver.go:599"
time="2024-09-29T08:08:17Z" level=trace msg="Successfully inserted an entry with key ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8 to the cache map" file="driver.go:600"
time="2024-09-29T08:08:17Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:08:17Z" level=trace msg="<<<<< AddRequest" file="driver.go:601"
time="2024-09-29T08:08:17Z" level=trace msg="<<<<< HandleDuplicateRequest" file="driver.go:539"
time="2024-09-29T08:08:17Z" level=trace msg=">>>>> controllerPublishVolume with volumeID: my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1, nodeID: b1312db0-c69d-a568-6698-c1f50490d1f8, volumeCapability: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > , readOnlyFlag: false, volumeContext: map[allowOverrides:sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace compression:LZ4 csi.storage.k8s.io/pv/name:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 csi.storage.k8s.io/pvc/name:data-kafka-controller-2 csi.storage.k8s.io/pvc/namespace:mynamespace deduplication:OFF description:Volume for PVC {pvc} fsType:xfs hostEncryption:true hostEncryptionSecretName:storage-encryption-passphrase hostEncryptionSecretNamespace:hpe-storage root:my-data/first.dataset storage.kubernetes.io/csiProvisionerIdentity:1727529462858-4320-csi.hpe.com sync:STANDARD targetScope:volume volblocksize:8K volumeAccessMode:mount]" file="controller_server.go:754"
time="2024-09-29T08:08:17Z" level=trace msg=">>>>> IsValidVolumeCapability, volCapability: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > " file="controller_server.go:86"
time="2024-09-29T08:08:17Z" level=trace msg="Found access_mode: SINGLE_NODE_WRITER" file="controller_server.go:95"
time="2024-09-29T08:08:17Z" level=trace msg="Found Mount access_type fs_type:\"xfs\" " file="controller_server.go:119"
time="2024-09-29T08:08:17Z" level=trace msg="Found Mount access_type, FileSystem: xfs" file="controller_server.go:122"
time="2024-09-29T08:08:17Z" level=trace msg="Total length of mount flags: 0" file="controller_server.go:136"
time="2024-09-29T08:08:17Z" level=trace msg="<<<<< IsValidVolumeCapability" file="controller_server.go:150"
time="2024-09-29T08:08:17Z" level=trace msg=">>>>> getVolumeAccessType, volCap: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > " file="controller_server.go:26"
time="2024-09-29T08:08:17Z" level=trace msg="Found mount access type fs_type:\"xfs\" " file="controller_server.go:39"
time="2024-09-29T08:08:17Z" level=trace msg="<<<<< getVolumeAccessType" file="controller_server.go:40"
time="2024-09-29T08:08:17Z" level=trace msg=">>>>> GetVolume, ID: my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1" file="driver.go:431"
time="2024-09-29T08:08:17Z" level=trace msg="Secrets are provided. Checking with this particular storage provider." file="driver.go:438"
time="2024-09-29T08:08:17Z" level=trace msg=">>>>> GetStorageProvider" file="driver.go:360"
time="2024-09-29T08:08:17Z" level=trace msg=">>>>> CreateCredentials" file="storage_provider.go:65"
time="2024-09-29T08:08:17Z" level=trace msg="<<<<< CreateCredentials" file="storage_provider.go:125"
time="2024-09-29T08:08:17Z" level=trace msg="Storage provider already exists. Returning it." file="driver.go:380"
time="2024-09-29T08:08:17Z" level=trace msg="<<<<< GetStorageProvider" file="driver.go:381"
time="2024-09-29T08:08:17Z" level=trace msg="About to invoke CSP request for backend truenas.internaldomain.com" file="container_storage_provider.go:153"
time="2024-09-29T08:08:17Z" level=trace msg="Header: {x-auth-token : *****}\n" file="client.go:166"
time="2024-09-29T08:08:17Z" level=trace msg="Header: {x-array-ip : truenas.internaldomain.com}\n" file="client.go:168"
time="2024-09-29T08:08:17Z" level=trace msg="Request: action=GET path=http://truenas-csp-svc:8080/containers/v1/volumes/my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1" file="client.go:173"
time="2024-09-29T08:08:19Z" level=trace msg="response: 200 OK, length=404" file="client.go:224"
time="2024-09-29T08:08:19Z" level=debug msg="About to decode the error response &{0x4ff020 0xc0000af2c0 0x843500} into destination interface" file="client.go:231"
time="2024-09-29T08:08:19Z" level=trace msg="Found Volume &{ID:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 Name:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 Size:2147483648 Description:Volume for PVC data-kafka-controller-2 InUse:false Published:true BaseSnapID: ParentVolID: Clone:false Config:map[compression:LZ4 deduplication:OFF sync:STANDARD target_scope:volume volblocksize:8K] Metadata:[] SerialNumber: AccessProtocol: Iqn: Iqns:[] DiscoveryIP: DiscoveryIPs:[] MountPoint: Status:map[] Chap:<nil> Networks:[] ConnectionMode: LunID: TargetScope: IscsiSessions:[] FcSessions:[] VolumeGroupId: SecondaryArrayDetails: UsedBytes:0 FreeBytes:0 EncryptionKey:}" file="driver.go:469"
time="2024-09-29T08:08:19Z" level=trace msg="<<<<< GetVolume" file="driver.go:470"
time="2024-09-29T08:08:19Z" level=trace msg="volume config is map[compression:LZ4 deduplication:OFF sync:STANDARD targetScope:volume volblocksize:8K]" file="controller_server.go:823"
time="2024-09-29T08:08:19Z" level=trace msg=">>>>>> GetNodeInfo from node ID b1312db0-c69d-a568-6698-c1f50490d1f8" file="flavor.go:331"
time="2024-09-29T08:08:19Z" level=trace msg="Found the following HPE Node Info objects: &{{ } { 752323  <nil>} [{{HPENodeInfo storage.hpe.com/v1} {c1    7f95b111-3eb1-4115-bc54-59ac2442ea4c 495583 1 2024-09-28 20:27:12 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:12 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {1d1d92c3-5fff-4b13-4f87-dba44e3067c6 [iqn.1993-08.org.debian:01:4efdaa484444] [192.168.150.115/24 10.42.0.0/32 10.42.0.1/24] []  }} {{HPENodeInfo storage.hpe.com/v1} {c2    fac777c1-03fe-43d2-998f-8c45c2b04642 495394 1 2024-09-28 20:27:02 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:02 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {b1312db0-c69d-a568-6698-c1f50490d1f8 [iqn.1993-08.org.debian:01:4efdaa484445] [192.168.150.116/24 10.42.1.0/32 10.42.1.1/24] []  }} {{HPENodeInfo storage.hpe.com/v1} {c3    5ba76957-0085-49f4-8761-04abc656873f 495465 1 2024-09-28 20:27:06 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:06 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {df97b3bc-a8d5-2492-cbdc-197130594330 [iqn.1993-08.org.debian:01:4efdaa484446] [192.168.150.117/24 10.42.2.0/32 10.42.2.1/24] []  }}]}" file="flavor.go:338"
time="2024-09-29T08:08:19Z" level=trace msg="Processing node info {{HPENodeInfo storage.hpe.com/v1} {c1    7f95b111-3eb1-4115-bc54-59ac2442ea4c 495583 1 2024-09-28 20:27:12 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:12 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {1d1d92c3-5fff-4b13-4f87-dba44e3067c6 [iqn.1993-08.org.debian:01:4efdaa484444] [192.168.150.115/24 10.42.0.0/32 10.42.0.1/24] []  }}" file="flavor.go:341"
time="2024-09-29T08:08:19Z" level=trace msg="Processing node info {{HPENodeInfo storage.hpe.com/v1} {c2    fac777c1-03fe-43d2-998f-8c45c2b04642 495394 1 2024-09-28 20:27:02 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-09-28 20:27:02 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {b1312db0-c69d-a568-6698-c1f50490d1f8 [iqn.1993-08.org.debian:01:4efdaa484445] [192.168.150.116/24 10.42.1.0/32 10.42.1.1/24] []  }}" file="flavor.go:341"
time="2024-09-29T08:08:19Z" level=trace msg="<<<<<< GetNodeInfo" file="flavor.go:364"
time="2024-09-29T08:08:19Z" level=info msg=GetChapCredentialsFromVolumeContext file="flavor.go:1009"
time="2024-09-29T08:08:19Z" level=info msg="CHAP secret name and namespace are not provided in the storage class parameters." file="flavor.go:1015"
time="2024-09-29T08:08:19Z" level=info msg=GetChapCredentialsFromEnvironment file="flavor.go:995"
time="2024-09-29T08:08:19Z" level=info msg="CHAP secret name and namespace are not provided as environment variables." file="flavor.go:1001"
time="2024-09-29T08:08:19Z" level=trace msg=">>>>> GetStorageProvider" file="driver.go:360"
time="2024-09-29T08:08:19Z" level=trace msg=">>>>> CreateCredentials" file="storage_provider.go:65"
time="2024-09-29T08:08:19Z" level=trace msg="<<<<< CreateCredentials" file="storage_provider.go:125"
time="2024-09-29T08:08:19Z" level=trace msg="Storage provider already exists. Returning it." file="driver.go:380"
time="2024-09-29T08:08:19Z" level=trace msg="<<<<< GetStorageProvider" file="driver.go:381"
time="2024-09-29T08:08:19Z" level=trace msg="Notifying CSP about Node with ID  and UUID b1312db0-c69d-a568-6698-c1f50490d1f8" file="controller_server.go:861"
time="2024-09-29T08:08:19Z" level=trace msg="About to invoke CSP request for backend truenas.internaldomain.com" file="container_storage_provider.go:153"
time="2024-09-29T08:08:19Z" level=trace msg="Header: {x-auth-token : *****}\n" file="client.go:166"
time="2024-09-29T08:08:19Z" level=trace msg="Header: {x-array-ip : truenas.internaldomain.com}\n" file="client.go:168"
time="2024-09-29T08:08:19Z" level=trace msg="Request: action=POST path=http://truenas-csp-svc:8080/containers/v1/hosts" file="client.go:173"
time="2024-09-29T08:08:23Z" level=trace msg="response: 200 OK, length=278" file="client.go:224"
time="2024-09-29T08:08:23Z" level=debug msg="Received a null reader. That is not expected." file="client.go:245"
time="2024-09-29T08:08:23Z" level=trace msg="Defaulting to access protocol iscsi" file="controller_server.go:877"
time="2024-09-29T08:08:23Z" level=trace msg="About to invoke CSP request for backend truenas.internaldomain.com" file="container_storage_provider.go:153"
time="2024-09-29T08:08:23Z" level=trace msg="Header: {x-auth-token : *****}\n" file="client.go:166"
time="2024-09-29T08:08:23Z" level=trace msg="Header: {x-array-ip : truenas.internaldomain.com}\n" file="client.go:168"
time="2024-09-29T08:08:23Z" level=trace msg="Request: action=PUT path=http://truenas-csp-svc:8080/containers/v1/volumes/my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1/actions/publish" file="client.go:173"
time="2024-09-29T08:08:24Z" level=trace msg=">>>>> MonitorPod with label monitored-by=hpe-csi" file="flavor.go:838"
time="2024-09-29T08:08:24Z" level=trace msg="cannot find any nfs pod with label monitored-by=hpe-csi" file="flavor.go:852"
time="2024-09-29T08:08:24Z" level=trace msg="<<<<< MonitorPod" file="flavor.go:853"
time="2024-09-29T08:08:35Z" level=trace msg="response: 200 OK, length=260" file="client.go:224"
time="2024-09-29T08:08:35Z" level=debug msg="About to decode the error response &{0x4ff020 0xc0003ec6c0 0x843500} into destination interface" file="client.go:231"
time="2024-09-29T08:08:35Z" level=trace msg="PublishInfo response from CSP: &{SerialNumber:6589cfc000000a69472b2e921fd54e29 AccessInfo:{BlockDeviceAccessInfo:{AccessProtocol:iscsi TargetNames:[iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1] LunID:0 SecondaryBackendDetails:{PeerArrayDetails:[]} IscsiAccessInfo:{DiscoveryIPs:[192.168.150.241] ChapUser: ChapPassword:}} VirtualDeviceAccessInfo:{}}}" file="controller_server.go:887"
time="2024-09-29T08:08:35Z" level=trace msg="Adding filesystem details to the publish context" file="controller_server.go:930"
time="2024-09-29T08:08:35Z" level=trace msg="Volume pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 with ID my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 published with the following details: [fsMode  serialNumber 6589cfc000000a69472b2e921fd54e29 targetNames iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 targetScope volume lunId 0 readOnly false volumeAccessMode mount fsOwner  accessProtocol iscsi discoveryIps 192.168.150.241 fsType xfs fsCreateOptions ]" file="controller_server.go:942"
time="2024-09-29T08:08:35Z" level=trace msg="<<<<< controllerPublishVolume" file="controller_server.go:944"
time="2024-09-29T08:08:35Z" level=trace msg=">>>>> ClearRequest, key: ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="driver.go:605"
time="2024-09-29T08:08:35Z" level=trace msg="Acquiring mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:44"
time="2024-09-29T08:08:35Z" level=trace msg="Print RequestCache: map[]" file="driver.go:625"
time="2024-09-29T08:08:35Z" level=trace msg="Successfully removed an entry with key ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8 from the cache map" file="driver.go:626"
time="2024-09-29T08:08:35Z" level=trace msg="Releasing mutex lock for ControllerPublishVolume:my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1:b1312db0-c69d-a568-6698-c1f50490d1f8" file="concurrent.go:67"
time="2024-09-29T08:08:35Z" level=trace msg="<<<<< ClearRequest" file="driver.go:627"
time="2024-09-29T08:08:35Z" level=trace msg="<<<<< ControllerPublishVolume" file="controller_server.go:740"
time="2024-09-29T08:08:35Z" level=info msg="GRPC response: {\"publish_context\":{\"accessProtocol\":\"iscsi\",\"discoveryIps\":\"192.168.150.241\",\"fsCreateOptions\":\"\",\"fsMode\":\"\",\"fsOwner\":\"\",\"fsType\":\"xfs\",\"lunId\":\"0\",\"readOnly\":\"false\",\"serialNumber\":\"6589cfc000000a69472b2e921fd54e29\",\"targetNames\":\"iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1\",\"targetScope\":\"volume\",\"volumeAccessMode\":\"mount\"}}" file="utils.go:75"
time="2024-09-29T08:08:54Z" level=trace msg=">>>>> MonitorPod with label monitored-by=hpe-csi" file="flavor.go:838"
time="2024-09-29T08:08:54Z" level=trace msg="cannot find any nfs pod with label monitored-by=hpe-csi" file="flavor.go:852"
time="2024-09-29T08:08:54Z" level=trace msg="<<<<< MonitorPod" file="flavor.go:853"
W0929 08:09:08.752555       1 reflector.go:424] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
E0929 08:09:08.752694       1 reflector.go:140] hpe-csi-driver/pkg/flavor/kubernetes/flavor.go:147: Failed to watch *v1.VolumeSnapshot: failed to list *v1.VolumeSnapshot: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)
time="2024-09-29T08:09:24Z" level=trace msg=">>>>> MonitorPod with label monitored-by=hpe-csi" file="flavor.go:838"
time="2024-09-29T08:09:24Z" level=trace msg="cannot find any nfs pod with label monitored-by=hpe-csi" file="flavor.go:852"
time="2024-09-29T08:09:24Z" level=trace msg="<<<<< MonitorPod" file="flavor.go:853"

Even though the volumes should be published, running multipath -ll on the node does not show the device. Looking at the timing, I see that the HTTP response from the truenas-csp service takes a while, but I suppose that is due to the size of the response.
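
As a rough check of that latency, the same lookup the controller makes can be timed directly against the CSP service. This is only a sketch: the service name, port, headers, and volume ID are copied from the trace above, while the hpe-storage namespace and the API key placeholder are assumptions.

# terminal 1: forward the CSP service locally
kubectl -n hpe-storage port-forward svc/truenas-csp-svc 8080:8080

# terminal 2: time the same GET the controller issues (substitute your TrueNAS API key)
curl -s -o /dev/null -w 'HTTP %{http_code} in %{time_total}s\n' \
  -H 'x-auth-token: <TrueNAS API key>' \
  -H 'x-array-ip: truenas.internaldomain.com' \
  'http://127.0.0.1:8080/containers/v1/volumes/my-data_first.dataset_pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1'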

I also tried to attach the volume manually over SSH, and in that case there is no problem:

iscsiadm -m discovery -t sendtargets -p truenas.internaldomain.com
iscsiadm -m node -T iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 -l

After this, both lsblk and iscsiadm -m session confirm that the volume is attached.
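
(If anyone repeats this check: a small sketch to tear the manual session back down afterwards so it doesn't interfere with the driver's own login, using the same target as above.)

# log out of the manually created session and remove the node record
iscsiadm -m node -T iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 -u
iscsiadm -m node -T iqn.2011-08.org.truenas.ctl:pvc-916c05db-b7d8-4a7f-aa48-2d10cbb3c2b1 -o delete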

datamattsson commented 1 month ago

Thanks for providing the logs and manually verifying the data path. This is definitely a control plane issue.

I have a hard time following the logs on my phone and I'll have to get back to you later.

datamattsson commented 1 month ago

Can you capture the logs from the node driver as well? Something must error out there somehow.

santimar commented 4 weeks ago

I ran some more experiments on this issue.

I tried k3s with Calico instead of Flannel and also tried RKE2 with Calico, but in both cases I got the same error, so it's not something Flannel- or k3s-related. After more investigation, I found in the pro tip section of https://scod.hpedev.io/csi_driver/operations.html#iscsidconf that every node needs to have a unique IQN.
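
A quick way to check for duplicates is to list what the driver itself has recorded per node; a sketch, assuming the HPENodeInfo CRD is exposed as hpenodeinfos.storage.hpe.com (duplicate IQNs show up as adjacent lines with the same prefix):

kubectl get hpenodeinfos.storage.hpe.com \
  -o jsonpath='{range .items[*]}{.spec.iqns[0]}{"\t"}{.metadata.name}{"\n"}{end}' | sort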

My clusters are all provisioned from the same VM template on vSphere, so I thought this was the problem. After a complete review, making sure every node has a unique IQN, I now have 6 clusters and 18 nodes connected to the same TrueNAS server.
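
For reference, a minimal sketch of how a cloned node can be given a fresh initiator name (run on the node itself; the iscsid service name may vary by distribution, and the hpe-csi-node pod likely needs a restart afterwards so the new IQN is re-registered):

# generate a new unique IQN and make it the node's initiator name
echo "InitiatorName=$(iscsi-iname)" | sudo tee /etc/iscsi/initiatorname.iscsi
sudo systemctl restart iscsid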

The strange thing is that the problem only appeared after a certain number of nodes were connected, even though at that point I already had 10+ nodes with duplicated IQNs.

Anyway, it seems that even now, with a unique IQN on every node, scaling up makes the same VolumeAttachment error appear.

For example, this is a cluster I created a few minutes ago:

Normal  SuccessfulAttachVolume  1.9 mins ago    AttachVolume.Attach succeeded for volume "pvc-a89a4939-db62-4522-8b41-a292229692ad"
Warning FailedAttachVolume  3 mins ago  AttachVolume.Attach failed for volume "pvc-a89a4939-db62-4522-8b41-a292229692ad" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
Warning FailedAttachVolume (7)  3.5 mins ago    AttachVolume.Attach failed for volume "pvc-a89a4939-db62-4522-8b41-a292229692ad" : rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:my-data_mynodename_pvc-a89a4939-db62-4522-8b41-a292229692ad:0166a132-1823-2f58-8329-f047671c993f
Normal  Scheduled           4.4 mins ago    Successfully assigned mynamespace/minio-5884f9796b-9skl2 to c2

It still took 2+ minutes and several failures to attach this volume.

Finally, here are the requested logs.

node-driver-registrar

I1007 07:31:19.785586       1 main.go:135] Version: v2.10.1
I1007 07:31:19.785681       1 main.go:136] Running node-driver-registrar in mode=
I1007 07:31:19.785689       1 main.go:157] Attempting to open a gRPC connection with: "/csi/csi.sock"
I1007 07:31:19.785712       1 connection.go:215] Connecting to unix:///csi/csi.sock
I1007 07:31:20.787312       1 main.go:164] Calling CSI driver to discover driver name
I1007 07:31:20.787335       1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginInfo
I1007 07:31:20.787340       1 connection.go:245] GRPC request: {}
I1007 07:31:20.792846       1 connection.go:251] GRPC response: {"name":"csi.hpe.com","vendor_version":"1.3"}
I1007 07:31:20.792863       1 connection.go:252] GRPC error: <nil>
I1007 07:31:20.792873       1 main.go:173] CSI driver name: "csi.hpe.com"
I1007 07:31:20.793257       1 node_register.go:55] Starting Registration Server at: /registration/csi.hpe.com-reg.sock
I1007 07:31:20.793467       1 node_register.go:64] Registration Server started at: /registration/csi.hpe.com-reg.sock
I1007 07:31:20.793528       1 node_register.go:88] Skipping HTTP server because endpoint is set to: ""
I1007 07:31:21.274559       1 main.go:90] Received GetInfo call: &InfoRequest{}
I1007 07:31:21.430338       1 main.go:101] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,}

csi-node-init

+ for arg in "$@"
+ '[' --node-init = --node-service ']'
+ '[' --node-init = --node-init ']'
+ nodeInit=true
+ for arg in "$@"
+ '[' --endpoint=unix:///csi/csi.sock = --node-service ']'
+ '[' --endpoint=unix:///csi/csi.sock = --node-init ']'
+ for arg in "$@"
+ '[' --flavor=kubernetes = --node-service ']'
+ '[' --flavor=kubernetes = --node-init ']'
+ disableNodeConformance=
+ disableNodeConfiguration=
+ '[' '' = true ']'
+ '[' true = true ']'
+ echo 'copying hpe log collector diag script'
+ cp -f /opt/hpe-storage/bin/hpe-logcollector.sh /usr/local/bin/hpe-logcollector.sh
+ chmod +x /usr/local/bin/hpe-logcollector.sh
+ ln -s /host/etc/multipath.conf /etc/multipath.conf
copying hpe log collector diag script
+ ln -s /host/etc/multipath /etc/multipath
+ ln -s /host/etc/iscsi /etc/iscsi
+ '[' -f /host/etc/redhat-release ']'
+ '[' -f /host/etc/os-release ']'
+ rm /etc/os-release
+ ln -s /host/etc/os-release /etc/os-release
+ '[' true = true ']'
+ '[' '' = true ']'
+ '[' '' = true ']'
+ '[' '' '!=' true ']'
+ cp -f /opt/hpe-storage/lib/hpe-storage-node.service /etc/systemd/system/hpe-storage-node.service
+ cp -f /opt/hpe-storage/lib/hpe-storage-node.sh /etc/hpe-storage/hpe-storage-node.sh
+ chmod +x /etc/hpe-storage/hpe-storage-node.sh
+ echo 'running node conformance checks...'
+ systemctl daemon-reload
running node conformance checks...
+ systemctl restart hpe-storage-node
+ '[' '!' -f /host/etc/multipath.conf ']'
+ exec /bin/csi-driver --node-init --endpoint=unix:///csi/csi.sock --flavor=kubernetes
time="2024-10-07T07:31:17Z" level=info msg="Initialized logging." alsoLogToStderr=true logFileLocation=/var/log/hpe-csi-controller.log logLevel=info
time="2024-10-07T07:31:17Z" level=info msg="**********************************************" file="csi-driver.go:56"
time="2024-10-07T07:31:17Z" level=info msg="*************** HPE CSI DRIVER ***************" file="csi-driver.go:57"
time="2024-10-07T07:31:17Z" level=info msg="**********************************************" file="csi-driver.go:58"
time="2024-10-07T07:31:17Z" level=info msg=">>>>> CMDLINE Exec, args: []" file="csi-driver.go:60"
time="2024-10-07T07:31:17Z" level=info msg="got OS details as [redhat 9 4 5.4.0-131-generic]\n" file="os.go:95"
time="2024-10-07T07:31:18Z" level=warning msg="Distro section: Ubuntu , not present for deviceType: Nimble , using default config" file="config.go:248"
time="2024-10-07T07:31:18Z" level=info msg="Successfully set iSCSI recommendations on host" file="iscsi.go:397"
time="2024-10-07T07:31:18Z" level=info msg="Successfully configured multipath.conf settings" file="multipath.go:370"
time="2024-10-07T07:31:18Z" level=info msg=">>>>> node init container " file="nodeinit.go:38"
time="2024-10-07T07:31:18Z" level=info msg="Found 0 multipath devices []" file="multipath.go:423"
time="2024-10-07T07:31:18Z" level=info msg="No multipath devices found on this node ." file="utils.go:45"

csi-driver

copying hpe log collector diag script
+ for arg in "$@"
+ '[' --endpoint=unix:///csi/csi.sock = --node-service ']'
+ '[' --endpoint=unix:///csi/csi.sock = --node-init ']'
+ for arg in "$@"
+ '[' --node-service = --node-service ']'
+ nodeService=true
+ '[' --node-service = --node-init ']'
+ for arg in "$@"
+ '[' --flavor=kubernetes = --node-service ']'
+ '[' --flavor=kubernetes = --node-init ']'
+ for arg in "$@"
+ '[' --node-monitor = --node-service ']'
+ '[' --node-monitor = --node-init ']'
+ for arg in "$@"
+ '[' --node-monitor-interval=90 = --node-service ']'
+ '[' --node-monitor-interval=90 = --node-init ']'
+ disableNodeConformance=
+ disableNodeConfiguration=
+ '[' true = true ']'
+ echo 'copying hpe log collector diag script'
+ cp -f /opt/hpe-storage/bin/hpe-logcollector.sh /usr/local/bin/hpe-logcollector.sh
+ chmod +x /usr/local/bin/hpe-logcollector.sh
+ ln -s /host/etc/multipath.conf /etc/multipath.conf
+ ln -s /host/etc/multipath /etc/multipath
+ ln -s /host/etc/iscsi /etc/iscsi
+ '[' -f /host/etc/redhat-release ']'
+ '[' -f /host/etc/os-release ']'
+ rm /etc/os-release
+ ln -s /host/etc/os-release /etc/os-release
+ '[' '' = true ']'
+ echo 'Starting CSI plugin...'
+ exec /bin/csi-driver --endpoint=unix:///csi/csi.sock --node-service --flavor=kubernetes --node-monitor --node-monitor-interval=90
Starting CSI plugin...
time="2024-10-07T07:31:19Z" level=info msg="Initialized logging." alsoLogToStderr=true logFileLocation=/var/log/hpe-csi-node.log logLevel=info
time="2024-10-07T07:31:19Z" level=info msg="**********************************************" file="csi-driver.go:56"
time="2024-10-07T07:31:19Z" level=info msg="*************** HPE CSI DRIVER ***************" file="csi-driver.go:57"
time="2024-10-07T07:31:19Z" level=info msg="**********************************************" file="csi-driver.go:58"
time="2024-10-07T07:31:19Z" level=info msg=">>>>> CMDLINE Exec, args: []" file="csi-driver.go:60"
time="2024-10-07T07:31:19Z" level=info msg="Enabling controller service capability: CREATE_DELETE_VOLUME" file="driver.go:250"
time="2024-10-07T07:31:19Z" level=info msg="Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME" file="driver.go:250"
time="2024-10-07T07:31:19Z" level=info msg="Enabling controller service capability: LIST_VOLUMES" file="driver.go:250"
time="2024-10-07T07:31:19Z" level=info msg="Enabling controller service capability: CREATE_DELETE_SNAPSHOT" file="driver.go:250"
time="2024-10-07T07:31:19Z" level=info msg="Enabling controller service capability: LIST_SNAPSHOTS" file="driver.go:250"
time="2024-10-07T07:31:19Z" level=info msg="Enabling controller service capability: CLONE_VOLUME" file="driver.go:250"
time="2024-10-07T07:31:19Z" level=info msg="Enabling controller service capability: PUBLISH_READONLY" file="driver.go:250"
time="2024-10-07T07:31:19Z" level=info msg="Enabling controller service capability: EXPAND_VOLUME" file="driver.go:250"
time="2024-10-07T07:31:19Z" level=info msg="Enabling node service capability: STAGE_UNSTAGE_VOLUME" file="driver.go:267"
time="2024-10-07T07:31:19Z" level=info msg="Enabling node service capability: EXPAND_VOLUME" file="driver.go:267"
time="2024-10-07T07:31:19Z" level=info msg="Enabling node service capability: GET_VOLUME_STATS" file="driver.go:267"
time="2024-10-07T07:31:19Z" level=info msg="Enabling volume expansion type: ONLINE" file="driver.go:281"
time="2024-10-07T07:31:19Z" level=info msg="Enabling volume access mode: SINGLE_NODE_WRITER" file="driver.go:293"
time="2024-10-07T07:31:19Z" level=info msg="Enabling volume access mode: SINGLE_NODE_READER_ONLY" file="driver.go:293"
time="2024-10-07T07:31:19Z" level=info msg="Enabling volume access mode: MULTI_NODE_READER_ONLY" file="driver.go:293"
time="2024-10-07T07:31:19Z" level=info msg="Enabling volume access mode: MULTI_NODE_SINGLE_WRITER" file="driver.go:293"
time="2024-10-07T07:31:19Z" level=info msg="Enabling volume access mode: MULTI_NODE_MULTI_WRITER" file="driver.go:293"
time="2024-10-07T07:31:19Z" level=info msg="DB service disabled!!!" file="driver.go:145"
time="2024-10-07T07:31:19Z" level=info msg="About to start the CSI driver 'csi.hpe.com with KubeletRootDirectory /var/lib/kubelet/'" file="csi-driver.go:192"
time="2024-10-07T07:31:19Z" level=info msg="[1] reply  : [/bin/csi-driver --endpoint=unix:///csi/csi.sock --node-service --flavor=kubernetes --node-monitor --node-monitor-interval=90]" file="csi-driver.go:195"
time="2024-10-07T07:31:19Z" level=info msg="Scheduled ephemeral inline volumes scrubber task to run every 3600 seconds, PodsDirPath: [/var/lib/kubelet/pods]" file="driver.go:214"
time="2024-10-07T07:31:19Z" level=info msg=">>>>> Scrubber task invoked at 2024-10-07 07:31:19.942948189 +0000 UTC m=+0.020769820" file="driver.go:746"
time="2024-10-07T07:31:19Z" level=info msg="Listening for connections on address: &net.UnixAddr{Name:\"//csi/csi.sock\", Net:\"unix\"}" file="server.go:86"
time="2024-10-07T07:31:19Z" level=info msg="No ephemeral inline volumes found" file="driver.go:815"
time="2024-10-07T07:31:19Z" level=info msg="<<<<< Scrubber task completed at 2024-10-07 07:31:19.946012451 +0000 UTC m=+0.023834092" file="driver.go:751"
time="2024-10-07T07:31:20Z" level=info msg="GRPC call: /csi.v1.Identity/GetPluginInfo" file="utils.go:69"
time="2024-10-07T07:31:20Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:31:20Z" level=info msg=">>>>> GetPluginInfo" file="identity_server.go:16"
time="2024-10-07T07:31:20Z" level=info msg="<<<<< GetPluginInfo" file="identity_server.go:19"
time="2024-10-07T07:31:20Z" level=info msg="GRPC response: {\"name\":\"csi.hpe.com\",\"vendor_version\":\"1.3\"}" file="utils.go:75"
time="2024-10-07T07:31:21Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetInfo" file="utils.go:69"
time="2024-10-07T07:31:21Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:31:21Z" level=info msg="Writing uuid to file:/etc/hpe-storage/node.gob uuid:0166a132-1823-2f58-8329-f047671c993f" file="chapidriver_linux.go:52"
time="2024-10-07T07:31:21Z" level=info msg="Host name reported as c2" file="node_server.go:2093"
time="2024-10-07T07:31:21Z" level=warning msg="no fc adapters found on the host" file="fc.go:49"
time="2024-10-07T07:31:21Z" level=info msg="Processing network named ens192 with IpV4 CIDR 192.168.150.16/24" file="node_server.go:2123"
time="2024-10-07T07:31:21Z" level=info msg="Processing network named flannel-wg with IpV4 CIDR 10.42.1.0/32" file="node_server.go:2123"
time="2024-10-07T07:31:21Z" level=info msg="Processing network named cni0 with IpV4 CIDR 10.42.1.1/24" file="node_server.go:2123"
time="2024-10-07T07:31:21Z" level=info msg="Adding node with name c2" file="flavor.go:268"
time="2024-10-07T07:31:21Z" level=info msg="Successfully added node info for node &{{ } {c2    a4667196-3442-418c-b06d-c8207959cd08 2487 1 2024-10-07 07:31:21 +0000 UTC <nil> <nil> map[] map[] [] [] [{csi-driver Update storage.hpe.com/v1 2024-10-07 07:31:21 +0000 UTC FieldsV1 {\"f:spec\":{\".\":{},\"f:iqns\":{},\"f:networks\":{},\"f:uuid\":{}}} }]} {0166a132-1823-2f58-8329-f047671c993f [iqn.2024-10.mynodename:c2] [192.168.150.16/24 10.42.1.0/32 10.42.1.1/24] []  }}" file="flavor.go:276"
time="2024-10-07T07:31:21Z" level=info msg="node c2 nodeLabels: map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:k3s beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:c2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane:true node-role.kubernetes.io/etcd:true node-role.kubernetes.io/master:true node.kubernetes.io/instance-type:k3s]" file="node_server.go:2000"
time="2024-10-07T07:31:21Z" level=info msg="node c2 topo: <nil>" file="node_server.go:2021"
time="2024-10-07T07:31:21Z" level=warning msg="Failed to add [/etc/sysconfig/network-scripts/] file to watch list, err no such file or directory :" file="watcher.go:82"
time="2024-10-07T07:31:21Z" level=info msg="GRPC response: {\"max_volumes_per_node\":100,\"node_id\":\"0166a132-1823-2f58-8329-f047671c993f\"}" file="utils.go:75"
time="2024-10-07T07:32:49Z" level=info msg="Node monitor started monitoring the node c2" file="nodemonitor.go:101"
time="2024-10-07T07:32:49Z" level=info msg="Found 0 multipath devices []" file="multipath.go:423"
time="2024-10-07T07:32:49Z" level=info msg="No multipath devices found on this node c2." file="utils.go:45"
time="2024-10-07T07:34:19Z" level=info msg="Node monitor started monitoring the node c2" file="nodemonitor.go:101"
time="2024-10-07T07:34:19Z" level=info msg="Found 0 multipath devices []" file="multipath.go:423"
time="2024-10-07T07:34:19Z" level=info msg="No multipath devices found on this node c2." file="utils.go:45"
time="2024-10-07T07:35:16Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:35:16Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:35:16Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:35:16Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:35:16Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:35:16Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:35:16Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:35:16Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:35:16Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:35:16Z" level=info msg="GRPC call: /csi.v1.Node/NodeStageVolume" file="utils.go:69"
time="2024-10-07T07:35:16Z" level=info msg="GRPC request: {\"publish_context\":{\"accessProtocol\":\"iscsi\",\"discoveryIps\":\"192.168.150.241\",\"fsCreateOptions\":\"\",\"fsMode\":\"\",\"fsOwner\":\"\",\"fsType\":\"xfs\",\"lunId\":\"0\",\"readOnly\":\"false\",\"serialNumber\":\"6589cfc0000000af4de1ef97990e1716\",\"targetNames\":\"iqn.2011-08.org.truenas.ctl:pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd\",\"targetScope\":\"volume\",\"volumeAccessMode\":\"mount\"},\"secrets\":\"***stripped***\",\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.hpe.com/2a6afbdc585cf1404d9b223db62815892fcd37c2861fa40a94519d8f90d2c3a8/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"allowOverrides\":\"sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace\",\"compression\":\"LZ4\",\"csi.storage.k8s.io/pv/name\":\"pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd\",\"csi.storage.k8s.io/pvc/name\":\"prometheus-kube-prometheus-stack-prometheus-db-prometheus-kube-prometheus-stack-prometheus-0\",\"csi.storage.k8s.io/pvc/namespace\":\"prometheus\",\"deduplication\":\"OFF\",\"description\":\"Volume for PVC {pvc}\",\"fsType\":\"xfs\",\"hostEncryption\":\"true\",\"hostEncryptionSecretName\":\"storage-encryption-passphrase\",\"hostEncryptionSecretNamespace\":\"hpe-storage\",\"root\":\"my-data/mynodename\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1728286278715-3559-csi.hpe.com\",\"sync\":\"STANDARD\",\"targetScope\":\"volume\",\"volblocksize\":\"8K\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"my-data_mynodename_pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd\"}" file="utils.go:70"
time="2024-10-07T07:35:16Z" level=info msg="NodeStageVolume requested volume my-data_mynodename_pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd with access type mount, targetPath /var/lib/kubelet/plugins/kubernetes.io/csi/csi.hpe.com/2a6afbdc585cf1404d9b223db62815892fcd37c2861fa40a94519d8f90d2c3a8/globalmount, capability mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > , publishContext map[accessProtocol:iscsi discoveryIps:192.168.150.241 fsCreateOptions: fsMode: fsOwner: fsType:xfs lunId:0 readOnly:false serialNumber:6589cfc0000000af4de1ef97990e1716 targetNames:iqn.2011-08.org.truenas.ctl:pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd targetScope:volume volumeAccessMode:mount] and volumeContext map[allowOverrides:sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace compression:LZ4 csi.storage.k8s.io/pv/name:pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd csi.storage.k8s.io/pvc/name:prometheus-kube-prometheus-stack-prometheus-db-prometheus-kube-prometheus-stack-prometheus-0 csi.storage.k8s.io/pvc/namespace:prometheus deduplication:OFF description:Volume for PVC {pvc} fsType:xfs hostEncryption:true hostEncryptionSecretName:storage-encryption-passphrase hostEncryptionSecretNamespace:hpe-storage root:my-data/mynodename storage.kubernetes.io/csiProvisionerIdentity:1728286278715-3559-csi.hpe.com sync:STANDARD targetScope:volume volblocksize:8K volumeAccessMode:mount]" file="node_server.go:221"
time="2024-10-07T07:35:16Z" level=info msg="Requested volume needs encryption. Received Secret name: storage-encryption-passphrase, Secret namespace: hpe-storage" file="node_server.go:254"
time="2024-10-07T07:35:16Z" level=info msg=GetChapCredentialsFromVolumeContext file="flavor.go:1009"
time="2024-10-07T07:35:16Z" level=info msg="CHAP secret name and namespace are not provided in the storage class parameters." file="flavor.go:1015"
time="2024-10-07T07:35:16Z" level=info msg=GetChapCredentialsFromEnvironment file="flavor.go:995"
time="2024-10-07T07:35:16Z" level=info msg="CHAP secret name and namespace are not provided as environment variables." file="flavor.go:1001"
time="2024-10-07T07:35:16Z" level=error msg="\n Error in GetSecondaryBackends unexpected end of JSON input" file="volume.go:87"
time="2024-10-07T07:35:16Z" level=error msg="\n Passed details " file="volume.go:88"
time="2024-10-07T07:35:16Z" level=error msg="\n Error in GetSecondaryBackends unexpected end of JSON input" file="volume.go:87"
time="2024-10-07T07:35:16Z" level=error msg="\n Passed details " file="volume.go:88"
time="2024-10-07T07:35:17Z" level=info msg="Not a LUKS device - /dev/dm-1" file="device.go:482"
time="2024-10-07T07:35:17Z" level=error msg="process with pid : 45 finished with error = exit status 1" file="cmd.go:63"
time="2024-10-07T07:35:17Z" level=info msg="Device /dev/dm-1 is a new device. LUKS formatting it..." file="device.go:529"
time="2024-10-07T07:35:21Z" level=info msg="Device /dev/dm-1 has been LUKS formatted successfully" file="device.go:538"
time="2024-10-07T07:35:21Z" level=info msg="Opening LUKS device /dev/dm-1 with mapped device enc-mpathb..." file="device.go:542"
time="2024-10-07T07:35:23Z" level=info msg="Opened LUKS device /dev/dm-1 with mapped device enc-mpathb successfully" file="device.go:551"
time="2024-10-07T07:35:23Z" level=info msg="Device setup successful, Device: &{VolumeID: Pathname:dm-1 LuksPathname:enc-mpathb SerialNumber:6589cfc0000000af4de1ef97990e1716 Major:253 Minor:1 AltFullPathName:/dev/mapper/mpathb AltFullLuksPathName:/dev/mapper/enc-mpathb MpathName:mpathb Size:3072 Slaves:[sdb] IscsiTargets:[0xc0000b5a40] Hcils:[33:0:0:0] TargetScope:volume State:active Filesystem: StorageVendor:}" file="node_server.go:431"
time="2024-10-07T07:35:23Z" level=error msg="process with pid : 50 finished with error = exit status 2" file="cmd.go:63"
time="2024-10-07T07:35:24Z" level=info msg="GRPC response: {}" file="utils.go:75"
time="2024-10-07T07:35:24Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:35:24Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:35:24Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:35:24Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:35:24Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:35:24Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:35:24Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:35:24Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:35:24Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:35:24Z" level=info msg="GRPC call: /csi.v1.Node/NodePublishVolume" file="utils.go:69"
time="2024-10-07T07:35:24Z" level=info msg="GRPC request: {\"publish_context\":{\"accessProtocol\":\"iscsi\",\"discoveryIps\":\"192.168.150.241\",\"fsCreateOptions\":\"\",\"fsMode\":\"\",\"fsOwner\":\"\",\"fsType\":\"xfs\",\"lunId\":\"0\",\"readOnly\":\"false\",\"serialNumber\":\"6589cfc0000000af4de1ef97990e1716\",\"targetNames\":\"iqn.2011-08.org.truenas.ctl:pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd\",\"targetScope\":\"volume\",\"volumeAccessMode\":\"mount\"},\"secrets\":\"***stripped***\",\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.hpe.com/2a6afbdc585cf1404d9b223db62815892fcd37c2861fa40a94519d8f90d2c3a8/globalmount\",\"target_path\":\"/var/lib/kubelet/pods/5293f592-c69c-4aac-8151-667028cb2d27/volumes/kubernetes.io~csi/pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd/mount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"allowOverrides\":\"sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace\",\"compression\":\"LZ4\",\"csi.storage.k8s.io/ephemeral\":\"false\",\"csi.storage.k8s.io/pod.name\":\"prometheus-kube-prometheus-stack-prometheus-0\",\"csi.storage.k8s.io/pod.namespace\":\"prometheus\",\"csi.storage.k8s.io/pod.uid\":\"5293f592-c69c-4aac-8151-667028cb2d27\",\"csi.storage.k8s.io/pv/name\":\"pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd\",\"csi.storage.k8s.io/pvc/name\":\"prometheus-kube-prometheus-stack-prometheus-db-prometheus-kube-prometheus-stack-prometheus-0\",\"csi.storage.k8s.io/pvc/namespace\":\"prometheus\",\"csi.storage.k8s.io/serviceAccount.name\":\"kube-prometheus-stack-prometheus\",\"deduplication\":\"OFF\",\"description\":\"Volume for PVC {pvc}\",\"fsType\":\"xfs\",\"hostEncryption\":\"true\",\"hostEncryptionSecretName\":\"storage-encryption-passphrase\",\"hostEncryptionSecretNamespace\":\"hpe-storage\",\"root\":\"my-data/mynodename\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1728286278715-3559-csi.hpe.com\",\"sync\":\"STANDARD\",\"targetScope\":\"volume\",\"volblocksize\":\"8K\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"my-data_mynodename_pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd\"}" file="utils.go:70"
time="2024-10-07T07:35:24Z" level=info msg="NodePublishVolume requested volume my-data_mynodename_pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd with access type mount, targetPath /var/lib/kubelet/pods/5293f592-c69c-4aac-8151-667028cb2d27/volumes/kubernetes.io~csi/pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd/mount, capability mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > , publishContext map[accessProtocol:iscsi discoveryIps:192.168.150.241 fsCreateOptions: fsMode: fsOwner: fsType:xfs lunId:0 readOnly:false serialNumber:6589cfc0000000af4de1ef97990e1716 targetNames:iqn.2011-08.org.truenas.ctl:pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd targetScope:volume volumeAccessMode:mount] and volumeContext map[allowOverrides:sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace compression:LZ4 csi.storage.k8s.io/ephemeral:false csi.storage.k8s.io/pod.name:prometheus-kube-prometheus-stack-prometheus-0 csi.storage.k8s.io/pod.namespace:prometheus csi.storage.k8s.io/pod.uid:5293f592-c69c-4aac-8151-667028cb2d27 csi.storage.k8s.io/pv/name:pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd csi.storage.k8s.io/pvc/name:prometheus-kube-prometheus-stack-prometheus-db-prometheus-kube-prometheus-stack-prometheus-0 csi.storage.k8s.io/pvc/namespace:prometheus csi.storage.k8s.io/serviceAccount.name:kube-prometheus-stack-prometheus deduplication:OFF description:Volume for PVC {pvc} fsType:xfs hostEncryption:true hostEncryptionSecretName:storage-encryption-passphrase hostEncryptionSecretNamespace:hpe-storage root:my-data/mynodename storage.kubernetes.io/csiProvisionerIdentity:1728286278715-3559-csi.hpe.com sync:STANDARD targetScope:volume volblocksize:8K volumeAccessMode:mount]" file="node_server.go:833"
time="2024-10-07T07:35:24Z" level=info msg="Adding connection to CSP at IP truenas.internaldomain.com, port 8080, context path , with username ignored-when-using-api-key and serviceName truenas-csp-svc" file="driver.go:334"
time="2024-10-07T07:35:24Z" level=info msg="About to attempt login to CSP for backend truenas.internaldomain.com" file="container_storage_provider.go:108"
time="2024-10-07T07:35:26Z" level=info msg="Successfully published the volume my-data_mynodename_pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd to the target path /var/lib/kubelet/pods/5293f592-c69c-4aac-8151-667028cb2d27/volumes/kubernetes.io~csi/pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd/mount" file="node_server.go:893"
time="2024-10-07T07:35:26Z" level=info msg="GRPC response: {}" file="utils.go:75"
time="2024-10-07T07:35:49Z" level=info msg="Node monitor started monitoring the node c2" file="nodemonitor.go:101"
time="2024-10-07T07:35:49Z" level=info msg="Found 1 multipath devices [{Name:mpathb UUID:36589cfc0000000af4de1ef97990e1716 Sysfs:dm-1 Failback:- Queueing:- Paths:1 WriteProt:rw DmSt:active Features:0 Hwhandler:0 Action:create PathFaults:0 Vend:TrueNAS Prod:iSCSI Disk Rev:380 SwitchGrp:0 MapLoads:1 TotalQTime:0 QTimeouts:0 PathGroups:[{Selector:service-time 0 Pri:1 DmSt:active Group:1 Paths:[{Dev:sdb DevT:8:16 DmSt:active DevSt:running ChkSt:ready Checker:tur Pri:1 HostWwnn:[undef] TargetWwnn:iqn.2011-08.org.truenas.ctl:pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd HostWwpn:[undef] TargetWwpn:[undef] HostAdapter:192.168.150.16}]}] IsUnhealthy:false}]" file="multipath.go:423"
time="2024-10-07T07:35:49Z" level=info msg=" 1 multipath devices found on the node c2" file="utils.go:49"
time="2024-10-07T07:35:49Z" level=info msg="Node c2 has a proper connection with the control plane" file="utils.go:79"
time="2024-10-07T07:35:49Z" level=info msg="3 volume attachments found" file="utils.go:88"
time="2024-10-07T07:35:49Z" level=info msg="Assessing the multipath device mpathb" file="utils.go:93"
time="2024-10-07T07:35:49Z" level=info msg="The multipath device mpathb belongs to this node c2 and is healthy." file="utils.go:98"
time="2024-10-07T07:37:11Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:37:11Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:37:11Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:37:11Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetVolumeStats" file="utils.go:69"
time="2024-10-07T07:37:11Z" level=info msg="GRPC request: {\"volume_id\":\"my-data_mynodename_pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd\",\"volume_path\":\"/var/lib/kubelet/pods/5293f592-c69c-4aac-8151-667028cb2d27/volumes/kubernetes.io~csi/pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd/mount\"}" file="utils.go:70"
time="2024-10-07T07:37:11Z" level=info msg="GRPC response: {\"usage\":[{\"available\":3135651840,\"total\":3208642560,\"unit\":1,\"used\":72990720},{\"available\":1571831,\"total\":1571840,\"unit\":2,\"used\":9}]}" file="utils.go:75"
time="2024-10-07T07:37:19Z" level=info msg="Node monitor started monitoring the node c2" file="nodemonitor.go:101"
time="2024-10-07T07:37:19Z" level=info msg="Found 1 multipath devices [{Name:mpathb UUID:36589cfc0000000af4de1ef97990e1716 Sysfs:dm-1 Failback:- Queueing:- Paths:1 WriteProt:rw DmSt:active Features:0 Hwhandler:0 Action:create PathFaults:0 Vend:TrueNAS Prod:iSCSI Disk Rev:380 SwitchGrp:0 MapLoads:1 TotalQTime:0 QTimeouts:0 PathGroups:[{Selector:service-time 0 Pri:1 DmSt:active Group:1 Paths:[{Dev:sdb DevT:8:16 DmSt:active DevSt:running ChkSt:ready Checker:tur Pri:1 HostWwnn:[undef] TargetWwnn:iqn.2011-08.org.truenas.ctl:pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd HostWwpn:[undef] TargetWwpn:[undef] HostAdapter:192.168.150.16}]}] IsUnhealthy:false}]" file="multipath.go:423"
time="2024-10-07T07:37:19Z" level=info msg=" 1 multipath devices found on the node c2" file="utils.go:49"
time="2024-10-07T07:37:19Z" level=info msg="Node c2 has a proper connection with the control plane" file="utils.go:79"
time="2024-10-07T07:37:19Z" level=info msg="4 volume attachments found" file="utils.go:88"
time="2024-10-07T07:37:19Z" level=info msg="Assessing the multipath device mpathb" file="utils.go:93"
time="2024-10-07T07:37:19Z" level=info msg="The multipath device mpathb belongs to this node c2 and is healthy." file="utils.go:98"
time="2024-10-07T07:38:26Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:38:26Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:38:26Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:38:26Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetVolumeStats" file="utils.go:69"
time="2024-10-07T07:38:26Z" level=info msg="GRPC request: {\"volume_id\":\"my-data_mynodename_pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd\",\"volume_path\":\"/var/lib/kubelet/pods/5293f592-c69c-4aac-8151-667028cb2d27/volumes/kubernetes.io~csi/pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd/mount\"}" file="utils.go:70"
time="2024-10-07T07:38:26Z" level=info msg="GRPC response: {\"usage\":[{\"available\":3135651840,\"total\":3208642560,\"unit\":1,\"used\":72990720},{\"available\":1571831,\"total\":1571840,\"unit\":2,\"used\":9}]}" file="utils.go:75"
time="2024-10-07T07:38:49Z" level=info msg="Node monitor started monitoring the node c2" file="nodemonitor.go:101"
time="2024-10-07T07:38:49Z" level=info msg="Found 1 multipath devices [{Name:mpathb UUID:36589cfc0000000af4de1ef97990e1716 Sysfs:dm-1 Failback:- Queueing:- Paths:1 WriteProt:rw DmSt:active Features:0 Hwhandler:0 Action:create PathFaults:0 Vend:TrueNAS Prod:iSCSI Disk Rev:380 SwitchGrp:0 MapLoads:1 TotalQTime:0 QTimeouts:0 PathGroups:[{Selector:service-time 0 Pri:1 DmSt:active Group:1 Paths:[{Dev:sdb DevT:8:16 DmSt:active DevSt:running ChkSt:ready Checker:tur Pri:1 HostWwnn:[undef] TargetWwnn:iqn.2011-08.org.truenas.ctl:pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd HostWwpn:[undef] TargetWwpn:[undef] HostAdapter:192.168.150.16}]}] IsUnhealthy:false}]" file="multipath.go:423"
time="2024-10-07T07:38:49Z" level=info msg=" 1 multipath devices found on the node c2" file="utils.go:49"
time="2024-10-07T07:38:49Z" level=info msg="Node c2 has a proper connection with the control plane" file="utils.go:79"
time="2024-10-07T07:38:49Z" level=info msg="6 volume attachments found" file="utils.go:88"
time="2024-10-07T07:38:49Z" level=info msg="Assessing the multipath device mpathb" file="utils.go:93"
time="2024-10-07T07:38:49Z" level=info msg="The multipath device mpathb belongs to this node c2 and is healthy." file="utils.go:98"
time="2024-10-07T07:40:07Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:40:07Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:40:07Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:40:07Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetVolumeStats" file="utils.go:69"
time="2024-10-07T07:40:07Z" level=info msg="GRPC request: {\"volume_id\":\"my-data_mynodename_pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd\",\"volume_path\":\"/var/lib/kubelet/pods/5293f592-c69c-4aac-8151-667028cb2d27/volumes/kubernetes.io~csi/pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd/mount\"}" file="utils.go:70"
time="2024-10-07T07:40:07Z" level=info msg="GRPC response: {\"usage\":[{\"available\":3118874624,\"total\":3208642560,\"unit\":1,\"used\":89767936},{\"available\":1571831,\"total\":1571840,\"unit\":2,\"used\":9}]}" file="utils.go:75"
time="2024-10-07T07:40:12Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:40:12Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:40:12Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:40:12Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:40:12Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:40:12Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:40:12Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:40:12Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:40:12Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:40:12Z" level=info msg="GRPC call: /csi.v1.Node/NodeStageVolume" file="utils.go:69"
time="2024-10-07T07:40:12Z" level=info msg="GRPC request: {\"publish_context\":{\"accessProtocol\":\"iscsi\",\"discoveryIps\":\"192.168.150.241\",\"fsCreateOptions\":\"\",\"fsMode\":\"\",\"fsOwner\":\"\",\"fsType\":\"xfs\",\"lunId\":\"0\",\"readOnly\":\"false\",\"serialNumber\":\"6589cfc000000f5be3780c8c78c68cc8\",\"targetNames\":\"iqn.2011-08.org.truenas.ctl:pvc-a89a4939-db62-4522-8b41-a292229692ad\",\"targetScope\":\"volume\",\"volumeAccessMode\":\"mount\"},\"secrets\":\"***stripped***\",\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.hpe.com/85ee573119d9ecda887cac10b425d418c54d575bdad3ac0f749bec1791191ee4/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"allowOverrides\":\"sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace\",\"compression\":\"LZ4\",\"csi.storage.k8s.io/pv/name\":\"pvc-a89a4939-db62-4522-8b41-a292229692ad\",\"csi.storage.k8s.io/pvc/name\":\"minio\",\"csi.storage.k8s.io/pvc/namespace\":\"mynamespace\",\"deduplication\":\"OFF\",\"description\":\"Volume for PVC {pvc}\",\"fsType\":\"xfs\",\"hostEncryption\":\"true\",\"hostEncryptionSecretName\":\"storage-encryption-passphrase\",\"hostEncryptionSecretNamespace\":\"hpe-storage\",\"root\":\"my-data/mynodename\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1728286278715-3559-csi.hpe.com\",\"sync\":\"STANDARD\",\"targetScope\":\"volume\",\"volblocksize\":\"8K\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"my-data_mynodename_pvc-a89a4939-db62-4522-8b41-a292229692ad\"}" file="utils.go:70"
time="2024-10-07T07:40:12Z" level=info msg="NodeStageVolume requested volume my-data_mynodename_pvc-a89a4939-db62-4522-8b41-a292229692ad with access type mount, targetPath /var/lib/kubelet/plugins/kubernetes.io/csi/csi.hpe.com/85ee573119d9ecda887cac10b425d418c54d575bdad3ac0f749bec1791191ee4/globalmount, capability mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > , publishContext map[accessProtocol:iscsi discoveryIps:192.168.150.241 fsCreateOptions: fsMode: fsOwner: fsType:xfs lunId:0 readOnly:false serialNumber:6589cfc000000f5be3780c8c78c68cc8 targetNames:iqn.2011-08.org.truenas.ctl:pvc-a89a4939-db62-4522-8b41-a292229692ad targetScope:volume volumeAccessMode:mount] and volumeContext map[allowOverrides:sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace compression:LZ4 csi.storage.k8s.io/pv/name:pvc-a89a4939-db62-4522-8b41-a292229692ad csi.storage.k8s.io/pvc/name:minio csi.storage.k8s.io/pvc/namespace:mynamespace deduplication:OFF description:Volume for PVC {pvc} fsType:xfs hostEncryption:true hostEncryptionSecretName:storage-encryption-passphrase hostEncryptionSecretNamespace:hpe-storage root:my-data/mynodename storage.kubernetes.io/csiProvisionerIdentity:1728286278715-3559-csi.hpe.com sync:STANDARD targetScope:volume volblocksize:8K volumeAccessMode:mount]" file="node_server.go:221"
time="2024-10-07T07:40:12Z" level=info msg="Requested volume needs encryption. Received Secret name: storage-encryption-passphrase, Secret namespace: hpe-storage" file="node_server.go:254"
time="2024-10-07T07:40:12Z" level=info msg=GetChapCredentialsFromVolumeContext file="flavor.go:1009"
time="2024-10-07T07:40:12Z" level=info msg="CHAP secret name and namespace are not provided in the storage class parameters." file="flavor.go:1015"
time="2024-10-07T07:40:12Z" level=info msg=GetChapCredentialsFromEnvironment file="flavor.go:995"
time="2024-10-07T07:40:12Z" level=info msg="CHAP secret name and namespace are not provided as environment variables." file="flavor.go:1001"
time="2024-10-07T07:40:12Z" level=error msg="\n Error in GetSecondaryBackends unexpected end of JSON input" file="volume.go:87"
time="2024-10-07T07:40:12Z" level=error msg="\n Passed details " file="volume.go:88"
time="2024-10-07T07:40:12Z" level=error msg="\n Error in GetSecondaryBackends unexpected end of JSON input" file="volume.go:87"
time="2024-10-07T07:40:12Z" level=error msg="\n Passed details " file="volume.go:88"
time="2024-10-07T07:40:13Z" level=error msg="process with pid : 92 finished with error = exit status 1" file="cmd.go:63"
time="2024-10-07T07:40:13Z" level=info msg="Not a LUKS device - /dev/dm-3" file="device.go:482"
time="2024-10-07T07:40:13Z" level=info msg="Device /dev/dm-3 is a new device. LUKS formatting it..." file="device.go:529"
time="2024-10-07T07:40:18Z" level=info msg="Device /dev/dm-3 has been LUKS formatted successfully" file="device.go:538"
time="2024-10-07T07:40:18Z" level=info msg="Opening LUKS device /dev/dm-3 with mapped device enc-mpathc..." file="device.go:542"
time="2024-10-07T07:40:19Z" level=info msg="Node monitor started monitoring the node c2" file="nodemonitor.go:101"
time="2024-10-07T07:40:19Z" level=info msg="Found 2 multipath devices [{Name:mpathb UUID:36589cfc0000000af4de1ef97990e1716 Sysfs:dm-1 Failback:- Queueing:- Paths:1 WriteProt:rw DmSt:active Features:0 Hwhandler:0 Action:create PathFaults:0 Vend:TrueNAS Prod:iSCSI Disk Rev:380 SwitchGrp:0 MapLoads:1 TotalQTime:0 QTimeouts:0 PathGroups:[{Selector:service-time 0 Pri:1 DmSt:active Group:1 Paths:[{Dev:sdb DevT:8:16 DmSt:active DevSt:running ChkSt:ready Checker:tur Pri:1 HostWwnn:[undef] TargetWwnn:iqn.2011-08.org.truenas.ctl:pvc-b9be8d82-2d72-4ea9-acf4-63f19a813ddd HostWwpn:[undef] TargetWwpn:[undef] HostAdapter:192.168.150.16}]}] IsUnhealthy:false} {Name:mpathc UUID:36589cfc000000f5be3780c8c78c68cc8 Sysfs:dm-3 Failback:- Queueing:- Paths:1 WriteProt:rw DmSt:active Features:0 Hwhandler:0 Action:create PathFaults:0 Vend:TrueNAS Prod:iSCSI Disk Rev:380 SwitchGrp:0 MapLoads:1 TotalQTime:0 QTimeouts:0 PathGroups:[{Selector:service-time 0 Pri:1 DmSt:active Group:1 Paths:[{Dev:sdc DevT:8:32 DmSt:active DevSt:running ChkSt:ready Checker:tur Pri:1 HostWwnn:[undef] TargetWwnn:iqn.2011-08.org.truenas.ctl:pvc-a89a4939-db62-4522-8b41-a292229692ad HostWwpn:[undef] TargetWwpn:[undef] HostAdapter:192.168.150.16}]}] IsUnhealthy:false}]" file="multipath.go:423"
time="2024-10-07T07:40:19Z" level=info msg=" 2 multipath devices found on the node c2" file="utils.go:49"
time="2024-10-07T07:40:19Z" level=info msg="Node c2 has a proper connection with the control plane" file="utils.go:79"
time="2024-10-07T07:40:19Z" level=info msg="7 volume attachments found" file="utils.go:88"
time="2024-10-07T07:40:19Z" level=info msg="Assessing the multipath device mpathb" file="utils.go:93"
time="2024-10-07T07:40:19Z" level=info msg="The multipath device mpathb belongs to this node c2 and is healthy." file="utils.go:98"
time="2024-10-07T07:40:19Z" level=info msg="Assessing the multipath device mpathc" file="utils.go:93"
time="2024-10-07T07:40:19Z" level=info msg="The multipath device mpathc belongs to this node c2 and is healthy." file="utils.go:98"
time="2024-10-07T07:40:20Z" level=info msg="Opened LUKS device /dev/dm-3 with mapped device enc-mpathc successfully" file="device.go:551"
time="2024-10-07T07:40:20Z" level=info msg="Device setup successful, Device: &{VolumeID: Pathname:dm-3 LuksPathname:enc-mpathc SerialNumber:6589cfc000000f5be3780c8c78c68cc8 Major:253 Minor:3 AltFullPathName:/dev/mapper/mpathc AltFullLuksPathName:/dev/mapper/enc-mpathc MpathName:mpathc Size:40960 Slaves:[sdc] IscsiTargets:[0xc0004ce500] Hcils:[34:0:0:0] TargetScope:volume State:active Filesystem: StorageVendor:}" file="node_server.go:431"
time="2024-10-07T07:40:20Z" level=error msg="process with pid : 100 finished with error = exit status 2" file="cmd.go:63"
time="2024-10-07T07:40:21Z" level=info msg="GRPC response: {}" file="utils.go:75"
time="2024-10-07T07:40:21Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:40:21Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:40:21Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:40:21Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:40:21Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:40:21Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:40:21Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:40:21Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:40:21Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:40:21Z" level=info msg="GRPC call: /csi.v1.Node/NodePublishVolume" file="utils.go:69"
time="2024-10-07T07:40:21Z" level=info msg="GRPC request: {\"publish_context\":{\"accessProtocol\":\"iscsi\",\"discoveryIps\":\"192.168.150.241\",\"fsCreateOptions\":\"\",\"fsMode\":\"\",\"fsOwner\":\"\",\"fsType\":\"xfs\",\"lunId\":\"0\",\"readOnly\":\"false\",\"serialNumber\":\"6589cfc000000f5be3780c8c78c68cc8\",\"targetNames\":\"iqn.2011-08.org.truenas.ctl:pvc-a89a4939-db62-4522-8b41-a292229692ad\",\"targetScope\":\"volume\",\"volumeAccessMode\":\"mount\"},\"secrets\":\"***stripped***\",\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.hpe.com/85ee573119d9ecda887cac10b425d418c54d575bdad3ac0f749bec1791191ee4/globalmount\",\"target_path\":\"/var/lib/kubelet/pods/ddc75ff1-ac44-4bba-879d-1f6bd7608b67/volumes/kubernetes.io~csi/pvc-a89a4939-db62-4522-8b41-a292229692ad/mount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"allowOverrides\":\"sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace\",\"compression\":\"LZ4\",\"csi.storage.k8s.io/ephemeral\":\"false\",\"csi.storage.k8s.io/pod.name\":\"minio-5884f9796b-9skl2\",\"csi.storage.k8s.io/pod.namespace\":\"mynamespace\",\"csi.storage.k8s.io/pod.uid\":\"ddc75ff1-ac44-4bba-879d-1f6bd7608b67\",\"csi.storage.k8s.io/pv/name\":\"pvc-a89a4939-db62-4522-8b41-a292229692ad\",\"csi.storage.k8s.io/pvc/name\":\"minio\",\"csi.storage.k8s.io/pvc/namespace\":\"mynamespace\",\"csi.storage.k8s.io/serviceAccount.name\":\"minio\",\"deduplication\":\"OFF\",\"description\":\"Volume for PVC {pvc}\",\"fsType\":\"xfs\",\"hostEncryption\":\"true\",\"hostEncryptionSecretName\":\"storage-encryption-passphrase\",\"hostEncryptionSecretNamespace\":\"hpe-storage\",\"root\":\"my-data/mynodename\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1728286278715-3559-csi.hpe.com\",\"sync\":\"STANDARD\",\"targetScope\":\"volume\",\"volblocksize\":\"8K\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"my-data_mynodename_pvc-a89a4939-db62-4522-8b41-a292229692ad\"}" file="utils.go:70"
time="2024-10-07T07:40:21Z" level=info msg="NodePublishVolume requested volume my-data_mynodename_pvc-a89a4939-db62-4522-8b41-a292229692ad with access type mount, targetPath /var/lib/kubelet/pods/ddc75ff1-ac44-4bba-879d-1f6bd7608b67/volumes/kubernetes.io~csi/pvc-a89a4939-db62-4522-8b41-a292229692ad/mount, capability mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > , publishContext map[accessProtocol:iscsi discoveryIps:192.168.150.241 fsCreateOptions: fsMode: fsOwner: fsType:xfs lunId:0 readOnly:false serialNumber:6589cfc000000f5be3780c8c78c68cc8 targetNames:iqn.2011-08.org.truenas.ctl:pvc-a89a4939-db62-4522-8b41-a292229692ad targetScope:volume volumeAccessMode:mount] and volumeContext map[allowOverrides:sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace compression:LZ4 csi.storage.k8s.io/ephemeral:false csi.storage.k8s.io/pod.name:minio-5884f9796b-9skl2 csi.storage.k8s.io/pod.namespace:mynamespace csi.storage.k8s.io/pod.uid:ddc75ff1-ac44-4bba-879d-1f6bd7608b67 csi.storage.k8s.io/pv/name:pvc-a89a4939-db62-4522-8b41-a292229692ad csi.storage.k8s.io/pvc/name:minio csi.storage.k8s.io/pvc/namespace:mynamespace csi.storage.k8s.io/serviceAccount.name:minio deduplication:OFF description:Volume for PVC {pvc} fsType:xfs hostEncryption:true hostEncryptionSecretName:storage-encryption-passphrase hostEncryptionSecretNamespace:hpe-storage root:my-data/mynodename storage.kubernetes.io/csiProvisionerIdentity:1728286278715-3559-csi.hpe.com sync:STANDARD targetScope:volume volblocksize:8K volumeAccessMode:mount]" file="node_server.go:833"
time="2024-10-07T07:40:23Z" level=info msg="Successfully published the volume my-data_mynodename_pvc-a89a4939-db62-4522-8b41-a292229692ad to the target path /var/lib/kubelet/pods/ddc75ff1-ac44-4bba-879d-1f6bd7608b67/volumes/kubernetes.io~csi/pvc-a89a4939-db62-4522-8b41-a292229692ad/mount" file="node_server.go:893"
time="2024-10-07T07:40:23Z" level=info msg="GRPC response: {}" file="utils.go:75"
time="2024-10-07T07:40:33Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:40:33Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:40:33Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:40:33Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:40:33Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:40:33Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:40:33Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:40:33Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:40:33Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:40:33Z" level=info msg="GRPC call: /csi.v1.Node/NodeStageVolume" file="utils.go:69"
time="2024-10-07T07:40:33Z" level=info msg="GRPC request: {\"publish_context\":{\"accessProtocol\":\"iscsi\",\"discoveryIps\":\"192.168.150.241\",\"fsCreateOptions\":\"\",\"fsMode\":\"\",\"fsOwner\":\"\",\"fsType\":\"xfs\",\"lunId\":\"0\",\"readOnly\":\"false\",\"serialNumber\":\"6589cfc000000eb6d21c8877e4eb9ed0\",\"targetNames\":\"iqn.2011-08.org.truenas.ctl:pvc-01296b36-9a4f-45c2-a80e-1007cc03a20a\",\"targetScope\":\"volume\",\"volumeAccessMode\":\"mount\"},\"secrets\":\"***stripped***\",\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.hpe.com/75983644f28b4f1d31190f70fc2a8132761851c6e33f8748918fd4871f8a2e8f/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"allowOverrides\":\"sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace\",\"compression\":\"LZ4\",\"csi.storage.k8s.io/pv/name\":\"pvc-01296b36-9a4f-45c2-a80e-1007cc03a20a\",\"csi.storage.k8s.io/pvc/name\":\"data-kafka-controller-1\",\"csi.storage.k8s.io/pvc/namespace\":\"mynamespace\",\"deduplication\":\"OFF\",\"description\":\"Volume for PVC {pvc}\",\"fsType\":\"xfs\",\"hostEncryption\":\"true\",\"hostEncryptionSecretName\":\"storage-encryption-passphrase\",\"hostEncryptionSecretNamespace\":\"hpe-storage\",\"root\":\"my-data/mynodename\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1728286278715-3559-csi.hpe.com\",\"sync\":\"STANDARD\",\"targetScope\":\"volume\",\"volblocksize\":\"8K\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"my-data_mynodename_pvc-01296b36-9a4f-45c2-a80e-1007cc03a20a\"}" file="utils.go:70"
time="2024-10-07T07:40:33Z" level=info msg="NodeStageVolume requested volume my-data_mynodename_pvc-01296b36-9a4f-45c2-a80e-1007cc03a20a with access type mount, targetPath /var/lib/kubelet/plugins/kubernetes.io/csi/csi.hpe.com/75983644f28b4f1d31190f70fc2a8132761851c6e33f8748918fd4871f8a2e8f/globalmount, capability mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > , publishContext map[accessProtocol:iscsi discoveryIps:192.168.150.241 fsCreateOptions: fsMode: fsOwner: fsType:xfs lunId:0 readOnly:false serialNumber:6589cfc000000eb6d21c8877e4eb9ed0 targetNames:iqn.2011-08.org.truenas.ctl:pvc-01296b36-9a4f-45c2-a80e-1007cc03a20a targetScope:volume volumeAccessMode:mount] and volumeContext map[allowOverrides:sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace compression:LZ4 csi.storage.k8s.io/pv/name:pvc-01296b36-9a4f-45c2-a80e-1007cc03a20a csi.storage.k8s.io/pvc/name:data-kafka-controller-1 csi.storage.k8s.io/pvc/namespace:mynamespace deduplication:OFF description:Volume for PVC {pvc} fsType:xfs hostEncryption:true hostEncryptionSecretName:storage-encryption-passphrase hostEncryptionSecretNamespace:hpe-storage root:my-data/mynodename storage.kubernetes.io/csiProvisionerIdentity:1728286278715-3559-csi.hpe.com sync:STANDARD targetScope:volume volblocksize:8K volumeAccessMode:mount]" file="node_server.go:221"
time="2024-10-07T07:40:33Z" level=info msg="Requested volume needs encryption. Received Secret name: storage-encryption-passphrase, Secret namespace: hpe-storage" file="node_server.go:254"
time="2024-10-07T07:40:33Z" level=info msg=GetChapCredentialsFromVolumeContext file="flavor.go:1009"
time="2024-10-07T07:40:33Z" level=info msg="CHAP secret name and namespace are not provided in the storage class parameters." file="flavor.go:1015"
time="2024-10-07T07:40:33Z" level=info msg=GetChapCredentialsFromEnvironment file="flavor.go:995"
time="2024-10-07T07:40:33Z" level=info msg="CHAP secret name and namespace are not provided as environment variables." file="flavor.go:1001"
time="2024-10-07T07:40:34Z" level=error msg="\n Error in GetSecondaryBackends unexpected end of JSON input" file="volume.go:87"
time="2024-10-07T07:40:34Z" level=error msg="\n Passed details " file="volume.go:88"
time="2024-10-07T07:40:34Z" level=error msg="\n Error in GetSecondaryBackends unexpected end of JSON input" file="volume.go:87"
time="2024-10-07T07:40:34Z" level=error msg="\n Passed details " file="volume.go:88"
time="2024-10-07T07:40:35Z" level=error msg="process with pid : 136 finished with error = exit status 1" file="cmd.go:63"
time="2024-10-07T07:40:35Z" level=info msg="Not a LUKS device - /dev/dm-5" file="device.go:482"
time="2024-10-07T07:40:35Z" level=info msg="Device /dev/dm-5 is a new device. LUKS formatting it..." file="device.go:529"
time="2024-10-07T07:40:39Z" level=info msg="Device /dev/dm-5 has been LUKS formatted successfully" file="device.go:538"
time="2024-10-07T07:40:39Z" level=info msg="Opening LUKS device /dev/dm-5 with mapped device enc-mpathd..." file="device.go:542"
time="2024-10-07T07:40:40Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:40:40Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:40:40Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:40:40Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetVolumeStats" file="utils.go:69"
time="2024-10-07T07:40:40Z" level=info msg="GRPC request: {\"volume_id\":\"my-data_mynodename_pvc-a89a4939-db62-4522-8b41-a292229692ad\",\"volume_path\":\"/var/lib/kubelet/pods/ddc75ff1-ac44-4bba-879d-1f6bd7608b67/volumes/kubernetes.io~csi/pvc-a89a4939-db62-4522-8b41-a292229692ad/mount\"}" file="utils.go:70"
time="2024-10-07T07:40:40Z" level=info msg="GRPC response: {\"usage\":[{\"available\":42593300480,\"total\":42926608384,\"unit\":1,\"used\":333307904},{\"available\":20970477,\"total\":20970496,\"unit\":2,\"used\":19}]}" file="utils.go:75"
time="2024-10-07T07:40:41Z" level=info msg="Opened LUKS device /dev/dm-5 with mapped device enc-mpathd successfully" file="device.go:551"
time="2024-10-07T07:40:41Z" level=info msg="Device setup successful, Device: &{VolumeID: Pathname:dm-5 LuksPathname:enc-mpathd SerialNumber:6589cfc000000eb6d21c8877e4eb9ed0 Major:253 Minor:5 AltFullPathName:/dev/mapper/mpathd AltFullLuksPathName:/dev/mapper/enc-mpathd MpathName:mpathd Size:2048 Slaves:[sdd] IscsiTargets:[0xc000229e50] Hcils:[35:0:0:0] TargetScope:volume State:active Filesystem: StorageVendor:}" file="node_server.go:431"
time="2024-10-07T07:40:41Z" level=error msg="process with pid : 141 finished with error = exit status 2" file="cmd.go:63"
time="2024-10-07T07:40:42Z" level=info msg="GRPC response: {}" file="utils.go:75"
time="2024-10-07T07:40:42Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:40:42Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:40:42Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:40:42Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:40:42Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:40:42Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:40:42Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2024-10-07T07:40:42Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2024-10-07T07:40:42Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}},{\"Type\":{\"Rpc\":{\"type\":2}}}]}" file="utils.go:75"
time="2024-10-07T07:40:42Z" level=info msg="GRPC call: /csi.v1.Node/NodePublishVolume" file="utils.go:69"
time="2024-10-07T07:40:42Z" level=info msg="GRPC request: {\"publish_context\":{\"accessProtocol\":\"iscsi\",\"discoveryIps\":\"192.168.150.241\",\"fsCreateOptions\":\"\",\"fsMode\":\"\",\"fsOwner\":\"\",\"fsType\":\"xfs\",\"lunId\":\"0\",\"readOnly\":\"false\",\"serialNumber\":\"6589cfc000000eb6d21c8877e4eb9ed0\",\"targetNames\":\"iqn.2011-08.org.truenas.ctl:pvc-01296b36-9a4f-45c2-a80e-1007cc03a20a\",\"targetScope\":\"volume\",\"volumeAccessMode\":\"mount\"},\"secrets\":\"***stripped***\",\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.hpe.com/75983644f28b4f1d31190f70fc2a8132761851c6e33f8748918fd4871f8a2e8f/globalmount\",\"target_path\":\"/var/lib/kubelet/pods/15ad7a06-02f3-4bed-bfed-b3d2e6ed99c0/volumes/kubernetes.io~csi/pvc-01296b36-9a4f-45c2-a80e-1007cc03a20a/mount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"allowOverrides\":\"sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace\",\"compression\":\"LZ4\",\"csi.storage.k8s.io/ephemeral\":\"false\",\"csi.storage.k8s.io/pod.name\":\"kafka-controller-1\",\"csi.storage.k8s.io/pod.namespace\":\"mynamespace\",\"csi.storage.k8s.io/pod.uid\":\"15ad7a06-02f3-4bed-bfed-b3d2e6ed99c0\",\"csi.storage.k8s.io/pv/name\":\"pvc-01296b36-9a4f-45c2-a80e-1007cc03a20a\",\"csi.storage.k8s.io/pvc/name\":\"data-kafka-controller-1\",\"csi.storage.k8s.io/pvc/namespace\":\"mynamespace\",\"csi.storage.k8s.io/serviceAccount.name\":\"kafka\",\"deduplication\":\"OFF\",\"description\":\"Volume for PVC {pvc}\",\"fsType\":\"xfs\",\"hostEncryption\":\"true\",\"hostEncryptionSecretName\":\"storage-encryption-passphrase\",\"hostEncryptionSecretNamespace\":\"hpe-storage\",\"root\":\"my-data/mynodename\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1728286278715-3559-csi.hpe.com\",\"sync\":\"STANDARD\",\"targetScope\":\"volume\",\"volblocksize\":\"8K\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"my-data_mynodename_pvc-01296b36-9a4f-45c2-a80e-1007cc03a20a\"}" file="utils.go:70"
time="2024-10-07T07:40:42Z" level=info msg="NodePublishVolume requested volume my-data_mynodename_pvc-01296b36-9a4f-45c2-a80e-1007cc03a20a with access type mount, targetPath /var/lib/kubelet/pods/15ad7a06-02f3-4bed-bfed-b3d2e6ed99c0/volumes/kubernetes.io~csi/pvc-01296b36-9a4f-45c2-a80e-1007cc03a20a/mount, capability mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > , publishContext map[accessProtocol:iscsi discoveryIps:192.168.150.241 fsCreateOptions: fsMode: fsOwner: fsType:xfs lunId:0 readOnly:false serialNumber:6589cfc000000eb6d21c8877e4eb9ed0 targetNames:iqn.2011-08.org.truenas.ctl:pvc-01296b36-9a4f-45c2-a80e-1007cc03a20a targetScope:volume volumeAccessMode:mount] and volumeContext map[allowOverrides:sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace compression:LZ4 csi.storage.k8s.io/ephemeral:false csi.storage.k8s.io/pod.name:kafka-controller-1 csi.storage.k8s.io/pod.namespace:mynamespace csi.storage.k8s.io/pod.uid:15ad7a06-02f3-4bed-bfed-b3d2e6ed99c0 csi.storage.k8s.io/pv/name:pvc-01296b36-9a4f-45c2-a80e-1007cc03a20a csi.storage.k8s.io/pvc/name:data-kafka-controller-1 csi.storage.k8s.io/pvc/namespace:mynamespace csi.storage.k8s.io/serviceAccount.name:kafka deduplication:OFF description:Volume for PVC {pvc} fsType:xfs hostEncryption:true hostEncryptionSecretName:storage-encryption-passphrase hostEncryptionSecretNamespace:hpe-storage root:my-data/mynodename storage.kubernetes.io/csiProvisionerIdentity:1728286278715-3559-csi.hpe.com sync:STANDARD targetScope:volume volblocksize:8K volumeAccessMode:mount]" file="node_server.go:833"
time="2024-10-07T07:40:44Z" level=info msg="Successfully published the volume my-data_mynodename_pvc-01296b36-9a4f-45c2-a80e-1007cc03a20a to the target path /var/lib/kubelet/pods/15ad7a06-02f3-4bed-bfed-b3d2e6ed99c0/volumes/kubernetes.io~csi/pvc-01296b36-9a4f-45c2-a80e-1007cc03a20a/mount" file="node_server.go:893"
time="2024-10-07T07:40:44Z" level=info msg="GRPC response: {}" file="utils.go:75"
datamattsson commented 4 weeks ago
time="2024-10-07T07:40:20Z" level=error msg="process with pid : 100 finished with error = exit status 2" file="cmd.go:63"

This we need to investigate. Can you turn on tracing for the CSI driver? If you're installing with the chart, use --set hpe-csi-driver.logLevel=trace (there's a slight chance this will fail, come to think of it; if it does, edit the ds/hpe-csi-driver and change the environment variable manually).
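
For reference, a minimal sketch of both approaches. The Helm release name, namespace, DaemonSet name (hpe-csi-node) and LOG_LEVEL environment variable are assumptions from a typical install; only the hpe-csi-driver.logLevel value comes from the comment above, so check the names in your deployment first.

# Option 1: set the chart value and let Helm roll the driver pods
helm upgrade truenas-csp truenas-csp/truenas-csp -n hpe-storage \
  --reuse-values --set hpe-csi-driver.logLevel=trace

# Option 2 (fallback): set the log level environment variable on the node DaemonSet directly
# (applies the variable to all containers in the DaemonSet, which is fine for a debug session)
kubectl -n hpe-storage set env ds/hpe-csi-node LOG_LEVEL=trace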

While not ideal perhaps, are you getting the same issues if you would disable volume encryption?
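
If it helps, a sketch of a throwaway PVC that tests this without touching the StorageClass, assuming the csi.hpe.com/hostEncryption PVC-annotation override is honored here because hostEncryption is listed in allowOverrides (PVC name, namespace and size are made up for the test):

kubectl apply -n mynamespace -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: encryption-off-test        # throwaway PVC just for this experiment
  annotations:
    csi.hpe.com/hostEncryption: "false"   # override the StorageClass default via allowOverrides
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: hpe-storageclass
EOF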

santimar commented 3 weeks ago

These are the logs with tracing enabled. They were recorded while a volume was failing to mount on this node. As you can see, it took a while before anything moved:

Normal  SuccessfulAttachVolume  17 mins ago AttachVolume.Attach succeeded for volume "pvc-bf2a8a72-f705-4fa5-a81a-011f1a4a9cbe"
Warning FailedAttachVolume (5)  19 mins ago AttachVolume.Attach failed for volume "pvc-bf2a8a72-f705-4fa5-a81a-011f1a4a9cbe" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
Warning FailedAttachVolume (7)  27 mins ago AttachVolume.Attach failed for volume "pvc-bf2a8a72-f705-4fa5-a81a-011f1a4a9cbe" : rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:my-data_mynodename_pvc-bf2a8a72-f705-4fa5-a81a-011f1a4a9cbe:42abc0cc-4f78-fe1a-fb6b-94e2f83ad6ec
Normal  Scheduled           28 mins ago Successfully assigned prometheus/prometheus-kube-prometheus-stack-prometheus-0 to c3
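
For completeness, a sketch of how events like these can be pulled for a stuck pod; the namespace and pod name are taken from the Scheduled event above, and the kubectl flags are standard:

# Show the full events section for the pod that is stuck attaching its volume
kubectl -n prometheus describe pod prometheus-kube-prometheus-stack-prometheus-0

# Or list only the events referencing that pod, newest last
kubectl -n prometheus get events \
  --field-selector involvedObject.name=prometheus-kube-prometheus-stack-prometheus-0 \
  --sort-by=.lastTimestamp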

csi-node.log

I didn't try with volume encryption disabled, but I can give it a try if you think that test would be helpful.

datamattsson commented 3 weeks ago

The node is receiving the publish request:

time="2024-10-11T15:22:11Z" level=info msg="GRPC request: {\"publish_context\":{\"accessProtocol\":\"iscsi\",\"discoveryIps\":\"192.168.150.241\",\"fsCreateOptions\":\"\",\"fsMode\":\"\",\"fsOwner\":\"\",\"fsType\":\"xfs\",\"lunId\":\"0\",\"readOnly\":\"false\",\"serialNumber\":\"6589cfc000000121ee11514b81d73107\",\"targetNames\":\"iqn.2011-08.org.truenas.ctl:pvc-bf2a8a72-f705-4fa5-a81a-011f1a4a9cbe\",\"targetScope\":\"volume\",\"volumeAccessMode\":\"mount\"},\"secrets\":\"***stripped***\",\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.hpe.com/4288dff2a734802703ed6dea085e5eab651243750af7f2fae9ecab4b4c3b7159/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"allowOverrides\":\"sparse,compression,deduplication,sync,description,hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace\",\"compression\":\"LZ4\",\"csi.storage.k8s.io/pv/name\":\"pvc-bf2a8a72-f705-4fa5-a81a-011f1a4a9cbe\",\"csi.storage.k8s.io/pvc/name\":\"prometheus-kube-prometheus-stack-prometheus-db-prometheus-kube-prometheus-stack-prometheus-0\",\"csi.storage.k8s.io/pvc/namespace\":\"prometheus\",\"deduplication\":\"OFF\",\"description\":\"Volume for PVC {pvc}\",\"fsType\":\"xfs\",\"hostEncryption\":\"true\",\"hostEncryptionSecretName\":\"storage-encryption-passphrase\",\"hostEncryptionSecretNamespace\":\"hpe-storage\",\"root\":\"my-data/mynodename\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1728659407371-4141-csi.hpe.com\",\"sync\":\"STANDARD\",\"targetScope\":\"volume\",\"volblocksize\":\"8K\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"my-data_mynodename_pvc-bf2a8a72-f705-4fa5-a81a-011f1a4a9cbe\"}" file="utils.go:70"

... and completes it a few seconds later.

time="2024-10-11T15:22:22Z" level=info msg="Successfully published the volume my-data_mynodename_pvc-bf2a8a72-f705-4fa5-a81a-011f1a4a9cbe to the target path /var/lib/kubelet/pods/19add800-a426-4f53-81f3-6907d9687ec8/volumes/kubernetes.io~csi/pvc-bf2a8a72-f705-4fa5-a81a-011f1a4a9cbe/mount" file="node_server.go:893"

So this points us to the CSI controller: something is stalling in the control plane. The cmd.go error was a red herring altogether and was expected, so it's not the encryption.

Can you grab the CSI controller logs with tracing on too for a failed request? It's the "hpe-csi-driver" container in the deploy/hpe-csi-controller Pod.
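
A sketch of grabbing those logs, assuming the driver runs in the hpe-storage namespace (deployment and container names as stated above):

# Capture the controller-side driver log covering a failed attach
kubectl -n hpe-storage logs deploy/hpe-csi-controller -c hpe-csi-driver \
  --since=1h > hpe-csi-controller_hpe-csi-driver.log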

santimar commented 3 weeks ago

hpe-csi-controller-7466fcdf9-fc8fl_hpe-csi-driver.log

datamattsson commented 3 weeks ago

I see a lot of these:

time="2024-10-11T15:20:10Z" level=trace msg="Request: action=POST path=http://truenas-csp-svc:8080/containers/v1/hosts" file="client.go:173"
time="2024-10-11T15:20:13Z" level=trace msg="response: 200 OK, length=261" file="client.go:224"
time="2024-10-11T15:20:13Z" level=debug msg="Received a null reader. That is not expected." file="client.go:245"

It's strange that it keeps POST'ing the host and repeatedly gets nothing useful back. Do you have anything in the CSP log that corresponds to all these POSTs?
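
One way to pull the CSP-side entries that line up with those POSTs, assuming the CSP runs as a deployment named truenas-csp in the hpe-storage namespace (names inferred from the pod name later in this thread, so treat them as assumptions):

# Grep the CSP log for the host POSTs with a few lines of context around each hit
kubectl -n hpe-storage logs deploy/truenas-csp --since=1h \
  | grep -B2 -A8 "containers/v1/hosts"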

santimar commented 3 weeks ago

I don't see anything too strange; this POST seems to behave like it should:

Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG Last backend requests Response: 200 OK
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG API Key detected. Will use token authentication.
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG TrueNAS GET request URI: core/ping
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG TrueNAS response: "pong"
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG API fetch caught 1 item
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG HPE CSI Request <==============================>
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG          uri: http://truenas-csp-svc:8080/containers/v1/hosts
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG         body: {'name': 'c1', 'uuid': 'd5df59b4-6c61-608a-53f8-a229a64d7bf0', 'iqns': ['iqn.2024-10.mynodename:c1'], 'networks': ['192.168.150.45/24', '10.42.0.0/32', '10.42.0.1/24']}
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG        query: 
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG       method: POST
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG content_type: application/json
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG      headers: {"HOST": "truenas-csp-svc:8080", "USER-AGENT": "Go-http-client/1.1", "CONNECTION": "close", "CONTENT-LENGTH": "157", "ACCEPT": "application/json", "CONTENT-TYPE": "application/json", "X-ARRAY-IP": "truenas.internaldomain.com", "X-AUTH-TOKEN": "*****", "ACCEPT-ENCODING": "gzip"}
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG API Key detected. Will use token authentication.
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG TrueNAS GET request URI: iscsi/initiator
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG TrueNAS response: [
 {
  "id": 9,
  "initiators": [
   "iqn.1993-08.org.debian:01:4efdaa48c143"
  ],
  "comment": "f2935c49-4f82-6c2e-2fdd-9619e03313a6"
 },
 {
  "id": 16,
  "initiators": [
   "iqn.1993-08.org.debian:01:4efdaa48c143"
  ],
  "comment": "bd3aae40-7aed-0b7a-7262-cfae9a3ec6a6"
 },

 <other ~750 lines>

 {
  "id": 467,
  "initiators": [
   "iqn.2024-10.mynodename:c1"
  ],
  "comment": "pvc-58af381b-ec0a-491d-845b-e84d5fd22bc6"
 }
]
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG Looking for field=comment and value=d5df59b4-6c61-608a-53f8-a229a64d7bf0
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG Looking for field=comment and value=d5df59b4-6c61-608a-53f8-a229a64d7bf0

 <the same "Looking for field=comment and value=d5df59b4-6c61-608a-53f8-a229a64d7bf0" line repeated 110 times in total>

Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG API fetch caught 1 item
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG API Key detected. Will use token authentication.
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG TrueNAS GET request URI: system/version
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG TrueNAS response: "TrueNAS-SCALE-24.04.2.2"
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG API fetch caught 1 item
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG Version: TrueNAS-SCALE-24.04.2.2
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG API Key detected. Will use token authentication.
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG TrueNAS PUT request URI: iscsi/initiator/id/459
Fri, 11 Oct 2024 19:27:00 +0000 backend DEBUG TrueNAS request: {'comment': 'd5df59b4-6c61-608a-53f8-a229a64d7bf0', 'initiators': ['iqn.2024-10.mynodename:c1']}
Fri, 11 Oct 2024 19:27:03 +0000 backend DEBUG TrueNAS response: {'id': 459, 'initiators': ['iqn.2024-10.mynodename:c1'], 'comment': 'd5df59b4-6c61-608a-53f8-a229a64d7bf0'}
Fri, 11 Oct 2024 19:27:03 +0000 backend INFO Initiator updated: d5df59b4-6c61-608a-53f8-a229a64d7bf0
Fri, 11 Oct 2024 19:27:03 +0000 backend DEBUG CSP response: {"id": 459, "name": "d5df59b4-6c61-608a-53f8-a229a64d7bf0", "uuid": "d5df59b4-6c61-608a-53f8-a229a64d7bf0", "iqns": ["iqn.2024-10.mynodename:c1"], "networks": ["192.168.150.45/24", "10.42.0.0/32", "10.42.0.1/24"], "chap_user": "", "chap_password": "", "wwpns": []}
Fri, 11 Oct 2024 19:27:03 +0000 backend INFO Host initiator created: d5df59b4-6c61-608a-53f8-a229a64d7bf0
Fri, 11 Oct 2024 19:27:03 +0000 backend DEBUG Falcon Response (to HPE CSI): 200 OK
Fri, 11 Oct 2024 19:27:03 +0000 backend DEBUG Last backend requests Response: 200 OK
Fri, 11 Oct 2024 19:27:03 +0000 backend DEBUG API Key detected. Will use token authentication.
Fri, 11 Oct 2024 19:27:03 +0000 backend DEBUG TrueNAS GET request URI: core/ping
Fri, 11 Oct 2024 19:27:03 +0000 backend DEBUG TrueNAS response: "pong"

A suspicious thing may be the following:

Fri, 11 Oct 2024 19:26:38 +0000 backend DEBUG API Key detected. Will use token authentication.
Fri, 11 Oct 2024 19:26:38 +0000 backend DEBUG TrueNAS GET request URI: core/ping
Fri, 11 Oct 2024 19:26:38 +0000 backend DEBUG TrueNAS response: "pong"
Fri, 11 Oct 2024 19:26:38 +0000 backend DEBUG API fetch caught 1 item
Fri, 11 Oct 2024 19:26:38 +0000 backend DEBUG HPE CSI Request <==============================>
Fri, 11 Oct 2024 19:26:38 +0000 backend DEBUG          uri: http://truenas-csp-svc:8080/containers/v1/volumes/my-data_mynodename_pvc-58af381b-ec0a-491d-845b-e84d5fd22bc6
Fri, 11 Oct 2024 19:26:38 +0000 backend DEBUG         body: None
Fri, 11 Oct 2024 19:26:38 +0000 backend DEBUG        query: 
Fri, 11 Oct 2024 19:26:38 +0000 backend DEBUG       method: GET
Fri, 11 Oct 2024 19:26:38 +0000 backend DEBUG content_type: application/json
Fri, 11 Oct 2024 19:26:38 +0000 backend DEBUG      headers: {"HOST": "truenas-csp-svc:8080", "USER-AGENT": "Go-http-client/1.1", "CONNECTION": "close", "ACCEPT": "application/json", "CONTENT-TYPE": "application/json", "X-ARRAY-IP": "truenas-.internaldomain.com", "X-AUTH-TOKEN": "*****", "ACCEPT-ENCODING": "gzip"}
Fri, 11 Oct 2024 19:26:38 +0000 backend DEBUG API Key detected. Will use token authentication.
Fri, 11 Oct 2024 19:26:38 +0000 backend DEBUG TrueNAS GET request URI: pool/dataset
Fri, 11 Oct 2024 19:26:40 +0000 backend DEBUG TrueNAS response: [
 {

The response is ~35k lines long.

Is there any limitation on the response size that the system can handle?

datamattsson commented 3 weeks ago

Is there any limitation on the response size that the system can handle?

I'm sure there is but I've not seen anything hitting any boundaries. There are some parameters to tighten up the response for pool/dataset but I haven't gotten to those optimizations. If we were to theorize around this, could your symptoms be related to how many datasets you have?
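
If it helps to quantify that, a rough sketch against the TrueNAS REST API, assuming an API key in $TRUENAS_API_KEY and the hostname used elsewhere in this thread; jq is only used for counting:

# How many datasets/zvols does pool/dataset return, and how big is the payload?
curl -sk -H "Authorization: Bearer $TRUENAS_API_KEY" \
  "https://truenas.internaldomain.com/api/v2.0/pool/dataset" > datasets.json

jq 'length' datasets.json                                      # total dataset count
jq '[.[] | select(.type == "VOLUME")] | length' datasets.json  # zvols only
wc -c datasets.json                                            # response size in bytes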

santimar commented 3 weeks ago

While I can't be sure that this is the problem, I saw that the VolumeAttachFailure is more likely to occur when:

I'm also leaving here a complete CSP log file in case I missed something: truenas-csp-7b864cf6-bdv47_truenas-csp (1).log

datamattsson commented 3 weeks ago

I have seen trouble with parallelism. An example is running multiple e2e test suites in parallel on a single cluster: it fails 100% of the time with TrueNAS and succeeds 100% of the time on any other CSP (like Nimble). Whether this is a TrueNAS REST API issue or a CSP issue I haven't isolated yet.

I'm just baffled that there are no obvious bugs producing an error that gives any sensible clues.

santimar commented 3 weeks ago

OK, next week I'll try to rearrange my deployment pipeline to avoid parallel volume mounts and I'll report back.
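
As a rough illustration of serializing the rollouts (release names and chart paths below are placeholders), the idea is to wait for each release to become ready before starting the next one:

# Install/upgrade releases one at a time; --wait blocks until pods (and their PVC mounts) are ready
for release in minio kafka prometheus; do
  helm upgrade --install "$release" ./charts/"$release" -n mynamespace --wait --timeout 15m
done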

santimar commented 3 weeks ago

I've reordered the deployment of my charts to mount a single PV at a time, but I didn't get any improvement.