brngates98 opened 9 months ago
The error you're seeing is because the worker node is unable to find the block device on the host. Before anything else, is this iSCSI or FC?
If iSCSI: verify the worker nodes have IP connectivity to the array's iSCSI discovery and data addresses, and that iscsid and multipathd are running on the nodes.
If FC: verify the HBAs are zoned to the array and the LUN is visible on the host.
We are using iSCSI. The VMs for my cluster are on subnet 192.168.70.x (VLAN 70); our iSCSI connections are on 192.168.221.x and 192.168.222.x (VLANs 221/222).
The ESXi hosts have a direct connection and everything works fine there. The virtual machines that run our Kubernetes cluster seem to be creating new volumes on the Nimbles, and even attaching to them (I think), as the volumes show as online on the Nimbles. Everything is routable between the subnets.
> our iSCSI connection is on 192.168.221.x and 192.168.222.x (VLANs 221/222)
The VMs need in-guest network interfaces on those VLANs. Creating the volume is initially a control-plane-only operation against the array's management interface, so it completes successfully over your VLAN 70; attaching and mounting the volume requires the iSCSI data path.
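On Ubuntu 22.04 nodes the extra interfaces can be configured with netplan. A minimal sketch, assuming the new NICs enumerate as ens224 and ens256 and that static addressing is used (interface names, addresses, and MTU are placeholders, not values from this thread):

```yaml
# /etc/netplan/60-iscsi.yaml -- hypothetical example, adjust to your environment
network:
  version: 2
  ethernets:
    ens224:                        # assumed NIC on VLAN 221
      addresses: [192.168.221.50/24]
      mtu: 9000                    # only if jumbo frames are enabled end to end
    ens256:                        # assumed NIC on VLAN 222
      addresses: [192.168.222.50/24]
      mtu: 9000
```

Note that no gateway is set on the data interfaces, so iSCSI traffic stays on its own subnets; apply with `sudo netplan apply`.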
Ohhhh, so I need to add secondary NICs to our VMs that place them onto the iSCSI VLANs then, eh? Now I just feel dumb :)
So I am playing with the HPE CSI driver on our Nimbles. I thought I had everything configured right: it creates the volumes on the Nimbles and they bind in Kubernetes, but they fail to mount to pods.
We are already using the VMware CSI and an SMB CSI driver without issue, so I'm not entirely sure what I am doing wrong here:
Storage Class YAML:
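For reference, a typical StorageClass for the HPE CSI driver looks like the sketch below; the secret name and namespace (hpe-backend / hpe-storage) and the fstype are assumptions that must match your installation:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nimble-standard            # hypothetical name
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  # The secret references below point at the backend credentials created
  # when the driver was installed; name and namespace are assumptions.
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
reclaimPolicy: Delete
allowVolumeExpansion: true
```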
Any ideas?
The end goal is to build a StorageClass for both of our Nimbles and, once things are working, to play with the NFS Server Provisioner so we can make use of some RWX volumes.
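For the RWX piece, the HPE CSI driver enables its NFS Server Provisioner per StorageClass via the nfsResources parameter; a minimal sketch, reusing the assumed secret names from the class above:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nimble-nfs                 # hypothetical name
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  nfsResources: "true"             # deploy NFS resources in front of the volume
reclaimPolicy: Delete
---
# A claim against that class can then request ReadWriteMany:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data                # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  storageClassName: nimble-nfs
```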
DETAILS:
Nodes: Ubuntu Server 22.04.3 LTS
Kubernetes: RKE2 1.27
Provisioned by: Rancher