Thanks for the inquiry. The CSP version depends on what the corresponding version of the HPE CSI Driver has been tested with. v1.3.0 is qualified and tested with Kubernetes 1.15 to 1.18. K8s 1.19 works, but PV expansion is broken, and 1.20 has not been tested at all AFAIK. The next version of the HPE CSI Driver (due for release in the next couple of weeks) will support 1.19 thoroughly, and I know 1.20 works, but it will not be tested and qualified until mid-2021.
For what it's worth I just tried with version 1.20 and I got some errors. I won't bother opening another issue for it because it's obviously unsupported as yet.
Thanks for kicking the tires @2fst4u and @bonetsm. It seems the next release, v1.4.0, is passing K8s 1.20 testing and will then be supported sooner than I anticipated.
I've just tried the current version of this CSP with HPE CSI Driver 1.4.0, and apart from changing the default namespace of the CSP, it worked in a VM with kubeadm 1.20.
@ishioni thanks for verifying. A new release (v1.4.0) is available and has been tested against Kubernetes 1.20.1.
Hmm, not sure why it didn't work for me. Was the namespace change required to get it to work? That might be what I didn't do, but I didn't look into it too much, assuming it was just incompatible.
If there's something I'm missing I'll try again and open a new issue to figure out what I'm doing wrong, now that we know it works.
@2fst4u yes, the CSI driver v1.4.0 deploys all components in the "hpe-storage" Namespace, and the CSPs and their respective Services and Secrets need to reside in the same Namespace. Also make sure the correct Namespace is called out in your StorageClass.
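In case it helps, here's a rough sketch of what that wiring looks like. Everything below (the Secret name "truenas-csp-secret", the class name, and the root value) is a placeholder, and the parameter list is abbreviated; the example manifests in this repo are the authoritative reference:

```shell
# Sketch only: Secret name, StorageClass name and root value are placeholders.
# The csi.storage.k8s.io/* keys are the standard CSI secret-reference parameters.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: truenas-iscsi
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/provisioner-secret-name: truenas-csp-secret
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: truenas-csp-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: truenas-csp-secret
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: truenas-csp-secret
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  root: tank/k8s
EOF
```

The point being that every *-secret-namespace parameter has to name the Namespace the Secret actually lives in.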
@datamattsson hmm, everything is in that namespace; it appears to be the default throughout, unless there's somewhere else I'm missing that needs changing.
My StorageClass is the same as the example given except for the "root", which I'm hoping I've named correctly, but I'm not sure where to validate this. I have it set as "[pool]/[dataset]".
My Secret is a copy/paste of the example, except with my own credentials added.
I'm sure I'm doing something wrong but I've got no clue what it is.
@2fst4u I'm sure it's something simple. There are some diagnostic steps you can follow here: https://scod.hpedev.io/csi_driver/diagnostics.html. What does the event log say on the PVC when you do a kubectl describe pvc/your-pvcname?
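For reference, a couple of commands along those lines (the PVC name is just a placeholder):

```shell
# Placeholder PVC name; substitute your own claim.
kubectl describe pvc/my-pvc
# Recent events in the driver Namespace usually point at the failing component.
kubectl get events -n hpe-storage --sort-by=.metadata.creationTimestamp
```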
I don't have a PVC yet because the containers that the install instructions create are continually in a backoff state. Is that my issue? Do I need a PVC before the containers will start properly? I was holding off because I was sure that was a step I needed to sort out first.
Ouch, if they're in CrashLoopBackOff there's usually something not quite right on the nodes themselves. What OS are you running on the worker nodes? What do the event logs on the Pods in the driver controllers say?
This may lead down the garden path to what I'm doing wrong, but it's Raspberry Pi OS (one node is arm64, the other arm). Might I be missing some key iSCSI components in that case?
What command are you after for those logs? I might be reading it wrong, but the logs command doesn't work for a node.
Yeah, full stop right there. We don't build ARM images for the CSI driver (that's what you'll find in the logs). I've brought it up with the team but it has not been prioritized. You can build the CSI driver and TrueNAS CSP image yourself on ARM if you're feeling adventurous. I just got an RPi so I might look into it.
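If you want to confirm it from the cluster side, something along these lines should show the mismatch (the pod name is a placeholder):

```shell
# Show the CPU architecture of each node; the released driver images are amd64-only.
kubectl get nodes -L kubernetes.io/arch
# An amd64 image on an ARM node typically dies with an "exec format error" in the container logs.
kubectl -n hpe-storage get pods
kubectl -n hpe-storage logs <crashing-pod-name> --previous
```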
You can imagine how hard I'm slapping my forehead. I was just about to check if the image supported arm.
Not to worry, x86 is necessary for a few other things I want to run so I'll just get a node to handle that eventually.
Cheers for putting up with the noob-ness. I'll watch the repo for maybe Arm one day 🤞
Thank you very much for your effort!
Has this been tested with Kubernetes versions > 1.18?
Specifically, I am referring to 1.20.1.