Closed · guangbochen closed this 3 weeks ago
Both NeuVector and Epinio require RWX.
What is the status of this feature request? The Longhorn dependency seems to have been implemented already, if I read that issue correctly.
We are evaluating whether we can use the Harvester CSI Driver as our only storage option or whether we need to deploy additional storage options; that decision depends mostly on the timeline for this feature.
Wondering this too. Harvester 1.1.2 uses Longhorn 1.3.2, which supports RWX, and I can create an RWX volume in the Longhorn UI. I installed nfs-common on the worker nodes according to the Longhorn documentation and tried to create a ReadWriteMany PVC, but the harvester-csi-driver:0.1.1600 driver doesn't seem to allow it. The check still appears to be present in master: https://github.com/harvester/harvester-csi-driver/blob/509123316e6150307e0f9c39b0eaef3e678ad914/pkg/csi/controller_server.go#L426C8-L426C70
Creating through Rancher 2.7.6, I get the message:

```
failed to provision volume with StorageClass "harvester": rpc error: code = InvalidArgument desc = access mode MULTI_NODE_MULTI_WRITER is not supported
```
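For reference, a PVC along these lines (the name here is illustrative) is what triggers that error when provisioned through the Harvester CSI driver:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data          # hypothetical name
spec:
  accessModes:
    - ReadWriteMany          # rejected by harvester-csi-driver before RWX support
  storageClassName: harvester
  resources:
    requests:
      storage: 10Gi
```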
Hi @staedter, @egrist. I thought that would be introduced with Harvester v1.3.0. Longhorn already supports RWX volumes, so we will start working on it!
Hello! Can you please advise if there is any workaround for this issue? How can I create a ReadWriteMany (RWX) Persistent Volume for an RKE2 cluster deployed in Harvester?
Bump
I am running Harvester 1.3. Is there any way to create RWX volumes on a k8s cluster provisioned on Harvester?
Greetings everyone, are there any updates on this?
We've just hit this issue. We are currently running 1.2.1 and were looking at the upgrade path, but if RWX still doesn't work I'm not sure we want to rush things. What's the state of play? It took us a while to spot what was happening here. For anyone else who assumed ReadWriteMany would work: describing your PVC will tell you what you need, e.g.

```
kubectl describe persistentvolumeclaim <volume>
```

Check the events for:

```
failed to provision volume with StorageClass "harvester": rpc error: code = InvalidArgument desc = access mode MULTI_NODE_MULTI_WRITER is not supported
```
As mentioned above, we worked around this by provisioning an NFS VM in Harvester and then mounting NFS volumes into our cluster, but that means running additional VMs and an extra abstraction over the filesystem to support a feature that Longhorn already provides... frustrating.
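For anyone following the same workaround, a sketch of what the static NFS mount can look like (the server address, export path, and names below are placeholders for your own NFS VM):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-shared           # hypothetical
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.50        # placeholder: the NFS VM's address
    path: /exports/shared    # placeholder: the exported directory
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-shared
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""       # bind to the static PV above, not a provisioner
  volumeName: nfs-shared
  resources:
    requests:
      storage: 50Gi
```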
Any news?
Hi folks, sorry for the late update. This feature is planned for the v1.4.0. We are currently working on it.
Hi @web-engineer
> As mentioned above we worked around this by provisioning an NFS VM in harvester and then mounting NFS volumes into our cluster - but it means running additional VMs and abstraction to the filesystem to support a feature that longhorn already provides... frustrating...
Did you mean that you provision the NFS VM and the guest cluster VMs mount this NFS endpoint for the workload pods? Or, in your case, if the VM is on Harvester, does it need to use the NFS RWX volume?
I've got RWX working when the volume comes from NFS; however, I expected volumes to be mountable as RWX without running another service, since Longhorn supports this more "natively".
- [x] If labeled: require/HEP Has the Harvester Enhancement Proposal PR been submitted? The HEP PR is at: https://github.com/harvester/harvester/pull/5861
- [x] Where are the reproduce steps/test steps documented? The reproduce steps/test steps are at:
Test plan:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-rwx
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
  fromBackup: ""
  fsType: "ext4"
  nfsOptions: "vers=4.2,noresvport,softerr,timeo=600,retrans=5"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rwx-sc
provisioner: driver.harvesterhci.io
allowVolumeExpansion: false
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  hostStorageClass: longhorn-rwx
```
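To exercise the classes above, a guest-cluster PVC along these lines (the name is illustrative) should provision an RWX volume backed by the host's longhorn-rwx class:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-test             # hypothetical
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: rwx-sc
  resources:
    requests:
      storage: 5Gi
```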
Reference the test plan on RWX support and stability improvement
None
- [x] Has the backend code been merged (harvester, harvester-installer, etc.) (including backport-needed/*)? The PR is at: https://github.com/harvester/harvester-csi-driver/pull/43
- [x] Does the PR include the explanation for the fix or the feature?
- [ ] Does the PR include a deployment change (YAML/Chart)? If so, where are the PRs for both the YAML file and the Chart? The PR for the YAML change is at: TBD The PR for the chart change is at:
- [ ] If labeled: area/ui Has the UI issue been filed or is it ready to be merged? The UI issue/PR is at:
- [ ] If labeled: require/doc, require/knowledge-base Has the necessary document PR been submitted or merged? The documentation/KB PR is at: TBD
- [ ] ~~If NOT labeled: not-require/test-plan Has the e2e test plan been merged? Have QAs agreed on the automation test case? If only a test case skeleton w/o implementation, have you created an implementation issue?~~
- [ ] ~~If the fix introduces code for backward compatibility, has a separate issue been filed with the label release/obsolete-compatibility? The compatibility issue is filed at:~~
Automation e2e test issue: harvester/tests#1486
Watching with enthusiasm... If you have a pod 1.0 and you update to 2.0, and there is a PVC involved, 2.0 has to first get access to the PVC before it can become ready, while 1.0 will not release the PVC and terminate until 2.0 is ready. Locked they are, forever waiting on each other... when using RWO.
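For what it's worth, the usual way around that RWO rolling-update deadlock (independent of this feature) is to have the Deployment stop the old pod before starting the new one, so the claim is released first. A minimal sketch, with hypothetical workload and claim names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app                  # hypothetical workload
spec:
  replicas: 1
  strategy:
    type: Recreate           # old pod releases the RWO PVC before the new pod starts
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: app:2.0     # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: app-data   # hypothetical RWO PVC
```

With an RWX volume, the default RollingUpdate strategy works too, since both pods can hold the claim at once.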
Hi @lknite, could you explain this more?
Are you describing the RWO case when updating a pod from 1.0 to 2.0?
Verified fixed on v1.4.0-rc2 with Rancher v2.8.8 (csi-driver 0.1.19) and v2.9.2 (csi-driver 0.2.0).
Closing this issue.
Hi @TachunLin Did you replace the Harvester CSI driver? We did not bump the new harvester-csi-driver chart, so you need to replace it manually.
UPDATE: The corresponding version of the Harvester CSI driver should be v0.2.0.
Thanks!
Thanks Vicente for the reminder. After setting the repository and upgrading the csi-driver to 0.1.19 (with the 0.2.0 image), we can correctly create the RWX volume, attach it to multiple pods, and write files accordingly.
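The multi-pod check described above can be reproduced with two pods sharing one claim (all names and the claim below are illustrative; any RWX-mode PVC works):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rwx-writer
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/test.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shared-rwx   # placeholder: any ReadWriteMany PVC
---
apiVersion: v1
kind: Pod
metadata:
  name: rwx-reader
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shared-rwx   # same claim, mounted by both pods
```

If the file written by rwx-writer is readable from rwx-reader, the volume is genuinely multi-writer.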
Describe the bug
Failed to deploy NeuVector on a guest k8s cluster spun up by the Harvester RKE2 node driver.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
NeuVector is able to come up and run successfully.
Support bundle
Environment:
Additional context