Hello, you need to ensure that PVCs are dynamically provisioned in the same zone as the VM (node) requesting the storage.
1- Use allowedTopologies in Your StorageClass:
You can configure your StorageClass with topology constraints so that PVs can be provisioned in different zones. Adding allowedTopologies restricts provisioning to the listed zones, so the PV ends up in a zone that matches the pod requesting it (see the example below).
2- Set volumeBindingMode to WaitForFirstConsumer:
Setting volumeBindingMode to WaitForFirstConsumer ensures that the PV is not created until a pod using the PVC is scheduled. Kubernetes then knows which zone the pod landed in and provisions the volume in that zone.
Example:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: bsu-sc
provisioner: bsu.csi.outscale.com
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
```
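To spell out the `allowedTopologies` block, here is a minimal sketch; the topology key `topology.bsu.csi.outscale.com/zone` and the zone values are assumptions, so check the labels your nodes actually carry (for example with `kubectl get nodes --show-labels`):

```yaml
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.bsu.csi.outscale.com/zone   # assumed key exposed by the BSU CSI driver
        values:
          - cloudgouv-eu-west-1a
          - cloudgouv-eu-west-1b
```

Note that with WaitForFirstConsumer alone the volume already follows the pod's zone; allowedTopologies additionally restricts which zones are allowed at all.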
Feel free to reach out if you still have any issues or need more help.
I will close this issue for now; feel free to reopen it if the problem persists.
/kind bug
Hello
What happened?
We declared some VMs in cloudgouv-eu-west-1a, declared a StorageClass, and installed the CSI driver. It worked well: PVCs created PVs on Outscale.
Recently, we declared a new VM, but in cloudgouv-eu-west-1b. PVCs still create their PVs in zone A. Because of the zone labels on the nodes and the nodeAffinity on the PVs, those PVs cannot be attached to that VM.
Even aside from the attach failure, all our PVs end up in zone A, so replicating across zones is currently impossible.
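For illustration, the nodeAffinity stamped on a dynamically provisioned PV pinned to zone A looks roughly like the excerpt below (the topology key shown is an assumption; the real one can be read with `kubectl get pv <name> -o yaml`):

```yaml
# Excerpt of a PV provisioned in zone A (illustrative; topology key assumed)
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.bsu.csi.outscale.com/zone   # assumed key set by the CSI driver
              operator: In
              values:
                - cloudgouv-eu-west-1a   # only nodes in this zone can attach the volume
```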
What you expected to happen?
PVs created from a PVC should be in the same zone as the node where the pod consuming the PVC is scheduled.
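One way to check this is to compare the zone label on each node (e.g. `kubectl get nodes -L topology.kubernetes.io/zone`, assuming the well-known zone label is set) with the zone of the BSU volume created for the PVC.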
How to reproduce it (as minimally and precisely as possible)?
Anything else we need to know?:
Environment
Kubernetes version (use `kubectl version`):

```
Client Version: v1.26.10
Kustomize Version: v4.5.7
Server Version: v1.24.8+rke2r1
```

Driver version:

```yaml
annotations:
  meta.helm.sh/release-name: osc-bsu-csi-driver
  meta.helm.sh/release-namespace: kube-system
labels:
  app.kubernetes.io/instance: osc-bsu-csi-driver
  app.kubernetes.io/managed-by: Helm
  app.kubernetes.io/name: osc-bsu-csi-driver
  app.kubernetes.io/version: v1.2.4
  helm.sh/chart: osc-bsu-csi-driver-1.5.0
```