Closed: sogos closed this issue 6 months ago
Oh, I just tried renaming the "undocumented" field on the PV:

volumeHandle: s3-csi-driver-volume

to something different on each PV, and it worked! (e.g. s3-csi-driver-volume-front)
Keeping this issue for documentation :)
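To make the fix concrete, here is a minimal sketch of two static PVs with distinct `volumeHandle` values. The PV names, bucket names, capacity, and the `-front`/`-back` suffixes are illustrative assumptions, not the reporter's actual config:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv-front            # hypothetical name
spec:
  capacity:
    storage: 1200Gi            # required by Kubernetes, not enforced by the driver
  accessModes:
    - ReadWriteMany
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-driver-volume-front   # must be unique across all PVs
    volumeAttributes:
      bucketName: my-front-bucket              # hypothetical bucket
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv-back             # hypothetical name
spec:
  capacity:
    storage: 1200Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-driver-volume-back    # unique handle for the second PV
    volumeAttributes:
      bucketName: my-back-bucket               # hypothetical bucket
```

With an identical `volumeHandle` on both PVs, Kubernetes treats them as the same volume, which matches the silent-drop behavior described below.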
Glad you figured out the fix here. The CSI persistent volume fields are documented here for reference. We'll add some better documentation/examples on our end as well. Kubernetes seemingly silently drops volumes with duplicate IDs and does not pass them on to the driver, so the error reporting is not helpful.
Thanks for the example !!! :heart:
/kind bug
What happened?
Tried to mount two different buckets in the same pod:
Pod volume mounts
Pod Volumes
If I comment out one or the other of the volumeMounts, it's okay: the pod starts correctly.
What you expected to happen?
I expected the Pod to run with two S3 mount points attached :)
How to reproduce it (as minimally and precisely as possible)?
Create two PVs (a different bucket for each) and two PVCs, then try to mount both in a pod.
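The repro steps above can be sketched like this, assuming statically provisioned PVs named `s3-pv-front` and `s3-pv-back` already exist (all names, paths, and the image are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-pvc-front
spec:
  accessModes: [ReadWriteMany]
  storageClassName: ""          # static provisioning, no dynamic class
  resources:
    requests:
      storage: 1200Gi
  volumeName: s3-pv-front       # binds the first PV
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-pvc-back
spec:
  accessModes: [ReadWriteMany]
  storageClassName: ""
  resources:
    requests:
      storage: 1200Gi
  volumeName: s3-pv-back        # binds the second PV
---
apiVersion: v1
kind: Pod
metadata:
  name: s3-two-buckets
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: front
          mountPath: /data/front   # first bucket mounted here
        - name: back
          mountPath: /data/back    # second bucket mounted here
  volumes:
    - name: front
      persistentVolumeClaim:
        claimName: s3-pvc-front
    - name: back
      persistentVolumeClaim:
        claimName: s3-pvc-back
```

If both referenced PVs share the same `volumeHandle`, only one of the two mounts reaches the driver.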
Anything else we need to know?:
I also use the Secrets Store CSI Driver with its AWS provider.
In the logs of the S3 CSI driver controller, only one of the two volumes is detected/mounted.
Environment
Kubernetes version (use kubectl version): 1.28 (EKS)