kubernetes / cloud-provider-openstack

Apache License 2.0

[manila-csi-plugin] Does the Manila CSI plugin support Access Mode ReadWriteOnce? #2564

Closed: syy6 closed this issue 6 days ago

syy6 commented 5 months ago

Hi, in our environment we create a Manila PV with Access Mode ReadWriteOnce as below:

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: nfs.manila.csi.openstack.org
    volume.kubernetes.io/provisioner-deletion-secret-name: manila-csi-plugin
    volume.kubernetes.io/provisioner-deletion-secret-namespace: kube-system
  finalizers:
  - kubernetes.io/pv-protection
  name: XXXXX
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 20Gi

and we don't get any error message. But per our tests, ReadWriteOnce doesn't really take effect: while Pod A on Node M is still in Terminating status, another Pod B on Node N can already mount the volume and run.

We checked the code: VolumeCapability_AccessMode_SINGLE_NODE_WRITER (ReadWriteOnce) is mentioned, but there seems to be no real logic enforcing it in the manila-csi-plugin driver.

We also noticed a document from OpenShift stating that OpenStack Manila actually only supports ReadWriteOncePod and ReadWriteMany.
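
For reference, a claim using one of the modes that document lists as supported might look like the sketch below (the claim name and storage class are hypothetical, not taken from our environment):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manila-share-claim          # hypothetical name
spec:
  accessModes:
  - ReadWriteOncePod                # per the OpenShift document, a supported mode
  resources:
    requests:
      storage: 20Gi
  storageClassName: csi-manila-nfs  # assumed storage class; adjust for your cluster

ReadWriteOncePod is enforced by Kubernetes itself (the scheduler admits only one pod per claim), so it does not rely on the CSI driver tracking attachments.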

Could you please help to check whether the Manila CSI plugin supports Access Mode ReadWriteOnce? If not, is there any other way we can achieve the same behavior? Thanks!

jichenjc commented 5 months ago

There are some issues opened before about this, but I am guessing ReadWriteOnce is not supported. I am not sure whether OpenShift made the improvement directly downstream instead of upstream.

@gman0 @zetaab correct me if I am wrong

zetaab commented 5 months ago

We are not using Manila, so it is difficult to answer this issue. But in general Manila is ReadWriteMany.

gman0 commented 5 months ago

Hello @syy6, indeed the driver never supported this, and the mode validation is very relaxed for historical reasons.

We've never really seen convincing enough use cases to get this implemented, but in any case it shouldn't be too hard if you only need attachment tracking within the cluster. See https://github.com/kubernetes-csi/external-attacher
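
For context, attachment tracking is normally opted into through the driver's CSIDriver object plus the external-attacher sidecar. A minimal sketch of what that opt-in could look like, assuming the driver also implemented the ControllerPublishVolume/ControllerUnpublishVolume RPCs (which it does not today):

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: nfs.manila.csi.openstack.org
spec:
  # With attachRequired: true, Kubernetes creates VolumeAttachment objects,
  # and the external-attacher sidecar translates them into
  # ControllerPublishVolume / ControllerUnpublishVolume calls on the driver.
  attachRequired: true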

Note that the Manila service itself does not support attachments at the moment, and there would be nothing stopping other clients from accessing the share.

syy6 commented 5 months ago

Thanks @jichenjc @zetaab @gman0, we have a tricky issue here. We are using a ReplicaSet (with replicas = 1) for our service, and we set podAntiAffinity on the ReplicaSet; combined with access mode ReadWriteOnce, we hoped this would prevent simultaneous access to the PVC. But we see that multiple mounts of the Manila share can happen when an old Pod of the ReplicaSet is still in Terminating status while the new Pod has already started. In that window, the old and new Pods can write to the same file at the same time and the file might get corrupted (see the sketch below).
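
To make the failure mode concrete, a minimal sketch of the setup described above (all names, labels, and the image are hypothetical):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      affinity:
        podAntiAffinity:
          # Keep replicas on different nodes.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-service
            topologyKey: kubernetes.io/hostname
      containers:
      - name: app
        image: example/app:latest        # hypothetical image
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: manila-share-claim  # hypothetical PVC name

Note that podAntiAffinity only keeps the Pods on different nodes; it does not serialize the terminating old Pod and the starting new Pod, so both can have the share mounted at the same time.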

kayrus commented 5 months ago

What I'm curious about is whether we need to implement the changes listed here.

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/lifecycle stale

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/lifecycle rotten

k8s-triage-robot commented 6 days ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/close not-planned

k8s-ci-robot commented 6 days ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/cloud-provider-openstack/issues/2564#issuecomment-2321815967):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.