kubernetes-sigs / vsphere-csi-driver

vSphere storage Container Storage Interface (CSI) plugin
https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/index.html
Apache License 2.0

datastore migration / storage vmotion renamed volumes #23

Closed rust84 closed 4 years ago

rust84 commented 5 years ago

/kind feature

What happened:

I have successfully deployed services using the vSphere CSI driver on a local datastore. I am planning to migrate everything to an iSCSI LUN. When migrating the first node to the iSCSI datastore using Storage vMotion, the volumes are renamed after the node name.

(screenshot: volume files renamed after migration)

Now when a pod is restarted I get the following error because the volume path has changed

Warning  FailedAttachVolume  39s (x16 over 17m)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-8a0d07c8-7989-11e9-837e-005056bb3b01" : File []/vmfs/volumes/5ccd2965-c10c950c-feba-94c691af21f8/kubevols/kubernetes-dynamic-pvc-8a0d07c8-7989-11e9-837e-005056bb3b01.vmdk was not found

What you expected to happen:

I understand that this is a feature of Storage vMotion and not strictly a driver bug. Is there a procedure that will allow me to migrate my existing volumes to a new datastore? I have searched around and read the docs but am unable to find anything specific to my situation.

I have previously backed up the volumes using Velero (formerly Ark), so restoring may be an option if I am forced to recreate them. This is my lab cluster, so it is not critical for me. However, I would like to clarify the expected behavior for the future.
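For reference, a minimal sketch of the Velero backup/restore path mentioned above. This assumes Velero is already installed in the cluster with a working storage location; the namespace and backup names here are placeholders, not from the original report:

```shell
# Back up the namespace holding the PVCs (namespace name is an example)
velero backup create lab-backup --include-namespaces my-app

# After recreating the workloads/volumes on the new datastore,
# restore the namespace contents from that backup
velero restore create --from-backup lab-backup
```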

How to reproduce it (as minimally and precisely as possible):

  1. Create a pod that depends on a PVC using the vSphere storage class.
  2. Migrate the node's storage to a different datastore with vMotion (storage only).
  3. Restart the pod; the volume attach fails.
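For step 1, a minimal example manifest might look like the following. The StorageClass name `vsphere-sc` and the other resource names are placeholders, assuming a StorageClass backed by this driver already exists:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: vsphere-sc   # placeholder StorageClass name
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
```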

Anything else we need to know?:

Environment:

dvonthenen commented 5 years ago

Hi @rust84,

Currently, Storage vMotion isn't supported. That feature will be added at the time of GA.

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

shalini-b commented 4 years ago

Kindly move to the latest CSI Driver implementation backed by CNS. This should fix your problem.

brainkiller commented 4 years ago

Can someone elaborate on the last comment ? Is there a new implementation that fixes this issue or what is this about ?

RaunakShah commented 4 years ago

@brainkiller vsphere-csi-driver was GA'd last month, and now we support features like Storage vMotion, on the 6.7U3 vSphere release. Once you upgrade your vSphere version to 6.7U3, and use the new csi driver image available, Storage vMotion will be supported.

SandeepPissay commented 4 years ago

/close

k8s-ci-robot commented 4 years ago

@SandeepPissay: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/vsphere-csi-driver/issues/23#issuecomment-541211103):

> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

MarkMorsing commented 1 year ago

We're currently seeing this issue in vSphere 7 Update 3.