Closed: ejschoen closed this issue 1 year ago.
Do you have two PVs with the same volumeHandle value?
Not concurrently, but I have deleted and recreated PVs reusing the same volume handles.
Please don't use the same volumeHandle value. If the original PV is still in use, a second PV with the same volumeHandle value will not be mounted on the same node.
Changing the volume handles did seem to solve the problem, even though I was not using the same volumeHandle twice at the same time.
Does this mean that if I deploy an application through a Helm chart, and that chart creates CSI-implemented PVs, PVCs, and pods that use those PVCs, then I have to generate new unique volumeHandles for each PV with each Helm chart deployment or upgrade?
If the PV is created by the driver, the volumeHandle will be unique. The original problem is that the self-configured PV is still mounted on the node, and it conflicts with a new PV that has the same volumeHandle.
Thanks. I think I understand. Since PersistentVolumes are immutable, if I want to change something about the volume when redeploying a Helm chart, I have to create a new PV name and make sure it doesn't use the same volumeHandle as a previous PersistentVolume that the Helm chart would eventually cause to be destroyed (because there would no longer be any Pods mounting PVCs referring to the old PVs).
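For example, templating the handle on the Helm release revision would give every upgrade a PV and volumeHandle that kubelet has never seen. A minimal sketch of what I have in mind (the container, secret, and storage class names here are illustrative, not from my actual chart):

```yaml
# templates/pv.yaml -- illustrative sketch only
apiVersion: v1
kind: PersistentVolume
metadata:
  # revision suffix gives each helm upgrade a brand-new PV object
  name: {{ .Release.Name }}-blob-pv-r{{ .Release.Revision }}
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azureblob-fuse-premium
  csi:
    driver: blob.csi.azure.com
    # the handle also carries the revision, so it is never reused
    volumeHandle: {{ .Release.Name }}-mycontainer-r{{ .Release.Revision }}
    volumeAttributes:
      containerName: mycontainer
    nodeStageSecretRef:
      name: azure-secret
      namespace: default
```

The revision suffix on both the PV name and the volumeHandle is the essential part; everything else mirrors the static-provisioning example from the AKS docs.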
If I were to not use a different volumeHandle, I understand that during the termination grace period for the pods there would be duplicate volumeHandles: the old, not-yet-destroyed PV and the new PV sharing the same volumeHandle. But after the old PVs get destroyed by Kubernetes when they're no longer referenced, there is no longer any volumeHandle duplication. By then, is it too late? Does the driver not do anything that would detect that the volumeHandle is no longer ambiguous?
Just so I understand your explanation above: the help pages that I am reading (for example, https://learn.microsoft.com/en-us/azure/aks/azure-csi-blob-storage-static?tabs=secret) distinguish between static and dynamic blob provisioning. Is this the same distinction, given that in the dynamic case the application doesn't create its own PersistentVolume?
It's related to this upstream issue: https://github.com/kubernetes/kubernetes/issues/91556. The volumeHandle is the unique ID kubelet uses when mounting, so this is by design in k8s for now.
For the dynamic case, the volumeHandle is a UUID, so it's always unique; for the static case, you need to make sure the volumeHandle is unique yourself.
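For comparison, in the dynamic case the user-facing YAML never sets a volumeHandle at all; the driver creates the PV and generates the UUID handle itself. A minimal sketch, assuming the built-in AKS blob storage class name:

```yaml
# Dynamic provisioning sketch: no PV object and no volumeHandle here;
# the blob CSI driver provisions the PV and assigns a unique handle.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blob-pvc-dynamic
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azureblob-fuse-premium  # assumed built-in AKS class
  resources:
    requests:
      storage: 10Gi
```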
That's too bad. It's very un-k8s-like. Kubelet normally reconciles pods and resources toward the desired state, but here a temporarily duplicated volumeHandle that quickly becomes unique again causes a discrepancy between desired and actual state that neither kubelet nor the CSI driver ever resolves.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What happened: Using the CSI blobfuse driver to statically access a blob container as a persistent volume. It worked once; after the pods restarted, it no longer works.
What you expected to happen: Expected PVCs to mount into pods.
How to reproduce it: Follow the instructions at https://learn.microsoft.com/en-us/azure/aks/azure-csi-blob-storage-static?tabs=secret. Enable the blob CSI driver via the EnableBlobCSIDriver feature and az aks create --enable-blob-driver on the cluster. Create a deployment that mounts a PVC bound to a PV backed by a blob container (a sketch of the claim follows below). Kill all of the pods that are mounting one of the persistent volumes. The pod will then fail to start.
Anything else we need to know?: See the partial manual workaround at the end of this issue.
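If I recall the tutorial correctly, the static setup pairs a pre-created PV with a PVC pinned to it by volumeName. A minimal sketch of the claim side (names follow the tutorial's pv-blob/pvc-blob convention; the storage class is an assumption):

```yaml
# Static binding sketch: the claim names the pre-created PV explicitly.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-blob
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azureblob-fuse-premium
  volumeName: pv-blob  # binds this claim to the statically created PV
  resources:
    requests:
      storage: 10Gi
```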
Environment:
- Kubernetes version (kubectl version): 1.22.6
- OS/kernel (uname -a): 5.4.0-1089-azure
- CSI blob driver log:
kubectl get pv:
kubectl get pvc:
Sample pod description:
I was able to only partially work around the issue by using
kubectl exec -n kube-system csi-blob-node-6clgc -c blob -- /bin/sh
and then manually creating the missing pv-name/globalmount directories and setting their owner ID to 1001, which is the UID under which my pods run. (I'm not sure this last step is necessary.) This got the pod started. However, when I look at the directories in my pods into which the PVCs are mounted, they're empty, even though the container named by the volumeAttributes.containerName field in the CSI spec is not empty.
Running the mount command in the CSI driver's blob container doesn't show any of my blob containers mounted. However, I do see lines like this:

I'm a little surprised to see /dev/sda1 mentioned at a mount point that should be a blobfuse mount, but I am not familiar with how a FUSE file system appears in mtab.