RomanBednar opened this issue 5 months ago
Yes, that's a limitation: the job status can only be stored locally in `~/.azcopy`, so once the controller pod is restarted, in-progress copy jobs cannot be resumed.
How can users detect this state? How can they recover or resume?
Who deletes incompletely cloned volumes in Azure? No PV is created for them. How is a user supposed to find them in the first place?
I am afraid cloning is not very useful if it can leak volumes that need manual cleanup.
Could Kubernetes `Job` objects be used to schedule these `azcopy` operations, so that the cloning operations can be tracked separately? If the controller pod dies, cloning can continue, and when the controller pod restarts it can find the existing cloning `Jobs` (via a label or something) and then continue with the creation of the PV, etc.
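One way the suggestion above could look, as a sketch only (the image, labels, paths, and claim names are hypothetical, not the driver's actual implementation): a `Job` that runs `azcopy` with its plan-file directory on a persistent volume, labeled so the controller can rediscover it after a restart.

```yaml
# Sketch: names, labels, image, and URL handling are all hypothetical.
apiVersion: batch/v1
kind: Job
metadata:
  name: azcopy-clone-cloned-pvc        # hypothetical name derived from the clone PVC
  labels:
    app: azurefile-csi-clone           # label the controller could list Jobs by after a restart
    clone/source-pvc: source-pvc
    clone/target-pvc: cloned-pvc
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: azcopy
          image: example.io/azcopy:latest      # hypothetical image containing azcopy
          env:
            - name: AZCOPY_JOB_PLAN_LOCATION
              value: /plans                    # keep plan files off emptyDir so jobs can resume
          # Source/destination SAS URLs would be injected from a Secret; elided here.
          command: ["azcopy", "copy", "<source-url>", "<destination-url>", "--recursive"]
          volumeMounts:
            - name: plans
              mountPath: /plans
      volumes:
        - name: plans
          persistentVolumeClaim:
            claimName: azcopy-plans            # hypothetical PVC holding plan files
```

On restart, the controller could list Jobs by the label, wait for completion, and only then create the PV.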
What happened:
While testing Azure File cloning in OpenShift, we noticed that if the controller pod running an azcopy job is killed, any clone PVC still being copied gets stuck in the Pending phase.
This seems to be an effect of azcopy relying on job plan files (`AZCOPY_JOB_PLAN_LOCATION`, or `~/.azcopy` by default) to track jobs; these files are lost if stored on an ephemeral volume. Is this a known limitation, or is there a recommended solution?
What you expected to happen:
The clone job should survive a lost controller pod.
How to reproduce it:
1. Create an Azure File PVC/PV
2. Create a new clone PVC referencing the origin PVC as a source
3. Kill the Azure File leader controller pod
4. The clone PVC is stuck in the Pending state
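For reference, step 2 above creates the clone by setting `spec.dataSource` on the new PVC. A minimal sketch (the PVC names, size, and storage class name are assumptions):

```yaml
# Hypothetical clone PVC; adjust names, size, and storage class to your cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: azurefile-csi    # assumed Azure File CSI storage class
  resources:
    requests:
      storage: 10Gi
  dataSource:
    kind: PersistentVolumeClaim
    name: source-pvc                 # the origin PVC from step 1
```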
Anything else we need to know?:
Checking the helm charts in this repo, the destination for those job plan files also appears to be ephemeral (an `emptyDir` volume), so the issue would occur with this deployment as well: https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/eefed914745603ad28eac52f07c47b972ad3dadd/charts/v1.30.2/azurefile-csi-driver/templates/csi-azurefile-controller.yaml#L229

Environment:
- Kubernetes version (use `kubectl version`):
- Kernel (e.g. `uname -a`):
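A possible mitigation for the chart linked above, sketched as a change to the controller Deployment's volume definition (the volume and claim names here are assumptions): back the azcopy directory with persistent storage instead of `emptyDir`, so the plan files survive a pod restart.

```yaml
# Sketch: replace the emptyDir backing the azcopy job-plan directory
# with a persistent claim. Volume/claim names are hypothetical.
volumes:
  - name: azcopy-dir
    persistentVolumeClaim:
      claimName: azcopy-plan-files   # instead of: emptyDir: {}
```

Note that with multiple controller replicas this claim would need an access mode all replicas can share (or a per-replica claim), so this only sidesteps the loss of plan files; it does not by itself resume an interrupted copy.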