I have noticed that after draining a node, Released PVs with `ReclaimPolicy` set to `Delete` never get deleted. When running `kubectl describe pv <name>`, I can see the following:
```
Name:              pvc-3f138b3b-2c1d-4737-b53b-1e4ac53e5b92
Labels:            <none>
Annotations:       local.path.provisioner/selected-node: k3s-agent-md-ars
                   pv.kubernetes.io/provisioned-by: rancher.io/local-path
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      local-path
Status:            Released
Claim:             default/persistence-rabbitmq-cluster-server-0
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          4Gi
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [k3s-agent-md-ars]
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /var/lib/rancher/k3s/storage/pvc-3f138b3b-2c1d-4737-b53b-1e4ac53e5b92_default_persistence-rabbitmq-cluster-server-0
Events:
  Type     Reason              Age                   From                                                                                                Message
  ----     ------              ----                  ----                                                                                                -------
  Warning  VolumeFailedDelete  3m57s (x15 over 83m)  rancher.io/local-path_local-path-provisioner-5ccc7458d5-xlcdg_3c7bbb98-4bc1-4768-95cb-2e6f3ee20a30  failed to delete volume pvc-3f138b3b-2c1d-4737-b53b-1e4ac53e5b92: failed to delete volume pvc-3f138b3b-2c1d-4737-b53b-1e4ac53e5b92: create process timeout after 120 seconds
```
The node is uncordoned after a few minutes and is back in a Ready state. Could it be that the `local-path` provisioner gives up after 120 seconds and never retries cleaning up the Released PVs? If so, is there some config I can change so it keeps trying longer?
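For now I am cleaning these up by hand, roughly as sketched below. The PV name, node name, and storage path are taken from the `describe` output above, and the node-side `rm` assumes SSH access to the agent and the default k3s storage directory:

```
# List any PVs stuck in Released (quick sanity check).
kubectl get pv | grep Released

# Delete the stuck PV object. With the claim already gone, the
# pv-protection controller should remove its finalizer and let this through.
kubectl delete pv pvc-3f138b3b-2c1d-4737-b53b-1e4ac53e5b92

# If the delete hangs on the finalizer, clear it explicitly.
kubectl patch pv pvc-3f138b3b-2c1d-4737-b53b-1e4ac53e5b92 \
  -p '{"metadata":{"finalizers":null}}'

# Deleting the PV object does not touch the data on disk; remove the
# backing directory on the node itself (path from the describe output).
ssh k3s-agent-md-ars sudo rm -rf \
  "/var/lib/rancher/k3s/storage/pvc-3f138b3b-2c1d-4737-b53b-1e4ac53e5b92_default_persistence-rabbitmq-cluster-server-0"
```

That works, but I would much rather have the provisioner retry the teardown itself.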