Basically, I never want to have to write this again:
kubectl get pv | awk '/^local/ { print $1}' | xargs kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0]}{"\n"}{end}' | awk '/aks-fdbl16sv2/ { print $1 }' | xargs kubectl delete pv
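A slightly more reusable sketch of the same pipeline, parameterized by a node-name fragment. NODE_PATTERN is just a placeholder here, and custom-columns is used instead of jsonpath so PVs without nodeAffinity don't break the query:
# Delete all local PVs pinned to nodes matching NODE_PATTERN (placeholder value).
NODE_PATTERN=aks-fdbl16sv2
kubectl get pv --no-headers -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0] \
  | awk -v node="$NODE_PATTERN" '$2 ~ node { print $1 }' \
  | xargs kubectl delete pv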
Yes, this is a general issue.
To prevent the leak in the first place, I would like to have a controller which watches nodes, and when a node is removed from the cluster, removes all the claims and pvs that were on that node. This would be a separate controller, so that it can be opt-in rather than run implicitly in the existing daemonsets.
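As a stopgap until such a controller exists, here is a rough sketch of what the cleanup pass could look like as a periodic sweep (assuming kubectl access and that the provisioner's PVs carry the node name in their nodeAffinity; all names are illustrative, and this is not the watch-based controller itself):
# Sweep: delete PVCs and local PVs whose pinned node no longer exists.
# Run periodically, e.g. from a CronJob.
existing_nodes=$(kubectl get nodes -o jsonpath='{.items[*].metadata.name}')
kubectl get pv --no-headers -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0],NS:.spec.claimRef.namespace,PVC:.spec.claimRef.name | while read -r pv node ns pvc; do
  [ "$node" = "<none>" ] && continue        # PV is not pinned to a node; skip it
  case " $existing_nodes " in
    *" $node "*) ;;                         # node still exists, keep the PV
    *)
      [ "$pvc" != "<none>" ] && kubectl delete pvc -n "$ns" "$pvc"
      kubectl delete pv "$pv"
      ;;
  esac
done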
I agree a cloud controller is required. I've written up a proposal: https://docs.google.com/document/d/1SA9epEwA3jPwibRV0ccQwJ2UfZXoeUYKyNxNegt0vn4 for this.
Thank you for the link, I've added a thought to the document there. Is a prototype under way?
As an intermediate step, would it help if we put an OwnerRef on the PV pointing to the Node object? PVC protection will prevent the PV from actually being deleted, but if a user comes in and deletes the PVC, then the PV object will get cleaned up instead of taking up cycles from the PV controller (which every few seconds goes through all PVs in the cluster).
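For illustration, a rough sketch of what that OwnerRef could look like if applied by hand; in practice the provisioner would set it when creating the PV, and the PV name and node name below are hypothetical:
# Attach an ownerReference on a local PV pointing at its Node, so garbage
# collection removes the PV object once the Node is deleted (subject to the
# protection finalizers mentioned above). Names are hypothetical.
NODE_NAME=aks-fdbl16sv2-vmss000000
NODE_UID=$(kubectl get node "$NODE_NAME" -o jsonpath='{.metadata.uid}')
kubectl patch pv local-pv-abc123 --type merge -p "{
  \"metadata\": {
    \"ownerReferences\": [{
      \"apiVersion\": \"v1\",
      \"kind\": \"Node\",
      \"name\": \"$NODE_NAME\",
      \"uid\": \"$NODE_UID\"
    }]
  }
}"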
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale. /lifecycle frozen
/help /kind feature
@cdenneen: This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
Maybe a better script, depending on your needs:
# Collect the local PVs pinned to the node being removed, and their namespace/PVC pairs.
kubectl get pv -o custom-columns=NAME:metadata.name,PATH:spec.local.path,PVC:spec.claimRef.name,NODE:spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0] | grep "${NODE_NAME}" | awk '{ print $1 }' > /tmp/delete_pv
kubectl get pv -o custom-columns=NAME:metadata.name,PATH:spec.local.path,NAMESPACE:spec.claimRef.namespace,PVC:spec.claimRef.name,NODE:spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0] | grep "${NODE_NAME}" | grep -v none | awk '{ print $3" "$4 }' > /tmp/delete_pvc
# Delete the PVCs first (each line in /tmp/delete_pvc is "<namespace> <pvc-name>"), then the PVs.
while read -r NS PVC; do
  kubectl delete pvc -n "$NS" "$PVC"
done </tmp/delete_pvc
for PV in $(cat /tmp/delete_pv); do kubectl delete pv "$PV"; done
From Kubernetes 1.23 there is a feature for graceful node shutdown, and from 1.24 for non-graceful node shutdown. So maybe we can use it to delete the PVCs from nodes?
It seems that this issue has been fixed by https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/pull/385, but I think using the graceful node shutdown feature is a good idea.
Yes, but it is better to go with graceful shutdown of nodes. This feature allows nodes in our Kubernetes cluster to shut down in an orderly manner. Instead of stopping abruptly, the node notifies Kubernetes, giving it time to safely relocate workloads to other nodes. It protects against data loss as well.
Non-graceful node shutdown, by contrast, is abrupt and can lead to data loss, application downtime, and disrupted services.
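For reference, a rough sketch of how the non-graceful path is triggered today; the node name is a placeholder, and as far as I know this handles pod eviction and volume detach rather than deleting the local PVs/PVCs themselves:
# Non-graceful node shutdown handling (Kubernetes 1.24+): once an admin is sure
# the node is really down, tainting it out-of-service lets the cluster force-detach
# volumes and reschedule pods. The node name below is a placeholder.
# (The graceful path is configured on the kubelet via shutdownGracePeriod /
# shutdownGracePeriodCriticalPods instead.)
kubectl taint nodes aks-fdbl16sv2-vmss000000 node.kubernetes.io/out-of-service=nodeshutdown:NoExecute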
The issue is addressed and resolved in #385
/close
Take a look at the controller in #385 to see if it will help
@msau42: Closing this issue.
When a node is deleted, either due to failure or due to an autoscale event removing it, the local static volumes on it are permanently destroyed. But the metadata for those PVs, and for any PVCs referencing them, is not.
This is frustrating, because we have to take care not to reuse node names, yet in AKS node names are deterministic and commonly reused, as you can see from these sample names:
This leads to rogue volumes and claims resurrecting themselves when a nodepool is scaled up again - whether automatically or not.
Describe the solution you'd like in detail
I would like to tackle this from two angles:
To prevent this zombie-resurrection threat I would like to extend the topology key with some sort of UUID for the node, for instance the Machine ID (7b59f36627eb48f0af387b76c602cf9b from one of those examples), as sketched after this list; though this may require changes outside of this SIG and thus be future work, since we cannot benefit from things that we cannot deploy in AKS.
To prevent the leak in the first place, I would like to have a controller which watches nodes, and when a node is removed from the cluster, removes all the claims and pvs that were on that node. This would be a separate controller, so that it can be opt-in rather than run implicitly in the existing daemonsets.
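A quick illustration of the first angle: the Machine ID is already exposed on the Node object, so a uniquified topology value could be derived from it. No such topology key exists today, and the node name below is a placeholder.
# Fetch the per-node Machine ID that a future, uniquified topology value could use.
kubectl get node aks-fdbl16sv2-vmss000000 -o jsonpath='{.status.nodeInfo.machineID}'
# e.g. 7b59f36627eb48f0af387b76c602cf9b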
Describe alternatives you've considered
I have not considered any alternatives.
Additional context