Open fedordikarev opened 8 months ago
🤔 With the current architecture I don't think it's possible. While the tool has some support for Kubernetes, it is only for metadata; it doesn't really know about it. You can think of it as mostly a generic kind of tool.
In order to know whether a disk is bound by a PV we would need not only to add a dependency on the Kubernetes packages, but also to give the user a way to indicate which disk was created by which Kubernetes cluster (e.g. you could have multiple GKE clusters within the same GCP project, each cluster managing PVs). That would add tons of complexity to the code, opening the door to more bugs.
I don't think this is a bug, and I don't think it is fixable. The tool is clearly for doing cleanup, but as always this needs to be done in a conscious manner. While I understand the pain that using the tool to delete disks has caused people, I don't think it's a tool problem.
I vote to close this one with a wont-fix label, but I'm open to being convinced otherwise.
As an idea, we could add a feature to cover several cases: a flag `--exclude-list filename` taking a file with disk IDs (one per line) that should be excluded from the listing entirely (or perhaps shown in the main list with an extra note and excluded from selection during the delete process).
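The exclude-list idea above could be sketched roughly like this (a minimal illustration, not the tool's actual code; the function names and the `{"id": ...}` disk shape are assumptions):

```python
def load_exclude_list(path):
    """Read disk IDs to exclude, one per line.

    Blank lines and lines starting with '#' are ignored, so the
    file can be annotated with comments.
    """
    with open(path) as f:
        return {line.strip() for line in f
                if line.strip() and not line.lstrip().startswith("#")}


def filter_disks(disks, excluded):
    """Drop any disk whose id appears in the exclude set."""
    return [d for d in disks if d["id"] not in excluded]
```

A variant of `filter_disks` could instead keep excluded disks in the listing but tag them with a note and skip them during the delete step, as suggested above.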
I'm not sure if it is easy to do, or even possible with the current architecture of the tool, but we had some incidents due to the following behaviour: someone can run the `unused` tool and remove disks that are currently unmounted but still referenced by Kubernetes PersistentVolumes. It could be less of an issue after k8s 1.27 and persistentvolumeclaim-retention, but should we add some extra checks (in the tool, or maybe external) to check whether any PV refers to a disk before deleting it?
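An external pre-delete check along these lines could collect the disk names referenced by PVs and compare them against the deletion candidates. A minimal sketch (function names are hypothetical; it assumes `kubectl` is pointed at the right cluster, and covers the legacy in-tree `gcePersistentDisk` source plus the GKE `pd.csi.storage.gke.io` CSI driver, whose `volumeHandle` ends with the disk name):

```python
import json
import subprocess


def disk_names_from_pv_list(pvs):
    """Extract GCE persistent-disk names from a PV list (kubectl JSON form)."""
    names = set()
    for pv in pvs.get("items", []):
        spec = pv.get("spec", {})
        # Legacy in-tree volume source: spec.gcePersistentDisk.pdName
        gce = spec.get("gcePersistentDisk")
        if gce and "pdName" in gce:
            names.add(gce["pdName"])
        # CSI source: volumeHandle is projects/<p>/zones/<z>/disks/<name>
        csi = spec.get("csi")
        if csi and csi.get("driver") == "pd.csi.storage.gke.io":
            names.add(csi["volumeHandle"].rsplit("/", 1)[-1])
    return names


def pv_backed_disk_names():
    """Set of disk names referenced by PVs in the current kubectl context."""
    out = subprocess.check_output(["kubectl", "get", "pv", "-o", "json"])
    return disk_names_from_pv_list(json.loads(out))
```

This would have to be run once per cluster (e.g. once per GKE cluster in the project, as noted above), which is exactly the multi-cluster bookkeeping the maintainer is reluctant to pull into the tool itself — hence doing it as an external wrapper.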