orange-cloudfoundry / k3s-wrapper-boshrelease

k3s wrapper scripts bosh release
Apache License 2.0

delete deployment should clean all data & external resources - avoids leaks #37


poblin-orange commented 3 years ago

Expected behavior

As a k3s-boshrelease operator, when I delete a k3s deployment, I expect all data and external resources created by the cluster to be cleaned up, avoiding leaks.

Observed behavior

Upon k3s deployment deletion, externally provisioned resources (disks, load balancers, DNS records) are not released and leak.

Alternative solutions

Use GCP API to delete volumes annotated with cluster id:

> Cloud SDK can be used to identify the disks if the proper filter and format are passed, i.e.:
>
> To list all the disks being used by a GKE cluster (you can change the filter at your convenience):
>
> ```
> gcloud compute disks list --format="table(name,users)" --filter="name~^gke-"
> ```
>
> To list only disks used as PVC:
>
> ```
> gcloud compute disks list --format="table(name,users)" --filter="name~^gke-.*-pvc-.*"
> ```
>
> This last command will list detached PVC disks:
>
> ```
> gcloud compute disks list --format="table(name,users)" --filter="name~^gke-.*-pvc-.* AND -users:*"
> ```
>
> To ensure a detached disk is not in use by a cluster, here's a kubectl command to list a cluster's PVs and their GCE PD:
>
> ```
> kubectl get pv -o custom-columns=K8sPV:.metadata.name,GCEDisk:spec.gcePersistentDisk.pdName
> ```
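Building on the quoted commands, here is a minimal sketch (untested) of a sweep that deletes the detached PVC disks found by the last filter. The `DRY_RUN` guard and the reuse of the `gke-` prefix are assumptions; the filter would need to match the disk naming used by this release.

```bash
#!/usr/bin/env bash
# Sketch (untested): delete GCE persistent disks that backed PVCs and are
# no longer attached to any instance. Reuses the filter from the quoted
# commands; DRY_RUN and the gke- prefix are illustrative assumptions.
set -euo pipefail

DRY_RUN="${DRY_RUN:-true}"
FILTER='name~^gke-.*-pvc-.* AND -users:*'

gcloud compute disks list --filter="${FILTER}" \
  --format="csv[no-heading](name,zone.basename())" |
while IFS=, read -r disk zone; do
  if [ "${DRY_RUN}" = "true" ]; then
    echo "would delete detached PVC disk ${disk} (zone ${zone})"
  else
    gcloud compute disks delete "${disk}" --zone="${zone}" --quiet
  fi
done
```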
```
$ kubectl api-resources --namespaced=false
NAME                              SHORTNAMES   APIGROUP                        NAMESPACED   KIND
componentstatuses                 cs                                           false        ComponentStatus
namespaces                        ns                                           false        Namespace
nodes                             no                                           false        Node
persistentvolumes                 pv                                           false        PersistentVolume
mutatingwebhookconfigurations                  admissionregistration.k8s.io    false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io    false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io            false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io          false        APIService
tokenreviews                                   authentication.k8s.io           false        TokenReview
selfsubjectaccessreviews                       authorization.k8s.io            false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io            false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io            false        SubjectAccessReview
certificatesigningrequests        csr          certificates.k8s.io             false        CertificateSigningRequest
clusterissuers                                 certmanager.k8s.io              false        ClusterIssuer
nodes                                          metrics.k8s.io                  false        NodeMetrics
ingressclasses                                 networking.k8s.io               false        IngressClass
runtimeclasses                                 node.k8s.io                     false        RuntimeClass
podsecuritypolicies               psp          policy                          false        PodSecurityPolicy
clusterrolebindings                            rbac.authorization.k8s.io       false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io       false        ClusterRole
priorityclasses                   pc           scheduling.k8s.io               false        PriorityClass
csidrivers                                     storage.k8s.io                  false        CSIDriver
csinodes                                       storage.k8s.io                  false        CSINode
storageclasses                    sc           storage.k8s.io                  false        StorageClass
volumeattachments                              storage.k8s.io                  false        VolumeAttachment
```
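For the generic case, a best-effort pre-teardown pass could delete the in-cluster objects whose controllers release external resources (LoadBalancer Services for cloud load balancers, PVCs for disks). A minimal sketch, assuming kubectl access to the cluster being deleted; the choice of kinds is illustrative:

```bash
#!/usr/bin/env bash
# Sketch: best-effort release of externally backed resources before the
# cluster itself is destroyed. Deleting these objects lets their
# controllers/finalizers tear down the cloud resources they provisioned.
set -euo pipefail

# Cloud load balancers are released when LoadBalancer Services are deleted.
kubectl get svc --all-namespaces \
  -o jsonpath='{range .items[?(@.spec.type=="LoadBalancer")]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
while read -r ns name; do
  kubectl delete svc -n "${ns}" "${name}" --wait=true
done

# Disks are released when PVCs are deleted, provided the StorageClass
# reclaim policy is Delete.
kubectl delete pvc --all --all-namespaces --wait=true
```

This only covers resources an in-cluster controller knows how to release; anything provisioned out of band would still need a cloud-API sweep like the gcloud example above.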
gberche-orange commented 3 years ago

Observed behavior

Upon k3s deployment deletion, the K8S resources are not deleted.

poblin-orange commented 3 years ago

Interesting to see how an integrated k8s runtime like GKE handles these leaks. However, a generic k8s runtime like k3s can't know exactly which types of external resources could leak (out-of-tree cloud providers, container-based storage like Longhorn, custom ingress / load balancer / externalDNS).

A safety mechanism could probably be added: block deletion as long as resources of a configurable list of kinds are still present.
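A minimal sketch of such a guard, e.g. run before `bosh delete-deployment`; the `BLOCKING_KINDS` default and the exit-code convention are illustrative assumptions:

```bash
#!/usr/bin/env bash
# Sketch: refuse deployment deletion while resources of configurable kinds
# still exist; a non-zero exit aborts the surrounding workflow (e.g. a
# drain script or CI step). The BLOCKING_KINDS default is illustrative;
# real usage would likely also exclude system namespaces.
set -eu

BLOCKING_KINDS="${BLOCKING_KINDS:-persistentvolumeclaims,ingresses}"

blocked=0
for kind in ${BLOCKING_KINDS//,/ }; do
  count=$(kubectl get "${kind}" --all-namespaces --no-headers 2>/dev/null | wc -l)
  if [ "${count}" -gt 0 ]; then
    echo "deletion blocked: ${count} ${kind} still present" >&2
    blocked=1
  fi
done
exit "${blocked}"
```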

With this, the cleanup of these objects would be left to an upper-level provisioning layer. See: