Closed: devdattakulkarni closed this issue 3 months ago.
@eminalparslan Can you take a look at this issue?
@eminalparslan
Kubernetes finalizers would be an appropriate mechanism to handle this situation: https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/
The general idea is as follows, with a minimal sketch after this list:

1. When the KubePlus Pod is about to be deleted, add a `metadata.finalizers` entry to all the resourcecomposition instances currently present in the cluster. This prevents the Kubernetes API server from deleting those instances.
2. When KubePlus starts up, check for any existing resourcecomposition instances that carry this `metadata.finalizers` entry and clear it. This lets the Kubernetes API server proceed with deleting any resourcecomposition whose deletion was requested while KubePlus was down.
3. Since KubePlus is back at that point, the resourcecomposition deletion should naturally lead to the deletion of all the children Custom Resource instances.

(This assumes that, on startup, KubePlus can correctly rebuild its internal state by discovering the resourcecomposition instances currently present in the cluster. This functionality exists in KubePlus, though it would be good to verify it.)
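For illustration, here is a minimal sketch of the two hooks using the kubernetes Python client. The API group/version/plural (`workflows.kubeplus`, `v1alpha1`, `resourcecompositions`) and the finalizer name are assumptions here, not taken from the KubePlus code; treat this as a sketch of the idea rather than the actual implementation:

```python
# Sketch only: the group/version/plural and the finalizer name are
# assumptions; adjust them to match the actual ResourceComposition CRD.
from kubernetes import client, config

GROUP = "workflows.kubeplus"        # assumed API group
VERSION = "v1alpha1"                # assumed version
PLURAL = "resourcecompositions"     # assumed plural
FINALIZER = "kubeplus.cloudark.io/deletion-guard"  # hypothetical name

def _api():
    config.load_incluster_config()
    return client.CustomObjectsApi()

def _patch_finalizers(api, rc, finalizers):
    # A dict body is sent as a JSON merge patch, which replaces the whole
    # finalizers list, so we always write back the full recomputed list.
    api.patch_namespaced_custom_object(
        GROUP, VERSION, rc["metadata"]["namespace"], PLURAL,
        rc["metadata"]["name"], {"metadata": {"finalizers": finalizers}})

def add_finalizers_on_shutdown():
    """Run when the KubePlus Pod is about to be deleted: tag every
    resourcecomposition so the API server blocks its deletion."""
    api = _api()
    # This endpoint lists the custom resource across all namespaces.
    for rc in api.list_cluster_custom_object(GROUP, VERSION, PLURAL)["items"]:
        finalizers = rc["metadata"].get("finalizers") or []
        if FINALIZER not in finalizers:
            _patch_finalizers(api, rc, finalizers + [FINALIZER])

def clear_finalizers_on_startup():
    """Run when KubePlus starts up: clear the finalizer so any deletion
    requested while KubePlus was down can now proceed, with KubePlus back
    to clean up the children Custom Resource instances."""
    api = _api()
    for rc in api.list_cluster_custom_object(GROUP, VERSION, PLURAL)["items"]:
        finalizers = rc["metadata"].get("finalizers") or []
        if FINALIZER in finalizers:
            _patch_finalizers(api, rc,
                              [f for f in finalizers if f != FINALIZER])
```

A production version would also need conflict handling (retries on HTTP 409), since other controllers may be updating the same objects' finalizers concurrently.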
Prerequisites:
This is no longer an issue after moving the resourcecomposition CRD registration into the KubePlus Helm chart's crd folder. See: https://github.com/cloud-ark/kubeplus/commit/34f6dcef47dd00b7fd3c2a42c3e046fab33c6f42
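For context on why that fixes it: Helm installs CRD manifests placed in a chart's top-level crds/ directory before rendering any templates, so the resourcecomposition CRD is registered before anything that depends on it. A hypothetical layout (the actual KubePlus file names are in the linked commit):

```
kubeplus-chart/
├── Chart.yaml
├── crds/
│   └── resourcecomposition.yaml   # installed by Helm before templates
└── templates/
    └── deployment.yaml
```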
We have observed that the following sequence of events is possible: