Open eltomato89 opened 4 years ago
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This is a really big problem.
@abhinavdahiya thanks, but unfortunately it wouldn't work in an automated pipeline
The issue seems to be that the APIService is still present; after running `kubectl delete apiservices.apiregistration.k8s.io v1.packages.operators.coreos.com`, the namespace is deleted successfully. This resource should be deleted by the operator itself. I tried setting an `ownerReference` on the APIService pointing at the namespace, but that does not work, since `ownerReferences` do not appear to work on cluster-scoped resources.
There was an issue opened on the Kubernetes GitHub (https://github.com/kubernetes/kubernetes/issues/60807), but this is not really a Kubernetes bug: the APIService still depends on the namespace, so the namespace finalizer never completes. The correct solution would probably be an `ownerReference` pointing at the namespace, but that does not currently work.
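The attempt described above can be sketched as follows. This is shown only to document what does not work: the namespace name and UID are placeholders, and as noted, Kubernetes garbage collection does not act on a namespaced owner attached to a cluster-scoped resource.

```shell
#!/usr/bin/env bash
# Non-working sketch: set an ownerReference on the cluster-scoped
# APIService pointing at a Namespace. Garbage collection ignores
# namespaced owners on cluster-scoped objects, so nothing gets deleted.
# The namespace name and UID passed in are placeholders.
patch_apiservice_owner() {
  local ns_name="$1" ns_uid="$2"
  kubectl patch apiservice v1.packages.operators.coreos.com --type=merge -p "{
    \"metadata\": {
      \"ownerReferences\": [{
        \"apiVersion\": \"v1\",
        \"kind\": \"Namespace\",
        \"name\": \"${ns_name}\",
        \"uid\": \"${ns_uid}\"
      }]
    }
  }"
}
```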
If you have sufficiently powerful orchestration for Kubernetes, you can split the packageserver ClusterServiceVersion out of the deployment and make sure it is removed before the operator itself is removed. This is a workaround; what Kubernetes should really do is implement garbage collection (ownerReferences) for cluster-scoped resources.
Example of the workaround using Pulumi: https://gist.github.com/offlinehacker/856b64ec5ad5ab3829bf01f1fb29958d
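For pipelines without Pulumi, the same ordering can be sketched in plain shell. The resource names below (CSV `packageserver`, namespace `olm`) are assumptions matching a default OLM install:

```shell
#!/usr/bin/env bash
# Sketch of the ordering workaround, assuming a default OLM install:
# delete the packageserver CSV (which manages the APIService) and the
# APIService itself before deleting the olm namespace, so the namespace
# finalizer has nothing left to wait on.
olm_teardown() {
  kubectl delete clusterserviceversion packageserver -n olm --ignore-not-found
  kubectl delete apiservice v1.packages.operators.coreos.com --ignore-not-found
  kubectl delete namespace olm --wait=true
}
```

You would call `olm_teardown` as the final uninstall step of the pipeline, after operator workloads have been removed.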
FWIW I captured a workaround for this issue and a few others in a quick shell script, manage.sh
Is this still an issue?
Yes, I'm still affected by this issue in v0.22.0 (using minikube).
NOTE: This is still an issue as of Feb 2024
OLM finalizer keeping deleted (empty) namespace stuck in "Terminating" state
Bug Report
What did you do? After deleting a (newly created) namespace, it is stuck in the "Terminating" state and won't go away until the "kubernetes" finalizer is removed.
What did you expect to see? Kubernetes properly removing the namespace
What did you see instead? Under which circumstances? The namespace is stuck in "Terminating" state.
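Removing the "kubernetes" finalizer by hand is usually done through the namespace's finalize subresource. A minimal sketch, assuming python3 is available for the JSON edit (the namespace name is an example; note that force-finalizing can orphan resources the finalizer was still waiting on, so fixing the underlying APIService is preferable):

```shell
#!/usr/bin/env bash
# Last-resort sketch: clear spec.finalizers through the namespace's
# finalize subresource. This unsticks the namespace but can orphan
# resources that were still pending deletion.
force_finalize_namespace() {
  local ns="$1"
  kubectl get namespace "$ns" -o json \
    | python3 -c 'import json, sys
o = json.load(sys.stdin)
o["spec"]["finalizers"] = []  # drop the "kubernetes" finalizer
print(json.dumps(o))' \
    | kubectl replace --raw "/api/v1/namespaces/${ns}/finalize" -f -
}
```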
Environment
operator-lifecycle-manager version: 0.14.1
Kubernetes version information:
```
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-23T14:21:54Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.9-eks-c0eccc", GitCommit:"c0eccca51d7500bb03b2f163dd8d534ffeb2f7a2", GitTreeState:"clean", BuildDate:"2019-12-22T23:14:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
```
Possible Solution: Got help on the Slack channel.
kubectl get apiservice gave:

```
NAME                               SERVICE                                AVAILABLE                 AGE
...
v1.packages.operators.coreos.com   olm/v1-packages-operators-coreos-com   False (ServiceNotFound)   6d23h
...
```
After deleting the APIService, namespaces terminate properly again.
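The fix from the Slack advice can be wrapped up as follows; the APIService name matches the broken entry in the output above:

```shell
#!/usr/bin/env bash
# Diagnose and fix: list APIServices (the broken one shows
# "False (ServiceNotFound)" as in the output above), then delete it so
# stuck namespaces can finish terminating on their own.
fix_packageserver_apiservice() {
  kubectl get apiservice
  kubectl delete apiservice v1.packages.operators.coreos.com --ignore-not-found
}
```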
Additional context