Open pamelachristie opened 4 years ago
Related to #1421
We've run into this too.
@ecordell I think this needs a higher priority as it's a valid scenario for both upstream and RH.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This is still a valid bug.
@kramvan1 we ran into the same problem, as we need to replace Ubuntu 18 nodes with Ubuntu 16. Is there any outlook for this problem? It blocks us from going forward. We don't want to drop a node when its pod is not backed by a ReplicaSet!
@mhaideibm I think we need to get this on the roadmap for OLM. https://docs.google.com/document/d/1Zuv-BoNFSwj10_zXPfaS9LWUQUCak2c8l48d0-AhpBw/edit#heading=h.8ngolbigvi7q
Bump
Bump
This issue has been automatically marked as stale because it has not had any recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contribution. For more help on your issue, check out the olm-dev channel on the Kubernetes Slack [1] and the OLM Dev Working Group [2].
[1] https://kubernetes.slack.com/archives/C0181L6JYQ2
[2] https://github.com/operator-framework/community#operator-lifecycle-manager-wg
There is a tracking kube bug here: https://github.com/kubernetes/kubernetes/issues/57049
Does drain with --force not solve the problem until this is addressed upstream?
A drain with --force is not a viable solution in this case. The pod had to be deleted with --force (which does not truly delete it) before the drain could continue. That meant manual intervention, because the pod blocked the drain.
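For reference, the behaviour described above can be reproduced with plain kubectl. This is a sketch: `<NODE>` is a placeholder, and the exact error text may vary by kubectl version.

```shell
# Draining a node that hosts a bare (controller-less) pod fails;
# kubectl refuses with an error along the lines of:
#   error: cannot delete Pods not managed by ReplicationController,
#   ReplicaSet, Job, DaemonSet or StatefulSet
kubectl drain <NODE> --ignore-daemonsets

# --force overrides the check, but the bare pod is simply discarded
# and nothing recreates it, so manual follow-up is required:
kubectl drain <NODE> --ignore-daemonsets --force
```

Because the CatalogSource pod has no owning controller, nothing brings it back after the forced eviction, which is why --force only works around the symptom.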
I noticed the PR for this was closed; is there any outlook on a replacement PR to actually get this addressed?
Bug Report

The CatalogSource pod blocks `kubectl drain` commands as it is not managed by a Kubernetes controller.

What did you do?
Ran `kubectl drain <NODE>` on a cluster with a CatalogSource pod present.

What did you expect to see?
The `kubectl drain` should have proceeded without the pod causing a failure.

What did you see instead? Under which circumstances?
The drain was blocked. This result is due to the pod not being managed by a Kubernetes controller.
Environment

operator-lifecycle-manager version: 0.14.1
Kubernetes version information: 1.16
Kubernetes cluster kind: IBM Cloud

Possible Solution
Create a Deployment or some other kind of Kubernetes controller to manage the pod, rather than just the CatalogSource custom resource.

Additional context
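As a rough illustration of the proposed fix, the registry pod could be owned by a Deployment so that drain can evict it and the ReplicaSet reschedules it elsewhere. The names, labels, and image below are hypothetical; this is a sketch of the idea, not the actual OLM manifest.

```yaml
# Hypothetical Deployment wrapping a catalog registry pod.
# With an owning controller, kubectl drain can evict the pod
# and the ReplicaSet recreates it on another node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-catalog          # hypothetical name
  namespace: olm
spec:
  replicas: 1
  selector:
    matchLabels:
      olm.catalogSource: example-catalog
  template:
    metadata:
      labels:
        olm.catalogSource: example-catalog
    spec:
      containers:
      - name: registry-server
        image: example.com/catalog-index:latest   # hypothetical image
        ports:
        - containerPort: 50051
          name: grpc
```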