Closed: chapmanc closed this issue 2 weeks ago
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
@chapmanc This is expected, since you changed the load balancer manually. Managed resources, including the LB, should not be touched manually.
What happened:
K8S service deletion proceeds without cleaning up the load balancer.
What you expected to happen:
The Service should clean up all of its dependent resources. Instead, a warning appears in the events saying that it failed to delete the load balancer, after which it gives up and deletes the Service anyway. The ETag doesn't seem to match, but the controller doesn't refresh it or retry the delete with an updated ETag. This results in the load balancer being orphaned and left behind.
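For reference, a rough sketch of how the failure surfaces, assuming a hypothetical Service named my-svc of type LoadBalancer in the default namespace (the names are placeholders, not taken from the report):

```sh
# The failed LB delete shows up as a Warning event on the Service object.
kubectl describe service my-svc -n default

# Events can also be listed directly for the Service (placeholder name).
kubectl get events -n default --field-selector involvedObject.name=my-svc

# Deleting the Service still succeeds even though the cloud load balancer
# could not be deleted, leaving the LB orphaned at the cloud provider.
kubectl delete service my-svc -n default
```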
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
We have a custom controller that adds some configuration to the LB and removes it again before the Service is deleted. We have verified that all of its modifications are removed and that its manager is no longer running by the time the Service delete happens.
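One way to check whether the in-tree cleanup protection is still in place before deleting is to look at the Service's finalizers; a minimal sketch, assuming the placeholder Service name my-svc and the upstream default finalizer name (verify both against your cluster):

```sh
# The service controller normally adds the finalizer
# service.kubernetes.io/load-balancer-cleanup to LoadBalancer Services;
# while it is present, deletion should block until the cloud LB is removed.
kubectl get service my-svc -o jsonpath='{.metadata.finalizers}'
```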
Environment:
- Kubernetes version (use kubectl version): Server Version: v1.28.9
- OS (e.g: cat /etc/os-release): n/a
- Kernel (e.g. uname -a): n/a