Closed: bowei closed this issue 2 years ago
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale
/remove-lifecycle rotten /lifecycle frozen
This has been fixed with finalizers and GC v2.
From @nicksardo on April 28, 2017 22:5
GC runs at the end of every sync for every Ingress. If there are many Ingress objects, this results in many GCP calls. It's even more troublesome in the case of federated clusters.
We should perform load-balancer-specific GC on a delete notification, and run cluster-wide GC only on a less frequent basis, such as the resyncPeriod (set to 10 min). cc @madhusudancs @csbell
Copied from original issue: kubernetes/ingress-nginx#674
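
A minimal sketch of the proposed split, assuming hypothetical `gcLoadBalancer` and `gcCluster` helpers (not the controller's actual API): a delete notification triggers GC for just that Ingress's load balancer, while the expensive cluster-wide sweep runs only on the resync period instead of on every sync.

```go
package main

import (
	"fmt"
	"time"
)

// gcLoadBalancer cleans up GCP resources belonging to a single Ingress (hypothetical helper).
func gcLoadBalancer(ingressKey string) {
	fmt.Printf("GC load-balancer resources for %s\n", ingressKey)
}

// gcCluster sweeps for orphaned GCP resources not owned by any live Ingress (hypothetical helper).
func gcCluster() {
	fmt.Println("cluster-wide GC: sweep orphaned forwarding rules, backends, etc.")
}

func main() {
	const resyncPeriod = 10 * time.Minute

	// Channel standing in for Ingress delete notifications from an informer.
	deleted := make(chan string)

	// Per-load-balancer GC: run immediately when an Ingress is deleted.
	go func() {
		for key := range deleted {
			gcLoadBalancer(key)
		}
	}()

	// Simulated delete notification (stand-in for an informer delete event).
	go func() {
		time.Sleep(2 * time.Second)
		deleted <- "default/my-ingress"
	}()

	// Cluster-wide GC: run only on the resync period, not on every sync.
	ticker := time.NewTicker(resyncPeriod)
	defer ticker.Stop()
	for range ticker.C {
		gcCluster()
	}
}
```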