Closed · marlonramos51 closed this issue 2 weeks ago
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Is your feature request related to a problem? Yes. In my company we developed a blue/green deployment for major infrastructure changes, driven by an internal process. We have 4 different repositories to deploy: infra EKS cluster-blue, infra EKS cluster-green, network stack (NLBs, target groups, routes, etc.), and the application.
To keep it simple, let's say we have a single NLB and two target groups, TG-BLUE and TG-GREEN, where the blue cluster is active and its target group is attached to the NLB. When I want to activate the other cluster, I switch the NLB from TG-BLUE to TG-GREEN. So at any time only one target group is attached to the NLB, while the other remains detached.
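The switch described above can be sketched with the AWS CLI (a minimal illustration only: the target group name `TG-GREEN` matches the example, `LISTENER_ARN` is a placeholder for the NLB listener's ARN, and a simple forward default action is assumed):

```shell
# Look up the ARN of the green target group by name.
TG_GREEN_ARN=$(aws elbv2 describe-target-groups \
  --names TG-GREEN \
  --query 'TargetGroups[0].TargetGroupArn' --output text)

# Repoint the NLB listener's default action at the green target group.
# LISTENER_ARN must be set to the listener of the NLB being switched.
aws elbv2 modify-listener \
  --listener-arn "$LISTENER_ARN" \
  --default-actions Type=forward,TargetGroupArn="$TG_GREEN_ARN"
```

After this call, new connections flow to TG-GREEN while TG-BLUE stays registered but detached from the listener, matching the scenario described.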
If the readiness gate is enabled and I deploy the application to the cluster whose target group is not yet attached to the NLB, the readiness workflow never completes, reporting "Target group is not configured to receive traffic".
Describe the solution you'd like
I'd like a way to specify conditions when enabling elbv2.k8s.aws/pod-readiness-gate-inject, so that the AWS Load Balancer Controller can ignore specific statuses, for instance (proposed syntax):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    elbv2.k8s.aws/pod-readiness-gate-inject: enabled
    conditions: status not in "No traffic"
```
Describe alternatives you've considered
None:
- Leaving a fake NLB around just so both target groups are always attached is not an option
- I don't want to use instance mode as the target backend