Open · mikemorris opened this issue 2 years ago
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/assign
/assign @kflynn
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen /remove-lifecycle rotten /lifecycle staleproof
@craigbox: Reopened this issue.
Hi @craigbox! We see you've re-opened this issue. Generally speaking, on this project we ask that maintainers be involved or consulted in the decision to re-open closed issues, as we are the ones who have to prioritize and work on the logistics. That said, we can definitely make exceptions when needed! We appreciate your interest in the issue, and were wondering: is this something you're interested in personally contributing to, in order to help move it forward?
I will have re-opened this because @mikemorris mentioned it here.
(Maintainers didn't close the issue; the passage of time did, and personally I have little love for that. The problem still exists, even if it's not currently being worked on or tracked.)
We can understand how community members like yourself may be pained by seeing issues auto-close, especially if it's something you're wanting to see implemented. It is, however, the case that the project has limited resources (effectively running on volunteer time), and we simply cannot prioritize and move forward with all issues. We know this can be frustrating, and we are sorry for that frustration, but "Closed" is sometimes the most honest and realistic reflection of an issue's state in terms of priority and project management.
Ideally, we ask that community members consider putting a closed issue on the meeting agenda or mailing list to discuss it there, or be personally willing to invest time in moving it forward before bumping it back open, as this is a better way to breathe new life into the issue, share context, and perhaps garner support from the people who will work directly on it.
All of the above said for the general case, I do think this is perhaps a bit of a special case: there were two people assigned to the issue prior to its closing:
@mikemorris and @kflynn, what are your thoughts on this issue?
/triage needs-information
I think that this is still relevant, and that our new(ish)ly better-defined GEP process is the right way to tackle it, given that we know it's relevant now, that it's becoming more relevant with cloud gateways, and that there are some fascinating cans of worms lurking behind the innocuous title of this issue.
To that end, I'll organize some thoughts and open a discussion. Let's leave this open until that happens.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale pending https://github.com/kubernetes-sigs/gateway-api/issues/1478#issuecomment-2294534288
What would you like to be added:
As a followup to #1426, there is a need to clarify how traffic ingressing through a `Gateway` should or should not respect GAMMA routing configuration, specifically in the case where a `Service` specified as a `backendRef` of an `HTTPRoute` with a `Gateway` `parentRef` may have separately-configured mesh routing rules from an `HTTPRoute` specifying the `Service` as a `parentRef`.

An initial draft of a proposal to address this has been started in https://docs.google.com/document/d/16GZj-XFt6sAi4tMUy9Ckr99znIm6Hy0W0VeawJUdWRw/edit#
Why this is needed:
There are at least two possible approaches to handling this: expecting or allowing a Gateway to implicitly respect GAMMA routing rules (which may be difficult for Gateway API implementations focused on N/S use cases, or when mixing N/S and E/W implementations from different vendors), or requiring more explicit configuration. We should clarify the expected behavior here to facilitate GAMMA implementation, as in the sketch below.
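For illustration only, here is a minimal sketch of the two routes whose interaction is in question. None of this comes from the issue itself: the names (`example-gateway`, `store`, `store-v1`, `store-v2`), namespaces, ports, and weights are hypothetical, and the `apiVersion` should match whatever Gateway API release is in use.

```yaml
# North/south route: attached to a Gateway, forwarding ingress traffic
# to the "store" Service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: store-ingress
  namespace: default
spec:
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: example-gateway
  rules:
  - backendRefs:
    - name: store
      port: 8080
---
# East/west (GAMMA) route: attached to the "store" Service itself,
# splitting mesh traffic between two backend versions.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: store-mesh
  namespace: default
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: store
  rules:
  - backendRefs:
    - name: store-v1
      port: 8080
      weight: 90
    - name: store-v2
      port: 8080
      weight: 10
```

Under the implicit interpretation, a request entering through `example-gateway` and forwarded to the `store` Service would also be subject to the 90/10 split defined in `store-mesh`; under the explicit-configuration interpretation, that split would apply only to in-mesh traffic unless the Gateway is configured to opt in.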