kubernetes-sigs / gateway-api

Repository for the next iteration of composite service (e.g. Ingress) and load balancing APIs.
https://gateway-api.sigs.k8s.io
Apache License 2.0

The conformance test suite should also search `Accepted` condition in `Gateway.Status.Conditions`. #3026

Closed · wstcliyu closed this 3 weeks ago

wstcliyu commented 6 months ago

What would you like to be added: The conformance test suite currently looks for the `Accepted` condition in `Gateway.Status.Listeners[].Conditions`. It should also check for the `Accepted` condition in `Gateway.Status.Conditions`. https://github.com/kubernetes-sigs/gateway-api/blob/main/conformance/utils/kubernetes/helpers.go#L716

Why this is needed: Some Gateway implementations only set the `Accepted` condition in `Gateway.Status.Conditions`.
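For illustration, a minimal sketch of what an either-location check could look like, assuming the v1 Go types; `gatewayAccepted` is a hypothetical helper, not the actual function in `conformance/utils/kubernetes/helpers.go`:

```go
package conformance

import (
	"k8s.io/apimachinery/pkg/api/meta"
	gatewayv1 "sigs.k8s.io/gateway-api/apis/v1"
)

// gatewayAccepted reports whether a Gateway should be treated as Accepted.
// It checks the top-level Gateway conditions first, then falls back to
// requiring Accepted=True on every Listener, so implementations that only
// set the condition at the Gateway level would still pass.
func gatewayAccepted(gw *gatewayv1.Gateway) bool {
	// Top-level check: Gateway.Status.Conditions.
	if meta.IsStatusConditionTrue(gw.Status.Conditions, string(gatewayv1.GatewayConditionAccepted)) {
		return true
	}
	// Fallback: every Listener must report Accepted=True.
	if len(gw.Status.Listeners) == 0 {
		return false
	}
	for _, ls := range gw.Status.Listeners {
		if !meta.IsStatusConditionTrue(ls.Conditions, string(gatewayv1.ListenerConditionAccepted)) {
			return false
		}
	}
	return true
}
```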

robscott commented 6 months ago

Although the current conformance checks are per Listener, I do agree that these probably become unnecessarily verbose when a Gateway has lots of Listeners:

https://github.com/kubernetes-sigs/gateway-api/blob/7320257302905254dcebf4f5c424741cf1556c06/conformance/utils/kubernetes/helpers.go#L325-L331

Thinking about potential alternatives, I think we'd end up with something like this:

  1. Gateways must have top-level conditions set for all of the conditions we currently require on Listeners
  2. Listener conditions only need to be set when they differ from the top-level Gateway conditions (i.e. all but one Listener is accepted, so the Gateway is Accepted and error conditions are set only for the problematic Listener)

That kind of change would allow implementations to continue setting all of the conditions they already do, while no longer requiring status to be quite as verbose. In any case, it's too late to make a change like this for v1.1, but it's definitely worth discussing in the v1.2 cycle.

/cc @youngnick
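If the spec moved in that direction, resolving a Listener's effective condition might look roughly like this (a sketch only; `effectiveListenerCondition` is a hypothetical helper, not part of the conformance suite):

```go
package conformance

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	gatewayv1 "sigs.k8s.io/gateway-api/apis/v1"
)

// effectiveListenerCondition resolves one condition for a Listener under the
// proposed rules: use the Listener's own condition if it set one (rule 2),
// otherwise inherit the required top-level Gateway condition (rule 1).
func effectiveListenerCondition(gw *gatewayv1.Gateway, ls gatewayv1.ListenerStatus, condType string) *metav1.Condition {
	if c := meta.FindStatusCondition(ls.Conditions, condType); c != nil {
		// The Listener differs from (or restates) the Gateway-level condition.
		return c
	}
	// No Listener-level entry: fall back to the top-level Gateway condition.
	return meta.FindStatusCondition(gw.Status.Conditions, condType)
}
```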

youngnick commented 5 months ago

We've defined these conditions as positive-polarity conditions, which means the official contract is that they should always be present: a condition's absence implies "not enough information to set this condition yet".
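To make the polarity point concrete, here's a sketch of how a consumer has to read a positive-polarity condition; `conditionState` is a hypothetical helper:

```go
package conformance

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// conditionState illustrates the positive-polarity contract: an absent
// condition does not mean False, it means "not enough information to set
// this condition yet". This is why the spec requires the condition to
// always be present.
func conditionState(conditions []metav1.Condition, condType string) metav1.ConditionStatus {
	c := meta.FindStatusCondition(conditions, condType)
	if c == nil {
		// Absent: the controller has not evaluated this condition yet.
		return metav1.ConditionUnknown
	}
	return c.Status // metav1.ConditionTrue or metav1.ConditionFalse
}
```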

Doing something like this would require us to write out, and then conformance-test, all the rules to ensure that we get this right and that everyone does at least the minimum.

What I don't want to happen here is for implementations to stop publishing any information into Listener status altogether, which is what will happen unless we carefully spec out the change.

Making everything required is definitely more verbose but means that we are guaranteed accuracy in the Listener status.

@wstcliyu, as of right now, the conformance tests check that the Listener Conditions are set because they are required in the spec. We can make them optional, provided we define the rules and make sure those rules are tested properly by conformance. Until then, implementations need to populate all of the status to be conformant, sorry.

robscott commented 5 months ago

@youngnick how would you feel about a change to the spec that looked like this?

  1. Gateways must have top-level conditions set for all of the conditions we currently require on Listeners
  2. Listener conditions only need to be set when they differ from the top-level Gateway conditions (i.e. all but one Listener is accepted, so the Gateway is Accepted and error conditions are set only for the problematic Listener)

youngnick commented 5 months ago

It's a pretty big change in the intent of Gateway status, so I'd like to think about it for a bit. I can't see immediate problems, but I'm worried about implicit assumptions and the changeover process.

The proposed change would essentially give all Listener Conditions negative-polarity behavior with positive-polarity naming, which seems confusing.

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/lifecycle stale

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/lifecycle rotten

k8s-triage-robot commented 3 weeks ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/close not-planned

k8s-ci-robot commented 3 weeks ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/gateway-api/issues/3026#issuecomment-2406704371):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.