robscott closed this issue 3 weeks ago
Totally agree on this one. I think that, in addition to a way for marking a conformance test as still experimental, we need to create a process that leads to the graduation of such a test, i.e., when a test can be moved out of experimental.
Let me grab this one. /assign
> marking a conformance test as still experimental
Any thoughts on the naming? I think it would be confusing to describe a test as an "experimental conformance test for an extended feature". Maybe stable/unstable is a better way to describe it? But then we already have the stable/experimental channels... hmm.
Maybe all new conformance tests are given a named status until we have at least 3 implementations passing them, or they've been present in the API for at least 1 release and we have evidence of at least 3 implementations passing them, ideally via submitted conformance reports.
> Maybe all new conformance tests are given a named status until we have at least 3 implementations passing them, or they've been present in the API for at least 1 release and we have evidence of at least 3 implementations passing them, ideally via submitted conformance reports.
+1. We could initialize such a field as `<not-graduated-yet>` and eventually set it to `<graduated>` once the "at least 1 release and have evidence of at least 3 implementations passing them" condition is satisfied (`<not-graduated-yet>` and `<graduated>` are placeholders for the terms we eventually decide to use).
I think we should add some knobs to the conformance suite to opt in to the `<not-graduated-yet>` tests, as we should exclude them by default when running the suite. A problem I can see with this approach is getting new tests to graduation: what's the benefit for an implementation to purposefully run `<not-graduated-yet>` tests?
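To illustrate the opt-in idea, here's a minimal Go sketch of how the suite could skip not-yet-graduated tests by default unless the runner explicitly opts in. The `Status` values, `ConformanceTest` shape, and `filterTests` helper are all hypothetical placeholders for this discussion, not the suite's actual API:

```go
package main

import "fmt"

// Status is a hypothetical graduation status; the thread hasn't settled
// on names yet, so these are placeholders for <not-graduated-yet> and
// <graduated>.
type Status string

const (
	StatusProvisional Status = "Provisional"
	StatusGraduated   Status = "Graduated"
)

// ConformanceTest is a simplified stand-in for the suite's test type.
type ConformanceTest struct {
	Name   string
	Status Status
}

// filterTests mimics the proposed default behavior: provisional tests
// are skipped unless the runner explicitly opts in.
func filterTests(tests []ConformanceTest, includeProvisional bool) []ConformanceTest {
	var out []ConformanceTest
	for _, t := range tests {
		if t.Status == StatusProvisional && !includeProvisional {
			continue
		}
		out = append(out, t)
	}
	return out
}

func main() {
	tests := []ConformanceTest{
		{Name: "HTTPRouteBasic", Status: StatusGraduated},
		{Name: "BackendTLSPolicyNew", Status: StatusProvisional},
	}
	// Default run excludes the provisional test; opting in includes it.
	fmt.Println(len(filterTests(tests, false)))
	fmt.Println(len(filterTests(tests, true)))
}
```

The same shape would work whether the knob is a suite option, a CLI flag, or a feature-set entry; the key point is that the default run stays limited to graduated tests.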
To overcome this problem, we could remove the graduation criterion of having 3 implementations and keep the "1 release cycle" requirement. This way, we force implementations to try and (maybe) fix `<not-graduated-yet>` conformance tests before they graduate. We should make clear before the release which tests are becoming mandatory, and if some implementations have issues with them, those issues should be fixed before the tests graduate (case by case, we can decide to put off a test's graduation until the next release).
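For concreteness, the two graduation policies being compared could be sketched roughly as follows; the function names and parameters are illustrative only, not from the actual suite:

```go
package main

import "fmt"

// originalPolicy reflects the earlier proposal: a test graduates once
// 3 implementations pass it, or once it has been present for at least
// 1 release with evidence (conformance reports) of 3 implementations
// passing it.
func originalPolicy(passing, releases, reports int) bool {
	return passing >= 3 || (releases >= 1 && reports >= 3)
}

// simplifiedPolicy reflects the suggestion above: drop the
// implementation-count criterion and keep only the one-release-cycle
// requirement.
func simplifiedPolicy(releases int) bool {
	return releases >= 1
}

func main() {
	fmt.Println(originalPolicy(0, 1, 3)) // graduates via the report path
	fmt.Println(simplifiedPolicy(0))     // brand-new test: not yet
	fmt.Println(simplifiedPolicy(1))     // graduates after one release cycle
}
```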
Some possible terms for this state:
- Qualifying
- Exploratory
- Preliminary
Some other terms that could be a good fit:
If we want a corresponding term for tests that have graduated from that state, maybe one of the following would work?
- Stable (not actually used in Gateway API yet)
- Qualified
- Established
I'd go with `preliminary|trial` and `stable`.
I did this pretty simply in https://github.com/kubernetes-sigs/gateway-api/pull/3212: just added a new set called `ExperimentalPolicyFeatures` for the experimental `BackendTLSPolicy`.
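A rough, dependency-free sketch of that idea follows. The real suite uses its own `SupportedFeature` type and the `sets` package from k8s.io/apimachinery; the plain maps and the `supported` helper here are simplified stand-ins, and the feature names are just examples:

```go
package main

import "fmt"

// ExperimentalPolicyFeatures groups the experimental policy features into
// their own named set, so the suite can enable them separately from the
// standard features (mirroring the approach in PR 3212).
var ExperimentalPolicyFeatures = map[string]bool{
	"BackendTLSPolicy": true,
}

// StandardCoreFeatures stands in for the features every run exercises.
var StandardCoreFeatures = map[string]bool{
	"HTTPRoute": true,
}

// supported merges the standard features with the experimental set only
// when the implementation opts in.
func supported(optInExperimental bool) map[string]bool {
	out := map[string]bool{}
	for f := range StandardCoreFeatures {
		out[f] = true
	}
	if optInExperimental {
		for f := range ExperimentalPolicyFeatures {
			out[f] = true
		}
	}
	return out
}

func main() {
	fmt.Println(supported(false)["BackendTLSPolicy"]) // excluded by default
	fmt.Println(supported(true)["BackendTLSPolicy"])  // included on opt-in
}
```

Grouping by named set keeps the default suite stable while giving experimental tests a clear home until they graduate.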
What would you like to be added: Some kind of indicator that a conformance test is still "experimental".
Why this is needed: In some cases we want to add conformance tests before we have any implementations of a feature (x-ref https://github.com/kubernetes-sigs/gateway-api/pull/2821). Not being able to meaningfully run a test like this increases the chances that a test could either be wrong or buggy. In general, our conformance tests represent a contract that we should be very hesitant to change, but when tests are new and untested, we likely need to have some kind of label to indicate that they could still change if needed.
cc @dprotaso