Closed. robscott closed this issue 1 month ago.
Since the CRDs are shared resources, what safeguards does this approach provide to ensure the Job does not cause breakage among different implementations? For instance, implementation A runs the Job to install version X of the CRDs and later implementation B runs the Job to install version Y of the CRDs. If the schema changes between X and Y versions, a conversion will need to take place, correct?
You're completely right @danehans, to make this safe, we'd need to establish some guardrails that could be fairly limiting. I think the only way to provide safe installation and upgrades would be to limit this to installing newer versions of CRDs included in standard channel. If an experimental CRD was present, it's possible that an upgrade could result in a breaking change.
I think the MVP for this would need to be limited to standard channel since it provides strong backwards compatibility guarantees.
In the future, we'd probably want to extend this to experimental, but that would require more advanced logic, including:
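As a hypothetical sketch of the standard-channel guardrail, the Job could read the gateway.networking.k8s.io/bundle-version and gateway.networking.k8s.io/channel annotations that the published Gateway API CRDs carry, and only proceed when the installed CRD came from the standard channel and the bundled version is strictly newer (the CRD name and bundled version below are illustrative):

```shell
#!/usr/bin/env bash
# Sketch only: decide whether it is safe to apply a bundled CRD over one
# already installed in the cluster. Assumes the installed CRDs carry the
# gateway.networking.k8s.io/bundle-version and .../channel annotations,
# as the upstream install manifests do.

# Returns 0 (safe) only when the installed CRD is from the standard channel
# and the bundled version is strictly newer than the installed one.
safe_to_upgrade() {
  local installed_version="$1" installed_channel="$2" bundled_version="$3"

  # Never overwrite experimental-channel CRDs: a schema change there could
  # be breaking, so bail out and leave them untouched.
  [ "$installed_channel" = "standard" ] || return 1

  # Only upgrade, never downgrade: "sort -V" puts the older version first,
  # so the installed version must sort strictly before the bundled one.
  [ "$installed_version" != "$bundled_version" ] &&
    [ "$(printf '%s\n%s\n' "$installed_version" "$bundled_version" | sort -V | head -n1)" = "$installed_version" ]
}

# Example wiring (needs a cluster; guarded so the sketch runs standalone):
if command -v kubectl >/dev/null 2>&1; then
  crd=gatewayclasses.gateway.networking.k8s.io
  iv=$(kubectl get crd "$crd" -o jsonpath='{.metadata.annotations.gateway\.networking\.k8s\.io/bundle-version}')
  ic=$(kubectl get crd "$crd" -o jsonpath='{.metadata.annotations.gateway\.networking\.k8s\.io/channel}')
  safe_to_upgrade "$iv" "$ic" "v1.1.0" && echo "upgrade $crd" || echo "skip $crd"
fi
```

Note that treating "annotations missing" as unsafe (as this sketch does) also covers CRDs installed by hand or by tools that strip the annotations.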
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Taken from my comment in https://github.com/kubernetes-sigs/gateway-api/pull/2951#issuecomment-2043491641
I'm not sure using a Job to bootstrap the Gateway API CRDs is possible before the CNI is ready, as that is the case when bootstrapping Cilium to use its Gateway API support. I'm testing different implementations to better learn Gateway API.
I'd always assumed that Cilium's Envoy-based Gateway API implementation was deployed separately from CNI, @sayboras can you confirm if this approach would be problematic for Cilium?
> I'd always assumed that Cilium's Envoy-based Gateway API implementation was deployed separately from CNI
Yes, you are correct. The Gateway API provisioning part is part of the Cilium Operator, which is separate from the Cilium Agent and Cilium CNI components.
> Can you confirm if this approach would be problematic for Cilium?
I don't think there will be any problem due to the reasons mentioned above.
@sayboras Hello!
When users use the Helm chart to bootstrap the Cilium CNI with gatewayAPI.enabled in a new cluster, is the default cilium GatewayClass the only missing resource if the Gateway API CRDs were not installed beforehand?
I currently use a multi-step installation process:
Is it equivalent to the following approach?
@robscott More specifically, does this mean that in the future the Cilium Helm installation method would embed the Gateway API CRD bootstrap/upgrade Kubernetes Job?
> Is it equivalent to the following approach?
Not really equivalent; however, once https://github.com/cilium/cilium/issues/29207 is done, the installation process will be easier (though you might still need to provision the Cilium GatewayClass outside of the Helm chart).
I see. If we use a Kubernetes Job (as discussed in this issue) to install the Gateway API CRDs, then once https://github.com/cilium/cilium/issues/29207 is done, this Job and the Cilium Helm bootstrap can basically be started in parallel and both will eventually complete, without leaving the Job stuck in a pending state. Is my understanding correct?
The Gateway API provisioning is part of the Cilium Operator, which is separate from the Cilium Agent and Cilium CNI components, so any pod will be scheduled regardless of Gateway API CRD installation. The work mentioned in https://github.com/cilium/cilium/issues/29207 is to improve the user experience and avoid a manual Cilium Operator restart.
Thanks for the above and previous clarification.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What would you like to be added: We could create a simple Kubernetes Job that could be bundled with implementations to install Gateway API CRDs if they don't already exist. This job would have the following configuration:
This would need to have the following logic for each Gateway API CRD:
All of this could theoretically be built with the registry.k8s.io/kubectl image.
Why this is needed: Many implementations want an easy way to bundle CRDs with their installation, but they also don't want to conflict with other installations of Gateway API in the cluster. This could provide a reasonably safe mechanism to ensure that the CRDs are present and at a minimum version. This could also be bundled in a Helm chart (https://github.com/kubernetes-sigs/gateway-api/issues/1590) to bypass some of the limitations of including CRDs directly in a Helm chart.
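As a purely illustrative sketch of how such a Job could be packaged (the names, namespace, image tag, and release URL here are assumptions based on upstream conventions, not a published manifest; a real Job would also need the version-comparison guardrails discussed earlier rather than a blind apply):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: gateway-api-crd-install   # illustrative name
  namespace: gateway-system       # illustrative namespace
spec:
  backoffLimit: 3
  template:
    spec:
      # The ServiceAccount would need RBAC to get/create/update CRDs.
      serviceAccountName: gateway-api-crd-installer
      restartPolicy: OnFailure
      containers:
        - name: install-crds
          image: registry.k8s.io/kubectl:v1.30.0  # tag is illustrative
          command:
            - kubectl
            - apply
            # standard-install.yaml follows the upstream release-asset
            # naming; pinning to standard channel avoids the breaking
            # changes possible with experimental CRDs.
            - -f
            - https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/standard-install.yaml
```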
Note: This is not ready to work on yet. We first need to get some feedback on this idea to ensure that it actually makes sense before starting any development.