kubernetes-sigs / cluster-api

Home for Cluster API, a subproject of sig-cluster-lifecycle
https://cluster-api.sigs.k8s.io

Prefix selector labels of core components #6571

Closed erwinvaneyk closed 1 year ago

erwinvaneyk commented 2 years ago

User Story

As an operator, I would like to deploy multiple CAPI components in the same namespace for multi-tenancy and organisational reasons.

Detailed Description

I am trying to deploy some Cluster API components in a single namespace, both because of organizational restrictions and because I am trying to deploy multiple instances of CAPI in the same cluster.

This works great except for one issue: the label selectors of the components overlap. Among other problems, this causes webhook requests to be routed to incorrect pods.

Components that have overlapping label selectors:

Each of these components has the default label selector of kubebuilder-generated projects, namely `control-plane: controller-manager`. Other providers do prefix the label nicely; for instance, CAPA uses `control-plane: capa-controller-manager`.

We could choose to prefix the control-plane label in the components in this repository as follows:
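For illustration, the overlap and the kind of prefixing being proposed might look roughly like this (the prefixed value for this repository is a hypothetical sketch that simply follows the CAPA pattern mentioned above, not the exact proposal):

```yaml
# Default label from kubebuilder-generated projects, shared by the core components:
metadata:
  labels:
    control-plane: controller-manager
---
# CAPA already prefixes the value, so its selector does not collide:
metadata:
  labels:
    control-plane: capa-controller-manager
---
# A hypothetical prefixed value for e.g. the core provider in this repository:
metadata:
  labels:
    control-plane: capi-controller-manager
```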

Anything else you would like to add:

/kind feature

killianmuldoon commented 2 years ago

Maybe a better solution is to add a new unique label to each of the deployments? E.g. `provider=core`, `provider=kubeadmbootstrap`, etc.

I think this would also meet the use case but would negate the risk of breaking existing workflows.
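A rough sketch of what that could look like, assuming the new label is also propagated to the selector and the pod template so that Services can select on it (all names and values below are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capi-controller-manager           # illustrative
  labels:
    control-plane: controller-manager     # existing default label, unchanged
    provider: core                        # new unique label per provider
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
      provider: core                      # makes the selector unique per provider
  template:
    metadata:
      labels:
        control-plane: controller-manager
        provider: core
    spec:
      containers:
        - name: manager
          image: registry.k8s.io/cluster-api/cluster-api-controller:v1.x.y   # illustrative tag
```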

sbueringer commented 2 years ago

What exactly are the restrictions on label selectors of existing Deployments? Can we not change them at all?

sbueringer commented 2 years ago

Hm, it looks like we don't have that problem when we upgrade with clusterctl at least, as we delete + create the Deployment: https://storage.googleapis.com/kubernetes-jenkins/logs/periodic-cluster-api-e2e-upgrade-v0-3-to-main/1531253143076081664/artifacts/clusters/clusterctl-upgrade-2trxo5/clusterctl-upgrade.log
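For context on why existing Deployments are tricky here: `spec.selector` of an apps/v1 Deployment is immutable after creation, so changing the selector labels in place is rejected by the API server ("field is immutable"), and the object has to be deleted and recreated instead, which is what clusterctl does during an upgrade. A minimal sketch of the field in question:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capi-kubeadm-bootstrap-controller-manager
  namespace: capi-kubeadm-bootstrap-system
spec:
  selector:
    matchLabels:
      # Immutable after creation: editing this value on the live object is
      # rejected, so a selector change requires delete + create.
      control-plane: controller-manager
  # pod template omitted for brevity
```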

erwinvaneyk commented 2 years ago

@sbueringer ah, I did not realize that is the way upgrades are orchestrated. In that case, that at least reduces the scope of the issue.

sbueringer commented 2 years ago

Unfortunately that only helps when the upgrade is done with clusterctl. Everyone who tries to run CAPI in a GitOps mode without clusterctl will have that problem.

fabriziopandini commented 2 years ago

> I am trying to deploy multiple instances of CAPI in the same cluster

This is a scenario we are not recommending; see discussion in https://cluster-api.sigs.k8s.io/developer/architecture/controllers/support-multiple-instances.html

> I am trying to deploy some Cluster API components in a single namespace

This is something we should support. If it is not working, then on top of fixing the providers in this repo, we should extend the provider contract and advocate for all providers to implement a similar fix.

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

fabriziopandini commented 1 year ago

/triage accepted
/help
/remove lifecycle-stale

I think we should take a look at the label selectors to ensure that deploying all CAPI components in a single namespace works, but support for deploying multiple instances of CAPI in the same cluster is out of scope.

k8s-ci-robot commented 1 year ago

@fabriziopandini: This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.

In response to [this](https://github.com/kubernetes-sigs/cluster-api/issues/6571):

> /triage accepted
> /help
> /remove lifecycle-stale
>
> I think we should take a look at the label selectors to ensure that deploying all CAPI components in a single namespace works, but support for deploying multiple instances of CAPI in the same cluster is out of scope.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

sbueringer commented 1 year ago

I'm not sure I understand the problem:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    cluster.x-k8s.io/provider: bootstrap-kubeadm
  name: capi-kubeadm-bootstrap-webhook-service
  namespace: capi-kubeadm-bootstrap-system
spec:
...
  selector:
    cluster.x-k8s.io/provider: bootstrap-kubeadm
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    cluster.x-k8s.io/provider: bootstrap-kubeadm
    control-plane: controller-manager
  name: capi-kubeadm-bootstrap-controller-manager
  namespace: capi-kubeadm-bootstrap-system
spec:
  selector:
    matchLabels:
      cluster.x-k8s.io/provider: bootstrap-kubeadm
      control-plane: controller-manager
```

I understand that you would run into a problem if you deploy multiple instances of the same provider, but multiple different providers in the same namespace should work, I guess (?)

I would also assume that if you try to deploy the same provider multiple times, they will always have the same labels. You will also always run into problems, as the mutating / validating webhook configurations and the CRD conversion webhook can only point to one service.
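To make the webhook point concrete: each mutating/validating webhook configuration (and the CRD conversion webhook) references exactly one Service in its `clientConfig`, so only one instance of a provider can receive those admission requests. A rough sketch, with illustrative names:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: capi-kubeadm-bootstrap-validating-webhook-configuration   # illustrative
webhooks:
  - name: validation.kubeadmconfig.bootstrap.cluster.x-k8s.io     # illustrative
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        # Only a single Service (and therefore a single provider instance)
        # can be referenced here.
        name: capi-kubeadm-bootstrap-webhook-service
        namespace: capi-kubeadm-bootstrap-system
    rules:
      - apiGroups: ["bootstrap.cluster.x-k8s.io"]
        apiVersions: ["v1beta1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["kubeadmconfigs"]
```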

fabriziopandini commented 1 year ago

I double-checked @sbueringer's findings and we are fine for the scenario where all the providers are deployed in the same namespace, because we add `cluster.x-k8s.io/provider: ...` to each selector.

Given that the scenario of deploying multiple instances has only very limited support, it requires users to apply their own modifications to /config, as described in https://cluster-api.sigs.k8s.io/developer/architecture/controllers/support-multiple-instances.html

I'm closing the issue because no changes are required in CAPI, and the scenario and its implications are already documented.

/close

k8s-ci-robot commented 1 year ago

@fabriziopandini: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/cluster-api/issues/6571#issuecomment-1308601371):

> I double-checked @sbueringer's findings and we are fine for the scenario where all the providers are deployed in the same namespace, because we add `cluster.x-k8s.io/provider: ...` to each selector.
>
> Given that the scenario of deploying multiple instances has only very limited support, it requires users to apply their own modifications to /config, as described in https://cluster-api.sigs.k8s.io/developer/architecture/controllers/support-multiple-instances.html
>
> I'm closing the issue because no changes are required in CAPI, and the scenario and its implications are already documented.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.