Closed erwinvaneyk closed 1 year ago
Maybe a better solution is to add a new unique label to each of the deployments? e.g. provider=core, provider=kubeadm-bootstrap, etc.
I think this would also meet the use case but would negate the risk of breaking existing workflows.
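That idea might look like the following (a sketch only; the `provider` label key and its values are illustrative, not an agreed-on scheme):

```yaml
# Hypothetical sketch: give each provider Deployment its own unique selector
# label so selectors never overlap, even when providers share a namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capi-kubeadm-bootstrap-controller-manager
spec:
  selector:
    matchLabels:
      provider: kubeadm-bootstrap   # unique per provider (illustrative value)
  template:
    metadata:
      labels:
        provider: kubeadm-bootstrap
    spec:
      containers:
      - name: manager
        image: example.invalid/kubeadm-bootstrap-controller:latest  # placeholder image
```

Because every provider would get a distinct value, two providers in the same namespace would no longer select each other's pods.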
What exactly are the restrictions on label selectors of existing Deployments? Can we not change them at all?
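For context (my addition, not from the thread): in `apps/v1`, a Deployment's `spec.selector` is immutable after creation, so changing it in place fails and the Deployment has to be deleted and re-created. An illustrative session (error text paraphrased, not a captured log):

```shell
# Attempt to apply a manifest whose spec.selector differs from the live object.
kubectl apply -f deployment-with-new-selector.yaml
# The apiserver rejects it with an error along the lines of:
#   The Deployment "capi-kubeadm-bootstrap-controller-manager" is invalid:
#   spec.selector: Invalid value: ...: field is immutable
```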
Hm, it looks like at least when we update with clusterctl we don't have that problem, as we delete+create the Deployment: https://storage.googleapis.com/kubernetes-jenkins/logs/periodic-cluster-api-e2e-upgrade-v0-3-to-main/1531253143076081664/artifacts/clusters/clusterctl-upgrade-2trxo5/clusterctl-upgrade.log
@sbueringer ah, I did not realize that is the way upgrades are orchestrated. In that case, that at least seems to reduce the scope of this issue.
Unfortunately that only helps when the upgrade is done with clusterctl. Everyone who tries to run CAPI in a GitOps mode without clusterctl will have that problem.
I am trying to deploy multiple instances of CAPI in the same cluster
This is a scenario we are not recommending; see discussion in https://cluster-api.sigs.k8s.io/developer/architecture/controllers/support-multiple-instances.html
I am trying to deploy some Cluster API components in a single namespace
This is something we should support, and if it is not working, then on top of fixing the providers in this repo, we should extend the provider contract and advocate for all providers to implement a similar fix.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/triage accepted
/help
/remove-lifecycle stale
I think we should take a look at label selectors to ensure that deploying all CAPI components in a single namespace works, but support for deploying multiple instances of CAPI in the same cluster is out of scope.
@fabriziopandini: This request has been marked as needing help from a contributor.
Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
I'm not sure I understand the problem:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    cluster.x-k8s.io/provider: bootstrap-kubeadm
  name: capi-kubeadm-bootstrap-webhook-service
  namespace: capi-kubeadm-bootstrap-system
spec:
  ...
  selector:
    cluster.x-k8s.io/provider: bootstrap-kubeadm
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    cluster.x-k8s.io/provider: bootstrap-kubeadm
    control-plane: controller-manager
  name: capi-kubeadm-bootstrap-controller-manager
  namespace: capi-kubeadm-bootstrap-system
spec:
  selector:
    matchLabels:
      cluster.x-k8s.io/provider: bootstrap-kubeadm
      control-plane: controller-manager
```
I understand that you would run into a problem if you deploy multiple instances of the same provider, but multiple different providers in the same namespace should work, I guess (?)
I would also assume that if you try to deploy the same provider multiple times, they will always have the same labels. You will also always run into problems, as the mutating/validating webhook configurations and the CRD conversion webhook can only point to one service.
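To illustrate that point (my sketch, not from the thread): a webhook configuration's `clientConfig.service` names exactly one Service in one namespace, so a second instance of the same provider cannot also receive that webhook traffic. The webhook name and path below are illustrative; the Service name matches the one in the example above:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: capi-kubeadm-bootstrap-mutating-webhook-configuration  # illustrative name
webhooks:
- name: default.kubeadmconfig.bootstrap.cluster.x-k8s.io       # illustrative
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: capi-kubeadm-bootstrap-webhook-service   # exactly one Service...
      namespace: capi-kubeadm-bootstrap-system       # ...in exactly one namespace
      path: /mutate-kubeadmconfig                    # illustrative path
```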
I double-checked @sbueringer's findings, and we are fine for the scenario where all the providers are deployed in the same namespace, because we add cluster.x-k8s.io/provider: ... to each selector.
Given that the scenario of deploying multiple instances has only very limited support, this requires users to apply their own modifications to /config, as described in https://cluster-api.sigs.k8s.io/developer/architecture/controllers/support-multiple-instances.html
I'm closing the issue because no changes are required in CAPI, and the scenario and its implications are already documented.

/close
@fabriziopandini: Closing this issue.
User Story
As an operator, I would like to deploy multiple CAPI components in the same namespace for multi-tenancy and organisational reasons.
Detailed Description
I am trying to deploy some Cluster API components in a single namespace, both because of organisational restrictions and because I am trying to deploy multiple instances of CAPI in the same cluster.
This works great except for one issue: the label selectors of the components are overlapping. Among other things, this is causing webhook requests to get routed to incorrect pods.
Components that have overlapping label selectors:
Each of these components has the default label selector of kubebuilder-generated projects, namely `control-plane: controller-manager`. Other providers do prefix the labels nicely; for instance, CAPA uses the following label: `control-plane: capa-controller-manager`.

We could choose to prefix the `control-plane` label in the components in this repository as follows:

Anything else you would like to add:
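A prefixed label, mirroring CAPA's pattern, might look like the following (the exact value is my illustrative guess, not the author's proposal, which is not shown here):

```yaml
# Hypothetical sketch: prefix the kubebuilder default "controller-manager"
# value per component, the way CAPA does with "capa-controller-manager".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capi-kubeadm-bootstrap-controller-manager
  labels:
    control-plane: capi-kubeadm-bootstrap-controller-manager  # prefixed (illustrative)
spec:
  selector:
    matchLabels:
      control-plane: capi-kubeadm-bootstrap-controller-manager
```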
/kind feature