Closed by jessicaochen 6 years ago
Note that since this was opened we added MachineSets and MachineDeployments which should also be supported.
I can work on this. /assign
@ashish-amarnath: GitHub didn't allow me to assign the following users: ashish-amarnath.
Note that only kubernetes-sigs members and repo collaborators can be assigned. For more information please see the contributor guide
Add an `OperatingNamespace` parameter to `create_cluster`, `delete_cluster`, and `validate_cluster`, with a default value pointing to the `default` namespace. Change the `ClusterClient` methods to take the namespace and use the supplied value when calling the client-go methods.
Need to clarify: `applyClusterAPIStack`, which also points to the default namespace.
IMO there isn't much value in allowing the creation of cluster objects in multiple namespaces. However, there is definitely value in running the controllers in different namespaces, each with a service account that has access to the cluster objects in the single namespace where they get created.
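As a rough sketch of what threading that namespace through the client could look like (the interface below is a trimmed, hypothetical version of `clusterclient.go`; the method names and import path are assumptions, not the actual API):

```go
// Illustrative only: a namespace-aware subset of the clusterctl client.
// Method names and the Cluster import path are assumptions, not the
// actual clusterclient.go API.
package clusterdeployer

import (
	clusterv1 "sigs.k8s.io/cluster-api/pkg/apis/cluster/v1alpha1"
)

type ClusterClient interface {
	// CreateClusterObject creates the Cluster in the supplied namespace
	// instead of assuming "default".
	CreateClusterObject(cluster *clusterv1.Cluster, namespace string) error
	// GetClusterObjects lists Cluster objects in the supplied namespace only.
	GetClusterObjects(namespace string) ([]*clusterv1.Cluster, error)
	// DeleteClusterObjects removes all Cluster objects from the supplied namespace.
	DeleteClusterObjects(namespace string) error
}
```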
E.g. let's say there is a namespace `cluster-registry` in the 'external-cluster', and controllers running in other namespaces. All controllers can then watch for cluster objects in the `cluster-registry` namespace and filter, say based on labels, which objects they want to reconcile.
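As a rough illustration of that pattern (assuming a recent client-go, the `cluster.k8s.io/v1alpha1` group/version, and a made-up label key), a controller could list only the namespace it cares about:

```go
// Illustrative only: a controller confined to the cluster-registry
// namespace, filtering which Cluster objects it reconciles by label.
// The group/version and label key are assumptions.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	gvr := schema.GroupVersionResource{Group: "cluster.k8s.io", Version: "v1alpha1", Resource: "clusters"}

	// List (or watch) only the cluster-registry namespace, and use a label
	// selector so each controller picks up just the Clusters it owns.
	list, err := client.Resource(gvr).Namespace("cluster-registry").List(
		context.TODO(), metav1.ListOptions{LabelSelector: "controller=team-a"})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Items {
		fmt.Println(c.GetNamespace() + "/" + c.GetName())
	}
}
```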
@jessicaochen WDYT?
My understanding about what the community decided regarding namespaces and clusters is that cluster objects will be in different namespaces and there will only be one cluster object per namespace. Any machine objects in the same namespace as a cluster object belong to that cluster (this specific point about how machines link to clusters might change).
https://github.com/kubernetes/kube-deploy/issues/463
I think we should stick with the community-agreed model. Feel free to counter propose in the community meeting and get community consensus if you feel strongly that we should be keeping all cluster objects in one namespace. @roberthbailey FYI
> My understanding about what the community decided regarding namespaces and clusters is that cluster objects will be in different namespaces and there will only be one cluster object per namespace.

There were folks that wanted to put multiple clusters into a single namespace so that they could share things like credentials for a cloud provider. At that point it would be similar (conceptually) to having two GKE clusters in the same GCP project -- the namespace is like the project and you want the same access for developers to both clusters.

> Any machine objects in the same namespace as a cluster object belong to that cluster

There is an open issue (and maybe PR) to make a tighter link to support the above use case.

> ... if you feel strongly that we should be keeping all cluster objects in one namespace.

I don't think anyone was advocating for having them all in a single namespace, but rather for having the flexibility of having more than one per namespace.
Issue #41 was discussed during the meeting on June 20th (notes). @mvladev had an action item to add some comments to the issue, but it looks like they weren't extracted from the conversation and put into GitHub (to be more easily found).
The summary is that we agreed to add an optional reference from Machine -> Cluster so that you could have multiple clusters in the same namespace and be able to identify which machines belong to which cluster.
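To make that concrete, the optional reference could look something like the sketch below; the field name and the use of a `LocalObjectReference` are purely illustrative, not the actual Machine API:

```go
// Hypothetical shape of an optional Machine -> Cluster reference.
// The field name and type are illustrative, not the real MachineSpec.
package v1alpha1

import corev1 "k8s.io/api/core/v1"

type MachineSpec struct {
	// ClusterRef optionally names the Cluster, in the same namespace,
	// that this Machine belongs to. When empty, a controller could fall
	// back to "the only Cluster in this namespace" behavior.
	ClusterRef *corev1.LocalObjectReference `json:"clusterRef,omitempty"`

	// ...existing MachineSpec fields elided...
}
```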
https://github.com/kubernetes-sigs/cluster-api/blob/master/clusterctl/clusterdeployer/clusterclient.go
Currently, all the cluster-api objects are created in the `default` namespace, which is kinda in line with my initial idea.
After gathering thoughts from other folks, I have evaluated 3 approaches to solve this:
Approach 1: One namespace for all cluster objects. The cluster deployer will create all cluster-api objects in one namespace.
Pros:
Cons:

Approach 2: Namespace per cluster. The cluster deployer will create a namespace for every cluster.
Pros:
Cons:
- Listing clusters requires `kubectl get clusters --all-namespaces -l<some-filter>`

Approach 3: Allow the cluster deployer to accept the namespace where the cluster-api objects will be created. Cluster objects will be namespace scoped and the namespace will be part of the cluster spec.
Pros:
- `kubectl get clusters -n public-clusters` -> a namespace for all public clusters
Cons:
Based on the above evaluation, Approach 3 is the best option.
Feel free to correct me if I've gotten something wrong or I am missing anything.
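A minimal sketch of Approach 3, assuming a deployer-level target namespace (the type, field, and function names below are hypothetical, not the existing clusterdeployer code):

```go
// Hypothetical sketch of Approach 3: the deployer accepts a target
// namespace and stamps it onto every cluster-api object before creating
// them in the external cluster. Names are illustrative.
package clusterdeployer

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/pkg/apis/cluster/v1alpha1"
)

type ClusterDeployer struct {
	// TargetNamespace is where all cluster-api objects will be created;
	// empty means "default", preserving the current behavior.
	TargetNamespace string
}

func (d *ClusterDeployer) applyNamespace(cluster *clusterv1.Cluster, machines []*clusterv1.Machine) {
	ns := d.TargetNamespace
	if ns == "" {
		ns = metav1.NamespaceDefault
	}
	cluster.Namespace = ns
	for _, m := range machines {
		m.Namespace = ns
	}
}
```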
In the current Cluster API architecture, the common controller code must be changed to support the different approaches listed above. If a provider's use case is not supported, the provider must choose between merging its changes to the Cluster API, or forking. In light of that, I think it's important to keep potential use cases in mind.
For example, an enterprise could run a permanent external cluster with the Cluster API. It could give internal organizations broad permissions within different namespaces in that cluster. To support this use case, the Cluster API common controllers would have to reconcile multiple Cluster objects in the same namespace--and, as a consequence, be able to associate Machine objects to some Cluster object.
@ashish-amarnath:
I wonder if we can split the backend work from the UX design. I think these are the pieces of the backend design:
1. `clusterctl` deploys the apiserver within the `default` namespace:
https://github.com/kubernetes-sigs/cluster-api/blob/051f338bdacb76117d73a86258bc58c946add7b5/clusterctl/clusterdeployer/clusterapiserver.go#L48
https://github.com/kubernetes-sigs/cluster-api/blob/051f338bdacb76117d73a86258bc58c946add7b5/clusterctl/clusterdeployer/clusterapiservertemplate.go#L19
Maybe we should leave this as is for now. There can only be one Cluster API extension server deployed to a cluster anyway.
2. Operators deploy to the namespace specified by `providerconfig.yaml`.
3. `Cluster`, `Machines`, etc. are deployed to the namespace specified by the `cluster.yaml` and `machine.yaml` manifests, but the clusterclient code assumes the default namespace:
https://github.com/kubernetes-sigs/cluster-api/blob/051f338bdacb76117d73a86258bc58c946add7b5/clusterctl/clusterdeployer/clusterclient.go#L109
In the future we may want to support more than one cluster per namespace; however, for now it is not possible, at least in part because we need a link between clusters and machines.
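One way the clusterclient could stop assuming `default` is to prefer the namespace already carried on the decoded object's metadata; a hedged sketch (the helper name is made up):

```go
// Hypothetical helper: prefer the namespace carried on the object's
// metadata (i.e. whatever cluster.yaml / machine.yaml specified) and
// fall back to "default" only when it is empty.
package clusterdeployer

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

func namespaceOrDefault(obj metav1.Object) string {
	if ns := obj.GetNamespace(); ns != "" {
		return ns
	}
	return metav1.NamespaceDefault
}
```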
FWIW, for our SSH we are assuming one cluster per namespace (no need for strong refs) and one controller per cluster (better isolation).
@davidewatson In the change that I am working on atm, for the cluster `Create` I use the namespace from the cluster definition yaml, falling back to `NamespaceDefault` if it is empty. This way there is no change in UX. However, for the `Delete` I think a UX change is inevitable.
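The kind of UX change being discussed for delete could amount to an explicit namespace flag; a hedged sketch (the flag and command wiring below are assumptions, not clusterctl's actual CLI):

```go
// Hypothetical sketch: an explicit --namespace flag on "delete cluster",
// defaulting to "default" so existing invocations keep working.
package main

import (
	"github.com/spf13/cobra"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var deleteNamespace string

var deleteClusterCmd = &cobra.Command{
	Use:   "cluster",
	Short: "Delete the cluster object (and its machines) in the given namespace",
	RunE: func(cmd *cobra.Command, args []string) error {
		// ...look up the Cluster in deleteNamespace and tear it down...
		return nil
	},
}

func init() {
	deleteClusterCmd.Flags().StringVarP(&deleteNamespace, "namespace", "n",
		metav1.NamespaceDefault, "namespace of the cluster object to delete")
}

func main() {
	_ = deleteClusterCmd.Execute()
}
```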
So to be consistent I am thinking of making the UX similar for the create scenario as well.
WDYT?
Fair, that's a good point.
/assign
@ashish-amarnath: GitHub didn't allow me to assign the following users: ashish-amarnath.
Note that only kubernetes-sigs members and repo collaborators can be assigned. For more information please see the contributor guide
Support cluster and machine objects in multiple namespaces in `clusterctl create`, as it currently assumes the default namespace.