giantswarm / roadmap

Giant Swarm Product Roadmap
https://github.com/orgs/giantswarm/projects/273

Access to clusters in other organizations #1666

Open teemow opened 1 year ago

teemow commented 1 year ago

We have some use cases in which operators on the management cluster need access to different workload clusters that are in other organizations.

Example: The customer installed Argo CD on the management cluster. Argo CD now needs access to all workload clusters managed by that management cluster.

Some customers would even like to allow people in an organization to install applications only within a specific namespace of a cluster.
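
For illustration only (all names below are placeholders, not existing conventions), such namespace-scoped install rights on a workload cluster could be expressed with plain RBAC, e.g.:

```sh
# Sketch: allow a customer group to deploy apps only inside the "team-a" namespace.
# "team-a" and "team-a-deployers" are placeholder names.
kubectl --kubeconfig wc.kubeconfig apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-installer
  namespace: team-a
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["deployments", "services", "configmaps", "secrets", "jobs"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-installer
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-deployers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-installer
  apiGroup: rbac.authorization.k8s.io
EOF
```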

Let's think about this and come up with a model for how this could work in the future.

Related: #1665 #1623

teemow commented 1 year ago

One customer is using a separate service account on the workload cluster to connect Argo CD to it. Using different service accounts for different tools sounds like a good idea imo. We should rethink using one single kubeconfig with cluster-admin access to connect to workload clusters from the management cluster.

Customer:

When creating a new workload cluster, we use a job & shell script to create a service account for Argo CD on the workload cluster and then use a token for this service account to build the secret for Argo CD.
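
Roughly, such a job boils down to the following sketch (names, namespaces, and the API endpoint are placeholders; the cluster secret format is the one documented by Argo CD):

```sh
# Sketch of the flow described above: create a service account on the workload
# cluster, then register it in Argo CD on the management cluster via a cluster secret.
WC_KUBECONFIG=wc.kubeconfig          # admin access to the workload cluster
MC_KUBECONFIG=mc.kubeconfig          # management cluster, where Argo CD runs
WC_API=https://wc.example.com:6443   # workload cluster API endpoint

# 1. Service account for Argo CD on the workload cluster. Bound to cluster-admin
#    here for brevity; a narrower role is exactly what this issue is about.
kubectl --kubeconfig "$WC_KUBECONFIG" -n kube-system create serviceaccount argocd-deployer
kubectl --kubeconfig "$WC_KUBECONFIG" create clusterrolebinding argocd-deployer \
  --clusterrole=cluster-admin --serviceaccount=kube-system:argocd-deployer

# 2. Token for that service account (TokenRequest API, K8s >= 1.24) and the CA data.
TOKEN=$(kubectl --kubeconfig "$WC_KUBECONFIG" -n kube-system create token argocd-deployer)
CA=$(kubectl --kubeconfig "$WC_KUBECONFIG" config view --raw \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')

# 3. Argo CD cluster secret on the management cluster.
kubectl --kubeconfig "$MC_KUBECONFIG" -n argocd apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: wc-example
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: wc-example
  server: ${WC_API}
  config: |
    {
      "bearerToken": "${TOKEN}",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "${CA}"
      }
    }
EOF
```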

Fyi @puja108

puja108 commented 1 year ago

We should rethink using one single kubeconfig with cluster-admin access to connect to workload clusters from the management cluster.

This is definitely a todo and was not intended to stay; the goal is to have separate kubeconfigs with specific roles attached to anything we run. This is also why I wanted Team Rainbow to look into this, as they should be concerned with access management to WCs not only for customers and human end users but also for automation within and outside the MC. I had a roadmap issue for this, but it got merged with human/customer access and closed once we had OIDC and kubectl gs support. I'd like Team Rainbow (cc @weatherhog) to look into solutions so that we can automate fine-grained access to WCs, and to drive the issue so that the other teams migrate away from the shared cluster-admin kubeconfig that was used as a workaround in the early stages of CAPI development. I'll create an issue for CAPI roadmap planning to keep this separate from this concrete issue.

Whether those kubeconfigs are SA- or cert-based is more of an implementation detail, although I've also been advocating for more use of SAs, as certificates still have the downside of not being directly revocable in K8s. We had a recent customer discussion around this, too.
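
As a sketch of the revocability argument (not an existing Giant Swarm tool; names, the endpoint, and the CA file are placeholders): an SA-backed kubeconfig can be cut off immediately, while a client certificate keeps working until it expires, since K8s does not check certificate revocation.

```sh
# Sketch: build a kubeconfig around a short-lived service account token.
WC_API=https://wc.example.com:6443
KCFG=argocd-deployer.kubeconfig

TOKEN=$(kubectl -n kube-system create token argocd-deployer --duration=24h)

kubectl config set-cluster wc-example --server="$WC_API" \
  --certificate-authority=wc-ca.pem --embed-certs=true --kubeconfig="$KCFG"
kubectl config set-credentials argocd-deployer --token="$TOKEN" --kubeconfig="$KCFG"
kubectl config set-context default --cluster=wc-example --user=argocd-deployer --kubeconfig="$KCFG"
kubectl config use-context default --kubeconfig="$KCFG"

# Revocation: deleting the service account invalidates its bound tokens right away;
# a client certificate would keep working until its expiry date.
kubectl -n kube-system delete serviceaccount argocd-deployer
```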

Also relevant to the implementation might be #427, as it could enable implementation routes that are currently not possible.