mlbiam opened this issue 3 years ago
Can't find the other issue (which made the same request), but the reason we have a single credential per cluster is performance. The Argo CD application controller watches all resources in a managed cluster, and it's not scalable to do this per cluster/service-account combination.
While I can appreciate that, what's the difference between monitoring 25 clusters with a single account or 25 namespaces with their own service accounts? It seems like a linear scalability issue. It appears Flux does this by having a controller per namespace, which isn't great, but I can understand it. Google's Anthos Config Management can also handle this model (where each namespace gets its own service account for its controller).
what's the difference between monitoring 25 clusters with a single account or 25 namespaces with their own service accounts
assuming you have 30 group/kind of resources, the difference is 25 clusters x 30 total connection streams vs. 25 clusters x 25 namespaces x 30 connection streams
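To put numbers on it: 25 x 30 = 750 open watch streams in the current model, versus 25 x 25 x 30 = 18,750 if every namespace had its own credentials.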
There is an issue and stale pull request for adding user impersonation on a per-project basis. I'm not sure why we never looked at it thoroughly, but personally I like the idea. It reuses the same connection to the cluster, but performs impersonation when handling resources:
https://github.com/argoproj/argo-cd/issues/3376 (issue) https://github.com/argoproj/argo-cd/pull/3377 (PR)
assuming you have 30 group/kind of resources, the difference is 25 clusters x 30 total connection streams vs. 25 clusters x 25 namespaces x 30 connection streams
Sorry, I'm missing something. If I have one cluster with 25 namespaces, then Argo would use a different account for each namespace, plus a cluster-admin for global resources; you're saying that's 26*30 connections? If I have 25 separate clusters, each with a single cluster-admin, that's still 25x30 regardless of the number of namespaces. Where's my math wrong?
I'm not sure if you're thinking that an Argo CD instance would be both multi-namespace and multi-cluster? That would be unreasonable. I'd assume one Argo CD per cluster when you're dealing with tenancy at the namespace level.
@jannfis that sounds like a great idea!
It appears Flux does this by having a controller per namespace, which isn't great, but I can understand it.
That actually highlights the problem I'm trying to describe. Flux's model of having a controller per namespace == additional connection streams. Argo CD avoids this problem by using one connection stream (a cluster-scoped watch) per group/kind. Since Argo CD streams live updates of resources as they happen, it needs to establish a watch stream on resources to show these changes live.
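To make that concrete, here is a minimal client-go sketch (illustrative only; the real application controller uses shared informers and caches rather than raw watches like this) contrasting a single cluster-scoped watch with one watch per namespace for the same group/kind:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Cluster credentials as Argo CD uses them today (path is a placeholder).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// One cluster-scoped watch: a single long-lived stream covers this
	// group/kind across every namespace. Argo CD opens roughly one of
	// these per group/kind it tracks in a managed cluster.
	clusterWatch, err := client.CoreV1().ConfigMaps(metav1.NamespaceAll).
		Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer clusterWatch.Stop()

	// Per-namespace watches: the same group/kind now needs one stream per
	// namespace (and per credential), which is where the multiplication
	// in the numbers above comes from.
	for _, ns := range []string{"team-a", "team-b", "team-c"} {
		nsWatch, err := client.CoreV1().ConfigMaps(ns).
			Watch(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		defer nsWatch.Stop()
		fmt.Printf("opened a separate watch stream for namespace %s\n", ns)
	}
}
```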
I should also point out that if you prefer Flux's model of one controller per namespace, this is already possible by using a namespaced installation where Argo CD manages a single namespace. But you would end up with multiple Argo CD instances, which may be what you don't want.
It reuses the same connection to the cluster, but performs impersonation when handling resources:
Impersonation might be a middle ground where we can address the performance concerns by still preserving a single connection per group/kind, but it would still mean that Argo CD needs some sort of cluster-level privileges to manage the cluster. In other words, there would be two sets of credentials:

- the cluster-level credentials used today for the watches
- the impersonated, per-project credentials used for the actual operations (kubectl get/apply/patch/delete)
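As a rough sketch of what that second set of credentials could look like with client-go impersonation (the namespace and service-account names here are made up, and this is not how Argo CD is implemented today):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Credential set 1: the existing cluster-level config, used unchanged
	// for the cluster-scoped watches.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}

	// Credential set 2: the same connection config, but every request is
	// impersonated as the tenant's service account ("team-a" and
	// "argocd-deployer" are placeholder names).
	writeCfg := rest.CopyConfig(cfg)
	writeCfg.Impersonate = rest.ImpersonationConfig{
		UserName: "system:serviceaccount:team-a:argocd-deployer",
	}
	writeClient, err := kubernetes.NewForConfig(writeCfg)
	if err != nil {
		panic(err)
	}

	// The write (the get/apply/patch/delete side) only succeeds if team-a's
	// service account is allowed to do it by RBAC, and the API server's
	// audit log records the impersonated identity alongside the caller.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-config", Namespace: "team-a"},
		Data:       map[string]string{"env": "prod"},
	}
	if _, err := writeClient.CoreV1().ConfigMaps("team-a").
		Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```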
If I have 25 separate clusters, each with a single cluster-admin, that's still 25x30 regardless of the number of namespaces. Where's my math wrong?
Using 25 different service accounts implies that there are 25 different authentication credentials used to manage resources in 25 projects. I think what you might be missing is that Argo CD fundamentally works by having a cluster-level watch on each group/kind of resource. It cannot easily move to per-project credentials for these same watches, because each set of credentials means a new set of watches, exploding the number of open connections to the managed cluster.
That said, I do understand the use case and the desire: service accounts that define what is and isn't permitted could possibly be supported via impersonation. But fundamentally, the cluster-level watches that the application controller currently uses to manage clusters can't expand to use N sets of credentials. At that point, I think you are better off creating N instances of Argo CD.
That's reasonable. I think the important feature here is that when the write operation happens, it does so using the service account / impersonated account. This gives you the auditability of tracking specific changes, helps protect against RBAC bypasses, and leverages Argo's great built-in RBAC.
There's an ongoing proposal with #14255
Summary
When using Argo CD in a multi-tenant environment, it would be good to be able to specify which service account a cluster connection uses, instead of having a 1:1 relationship between clusters and service accounts.
Motivation
When using Argo CD in a multi-tenant environment (where each tenant is scoped to a namespace, not a cluster), you have to rely on the application configuration to limit which objects Argo can sync. This means duplicating the intent of RBAC and making it harder to audit. It would be better for an application to specify both a cluster AND a service account, so the application is limited by the security constraints of that service account. It also makes it easier to track which projects do what in the audit logs.
Proposal
When creating a cluster in Argo CD, do not limit each URL to one entry. Allow multiple cluster entries with the same URL but different service accounts. Then the application can reference a specific cluster/service-account combination instead of just a URL. A sketch of what that could look like follows.
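For illustration only, and assuming the existing declarative cluster-secret format (a Secret labeled `argocd.argoproj.io/secret-type: cluster` with `name`, `server`, and `config` keys), two entries under this proposal might share a `server` URL while carrying different service-account tokens. Names and tokens below are placeholders:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clusterEntry builds a cluster secret in the declarative format Argo CD
// already understands. Under this proposal, several of these could share the
// same "server" value while carrying different service-account tokens.
func clusterEntry(name, server, bearerToken string) *corev1.Secret {
	return &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      name,
			Namespace: "argocd",
			Labels: map[string]string{
				"argocd.argoproj.io/secret-type": "cluster",
			},
		},
		StringData: map[string]string{
			"name":   name,
			"server": server,
			"config": fmt.Sprintf(`{"bearerToken": %q}`, bearerToken),
		},
	}
}

func main() {
	// Same API server URL, two different tenant service-account tokens; an
	// application would then reference "prod-team-a" or "prod-team-b" as its
	// destination instead of the bare URL.
	teamA := clusterEntry("prod-team-a", "https://prod.example.com:6443", "<team-a-token>")
	teamB := clusterEntry("prod-team-b", "https://prod.example.com:6443", "<team-b-token>")
	fmt.Println(teamA.Name, teamB.Name)
}
```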