argoproj-labs / argocd-vault-plugin

An Argo CD plugin to retrieve secrets from Secret Management tools and inject them into Kubernetes secrets
https://argocd-vault-plugin.readthedocs.io
Apache License 2.0

ArgoCD Vault Plugin secured multitenancy #491

Open Orabig opened 1 year ago

Orabig commented 1 year ago

I've found a configuration that seems to tackle this issue, and I'd like some feedback about it. Do you see any caveats, security risks, or issues with this?

Is your feature request related to a problem?

The need arises when working with multi-tenant secrets. The documentation on this subject did not seem like a valid solution, because it lets the Application manifest choose which Vault credential to use. Thus, in a multi-tenant environment, there is no way to prevent Team A from using the credentials of Team B. In my Argo CD instance, I chose to let teams deploy their own Applications in their namespaces (the "Applications in any namespace" feature). The drawback is that the administrator doesn't have any control over the Applications deployed (other than via the AppProject, which defines which namespaces they can deploy to).

Thus, I tried to find a way to:

Describe your solution

The credentials must be declared in Secrets named "argocd-vault-plugin-credentials" in the namespace where they are needed. The plugin is configured to use these credentials instead of 'argocd:argocd-vault-plugin-credentials'. I make use of the $ARGOCD_APP_NAMESPACE environment variable, which gives the namespace of the application:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  configManagementPlugins: |-
    - name: argocd-vault-plugin
      generate:
        command: ["sh", "-c"]
        args: ['argocd-vault-plugin generate ./ -s "$ARGOCD_APP_NAMESPACE:argocd-vault-plugin-credentials"']

For this to work, the argocd-repo-server must be given access to the Vault credentials. This is done with the following RBAC definitions:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
# This ClusterRole is necessary so that the Argo CD repo server is able to read Vault credentials.
# They are stored in any namespace under the name argocd-vault-plugin-credentials
metadata:
  labels:
    app.kubernetes.io/name: argocd-repo-server-secret-read
    app.kubernetes.io/part-of: argocd
    app.kubernetes.io/component: server
  name: argocd-repo-server-secret-read
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  resourceNames:
  - argocd-vault-plugin-credentials
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: argocd-repo-server-secret-read
    app.kubernetes.io/part-of: argocd
    app.kubernetes.io/component: server
  name: argocd-repo-server-secret-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argocd-repo-server-secret-read
subjects:
- kind: ServiceAccount
  name: argocd-repo-server
  namespace: argocd
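
For completeness, here is a sketch of what each tenant's per-namespace credentials Secret could look like. The key names follow the argocd-vault-plugin documentation for Vault AppRole authentication; the namespace and all values are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # One copy of this Secret per tenant namespace; its name must match
  # the -s argument in the plugin configuration above.
  name: argocd-vault-plugin-credentials
  namespace: team-a                # placeholder tenant namespace
type: Opaque
stringData:
  VAULT_ADDR: https://vault.example.com
  AVP_TYPE: vault
  AVP_AUTH_TYPE: approle
  AVP_ROLE_ID: <team-a-role-id>       # placeholder
  AVP_SECRET_ID: <team-a-secret-id>   # placeholder
```

With this layout, each team only ever sees the AppRole for its own namespace, and the ClusterRole above restricts the repo server to reading Secrets with exactly this name.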

What do you think?

werne2j commented 1 year ago

@Orabig I think this is a valid way to do multi-tenancy with AVP. As long as that ClusterRole is locked down, and there is no way for Team A to put Team B's AppRole credentials in their Secret, it seems fine.

I think every team has different use cases and solutions that work for them. For example:

Thus, in a multi-tenant environment, there is no way to prevent Team A from using the credentials of Team B.

This is not a concern when a central team controls the Applications and Team A and Team B don't touch the Argo CD configuration at all. But when each team manages its own Applications, it may or may not make sense. So it really depends on the team structure and permissions how best to handle multi-tenancy.

We would be glad to put this example into the multi-tenancy documentation if you want to create a PR.

Thanks for sharing!

Orabig commented 1 year ago

During my tests, I've found a major caveat in my method :(

The environment variable $ARGOCD_APP_NAMESPACE does not contain the namespace of the Application object, but the namespace where the Application deploys its Kubernetes objects.

More precisely, if a team has an Application defined like this:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  namespace: definition-namespace   # <- namespace where the Application object is deployed by the user
  name: my-app
spec:
  project: xxx
  source:
    repoURL: xxxx
    plugin:
      name: argocd-vault-plugin-debug
  destination:
    name: my-cluster
    namespace: target-namespace     # <- namespace where the manifests are deployed by Argo CD

Then $ARGOCD_APP_NAMESPACE is "target-namespace" and not "definition-namespace", unfortunately.

Thus, this is an issue in a multi-cluster context, because "target-namespace" may not even exist in the Kubernetes cluster where Argo CD is deployed.
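
As an aside, the Application above references a plugin named argocd-vault-plugin-debug. A sketch of such a debug entry (hypothetical, in the same configManagementPlugins style as the configuration earlier in this thread) is an easy way to check what the repo server actually exports to plugins:

```yaml
- name: argocd-vault-plugin-debug
  generate:
    command: ["sh", "-c"]
    # Dump the ARGOCD_* variables to stderr (stderr ends up in the
    # repo-server logs, not in the rendered manifests), then generate
    # without the per-namespace Secret so nothing else interferes.
    args: ['env | grep "^ARGOCD_" >&2; argocd-vault-plugin generate ./']
```

Running a sync with this plugin and reading the repo-server logs is how the behavior above can be observed.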

I've looked for another variable that I could use, but it seems that the real "Application namespace" is not exported at all.

For reference, here are the only variables available (source: https://github.com/argoproj/argo-cd/blob/master/reposerver/repository/repository.go#L1414):

func newEnv(q *apiclient.ManifestRequest, revision string) *v1alpha1.Env {
    return &v1alpha1.Env{
        &v1alpha1.EnvEntry{Name: "ARGOCD_APP_NAME", Value: q.AppName},
        &v1alpha1.EnvEntry{Name: "ARGOCD_APP_NAMESPACE", Value: q.Namespace},
        &v1alpha1.EnvEntry{Name: "ARGOCD_APP_REVISION", Value: revision},
        &v1alpha1.EnvEntry{Name: "ARGOCD_APP_SOURCE_REPO_URL", Value: q.Repo.Repo},
        &v1alpha1.EnvEntry{Name: "ARGOCD_APP_SOURCE_PATH", Value: q.ApplicationSource.Path},
        &v1alpha1.EnvEntry{Name: "ARGOCD_APP_SOURCE_TARGET_REVISION", Value: q.ApplicationSource.TargetRevision},
    }
}

werne2j commented 1 year ago

@Orabig based on your last comment, are we OK to close this issue and the corresponding PR?

Orabig commented 1 year ago

Well, I don't think so: even if it's not perfect, the solution works and is a good answer to the initial problem. Most of the time (or with a simple naming strategy), the namespace will be the same in both clusters, so it will be useful anyway.

For now, my team and I are looking for a way to improve Argo CD itself to add the missing environment variable, which would tackle the issue.