argoproj / argo-cd

Declarative Continuous Deployment for Kubernetes
https://argo-cd.readthedocs.io
Apache License 2.0
17.63k stars 5.37k forks

Kubelogin support in ArgoCD #9460

Open rphua opened 2 years ago

rphua commented 2 years ago

Summary

It would be nice if ArgoCD could add support for the Kubelogin plugin for AKS clusters.

Motivation

For security reasons we have enabled Azure AD RBAC for all of our clusters. However, right now it is not possible to use Kubelogin for ArgoCD. Instead it uses the cluster admin credentials which bypasses the Azure AD RBAC check.

Proposal

Add an option to enable the Kubelogin plugin. Kubelogin is based on the client-go credential plugin: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
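For reference, a client-go credential plugin is normally wired up via an `exec` block in a kubeconfig; a minimal sketch for kubelogin (all IDs are placeholders) looks roughly like:

```yaml
users:
- name: azure-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubelogin
      args:
      - get-token
      - --server-id
      - "<AAD server app ID>"   # placeholder
      installHint: "Install kubelogin: https://github.com/Azure/kubelogin"
```

The proposal here is essentially to let Argo CD's cluster configuration express the same `exec` mechanism.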

mmerrill3 commented 2 years ago

This could be implemented in the argocd-k8s-auth command that is already set up for AWS and GCP. See https://github.com/argoproj/argo-cd/pull/9190.
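For context, the AWS/GCP support referenced above works by pointing the cluster secret's `execProviderConfig` at the bundled argocd-k8s-auth binary; an Azure variant would plausibly mirror it (a sketch only, the `azure` subcommand is the assumption here):

```json
{
  "execProviderConfig": {
    "apiVersion": "client.authentication.k8s.io/v1beta1",
    "command": "argocd-k8s-auth",
    "args": ["azure"]
  },
  "tlsClientConfig": {
    "insecure": false,
    "caData": "<base64-encoded CA data>"
  }
}
```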

mmerrill3 commented 2 years ago

if this is not being worked on, I'll work on a PR for it.

kgopi1 commented 1 year ago

waiting for this feature to be added to support AKS Clusters.

nicholass-alcidion commented 1 year ago

I have been able to get kubelogin working for our clusters by installing kubelogin via an init container, and then configuring the cluster with external login.

I am using kustomize for my argo-cd deployments; here are the overlays I am using to install kubelogin. I am fairly sure I only need to add kubelogin to the argocd-application-controller StatefulSet, but I have not tested this.

Once kubelogin is installed, I was able to configure a cluster with `execProviderConfig`:

overlays/argocd-kubelogin.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
spec:
  template:
    spec:
      volumes:
      - name: custom-tools
        emptyDir: {}
      initContainers:
      - name: download-tools
        image: alpine:3.8
        command: [sh, -c]
        args:
        - >-
            wget -qO- https://github.com/Azure/kubelogin/releases/latest/download/kubelogin-linux-amd64.zip |
            unzip -x bin/linux_amd64/kubelogin -p - > /custom-tools/kubelogin &&
            chmod a+x /custom-tools/kubelogin
        volumeMounts:
        - mountPath: /custom-tools
          name: custom-tools
      containers:
      - name: argocd-server
        volumeMounts:
        - mountPath: /usr/local/bin/kubelogin
          name: custom-tools
          subPath: kubelogin
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: argocd-application-controller
spec:
  template:
    spec:
      volumes:
      - name: custom-tools
        emptyDir: {}
      initContainers:
      - name: download-tools
        image: alpine:3.8
        command: [sh, -c]
        args:
        - >-
            wget -qO- https://github.com/Azure/kubelogin/releases/latest/download/kubelogin-linux-amd64.zip |
            unzip -x bin/linux_amd64/kubelogin -p - > /custom-tools/kubelogin &&
            chmod a+x /custom-tools/kubelogin
        volumeMounts:
        - mountPath: /custom-tools
          name: custom-tools
      containers:
      - name: argocd-application-controller
        volumeMounts:
        - mountPath: /usr/local/bin/kubelogin
          name: custom-tools
          subPath: kubelogin
Cluster secret `config`:

{
  "execProviderConfig": {
    "apiVersion": "client.authentication.k8s.io/v1beta1",
    "command": "/usr/local/bin/kubelogin",
    "args": [
      "get-token",
      "--login",
      "spn",
      "--server-id",
      "00000000-0000-0000-0000-000000000000",
      "--client-id",
      "00000000-0000-0000-0000-000000000000",
      "--tenant-id",
      "00000000-0000-0000-0000-000000000000",
      "--environment",
      "AzurePublicCloud"
    ],
    "env": {
      "AAD_SERVICE_PRINCIPAL_CLIENT_ID": "00000000-0000-0000-0000-000000000000",
      "AAD_SERVICE_PRINCIPAL_CLIENT_SECRET": "secret"
    },
    "installHint": "kubelogin missing"
  },
  "tlsClientConfig": {
    "insecure": false,
    "caData": "base64encodedCaDataAsNormal"
  }
}

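For anyone wiring this up declaratively: that JSON goes into the `config` field of an Argo CD cluster secret. A sketch with placeholder names, server URL, and IDs:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aks-cluster           # placeholder
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: my-aks-cluster        # placeholder
  server: https://my-aks-cluster.example.azmk8s.io:443   # placeholder
  config: |
    {
      "execProviderConfig": {
        "apiVersion": "client.authentication.k8s.io/v1beta1",
        "command": "/usr/local/bin/kubelogin",
        "args": ["get-token", "--login", "spn",
                 "--server-id", "<server id>",
                 "--client-id", "<client id>",
                 "--tenant-id", "<tenant id>"]
      },
      "tlsClientConfig": {"insecure": false, "caData": "<base64 CA>"}
    }
```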
james-callahan commented 1 year ago

I got excited seeing this in the changelog for 2.8.0, but was sad to find out it's for Azure's "kubelogin" and not the more generic kubelogin project that implements OIDC.

crenshaw-dev commented 1 year ago

@james-callahan if there's not already an enhancement request, would you mind creating one?

crenshaw-dev commented 1 year ago

Actually, let's just track it here. This is a reasonably generic issue.

rouke-broersma commented 1 year ago

@crenshaw-dev we created this feature request specifically for Azure AKS Kubelogin and the description currently links to that project. I think it's potentially (more) confusing to keep this issue open. At the very least the description should be modified to mention the new intentions of the issue.

There also appears to be yet another client-go credential plugin called Kubelogin: https://github.com/Nordstrom/kubelogin, so there's even more potential for conflicts.

crenshaw-dev commented 1 year ago

Ah that's fair. If someone opens a new issue, I'll happily close this one in favor of the new one!

rumstead commented 1 year ago

Maybe this isn't the correct place (tell me and I will move to discussions or something) but how are folks looking to leverage kubelogin across 100s of clusters? We are using an SPN because our Argo CD cluster isn't in AKS and putting the SPN secret in each cluster secret would be a nightmare to roll every X days.

Currently, we have a secret that contains our SPN details and we run a cronjob/job when a new cluster is created that does an az login + az get-creds + argocd cluster add. It's crude but working well for us. I would love to use a more Azure-native way to log in but the "centrally" managed secret is way easier to roll.

rouke-broersma commented 1 year ago

> Maybe this isn't the correct place (tell me and I will move to discussions or something) but how are folks looking to leverage kubelogin across 100s of clusters? We are using an SPN because our Argo CD cluster isn't in AKS and putting the SPN secret in each cluster secret would be a nightmare to roll every X days.
>
> Currently, we have a secret that contains our SPN details and we run a cronjob/job when a new cluster is created that does an az login + az get-creds + argocd cluster add. It's crude but working well for us. I would love to use a more Azure-native way to log in but the "centrally" managed secret is way easier to roll.

We will use workload identity.

FernandoMiguel commented 1 year ago

@rumstead when a new cluster is created, Terraform (our IaC) creates a secret manifest file in git. The Argo instance consumes that secret and is able to connect to the target cluster. It became super simple when we got there; we had several other approaches over time, but this one is the simplest of them all.

mmerrill3 commented 1 year ago

@rumstead the reason why I made this PR was to really leverage workload identities... no secrets needed, just set up the federated credentials in Active Directory between the Kubernetes service account for Argo and your SPN (registered app or user-assigned MSI). I too have nearly a thousand clusters that now need to adopt this upstream change. You could use the client id and client secret approach, but I am just going to use workload identities.
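To illustrate the no-secret path: kubelogin has a workload identity login mode that relies on the projected service account token instead of an SPN secret. A hedged sketch of the cluster `config` for that mode (placeholder IDs; the token file path assumes the standard Azure Workload Identity webhook projection):

```json
{
  "execProviderConfig": {
    "apiVersion": "client.authentication.k8s.io/v1beta1",
    "command": "/usr/local/bin/kubelogin",
    "args": [
      "get-token",
      "--login", "workloadidentity",
      "--server-id", "<AAD server app ID>"
    ],
    "env": {
      "AZURE_CLIENT_ID": "<client id of the federated identity>",
      "AZURE_TENANT_ID": "<tenant id>",
      "AZURE_FEDERATED_TOKEN_FILE": "/var/run/secrets/azure/tokens/azure-identity-token"
    }
  },
  "tlsClientConfig": {
    "insecure": false,
    "caData": "<base64 CA>"
  }
}
```

Note the empty `env` secret: nothing here needs rotating, since trust comes from the federated credential between the Argo CD service account and the Azure identity.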

rumstead commented 1 year ago

Thanks all for the feedback. We are looking to leverage Workload Identities when we move our Argo CD cluster to AKS.

kgopi1 commented 1 year ago

> @rumstead when a new cluster is created, Terraform (our IaC) creates a secret manifest file in git. The Argo instance consumes that secret and is able to connect to the target cluster. It became super simple when we got there; we had several other approaches over time, but this one is the simplest of them all.

Are you storing a Kubernetes secret in your Git? I hope encryption is enabled.

FernandoMiguel commented 1 year ago

@kgopi1 no secrets at all, just the awsauth config block, which contains only non-sensitive details.

kzzalews commented 11 months ago

Taking into account https://github.com/argoproj/argo-cd/pull/14866 (feature was delivered with Argo CD 2.8), I believe that this ticket could be closed, am I right?

james-callahan commented 11 months ago

> Taking into account #14866 (feature was delivered with Argo CD 2.8), I believe that this ticket could be closed, am I right?

The linked PR is about the azure support. This issue is currently being reused for kubelogin (the OIDC thing) support. See https://github.com/argoproj/argo-cd/issues/9460#issuecomment-1669811393

bcho commented 10 months ago

Hey all, Azure/kubelogin is considering exposing library usage, which should help with the integration here. This is our proposal: https://github.com/Azure/kubelogin/issues/373. Please let us know your feedback / blockers, thanks in advance!

cc @mmerrill3 / @crenshaw-dev since I saw your names in the current implementation.

rouke-broersma commented 10 months ago

> Hey all, Azure/kubelogin is considering exposing library usage, which should help with the integration here. This is our proposal: https://github.com/Azure/kubelogin/issues/373. Please let us know your feedback / blockers, thanks in advance!
>
> cc @mmerrill3 / @crenshaw-dev since I saw your names in the current implementation.

Azure kubelogin has already been implemented, so I don't think there are any current blockers. This issue has been repurposed to also cover the generic OIDC auth provider, confusingly also called kubelogin. For your questions it might be better to open a new issue or discussion, to avoid adding further confusion here.

bcho commented 10 months ago

> > Hey all, Azure/kubelogin is considering exposing library usage, which should help with the integration here. This is our proposal: https://github.com/Azure/kubelogin/issues/373. Please let us know your feedback / blockers, thanks in advance! cc @mmerrill3 / @crenshaw-dev since I saw your names in the current implementation.
>
> Azure kubelogin has already been implemented, so I don't think there are any current blockers. This issue has been repurposed to also cover the generic OIDC auth provider, confusingly also called kubelogin. For your questions it might be better to open a new issue or discussion, to avoid adding further confusion here.

Ah, I see, indeed it’s very confusing 😁, please disregard my previous comment. We will go with the new design instead, thanks!