aws / secrets-store-csi-driver-provider-aws

The AWS provider for the Secrets Store CSI Driver allows you to fetch secrets from AWS Secrets Manager and AWS Systems Manager Parameter Store, and mount them into Kubernetes pods.

Pod Identity Association not recognised by secrets store CSI driver #300

Open ecole-startupcraft opened 8 months ago

ecole-startupcraft commented 8 months ago

Describe the bug: Attempting to mount secrets using the Pod Identity EKS add-on results in the following error:

Warning FailedMount 103s (x29 over 44m) kubelet MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod default/secrets-store-inline, err: rpc error: code = Unknown desc = us-east-1: An IAM role must be associated with service account demo-app-pod-sa (namespace: default)

To Reproduce

Steps to reproduce the behavior:

apiVersion: v1
kind: Pod
metadata:
  name: secrets-store-inline
spec:
  serviceAccountName: demo-pod-sa
  containers:
  - name: busybox
    image: registry.k8s.io/e2e-test-images/busybox:1.29-4
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sleep"
    - "10000"
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "aws-secrets"
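
For context, the Pod Identity association itself is created separately from the pod, for example via eksctl. A minimal sketch (cluster name, region, and role ARN are placeholders, and the exact ClusterConfig schema should be checked against the eksctl docs):

# Sketch of an EKS Pod Identity association via eksctl; all names/ARNs are placeholders.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster            # placeholder cluster name
  region: us-east-1
iam:
  podIdentityAssociations:
  - namespace: default
    serviceAccountName: demo-pod-sa
    roleARN: arn:aws:iam::111122223333:role/secrets-csi-reader   # placeholder role with Secrets Manager read access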

Do you also notice this bug when using a different secrets store provider (Vault/Azure/GCP...)? No, this is AWS EKS specific.

Expected behavior: Secrets volume is mounted properly.

Environment: Helm chart secrets-store-csi-driver-provider-aws v0.3.5

gustavclausen commented 8 months ago

This is most likely due to the provider using an older version of the AWS SDK (1.47.10), which doesn't support the container credential provider that backs the EKS Pod Identity functionality. The minimum required SDK version is v1.47.11 (see https://docs.aws.amazon.com/eks/latest/userguide/pod-id-minimum-sdk.html), so this is also something to consider.
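
For context, EKS Pod Identity hands credentials to the pod through the container credential provider: the Pod Identity agent injects environment variables that only sufficiently new SDKs know how to consume, so an older SDK silently falls back to other credential sources. A sketch of what gets injected (values are illustrative of the documented mechanism, not something you set yourself):

# Injected automatically into containers when a Pod Identity association exists;
# aws-sdk-go versions below v1.47.11 do not understand these variables.
env:
  - name: AWS_CONTAINER_CREDENTIALS_FULL_URI
    value: "http://169.254.170.23/v1/credentials"   # Pod Identity agent endpoint on the node
  - name: AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE
    value: "/var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token"   # projected token path (illustrative)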

ran2806 commented 8 months ago

I am also facing the same issue. Is there any solution for it?

egorksv commented 8 months ago

> I am also facing the same issue. Is there any solution for it?

As a workaround I'm using the good old OIDC-mapped IAM role (IRSA), but would definitely want to move on to the new way.
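
For anyone else on the same workaround, a minimal sketch of the OIDC/IRSA-style service account (account ID and role name are placeholders):

# IRSA-style ServiceAccount sketch; the provider's auth check looks for this annotation.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-pod-sa
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/secrets-csi-reader   # placeholder role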

steven-so commented 8 months ago

> I am also facing the same issue. Is there any solution for it?

> As a workaround I'm using the good old OIDC-mapped IAM role (IRSA), but would definitely want to move on to the new way.

What role does the service account need in order to fix "MountVolume.SetUp failed for volume 'secrets-store-inline'"?

steven-so commented 8 months ago

I'm also getting: Warning FailedMount 6s (x6 over 22s) kubelet MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod default/user-service-depl-858474494c-5nr6w, err: rpc error: code = Unknown desc = us-west-2: An IAM role must be associated with service account default (namespace: default)

However, I'm not using the Pod Identity add-on.

I'm trying to spin up a private Docker Hub image.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      volumes:
        - name: secrets-store-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "aws-secrets"
      containers:
        - name: my-service
          image: myprivate/dockerimage
          env:
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
          volumeMounts:
            - name: secrets-store-inline
              mountPath: "/mnt/secrets-store"
              readOnly: true
      imagePullSecrets:
        - name: "docker-hub"
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secrets
spec:
  provider: aws
  secretObjects:
    - secretName: jwt-secret
      type: Opaque
      data:
        - objectName: jwt-secret
          key: JWT_KEY
    - secretName: docker-hub
      type: kubernetes.io/dockerconfigjson
      data:
        - objectName: "docker-configjson"
          key: ".dockerconfigjson"
  parameters:
    region: us-west-2
    objects: |
      - objectName: "jwt-secret"
        objectType: "secretsmanager"
      - objectName: "docker-configjson"
        objectType: "secretsmanager"
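
Judging by the error ("An IAM role must be associated with service account default"), the Deployment above runs under the default service account, which has no IAM role associated with it. A minimal sketch of what the provider expects when using IRSA (names and role ARN are placeholders):

# Placeholder names/ARN; the provider resolves the IAM role from the pod's service account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-sa
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/secrets-csi-reader
---
# The Deployment's pod template then needs to reference it:
# spec:
#   template:
#     spec:
#       serviceAccountName: my-service-sa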
jbct commented 7 months ago

Thank you for the report, we will look into this issue.

kevarr commented 7 months ago

To offer a potential use-case: I'd like to use a cross-account IAM role to centralize permission management to a SecretsManager secret, use EKS Pod Identities to allow a Pod running in one account (Account A) to assume the centralized role in another account (Account B), and mount the secret into the Pod using secrets-store-csi-driver-provider-aws.

With EKS Pod Identities you can only associate roles that are in the same AWS account as the cluster. The documentation states that to achieve cross-account access you should use role chaining [1]. This is in contrast to IRSA, which allows you to directly assume a role in another account [2].

If I were to use the provider to mount the secret into my Pod it would need to assume the role in Account B using the credentials of Account A before making the GetSecretValue API request.

[1] https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html
[2] https://aws.amazon.com/blogs/containers/cross-account-iam-roles-for-kubernetes-service-accounts/

dprangnell commented 6 months ago

I haven't tested this, but it's almost certainly resolved in the latest release, 0.3.6, which ships v1.49.19 of the SDK.

dprangnell commented 6 months ago

Well, I spoke too soon; I tested it and it doesn't work with release 0.3.6:

  Warning  FailedMount  40s (x11 over 6m52s)  kubelet MountVolume.SetUp failed for volume "secret-from-secret-manager" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod default/demo-app, err: rpc error: code = Unknown desc = us-east-1: An IAM role must be associated with service account pod-identity (namespace: default)
helm list -n kube-system      
NAME                            NAMESPACE   REVISION    UPDATED                                 STATUS      CHART                                       APP VERSION
csi-secrets-store               kube-system 1           2024-02-17 10:52:16.835931 -0800 PST    deployed    secrets-store-csi-driver-1.4.1              1.4.1      
secrets-provider-aws            kube-system 1           2024-02-17 10:52:34.524537 -0800 PST    deployed    secrets-store-csi-driver-provider-aws-0.3.6 
    spec:
      serviceAccountName: pod-identity
      volumes:
        - name: secret-from-secret-manager
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "aws-secrets"
      containers:
      - name: demo-app
        image: "example/demo-app:latest"
        volumeMounts:
        - name: secret-from-secret-manager
          mountPath: "/mnt/secrets-store"
          readOnly: true
        imagePullPolicy: Always
---
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secrets
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "arn:aws:secretsmanager:{{.Values.region}}:{{.Values.accountid}}:secret:{{.Values.secretname}}"
Fatma-J commented 6 months ago

@dprangnell Same for me. Have you succeeded in fixing the issue?

giedriuskilcauskas commented 6 months ago

That specific check for the annotation is here: https://github.com/aws/secrets-store-csi-driver-provider-aws/blob/main/auth/auth.go#L110-L126. I don't think there is any way to work around it with parameters without a code change.

DanielMcAssey commented 5 months ago

Hitting this issue too; unfortunately, OIDC is a pain with its limitations on roles.

jbct commented 5 months ago

Thank you for the request. The EKS Pod Identities page calls out incompatibility with other CSI storage drivers, and we are working to get the documentation updated to cover the Secrets Manager and Config Provider for the Secrets Store CSI Driver. We have this in our backlog and have marked it as a future enhancement.

DanielMcAssey commented 5 months ago

I have the Secrets Manager provider manually installed; that page indicates it should work, but I am hitting the same issue as above.

yambottle commented 5 months ago

System Info

Setup Steps

giedriuskilcauskas commented 5 months ago
> Create SA default-secret-manager

@yambottle you need to put an annotation on the SA with the role created in the previous step.

AnhQKatalon commented 4 months ago

I am currently having the exact same issue. This prevents us from migrating to Pod Identity, because all of our services use the Secrets Store CSI driver to pull secrets from Secrets Manager.

swt-yoromero commented 2 months ago

Hello everyone! The same thing happened to me. I have EKS 1.30, and I fixed it by adding permissions to the role, since it was missing permissions for the secrets resources. This was the error I found in the pod logs (kubectl logs -n kube-system secrets-store-csi-driver-45w8a):

I0613 20:17:17.679272       1 reflector.go:424] "pkg/mod/k8s.io/client-go@v0.26.4/tools/cache/reflector.go:169: failed to list *v1.Secret: secrets is forbidden: User \"system:serviceaccount:kube-system:secrets-store-csi-driver\" cannot list resource \"secrets\" in API group \"\" at the cluster scope\n"
E0613 20:17:17.679334       1 reflector.go:140] "pkg/mod/k8s.io/client-go@v0.26.4/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: secrets is forbidden: User \"system:serviceaccount:kube-system:secrets-store-csi-driver\" cannot list resource \"secrets\" in API group \"\" at the cluster scope\n"
I0613 20:18:00.577364       1 reflector.go:424] "pkg/mod/k8s.io/client-go@v0.26.4/tools/cache/reflector.go:169: failed to list *v1.Secret: secrets is forbidden: User \"system:serviceaccount:kube-system:secrets-store-csi-driver\" cannot list resource \"secrets\" in API group \"\" at the cluster scope\n"

After adding these permissions [list, watch, get], I got this error:

I0613 20:46:34.691739       1 secretproviderclasspodstatus_controller.go:342] "The secret operation failed with forbidden error. If you installed the CSI driver using helm, ensure syncSecret.enabled=true is set.\n"
E0613 20:46:34.694549       1 secretproviderclasspodstatus_controller.go:338] "failed to create Kubernetes secret" err="secrets is forbidden: User \"system:serviceaccount:kube-system:secrets-store-csi-driver\" cannot create resource \"secrets\" in API group \"\" in the namespace \"fincuo\"" spc="fincuo/api-funded-natives-secrets" pod="fincuo/api-funded-natives-5df4c46dc7-zgm74" secret="fincuo/api-funded-natives-secrets" spcps="fincuo/api-funded-natives-5df4c46dc7-zgm74-fincuo-api-funded-natives-secrets"

What I added to the ClusterRole was:

- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch

The resource ended up like this:

 # Source: secrets-store-csi-driver/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secretproviderclasses-role
  labels:
    app.kubernetes.io/instance: "secrets-store-csi-driver"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/name: "secrets-store-csi-driver"
    app.kubernetes.io/version: "1.4.3"
    app: secrets-store-csi-driver
    helm.sh/chart: "secrets-store-csi-driver-1.4.3"
rules:
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - secrets-store.csi.x-k8s.io
  resources:
  - secretproviderclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - secrets-store.csi.x-k8s.io
  resources:
  - secretproviderclasspodstatuses
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - secrets-store.csi.x-k8s.io
  resources:
  - secretproviderclasspodstatuses/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - storage.k8s.io
  resourceNames:
  - secrets-store.csi.k8s.io
  resources:
  - csidrivers
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
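
For reference, the "ensure syncSecret.enabled=true" hint in the log above points at the chart-native way to get the same RBAC: instead of patching the ClusterRole by hand, the secrets-store-csi-driver Helm chart renders those secret permissions when the value is enabled. A minimal values sketch:

# secrets-store-csi-driver Helm values sketch; enables syncing mounted content to
# Kubernetes Secrets and renders the corresponding RBAC rules on secrets.
syncSecret:
  enabled: true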
mohamed-jawad-etg commented 2 months ago

Facing the same issue; I can't find a way around it, and I don't want to use OIDC.

jgrigg commented 2 months ago

I am also very interested in this. We have adopted Pod Identities for general cross-account AWS access but are having to resort to IRSA for mounted secrets (via ESO) which is yuk.

Worth noting that we are also using role chaining (via profile/ini config). Ideally the CSI driver would allow configuration of a role to assume/chain in such an 'external' account.

psivananda commented 1 month ago

I modified the code and it's working for me for both IRSA and PIA (Pod Identity Association).

To make it work for PIA:

  1. Update the Helm chart to add the cluster name as an environment variable (see the sketch after the commit link below).
  2. Build the Helm chart and Docker image, and push them to your registries.
  3. Remove the eks.amazonaws.com/role-arn annotation from the service account.

https://github.com/psivananda/secrets-store-csi-driver-provider-aws/commit/8eb8b553cbb7a9712c51b0ac27bddb546f7fbf8a
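
Purely as a rough sketch of what step 1 might look like (the variable name and value are hypothetical, not upstream chart options; see the commit above for the actual change):

# Hypothetical sketch: expose the EKS cluster name to the provider DaemonSet as an env var
# so the modified provider can resolve Pod Identity credentials for the cluster.
env:
  - name: EKS_CLUSTER_NAME      # hypothetical variable name
    value: "my-eks-cluster"     # placeholder cluster name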
