aws / secrets-store-csi-driver-provider-aws

The AWS provider for the Secrets Store CSI Driver allows you to fetch secrets from AWS Secrets Manager and AWS Systems Manager Parameter Store, and mount them into Kubernetes pods.
Apache License 2.0

EKS K8s secret cannot be created from volume mount with secret store csi driver #92

Open arimaverick opened 2 years ago

arimaverick commented 2 years ago

I want to pass an AWS Secrets Manager secret as an environment variable to the EKS container. However, even though the secret is correctly volume-mounted, the Kubernetes Secret cannot be created from the volume mount.
I am using the roles and service account mentioned in the documentation.

To Reproduce Here is my SecretProviderClass:

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secrets
spec:
  provider: aws
  secretObjects:
  - secretName: newsecret     # name of the Kubernetes Secret object
    type: Opaque         # type of the Kubernetes Secret object e.g. Opaque, kubernetes.io/tls
    data:
    - objectName: dbpass   # name of the mounted content to sync. this could be the object name or the object alias 
      key: password      # data field to populate
  parameters:                    # provider-specific parameters
    objects:  |
      - objectName: dummysecret
        objectType: secretsmanager
        objectAlias: dbpass

Here is the section of my deployment manifest where I pass the secret as an environment variable:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost
  labels:
    app: ghost
spec:
  selector:
    matchLabels:
      app: ghost
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ghost
        tier: frontend
    spec:
      serviceAccountName: ghost-sa
      containers:
      - image: ghost:1-alpine
        name: ghost
        env:
        - name: database_client
          value: mysql
        - name: database_connection_host
          value: XXXXXX
        - name: database_connection_user
          value: ghostadmin
        - name: database_connection_password
          valueFrom:
            secretKeyRef:
              name: newsecret
              key: password
        - name: database_connection_database
          value: ghostdb1
        ports:
        - containerPort: 2368
          name: ghost
        volumeMounts:
        - name: ghost-persistent-storage
          mountPath: /var/lib/ghost/content
        - name: ghostdb-pass
          mountPath: /mnt/ghostdb-pass
          readOnly: true
      volumes:
      - name: ghost-persistent-storage
        persistentVolumeClaim:
          claimName: efs-storage-claim
      - name: ghostdb-pass
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes: 
            secretProviderClass: aws-secrets

However, the Pod goes into the CreateContainerConfigError state with the following error:

Error: secret "newsecret" not found
timed out waiting for the condition

Expected behavior The secret should be created and passed as an environment variable to the Kubernetes container.

Additional context As mentioned in the description above, I can nevertheless retrieve the secret from the volume mount:

/var/lib/ghost # ls /mnt/ghostdb-pass
dbpass
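
A quick way to confirm that the Kubernetes Secret object itself was never created (run in the Deployment's namespace; the label selector comes from the manifest above):

kubectl get secret newsecret            # returns NotFound while the sync is failing
kubectl describe pod -l app=ghost       # shows the CreateContainerConfigError detail under Events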

Thanks.

dmirallestl commented 2 years ago

Exactly the same issue here. It seems the SecretProviderClass is not creating the secret when the volume is mounted.

leo-technologies-devops-service commented 2 years ago

Experiencing the exact same issue: I can successfully see the mounted secrets within the pod, but the Kubernetes Secret object is not being created when the volume is mounted for an env var. The pod is stuck in creation.

DuyTungHa commented 2 years ago

Experiencing the same exact issue

dmirallestl commented 2 years ago

Just in case it helps someone: I fixed it by enabling syncSecret on the Helm installation.

resource "helm_release" "secret_csi_driver" { name = "secret-csi-driver" repository = "https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts" chart = "secrets-store-csi-driver" set { name = "syncSecret.enabled" value = "true" } depends_on = [ module.cluster ] }

flaviops commented 2 years ago

I just upgraded my helm install with the following:

helm upgrade --install -n kube-system --set syncSecret.enabled=true csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver

Even with this change the secret is still not being created from the volume mount, maybe I'm missing something.

ericluria commented 2 years ago

@flaviops I had the same issue and setting syncSecret.enabled=true got me one step closer to the solution (thanks, @dmirallestl!). In particular, I ran the following to look at the cluster events:

kubectl get events --sort-by='.lastTimestamp' -A
...
FailedToCreateSecret   pod/<redacted>                       failed to get data in spc <redacted>/eric-test-secret for secret <redacted>, err: file matching objectName SOME_ENV_VAR not found in the pod

Have you looked at the events to see if maybe you're running into the same issue?
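
For reference, that FailedToCreateSecret error usually means the objectName under secretObjects does not match the name of the file the provider actually mounted (the objectAlias, or the objectName if no alias is set). Listing the mounted files makes the comparison easy; the pod and mount path below are just the ones from this issue:

# File names listed here must match secretObjects -> data -> objectName
kubectl exec deploy/ghost -- ls /mnt/ghostdb-pass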

a7i commented 2 years ago

I just upgraded my helm install with the following:

helm upgrade --install -n kube-system --set syncSecret.enabled=true csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver

Even with this change the secret is still not being created from the volume mount, maybe I'm missing something.

I noticed that the ClusterRole is missing permissions to get/create/patch Secrets.

Add the following permissions to your ClusterRole secretproviderclasses-role:

- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
  - patch
  - create

Also, the secret name has to be referenced somewhere in your Pod spec (in env for example).
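
To double-check that the role change took effect, kubectl auth can-i can impersonate the driver's service account (the service account name and namespace below assume the default Helm install from earlier in this thread):

# Both should print "yes" once the ClusterRole carries the secrets permissions
kubectl auth can-i create secrets --as=system:serviceaccount:kube-system:secrets-store-csi-driver
kubectl auth can-i patch secrets --as=system:serviceaccount:kube-system:secrets-store-csi-driver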

flaviops commented 2 years ago

Have you looked at the events to see if maybe you're running into the same issue?

It was a similar problem. Thanks for the help!