kubernetes-sigs / secrets-store-csi-driver

Secrets Store CSI driver for Kubernetes secrets - Integrates secrets stores with Kubernetes via a CSI volume.
https://secrets-store-csi-driver.sigs.k8s.io/
Apache License 2.0
1.27k stars 292 forks

`SecretProviderClass` correctly creates the `Secret`, but the mounted volume is empty #1103

Closed FirelightFlagboy closed 1 week ago

FirelightFlagboy commented 1 year ago

What steps did you take and what happened:

  1. I've configured the Vault provider like so:

    helm install vault hashicorp/vault -f kube-cluster/helm/vault.yml

    kube-cluster/helm/vault.yml contains the following data:

    global:
      enabled: false
      externalVaultAddr: https://192.168.1.200:8200

    csi:
      enabled: true

      volumes:
        - name: tls
          secret:
            secretName: vault-ca-cert

      volumeMounts:
        - name: tls
          mountPath: /vault/tls/
          readOnly: true

      extraArgs:
        - --vault-tls-ca-cert=/vault/tls/vault-ca.pem

    Then I configured the auth/kubernetes endpoint on Vault:

    # Get the name of our vault token associated with the vault-csi service account.
    VAULT_HELM_SECRET_NAME=$(kubectl get secrets --output=json | jq -r '.items[].metadata.name | select(.|startswith("vault-token-"))')
    # Get the token stored in the secret.
    TOKEN_REVIEW_JWT=$(kubectl get secret $VAULT_HELM_SECRET_NAME --output='jsonpath={.data.token}' | base64 -d)
    # Get the host address & CA certificate of the Kubernetes API server.
    KUBE_HOST=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.server}')
    KUBE_CA_CERT=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.certificate-authority-data}' | base64 --decode)
    # Issuer
    ISSUER=$(kubectl get --raw /.well-known/openid-configuration | jq -r .issuer)
    
    vault write auth/kubernetes/config \
        token_reviewer_jwt="$TOKEN_REVIEW_JWT" \
        kubernetes_host="$KUBE_HOST" \
        kubernetes_ca_cert="$KUBE_CA_CERT" \
        issuer="$ISSUER"
  2. I've configured secrets-store-csi-driver using the Helm chart:

    helm install secrets-csi secrets-store-csi-driver/secrets-store-csi-driver \
        --set syncSecret.enabled=true \
        --set enableSecretRotation=true
  3. I've created a SecretProviderClass like so

    apiVersion: secrets-store.csi.x-k8s.io/v1
    kind: SecretProviderClass
    metadata:
      name: app-secret-provider
    spec:
      provider: vault
      secretObjects:
        - data:
            - key: admin-token
              objectName: admin-token
          secretName: app-secret
          type: Opaque
      parameters:
        vaultAddress: https://192.168.1.200:8200
        roleName: app
        objects: |
          - objectName: admin-token
            secretPath: secret/data/app
            secretKey: admin-token
  4. I've configured a Pod like so

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-foo
      labels:
        name: test-foo
    spec:
      terminationGracePeriodSeconds: 5
      serviceAccountName: app-sa
      containers:
        - name: test-foo
          image: busybox:latest
          args:
            - sleep
            - infinity
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          volumeMounts:
            - name: secrets-store-inline
              mountPath: /secrets-store
              readOnly: true
          env:
            - name: ADMIN_TOKEN
              valueFrom:
                secretKeyRef:
                  name: app-secret
                  key: admin-token
      volumes:
        - name: secrets-store-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: app-secret-provider
  5. Inside the pod (`kubectl exec -it test-foo -- sh`), the mounted volume `/secrets-store` remains empty, but the env variable is set to the correct value:

    $ ls -l /secrets-store/
    total 0
    $ printenv ADMIN_TOKEN
    foobar
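For completeness: the steps above assume the Vault role `app` (referenced by `roleName` in the `SecretProviderClass`) already exists and is bound to the `app-sa` service account. A sketch of creating it (the policy name and TTL are assumptions, not taken from the issue):

```shell
# Hypothetical policy granting read access to the secret path used above.
vault policy write app-policy - <<EOF
path "secret/data/app" {
  capabilities = ["read"]
}
EOF

# Bind the Kubernetes auth role "app" to the pod's service account.
vault write auth/kubernetes/role/app \
    bound_service_account_names=app-sa \
    bound_service_account_namespaces=default \
    policies=app-policy \
    ttl=20m
```

Since the synced `app-secret` ends up with the correct value, this part of the setup is evidently working; it is included only so the repro is reproducible end to end.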

What did you expect to happen:

I expected the mount point to be populated with the secret's data.

Anything else you would like to add:

When inspecting the logs, I found a warning, but it seems to be raised only when I delete the pod.

$ kubectl logs secrets-csi-secrets-store-csi-driver-hsksq secrets-store -f --tail=0 
W1122 19:55:23.704817       1 mount_helper_common.go:133] Warning: "/var/snap/microk8s/common/var/lib/kubelet/pods/dc9cdf30-6519-4998-bb43-4310e8538662/volumes/kubernetes.io~csi/secrets-store-inline/mount" is not a mountpoint, deleting
I1122 19:55:23.704883       1 nodeserver.go:307] "node unpublish volume complete" targetPath="/var/snap/microk8s/common/var/lib/kubelet/pods/dc9cdf30-6519-4998-bb43-4310e8538662/volumes/kubernetes.io~csi/secrets-store-inline/mount" time="1.036837ms"
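Before deleting the pod, the driver's per-pod mount status can help narrow this down: the driver writes a `SecretProviderClassPodStatus` object for each mounted pod, and its status lists which objects (if any) were written to the volume. A debugging sketch (the node-side path is taken from the warning above; adjust it to your kubelet directory):

```shell
# Per-pod mount status recorded by the driver (namespaced resource).
kubectl get secretproviderclasspodstatuses -o yaml

# On the node, check whether any files were actually written under the
# kubelet directory (path prefix from the warning log above).
sudo ls -lR /var/snap/microk8s/common/var/lib/kubelet/pods/*/volumes/kubernetes.io~csi/secrets-store-inline/mount
```

These commands require access to the cluster and the node, so they are shown as a sketch rather than verified output.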

Which provider are you using: I'm using an external Vault server as a provider

I'm creating the issue here since I don't see any error logs from the vault-csi-provider, and the k8s secret is created with the correct value even though the mounted volume remains empty.
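One thing worth checking with this symptom: the warning log shows the kubelet root under `/var/snap/microk8s/common/var/lib/kubelet`, i.e. a microk8s install with a non-default kubelet directory. If the driver or the Vault provider daemonset is installed against the default `/var/lib/kubelet`, the provider can report a successful write (into its own filesystem namespace) while the volume the pod actually sees stays empty, even though the API-based `Secret` sync still works. Both charts expose the kubelet path as a value; a sketch, assuming current chart versions (`linux.kubeletRootDir` in the driver chart, `csi.daemonSet.kubeletRootDir` in the Vault chart):

```shell
# Secrets Store CSI driver: point the chart at microk8s's kubelet root.
helm install secrets-csi secrets-store-csi-driver/secrets-store-csi-driver \
    --set syncSecret.enabled=true \
    --set enableSecretRotation=true \
    --set linux.kubeletRootDir=/var/snap/microk8s/common/var/lib/kubelet

# Vault chart: the CSI provider daemonset has a matching value.
helm install vault hashicorp/vault -f kube-cluster/helm/vault.yml \
    --set csi.daemonSet.kubeletRootDir=/var/snap/microk8s/common/var/lib/kubelet
```

This is a hypothesis based on the paths in the logs, not a confirmed root cause for this issue.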

Environment:

car-da commented 1 year ago

I have the same issue. Storing secrets in env vars works. The mount succeeds but stays empty; no files are created for the keys.

The vault-csi-provider says the secret was added to the mount:

2023-02-07T13:49:43.639Z [INFO] server.provider: secret added to mount response: directory=/var/lib/docker/kubelet/pods/a3b851c3-0a10-4885-ac8a-a31e13620804/volumes/kubernetes.io~csi/vault-secrets-vol/mount file=app-hokus
2023-02-07T13:49:43.639Z [INFO] server: Finished unary gRPC call: grpc.method=/v1alpha1.CSIDriverProvider/Mount grpc.time=386.609929ms grpc.code=OK err=<nil>

csi-secrets-store also reports no error:

I0207 14:16:39.593066       1 nodeserver.go:254] "node publish volume complete" targetPath="/var/lib/docker/kubelet/pods/74c3c645-0482-4b3d-8240-f2163e6d38d3/volumes/kubernetes.io~csi/vault-secrets-vol/mount" pod="steinhaislj/grafana-jarda-5b7d9b99f7-6fdxd" time="421.122201ms"

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

FirelightFlagboy commented 1 year ago

/remove-lifecycle stale

k8s-triage-robot commented 9 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

FirelightFlagboy commented 9 months ago

/remove-lifecycle stale

k8s-triage-robot commented 6 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

FirelightFlagboy commented 6 months ago

/remove-lifecycle stale

wojtek-viirtue commented 5 months ago

Seeing the exact same behavior as described by @FirelightFlagboy.

Vault CSI provider:

server.vaultclient: Requesting secret: secretConfig="{db-password secret/data/db-pass password  map[] ---------- }" method=GET path=/v1/secret/data/db-pass params=map[]
2024-05-22T17:53:14.817Z [INFO]  server.provider: secret added to mount response: directory=/var/snap/microk8s/common/var/lib/kubelet/pods/5b44c867-3428-4642-8eac-a288fd65e78d/volumes/kubernetes.io~csi/secrets-store-inline/mount file=db-password
2024-05-22T17:53:14.817Z [INFO]  server: Finished unary gRPC call: grpc.method=/v1alpha1.CSIDriverProvider/Mount grpc.time=30.726788ms grpc.code=OK err=<nil>

Mount exists, but there are no files created.

Vault: v1.16.1, CSI Secrets Store Driver: v1.4.3, K8s version: v1.27.13

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 week ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 week ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/secrets-store-csi-driver/issues/1103#issuecomment-2425497772):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.