cert-manager / csi-driver

A Kubernetes CSI plugin to automatically mount signed certificates to Pods using ephemeral volumes
https://cert-manager.io/docs/usage/csi-driver/
Apache License 2.0

Volume empty #134

Open blaubaer opened 1 year ago

blaubaer commented 1 year ago

Environment

Software

  1. Kubernetes: v1.23
  2. cert-manager: v1.10.1 installed using helm chart jetstack/cert-manager from https://charts.jetstack.io with all default values.
  3. csi-driver: v0.5.0 installed using helm chart jetstack/cert-manager-csi-driver from https://charts.jetstack.io with all default values.

Resources

The following resources have been truncated, omitting some metadata and status information.

ClusterIssuer/self-signer

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: self-signer
status:
  conditions:
    - type: Ready
spec:
  selfSigned: {}

Certificate/cert-manager/cluster-ca

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cluster-ca
  namespace: cert-manager
status:
  conditions:
    - type: Ready
  notAfter: '2023-03-28T14:18:47Z'
  notBefore: '2022-12-28T14:18:47Z'
  renewalTime: '2023-02-26T14:18:47Z'
  revision: 1
spec:
  commonName: CA de1.engity.red
  isCA: true
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: self-signer
  privateKey:
    algorithm: ECDSA
    size: 256
  secretName: cluster-ca

ℹ️ Secret cert-manager/cluster-ca exists and has all required fields. ✅

ClusterIssuer/ca

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ca
status:
  conditions:
    - type: Ready
spec:
  ca:
    secretName: cluster-ca

Relevant Pods

  1. cert-manager/cert-manager-64d459d7f-5rtc6: Running ✅
  2. cert-manager/cert-manager-cainjector-7d9466748-m79r5: Running ✅
  3. cert-manager/cert-manager-csi-driver-5fgxg and cert-manager-csi-driver-w56v6: Running ✅
  4. cert-manager/cert-manager-webhook-d77bbf4cb-25j2h: Running ✅

Scenario

Steps to reproduce

  1. I've created a Pod with the following config:
    apiVersion: v1
    kind: Pod
    metadata:
      name: a-test-pod
      namespace: sandbox
    spec:
      containers:
        - name: app
          image: busybox
          volumeMounts:
          - mountPath: "/tls"
            name: tls
          command: [ "sleep", "1000000" ]
      volumes:
        - name: tls
          csi:
            driver: csi.cert-manager.io
            readOnly: true
            volumeAttributes:
              csi.cert-manager.io/issuer-name: ca
              csi.cert-manager.io/issuer-kind: ClusterIssuer
              csi.cert-manager.io/common-name: a-test
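
For reference, the csi-driver accepts further volumeAttributes beyond the three used above. This is a sketch based on the attribute names in the csi-driver docs; the dns-names and duration values are illustrative, not taken from my cluster:

```yaml
volumes:
  - name: tls
    csi:
      driver: csi.cert-manager.io
      readOnly: true
      volumeAttributes:
        csi.cert-manager.io/issuer-name: ca
        csi.cert-manager.io/issuer-kind: ClusterIssuer
        csi.cert-manager.io/common-name: a-test
        # Optional attributes (illustrative values):
        csi.cert-manager.io/dns-names: a-test.sandbox.svc.cluster.local
        csi.cert-manager.io/duration: 720h
```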

Observed behavior

  1. Pod sandbox/a-test-pod is created.
  2. Executing ls -la /tls in this Pod shows an empty directory.

Expected behavior

  1. Pod sandbox/a-test-pod is created.
  2. Executing ls -la /tls in this Pod shows:
    1. tls.key
    2. tls.crt
    3. ca.crt

Debug data

cert-manager/cert-manager-csi-driver-w56v6/cert-manager-csi-driver

I1228 15:04:14.354697       1 nodeserver.go:83] driver "msg"="Registered new volume with storage backend" "pod_name"="a-test-pod"
I1228 15:04:14.354780       1 manager.go:302] manager "msg"="Processing issuance" "volume_id"="csi-6404d0e86c3f01668d1899859188730a22e0f179db1f7022a36072b0d9c254c4"
I1228 15:04:14.547266       1 manager.go:340] manager "msg"="Created new CertificateRequest resource" "volume_id"="csi-6404d0e86c3f01668d1899859188730a22e0f179db1f7022a36072b0d9c254c4"
I1228 15:04:15.548181       1 nodeserver.go:100] driver "msg"="Volume registered for management" "pod_name"="a-test-pod"
I1228 15:04:15.548199       1 nodeserver.go:113] driver "msg"="Ensuring data directory for volume is mounted into pod..." "pod_name"="a-test-pod"
I1228 15:04:15.548412       1 nodeserver.go:132] driver "msg"="Bind mounting data directory to the pod's mount namespace" "pod_name"="a-test-pod"
I1228 15:04:15.549875       1 nodeserver.go:138] driver "msg"="Volume successfully provisioned and mounted" "pod_name"="a-test-pod"

cert-manager/cert-manager-64d459d7f-5rtc6/cert-manager-controller

I1228 15:04:14.497104       1 conditions.go:263] Setting lastTransitionTime for CertificateRequest "49fdaf0d-8548-45bc-ab8f-0eef4793b125" condition "Approved" to 2022-12-28 15:04:14.497097003 +0000 UTC m=+2757.852054054
I1228 15:04:14.577304       1 conditions.go:263] Setting lastTransitionTime for CertificateRequest "49fdaf0d-8548-45bc-ab8f-0eef4793b125" condition "Ready" to 2022-12-28 15:04:14.57729735 +0000 UTC m=+2757.932254386
manelio commented 1 year ago

Same problem here. I'm using microk8s, Kubernetes version 1.26:

kubectl get csidrivers

NAME                  ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES       AGE
csi.cert-manager.io   true             true             false             <unset>         false               Ephemeral   50m
# Deploy example app

kubectl apply -f https://raw.githubusercontent.com/cert-manager/csi-driver/main/deploy/example/example-app.yaml
kubectl get pod -n sandbox

NAME                          READY   STATUS    RESTARTS   AGE
my-csi-app-5c569977c8-b6dx4   1/1     Running   0          4m45s
my-csi-app-5c569977c8-c84kt   1/1     Running   0          4m45s
my-csi-app-5c569977c8-f6nrv   1/1     Running   0          4m45s
my-csi-app-5c569977c8-mn78w   1/1     Running   0          4m45s
my-csi-app-5c569977c8-tjwbl   1/1     Running   0          4m45s
# Directory /tls exists, but is empty

kubectl exec -n sandbox --stdin --tty $(kubectl get pod -n sandbox -l app=my-csi-app -o custom-columns=:metadata.name | tail -n1) -- ls -la /tls

total 8
drwxr-xr-x    2 root     root          4096 Jan 28 16:07 .
drwxr-xr-x    1 root     root          4096 Jan 28 16:07 ..

Edit: csi-driver log shows concerning messages:

I0201 10:23:11.880983       1 nodeserver.go:83] driver "msg"="Registered new volume with storage backend" "pod_name"="app-5dfb558dbb-rk8xv"
I0201 10:23:11.881091       1 manager.go:302] manager "msg"="Processing issuance" "volume_id"="csi-795b25d19675b007b4d71802bd485a92369c9437ce3d9224511bf5338f5ef143"
I0201 10:23:12.209743       1 manager.go:340] manager "msg"="Created new CertificateRequest resource" "volume_id"="csi-795b25d19675b007b4d71802bd485a92369c9437ce3d9224511bf5338f5ef143"
I0201 10:23:13.210873       1 nodeserver.go:100] driver "msg"="Volume registered for management" "pod_name"="app-5dfb558dbb-rk8xv"
I0201 10:23:13.210891       1 nodeserver.go:113] driver "msg"="Ensuring data directory for volume is mounted into pod..." "pod_name"="app-5dfb558dbb-rk8xv"
I0201 10:23:13.211448       1 nodeserver.go:132] driver "msg"="Bind mounting data directory to the pod's mount namespace" "pod_name"="app-5dfb558dbb-rk8xv"
I0201 10:23:13.213119       1 nodeserver.go:138] driver "msg"="Volume successfully provisioned and mounted" "pod_name"="app-5dfb558dbb-rk8xv"
I0201 10:23:34.735988       1 nodeserver.go:159] driver "msg"="Stopped management of volume" "target_path"="/var/snap/microk8s/common/var/lib/kubelet/pods/39e87081-472e-4f0c-a58c-74cf63ee13bd/volumes/kubernetes.io~csi/tls/mount" "volume_id"="csi-5bb5586064040ada5584fb6348169a07a0cec94e4007d2eba9d16ccca41779b8"
E0201 10:23:34.736098       1 server.go:109] driver "msg"="failed processing request" "error"="file does not exist" "request"={} "rpc_method"="/csi.v1.Node/NodeUnpublishVolume"
I0201 10:23:35.239902       1 nodeserver.go:159] driver "msg"="Stopped management of volume" "target_path"="/var/snap/microk8s/common/var/lib/kubelet/pods/39e87081-472e-4f0c-a58c-74cf63ee13bd/volumes/kubernetes.io~csi/tls/mount" "volume_id"="csi-5bb5586064040ada5584fb6348169a07a0cec94e4007d2eba9d16ccca41779b8"
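
The target_path in the log lines above hints at the root cause: on microk8s the kubelet root is /var/snap/microk8s/common/var/lib/kubelet, while the csi-driver chart (as far as I can tell) defaults to /var/lib/kubelet, so the driver writes the certificate files into a directory the kubelet never maps into the Pod. A quick sanity check, stripping everything from /pods/ onward of a target_path copied from the log (pure string handling, no cluster access needed):

```shell
# target_path copied from the NodeUnpublishVolume log line above
target_path="/var/snap/microk8s/common/var/lib/kubelet/pods/39e87081-472e-4f0c-a58c-74cf63ee13bd/volumes/kubernetes.io~csi/tls/mount"

# Everything before "/pods/<uid>/volumes/..." is the kubelet root directory.
kubelet_root="${target_path%%/pods/*}"
echo "$kubelet_root"
# → /var/snap/microk8s/common/var/lib/kubelet
```

If the printed path differs from the kubeletRootDir the chart was installed with, the bind mounts land outside the directory the kubelet actually serves to Pods, which would explain the empty /tls.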
szuro commented 1 year ago

I had the same issue using microk8s. It was solved after providing the following values to the Helm chart:

$ cat csi-values.yaml
app:
  kubeletRootDir: "/var/snap/microk8s/common/var/lib/kubelet"

Then install with:

helm upgrade -i -n cert-manager -f csi-values.yaml cert-manager-csi-driver jetstack/cert-manager-csi-driver --wait