awslabs / mountpoint-s3-csi-driver

Built on Mountpoint for Amazon S3, the Mountpoint CSI driver presents an Amazon S3 bucket as a storage volume accessible by containers in your Kubernetes cluster.

No signing credentials found #260

Closed: ocni-dtu closed this issue 2 months ago

ocni-dtu commented 2 months ago

/kind bug

What happened? I'm trying to get the CSI driver to work on an EC2 instance running K3s.

I'm getting the following error:

kubectl logs -l app=s3-csi-node --namespace kube-system
Defaulted container "s3-plugin" out of: s3-plugin, node-driver-registrar, liveness-probe, install-mountpoint (init)
I0920 12:26:53.127654       1 node.go:207] NodeGetCapabilities: called with args 
I0920 12:26:53.129656       1 node.go:65] NodePublishVolume: req: volume_id:"s3-csi-driver-volume" target_path:"/var/lib/kubelet/pods/f0b06756-35f7-4621-8112-7f890854707f/volumes/kubernetes.io~csi/s3-pv/mount" volume_capability:<mount:<mount_flags:"region eu-central-1" > access_mode:<mode:MULTI_NODE_READER_ONLY > > volume_context:<key:"bucketName" value:"gbdi-bundles" > 
I0920 12:26:53.129738       1 node.go:112] NodePublishVolume: mounting gbdi-bundles at /var/lib/kubelet/pods/f0b06756-35f7-4621-8112-7f890854707f/volumes/kubernetes.io~csi/s3-pv/mount with options [--read-only --region=eu-central-1]
E0920 12:26:53.216697       1 driver.go:113] GRPC error: rpc error: code = Internal desc = Could not mount "gbdi-bundles" at "/var/lib/kubelet/pods/f0b06756-35f7-4621-8112-7f890854707f/volumes/kubernetes.io~csi/s3-pv/mount": Mount failed: Failed to start service output: Error: Failed to create S3 client  Caused by:     0: initial ListObjectsV2 failed for bucket gbdi-bundles in region eu-central-1     1: Client error     2: No signing credentials found Error: Failed to create mount process 
I0920 12:28:55.308607       1 node.go:207] NodeGetCapabilities: called with args 
I0920 12:28:55.313315       1 node.go:207] NodeGetCapabilities: called with args 
I0920 12:28:55.317433       1 node.go:207] NodeGetCapabilities: called with args 
I0920 12:28:55.319229       1 node.go:65] NodePublishVolume: req: volume_id:"s3-csi-driver-volume" target_path:"/var/lib/kubelet/pods/f0b06756-35f7-4621-8112-7f890854707f/volumes/kubernetes.io~csi/s3-pv/mount" volume_capability:<mount:<mount_flags:"region eu-central-1" > access_mode:<mode:MULTI_NODE_READER_ONLY > > volume_context:<key:"bucketName" value:"gbdi-bundles" > 
I0920 12:28:55.319354       1 node.go:112] NodePublishVolume: mounting gbdi-bundles at /var/lib/kubelet/pods/f0b06756-35f7-4621-8112-7f890854707f/volumes/kubernetes.io~csi/s3-pv/mount with options [--read-only --region=eu-central-1]
E0920 12:28:55.381316       1 driver.go:113] GRPC error: rpc error: code = Internal desc = Could not mount "gbdi-bundles" at "/var/lib/kubelet/pods/f0b06756-35f7-4621-8112-7f890854707f/volumes/kubernetes.io~csi/s3-pv/mount": Mount failed: Failed to start service output: Error: Failed to create S3 client  Caused by:     0: initial ListObjectsV2 failed for bucket gbdi-bundles in region eu-central-1     1: Client error     2: No signing credentials found Error: Failed to create mount process

What you expected to happen? The PVC to mount.

How to reproduce it (as minimally and precisely as possible)? I installed the CSI driver with Helm (see the sketch below); the aws-secret was created in advance.
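
(For reference, the Helm install typically looks like the following; the chart and repo names are taken from the project's install docs and may differ for your setup.)

$ helm repo add aws-mountpoint-s3-csi-driver https://awslabs.github.io/mountpoint-s3-csi-driver
$ helm repo update
$ helm upgrade --install aws-mountpoint-s3-csi-driver \
    --namespace kube-system \
    aws-mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver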

The PV, PVC setup looks like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv
spec:
  capacity:
    storage: 1200Gi # ignored, required
  accessModes:
    - ReadOnlyMany # supported options: ReadWriteMany / ReadOnlyMany
  mountOptions:
    - region eu-central-1
  csi:
    driver: s3.csi.aws.com # required
    volumeHandle: s3-csi-driver-volume
    volumeAttributes:
      bucketName: gbdi-bundles
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-claim
  namespace: bundles
spec:
  accessModes:
    - ReadOnlyMany # supported options: ReadWriteMany / ReadOnlyMany
  storageClassName: "" # required for static provisioning
  resources:
    requests:
      storage: 1200Gi # ignored, required
  volumeName: s3-pv
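
(The pod that consumes this claim isn't shown in the issue; for completeness, a minimal pod spec might look like the following. The pod name, image, and mount path are illustrative.)

apiVersion: v1
kind: Pod
metadata:
  name: s3-reader # illustrative name
  namespace: bundles
spec:
  containers:
    - name: app
      image: busybox # illustrative image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: bundles-volume
          mountPath: /data # illustrative mount path
          readOnly: true
  volumes:
    - name: bundles-volume
      persistentVolumeClaim:
        claimName: s3-claim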

Anything else we need to know?:

Environment

unexge commented 2 months ago

Hey @ocni-dtu, thanks for reporting the issue.

Could you please check whether you created your aws-secret in the kube-system namespace with the correct keys? If you're using the default Helm chart values, it should be something like this:

$ kubectl create secret generic aws-secret \
    --namespace kube-system \
    --from-literal "key_id=${AWS_ACCESS_KEY_ID}" \
    --from-literal "access_key=${AWS_SECRET_ACCESS_KEY}"

If the issue still persists, could you please try again with version v1.8.0? We changed how long-term AWS credentials are handled in v1.8.1, so it might be related to that.
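
(For reference, pinning the install to v1.8.0 with Helm looks roughly like the following; this assumes the chart version tracks the driver version, and reuses the chart/repo names from the project's install docs.)

$ helm upgrade --install aws-mountpoint-s3-csi-driver \
    --namespace kube-system \
    --version 1.8.0 \
    aws-mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver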

ocni-dtu commented 2 months ago

Yes, the aws-secret was created in kube-system, but not with the keys key_id and access_key; the key names were overridden in the Helm chart values:

    awsAccessSecret:
      keyId: accesskeyid
      accessKey: accessskeysecret

which I can now see contains a typo... 🤦🏻
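
(For anyone hitting the same error: a quick way to compare the secret's actual data keys against the key names configured in the Helm values; the secret name and namespace below are the ones used in this thread.)

$ kubectl describe secret aws-secret --namespace kube-system

The Data section of the output lists the secret's key names; they must exactly match the keyId and accessKey values configured for the chart.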

ocni-dtu commented 2 months ago

And I can report that it works in both v1.8.1 and v1.8.0.

unexge commented 2 months ago

Thanks for the report @ocni-dtu! Happy to hear that your problem is solved.