awslabs / mountpoint-s3-csi-driver

Built on Mountpoint for Amazon S3, the Mountpoint CSI driver presents an Amazon S3 bucket as a storage volume accessible by containers in your Kubernetes cluster.
Apache License 2.0

Can't mount two S3 mount points in the same pod #114

Closed · sogos closed this 6 months ago

sogos commented 6 months ago

/kind bug

NOTE: If this is a filesystem related bug, please take a look at the Mountpoint repo to submit a bug report

What happened?

Tried to mount two different buckets in the same pod:

Pod volumeMounts:

    - name: front-bucket
      mountPath: /mnt/front
      readOnly: true
    - name: assets-bucket
      mountPath: /mnt/assets

Pod volumes:

    - name: assets-bucket
      persistentVolumeClaim:
        claimName: xxx-assets-claim
    - name: front-bucket
      persistentVolumeClaim:
        claimName: xxx-front-claim
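Combined, the relevant part of the pod manifest looks roughly like this (the pod name, container name, and image are assumptions for illustration; everything else is taken from the fragments above):

    apiVersion: v1
    kind: Pod
    metadata:
      name: two-buckets-pod          # assumed name
    spec:
      containers:
        - name: app                  # assumed container name and image
          image: busybox
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: front-bucket
              mountPath: /mnt/front
              readOnly: true
            - name: assets-bucket
              mountPath: /mnt/assets
      volumes:
        - name: assets-bucket
          persistentVolumeClaim:
            claimName: xxx-assets-claim
        - name: front-bucket
          persistentVolumeClaim:
            claimName: xxx-front-claim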

If I comment out one volume mount or the other, the pod starts correctly.

What you expected to happen?

I expect the Pod to run with both S3 mount points attached :)

How to reproduce it (as minimally and precisely as possible)?

Create two PVs (each pointing to a different bucket) and matching PVCs, then try to mount both in a pod, as sketched below.
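A minimal sketch of the two PVs, using the bucket name, region, and access mode that appear in the logs below (the second bucket name and both PV names for the assets side are assumptions; PVC definitions are omitted). Note that both PVs carry the same volumeHandle, which, as it turns out further down, is what triggers the problem:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: xxx-front-pv
    spec:
      capacity:
        storage: 1200Gi                      # ignored by the driver but required by Kubernetes
      accessModes:
        - ReadWriteMany                      # matches MULTI_NODE_MULTI_WRITER in the logs
      storageClassName: ""                   # static provisioning
      mountOptions:
        - region eu-west-3
      csi:
        driver: s3.csi.aws.com
        volumeHandle: s3-csi-driver-volume   # same handle on both PVs
        volumeAttributes:
          bucketName: xxx-front-production
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: xxx-assets-pv                    # assumed name; only the claim name appears in the thread
    spec:
      capacity:
        storage: 1200Gi
      accessModes:
        - ReadWriteMany
      storageClassName: ""
      mountOptions:
        - region eu-west-3
      csi:
        driver: s3.csi.aws.com
        volumeHandle: s3-csi-driver-volume   # duplicate handle: the trigger for this issue
        volumeAttributes:
          bucketName: xxx-assets-production  # assumed bucket name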

Anything else we need to know?:

I also use the Secrets Store CSI Driver + its AWS provider plugin.

Logs from one S3 CSI driver pod; only one of the two volumes is detected and mounted:

I1222 15:22:23.195319       1 driver.go:61] Driver version: 1.1.0, Git commit: c681ab1f19ccba5976e3263f0e3df65718750369, build date: 2023-12-05T19:47:03Z, nodeID: ip-10-1-19-61.eu-west-3.compute.internal, mount-s3 version: 1.3.1
I1222 15:22:23.199800       1 mount_linux.go:285] 'umount /tmp/kubelet-detect-safe-umount2175845353' failed with: exit status 32, output: umount: /tmp/kubelet-detect-safe-umount2175845353: must be superuser to unmount.
I1222 15:22:23.199818       1 mount_linux.go:287] Detected umount with unsafe 'not mounted' behavior
I1222 15:22:23.205186       1 driver.go:83] Found AWS_WEB_IDENTITY_TOKEN_FILE, syncing token
I1222 15:22:23.205394       1 driver.go:113] Listening for connections on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"}
I1222 15:22:23.887141       1 node.go:204] NodeGetInfo: called with args 
I1222 15:22:31.688535       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:22:31.689731       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:22:31.690351       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:22:31.690838       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:22:31.691495       1 node.go:49] NodePublishVolume: called with args volume_id:"s3-csi-driver-volume" target_path:"/var/lib/kubelet/pods/b5dc74ab-ac7a-4f34-8a45-14596e68137c/volumes/kubernetes.io~csi/xxx-front-pv/mount" volume_capability:<mount:<mount_flags:"region eu-west-3" > access_mode:<mode:MULTI_NODE_MULTI_WRITER > > volume_context:<key:"bucketName" value:"xxx-front-production" > 
I1222 15:22:31.691547       1 node.go:81] NodePublishVolume: creating dir /var/lib/kubelet/pods/b5dc74ab-ac7a-4f34-8a45-14596e68137c/volumes/kubernetes.io~csi/xxx-front-pv/mount
I1222 15:22:31.691613       1 node.go:108] NodePublishVolume: mounting xxx-front-production at /var/lib/kubelet/pods/b5dc74ab-ac7a-4f34-8a45-14596e68137c/volumes/kubernetes.io~csi/xxx-front-pv/mount with options [--region=eu-west-3]
I1222 15:22:31.693249       1 systemd.go:99] Creating service to run cmd /opt/mountpoint-s3-csi/bin/mount-s3 with args [xxx-front-production /var/lib/kubelet/pods/b5dc74ab-ac7a-4f34-8a45-14596e68137c/volumes/kubernetes.io~csi/xxx-front-pv/mount --region=eu-west-3 --user-agent-prefix=s3-csi-driver/1.1.0]: mount-s3-1.3.1-df2c8236-7483-4356-8c7c-ebb2f88904be.service
I1222 15:22:31.813628       1 node.go:113] NodePublishVolume: /var/lib/kubelet/pods/b5dc74ab-ac7a-4f34-8a45-14596e68137c/volumes/kubernetes.io~csi/xxx-front-pv/mount was mounted
I1222 15:22:46.776028       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:24:11.527808       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:25:48.044354       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:27:03.892153       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:28:25.606031       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:29:40.029468       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:31:38.040287       1 node.go:188] NodeGetCapabilities: called with args

Environment

sogos commented 6 months ago

Oh, I just tried renaming the "undocumented" field on the PV

 volumeHandle: s3-csi-driver-volume

to something different on each PV and it worked! (s3-csi-driver-volume-front)

Keeping this issue for documentation :)
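For documentation purposes: applying that fix to the PV sketch earlier in the thread changes only the csi.volumeHandle values, which just have to be unique across PVs (the -assets value is an assumption; any distinct string works):

    # xxx-front-pv
    csi:
      driver: s3.csi.aws.com
      volumeHandle: s3-csi-driver-volume-front    # from the comment above
      volumeAttributes:
        bucketName: xxx-front-production

    # xxx-assets-pv
    csi:
      driver: s3.csi.aws.com
      volumeHandle: s3-csi-driver-volume-assets   # assumed; any value unique across PVs works
      volumeAttributes:
        bucketName: xxx-assets-production         # assumed bucket name

This matches the explanation below: volumeHandle maps to the CSI volume_id, which Kubernetes treats as the volume's unique identity, so two PVs sharing one handle look like the same volume.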

jjkr commented 6 months ago

Glad you figured out the fix here. The CSI persistent volume fields are documented here for reference. We'll add better documentation and examples on our end as well. Kubernetes seemingly drops volumes with duplicate IDs silently and never passes them on to the driver, so the error reporting is not helpful.

sogos commented 6 months ago

Thanks for the example!!! :heart: