Open EdKingscote opened 5 months ago
Thank you for the request. You're correct that the current authentication mechanisms are generally designed to share a single IAM role across multiple mounts on a node. The driver will pass the `profile` option on to the Mountpoint process, so adding a line like `- profile=myprofile` to the persistent volume's `mountOptions` is a potential workaround. For this to work, you will need a credentials file in `/root/.aws` on the host's filesystem (not in a container, because the actual FUSE process runs outside of the driver, under the host's systemd). See the Mountpoint documentation for more details on how its authentication works.
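As a rough sketch of that workaround (the volume, bucket, and profile names here are placeholders, not taken from the driver's docs), the PV might look something like:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv-account-b            # hypothetical name
spec:
  capacity:
    storage: 1Gi                   # required by the API, not enforced for S3
  accessModes:
    - ReadWriteMany
  mountOptions:
    - profile=myprofile            # passed through to the Mountpoint process
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-pv-account-b  # must be unique per volume
    volumeAttributes:
      bucketName: my-bucket-in-account-b
```

paired with a standard AWS credentials file at `/root/.aws/credentials` on each node's host filesystem:

```ini
[myprofile]
aws_access_key_id = AKIA...
aws_secret_access_key = ...
```

This is only a sketch under the assumptions above; distributing static keys to every node is the main drawback that makes this feature request worthwhile.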
We will keep this request open as a feature to make multiple account access possible or at least easier and share any updates as we have them.
I have the same need.
I would also like to point out that it would be very nice not to use the same IAM role to access all S3 buckets, but to be able to specify exactly which role to use on a per-bucket basis, on par with the EFS driver's authentication feature set.
/feature
I'm looking at running this in a self-hosted K8s environment, but I need to access S3 buckets spread across multiple AWS accounts, which means each will need a unique access/secret key combination.
I've spent a fair bit of time looking around, but it isn't clear to me whether this is achievable. The only approach I can envisage right now is using `mountOptions` on the Persistent Volume definition to select the right credential profile, but I can't see a way to provide the profiles themselves.
Many thanks