efrecon / docker-s3fs-client

Alpine-based s3fs client: mount from container, make available to other containers
BSD 3-Clause "New" or "Revised" License

Possibility to load credentials from env file #32

Closed · hown3d closed this 2 years ago

hown3d commented 2 years ago

Hey, I'm currently trying to set up s3fs inside Kubernetes with IAM Roles for Service Accounts (IRSA). Since s3fs-fuse doesn't use the AWS C++ SDK, it's kind of a hack to pull off.
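
For context, the IRSA webhook injects the two environment variables the init container below relies on. The exact values depend on the cluster, so treat these as illustrative:

# Injected by the EKS pod identity webhook when the service account is
# annotated with eks.amazonaws.com/role-arn (typical values shown).
echo "$AWS_ROLE_ARN"                  # e.g. arn:aws:iam::123456789012:role/s3fs-role
echo "$AWS_WEB_IDENTITY_TOKEN_FILE"   # usually /var/run/secrets/eks.amazonaws.com/serviceaccount/token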

My current setup looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  serviceAccount: svc-acc
  volumes:
  - name: devfuse
    hostPath:
      path: /dev/fuse
  - name: tmp
    emptyDir: {}
  - name: mntdumps3fs
    hostPath:
      path: /mnt/s3fs/test-rocket-rocketchat-s3fs
  initContainers:
  - name: get-aws-credentials
    image: amazon/aws-cli:latest
    volumeMounts:
    - mountPath: /tmp
      name: tmp
    securityContext:
      runAsUser: 0
      runAsNonRoot: false
      runAsGroup: 0
    command: ["/bin/sh", "-c"]
    args:
      - STS=$(aws sts assume-role-with-web-identity --role-arn $AWS_ROLE_ARN --role-session-name s3-fuse --web-identity-token $(cat ${AWS_WEB_IDENTITY_TOKEN_FILE}) --query 'Credentials.[AccessKeyId,SecretAccessKey]' --output text);
        AWS_ACCESS_KEY_ID=$(echo $STS | cut -d' ' -f1);
        AWS_SECRET_ACCESS_KEY=$(echo $STS | cut -d' ' -f2);
        echo -e "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" > /tmp/s3_passwd;
        chmod 600 /tmp/s3_passwd;
  containers:
  - name: s3fs
    image: ghcr.io/efrecon/s3fs:1.91
    imagePullPolicy: IfNotPresent
    securityContext:
      privileged: true
      capabilities:
        add:
          # needed for fuse
          - SYS_ADMIN
          - MKNOD
      runAsUser: 0
      runAsNonRoot: false
      runAsGroup: 0
      seccompProfile:
        type: "Unconfined"
    env:
      - name: AWS_S3_BUCKET
        value: organisation-rocketchat-backups-stage
      - name: AWS_S3_AUTHFILE
        value: /tmp/s3_passwd
      - name: S3FS_ARGS
        # comma-separated s3fs options; listing S3FS_ARGS twice makes Kubernetes
        # keep only the last entry and silently drop the endpoint
        value: "endpoint=eu-central-1,curldbg"
      - name: S3FS_DEBUG
        value: "1"
    volumeMounts:
    - name: devfuse
      mountPath: /dev/fuse
    - name: mntdumps3fs
      mountPath: /opt/s3fs/bucket
    - name: tmp
      mountPath: /tmp
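
As a side note, the init container above writes /tmp/s3_passwd in the ACCESS_KEY:SECRET_KEY format that s3fs reads through AWS_S3_AUTHFILE. A quick sanity check (just a sketch, using the paths from the spec above) could be:

# Run after the init container has completed; paths match the pod spec above.
test -s /tmp/s3_passwd || { echo "credentials file missing or empty" >&2; exit 1; }
grep -Eq '^[^:]+:.+$' /tmp/s3_passwd || echo "unexpected passwd format" >&2
stat -c '%a' /tmp/s3_passwd   # expect 600; s3fs rejects credential files readable by others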

If the entrypoint provided a way to load the needed environment variables from a .env file, I could simply do the following in my init container:

STS=$(aws sts assume-role-with-web-identity --role-arn $AWS_ROLE_ARN --role-session-name s3-fuse --web-identity-token $(cat ${AWS_WEB_IDENTITY_TOKEN_FILE}) --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text)
AWS_ACCESS_KEY_ID=$(echo $STS | cut -d' ' -f1)
AWS_SECRET_ACCESS_KEY=$(echo $STS | cut -d' ' -f2)
AWS_SESSION_TOKEN=$(echo $STS | cut -d' ' -f3)
echo -e "AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID\nAWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY\nAWS_SESSION_TOKEN=$AWS_SESSION_TOKEN" > /tmp/creds.env

With that approach, I could provide the path to the credentials file and the entrypoint would source it.
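
For illustration, a minimal sketch of such a hook in the entrypoint; AWS_S3_ENVFILE is a hypothetical variable name, not something the image supports today (PR #33 may well name things differently):

# Hypothetical addition to docker-entrypoint.sh: source an optional env file
# before the s3fs command is built, exporting every variable it assigns.
if [ -n "${AWS_S3_ENVFILE:-}" ] && [ -f "${AWS_S3_ENVFILE}" ]; then
    set -a                      # auto-export assignments made while sourcing
    . "${AWS_S3_ENVFILE}"
    set +a
fi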

But a native way for s3fs-fuse to do this would still be much better. @gaul do you have any plans to add support soon?

efrecon commented 2 years ago

Could you, @hown3d, have a look at PR #33 and see if that would solve your problems? Would be even better if you could test it and report here once done. thx!

hown3d commented 2 years ago

LGTM. I didn't use s3fs in the end, so I can't test it right now.