freegroup / kube-s3

Kubernetes pods using shared S3 storage
MIT License

Volume mount does not work in Azure AKS #16

Open ananthkamath opened 3 years ago

ananthkamath commented 3 years ago

I followed the steps in the blog post https://blog.meain.io/2020/mounting-s3-bucket-kube/, but after the daemonset is created, when I bash into the pod I see the following in the /var directory.

bash-4.3# ls /var
cache   db      empty   git     lib     lock    log     run     s3fs    s3fs:shared  spool   tmp

I just wanted to check whether I am missing anything that needs to be done for Azure AKS, or does this setup not work on AKS? My manifests are below.

Daemonset.yaml

spec:
      containers:
      - name: s3fuse
        image: meain/s3-mounter
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh","-c","umount -f /var/s3fs"]
        securityContext:
          privileged: true
          capabilities:
            add:
            - SYS_ADMIN
        envFrom:
        - configMapRef:
            name: s3-config
        volumeMounts:
        - name: devfuse
          mountPath: /dev/fuse
        - name: mntdatas3fs
          mountPath: /var/s3fs:shared
      volumes:
      - name: devfuse
        hostPath:
          path: /dev/fuse
      - name: mntdatas3fs
        hostPath:
          path: /mnt/s3-data
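
For reference, Kubernetes declares mount propagation with the mountPropagation field on a volumeMount rather than a ":shared" suffix in mountPath; the suffix is treated as part of the literal path, which would explain the s3fs:shared directory in the ls output above. A minimal sketch of the daemonset's volumeMounts written that way (my assumption, not a confirmed fix for AKS):

        volumeMounts:
        - name: devfuse
          mountPath: /dev/fuse
        - name: mntdatas3fs
          mountPath: /var/s3fs               # plain path, no ":shared" suffix
          mountPropagation: Bidirectional    # propagate the fuse mount made in the container back to the host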

examplepod.yaml

spec:
  containers:
  - image: nginx
    name: s3-test-container
    securityContext:
      privileged: true
    volumeMounts:
    - name: mntdatas3fs
      mountPath: /var/s3fs:shared
  volumes:
  - name: mntdatas3fs
    hostPath:
      path: /mnt/s3-data
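
The consumer pod could use the same field; HostToContainer should be enough there, since it only needs to see the mount the daemonset creates on the host (again just a sketch under that assumption):

    volumeMounts:
    - name: mntdatas3fs
      mountPath: /var/s3fs                 # plain path
      mountPropagation: HostToContainer    # receive mounts made on the host by the s3fs daemonset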

schantier commented 2 years ago

I have the same problem. Did you find a solution?

apatil4 commented 1 year ago

Seeing the same thing on AWS EKS.