Open ananthkamath opened 3 years ago
I followed the steps in the blog post https://blog.meain.io/2020/mounting-s3-bucket-kube/, but after the daemonset is created and I bash into the pod, I see the following in the /var directory.
bash-4.3# ls /var
cache  db  empty  git  lib  lock  log  run  s3fs  s3fs:shared  spool  tmp
Just wanted to check if I am missing anything here that needs to be done for Azure AKS, or does this setup not work on Azure AKS?
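For reference, one way to tell whether the bucket is actually mounted inside the mounter pod (rather than /var/s3fs just being an empty directory) is to look for a fuse.s3fs entry in the mount table. This is only a hypothetical check, not something from the blog post:

    bash-4.3# mount | grep s3fs   # expect a line like "s3fs on /var/s3fs type fuse.s3fs (...)" if the mount succeeded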
Daemonset.yaml
spec:
  containers:
  - name: s3fuse
    image: meain/s3-mounter
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "umount -f /var/s3fs"]
    securityContext:
      privileged: true
      capabilities:
        add:
        - SYS_ADMIN
    envFrom:
    - configMapRef:
        name: s3-config
    volumeMounts:
    - name: devfuse
      mountPath: /dev/fuse
    - name: mntdatas3fs
      mountPath: /var/s3fs:shared
  volumes:
  - name: devfuse
    hostPath:
      path: /dev/fuse
  - name: mntdatas3fs
    hostPath:
      path: /mnt/s3-data
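Note the mountPath value /var/s3fs:shared. Kubernetes itself does not define a ":shared" suffix on mountPath; on container runtimes that do not forward it as a Docker-style bind flag it appears to be treated as a literal directory name, which would explain the "s3fs:shared" entry in the ls output above. Below is a minimal sketch of the same volumeMounts section using the Kubernetes mountPropagation field instead; this is an assumption on my part, not something the blog post confirms, and the rest of the DaemonSet spec is left as above:

    volumeMounts:
    - name: devfuse
      mountPath: /dev/fuse
    - name: mntdatas3fs
      mountPath: /var/s3fs               # plain path, no ":shared" suffix
      mountPropagation: Bidirectional    # propagate the s3fs mount back to the host path (needs a privileged container)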
examplepod.yaml
spec:
  containers:
  - image: nginx
    name: s3-test-container
    securityContext:
      privileged: true
    volumeMounts:
    - name: mntdatas3fs
      mountPath: /var/s3fs:shared
  volumes:
  - name: mntdatas3fs
    hostPath:
      path: /mnt/s3-data
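Likewise, for the consumer pod the ":shared" suffix could be replaced with HostToContainer propagation, so that a mount appearing on the host path /mnt/s3-data after the container starts becomes visible inside it. A sketch under the same assumption, not verified on AKS or EKS:

    volumeMounts:
    - name: mntdatas3fs
      mountPath: /var/s3fs                  # plain path, no ":shared" suffix
      mountPropagation: HostToContainer     # pick up the mount the DaemonSet creates on /mnt/s3-data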
I have the same problem, did you find a solution?
Seeing the same thing on AWS EKS.