Closed: msenechal closed this issue 6 months ago
@msenechal thanks for raising this issue with us.
The correct way to mount the S3 bucket is to set the uid and gid mount options to 7474 (the neo4j user):
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv-neo4j
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - allow-other
    - uid=7474
    - gid=7474
    - region eu-west-2
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-driver-volume
    volumeAttributes:
      bucketName: harshit-demo2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-claim-neo4j
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
  volumeName: s3-pv-neo4j
```
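Before applying the manifests, it can help to sanity-check that the PVC actually references the PV by name and that the uid/gid mount options are present. A minimal stdlib-only Python sketch (the embedded manifest simply mirrors the example above, trimmed to the fields being checked):

```python
# Sanity-check a PV/PVC manifest pair: the PVC must bind to the PV by
# name, and the PV must carry the uid/gid mount options for user 7474.
manifest = """\
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv-neo4j
spec:
  mountOptions:
  - allow-other
  - uid=7474
  - gid=7474
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-claim-neo4j
spec:
  volumeName: s3-pv-neo4j
"""

pv_doc, pvc_doc = manifest.split("---")

def field(doc: str, key: str) -> str:
    """Return the value of the first 'key: value' line in doc."""
    for line in doc.splitlines():
        stripped = line.strip()
        if stripped.startswith(key + ":"):
            return stripped.split(":", 1)[1].strip()
    raise KeyError(key)

pv_name = field(pv_doc, "name")
claim_target = field(pvc_doc, "volumeName")

assert claim_target == pv_name, "PVC does not bind to the PV"
assert "- uid=7474" in pv_doc and "- gid=7474" in pv_doc, "missing uid/gid"
print("ok:", pv_name)  # ok: s3-pv-neo4j
```

This is just a line-scanning sketch, not a YAML parser; for anything beyond a quick check, `kubectl apply --dry-run=client` or a real YAML library is the better tool.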
This will mount the PV as the neo4j user, and you will not have to use setOwnerAndGroupWritableFilePermissions.
It seems that when setOwnerAndGroupWritableFilePermissions is set without the uid and gid mount options, the chown and chmod operations fail to take effect (Mountpoint for Amazon S3 does not support changing ownership or permissions after the mount, so they have to be set via the mount options).
You can find the official AWS example for setting the uid and gid here.
I have tried it after setting the uid and gid and it seems to be working just fine. I used the values.yaml below to deploy Neo4j:
```yaml
neo4j:
  name: demo
  resources:
    cpu: "0.5"
    memory: "2Gi"
  password: "password"
  edition: "enterprise"
  acceptLicenseAgreement: "yes"
volumes:
  data:
    mode: "dynamic"
    dynamic:
      storageClassName: gp2
  import:
    mode: volume
    disableSubPathExpr: true
    volume:
      persistentVolumeClaim:
        claimName: s3-claim-neo4j
```
Note on `disableSubPathExpr: true`: this is set to false by default, but if you do not set it to true, the /import folder gets uploaded to the bucket, which you might not want, so setting it to true is likely required here.
```
neo4j@demo-0:/import$ ls -slt
total 0
0 -rw-r--r-- 1 neo4j neo4j 0 Apr  2 21:41 demo.txt
neo4j@demo-0:/import$ echo "hello" > abc.txt
neo4j@demo-0:/import$ cat abc.txt
hello
neo4j@demo-0:/import$
```
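The quick write-and-read-back check shown above can also be scripted. A minimal Python sketch of the same test; the pod path would be /import, but the directory is a parameter here so the snippet can be tried anywhere:

```python
import os
import tempfile

def check_writable(mount_dir: str) -> dict:
    """Write a file, read it back, and report the directory's owner
    uid/gid, mirroring the `echo "hello" > abc.txt` check in the pod."""
    probe = os.path.join(mount_dir, "abc.txt")
    with open(probe, "w") as f:
        f.write("hello\n")
    with open(probe) as f:
        content = f.read().strip()
    st = os.stat(mount_dir)
    return {"content": content, "uid": st.st_uid, "gid": st.st_gid}

# Inside the neo4j pod this would be check_writable("/import"), and with
# the uid/gid mount options in place it should report uid/gid 7474.
# Locally, try it against a throwaway temp directory:
result = check_writable(tempfile.mkdtemp())
print(result["content"])  # hello
```

A failure to open the probe file for writing (PermissionError) is the symptom the original report describes: the volume is mounted, but not writable by the neo4j user.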
Contact Details
morgan.senechal@neo4j.com
What happened?
When trying to mount an existing persistentVolumeClaim with Mountpoint for Amazon S3 CSI Driver as the source, it looks like it is not able to mount it.
Steps to reproduce:
1. Install this add-on in your EKS cluster.
2. Create the PV and PVC that will mount an S3 bucket: kubectl apply -f volumes3.yaml -n namespace
3. Test writing to the S3 bucket from a simple CentOS container (also useful if you want to kubectl exec into the container and inspect how the FS is mounted): kubectl apply -f app.yaml -n namespace
4. Install Neo4j with the Helm chart using this volume (for import): helm install -n namespace my-neo4j-release neo4j/neo4j -f neo4j.yaml
Issue: On the CentOS pod, the volume is mounted and can be read/written. On the neo4j pod, the volume is mounted as:
Chart Name
Neo4j
Chart Version
5.12.0
Environment
Amazon Web Services