neo4j / helm-charts


[Bug]: Mounting persistentVolumeClaim not working #298

Closed msenechal closed 6 months ago

msenechal commented 9 months ago

Contact Details

morgan.senechal@neo4j.com

What happened?

When trying to mount an existing persistentVolumeClaim that uses the Mountpoint for Amazon S3 CSI Driver as its source, the Neo4j pod is unable to access the mounted volume.

Steps to reproduce: Install this add-on in your EKS cluster
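As a rough sketch, the Mountpoint for Amazon S3 CSI driver can be installed as an EKS add-on along these lines (the cluster name, account ID, and IAM role name are placeholders to substitute for your environment):

```shell
# Install the Mountpoint for Amazon S3 CSI driver as an EKS add-on.
# <cluster-name>, <account-id>, and <s3-csi-driver-role> are placeholders.
aws eks create-addon \
  --cluster-name <cluster-name> \
  --region eu-west-1 \
  --addon-name aws-mountpoint-s3-csi-driver \
  --service-account-role-arn arn:aws:iam::<account-id>:role/<s3-csi-driver-role>

# Verify the driver pods are running:
kubectl get pods -n kube-system | grep s3-csi
```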

Create the PV and PVC that will mount an S3 bucket: kubectl apply -f volumes3.yaml -n namespace

apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv-neo4j
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - allow-other
    - region eu-west-1
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-driver-volume
    volumeAttributes:
      bucketName: neo4jmorgan-csi-k8s
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-claim-neo4j
spec:
  accessModes:
    - ReadWriteMany 
  storageClassName: "" 
  resources:
    requests:
      storage: 10Gi 
  volumeName: s3-pv-neo4j
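Assuming the manifests above applied cleanly, you can confirm the claim bound to the static PV before installing anything on top of it (`namespace` is the placeholder namespace used throughout):

```shell
# Both should report STATUS Bound; the PVC should list s3-pv-neo4j as its volume.
kubectl get pv s3-pv-neo4j
kubectl get pvc s3-claim-neo4j -n namespace
```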

Example test writing to the S3 bucket from a simple centos container (also useful if you want to kubectl exec into the container and inspect how the filesystem is mounted): kubectl apply -f app.yaml -n namespace

apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
    - name: app1
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from app1' >> /import/app1.txt; tail -f /dev/null"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /import
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: s3-claim-neo4j

Install Neo4j with the Helm chart, using this volume for import: helm install -n namespace my-neo4j-release neo4j/neo4j -f neo4j.yaml

neo4j:
  name: my-standalone
  resources:
    cpu: "0.5"
    memory: "2Gi"
  password: "password"
  edition: "enterprise"
  acceptLicenseAgreement: "yes"
volumes:
  data:
    mode: "dynamic"
    dynamic:
      storageClassName: gp2
  import:
    mode: volume
    volume:
      setOwnerAndGroupWritableFilePermissions: true
      persistentVolumeClaim:
        claimName: s3-claim-neo4j

Issue: On the centos pod, the volume is mounted and is readable/writable. On the neo4j pod, the volume is mounted as:

neo4j@my-neo4j-release-0:~$ ls -ltr /      
ls: cannot access '/import': Permission denied
total 20
d?????????   ? ?    ?        ?            ? import
drwxr-xr-x   2 root root     6 Sep 29 20:00 home

Chart Name

Neo4j

Chart Version

5.12.0

Environment

Amazon Web Services


harshitsinghvi22 commented 6 months ago

@msenechal thanks for raising this issue with us.

The correct way to mount the S3 bucket is to set the uid and gid mount options to 7474 (the neo4j user and group inside the container):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv-neo4j
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  mountOptions:    
    - allow-other
    - uid=7474
    - gid=7474
    - region eu-west-2
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-driver-volume
    volumeAttributes:
      bucketName: harshit-demo2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-claim-neo4j
spec:
  accessModes:
    - ReadWriteMany 
  storageClassName: "" 
  resources:
    requests:
      storage: 10Gi 
  volumeName: s3-pv-neo4j

This will mount the PV as the neo4j user, so you will not have to use setOwnerAndGroupWritableFilePermissions.

It seems that when setOwnerAndGroupWritableFilePermissions is set without the uid and gid mount options, the chown and chmod operations fail to take effect.

You can find the official AWS example for setting uid and gid here.

I have tried this after setting the uid and gid, and it seems to work just fine. I used the values.yaml below to deploy Neo4j:

neo4j:
  name: demo
  resources:
    cpu: "0.5"
    memory: "2Gi"
  password: "password"
  edition: "enterprise"
  acceptLicenseAgreement: "yes"
volumes:
  data:
    mode: "dynamic"
    dynamic:
      storageClassName: gp2
  import:
    mode: volume
    disableSubPathExpr: true
    volume:      
      persistentVolumeClaim:
        claimName: s3-claim-neo4j  

disableSubPathExpr: true --> this is false by default, but if you do not set it to true, the /import folder itself will get uploaded to the bucket, which you probably do not want. Setting it to true is therefore likely required here.

neo4j@demo-0:/import$ ls -slt
total 0
0 -rw-r--r-- 1 neo4j neo4j 0 Apr  2 21:41 demo.txt
neo4j@demo-0:/import$ echo "hello" > abc.txt
neo4j@demo-0:/import$ cat abc.txt
hello
neo4j@demo-0:/import$
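To double-check from the S3 side, listing the bucket should show the files written from inside the pod (bucket name taken from the PV example above; this assumes AWS CLI credentials with read access to that bucket):

```shell
# Files created under /import in the pod should appear in the bucket
# (at the root, or under a prefix if one was configured on the volume).
aws s3 ls s3://harshit-demo2/
```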