mbu147 opened this issue 3 years ago
Images in use: openebs/provisioner-nfs:0.7.1 and openebs/jiva-operator:3.0.0
Hi @mbu147, I followed the same steps as mentioned in the description and observed that the permissions of the nfsshare directory are 755. Here is the output:
root@nfs-pvc-e155220f-63b7-4882-9104-98575910d9c9-69c97df57d-wrxq2:/ # ls -la
total 88
drwxr-xr-x 1 root root 4096 Nov 2 06:42 .
drwxr-xr-x 1 root root 4096 Nov 2 06:42 ..
drwxr-xr-x 3 root root 4096 Nov 2 06:42 nfsshare
...
Steps followed to provision NFS volume:
helm install openebs openebs/openebs -n openebs --create-namespace --set legacy.enabled=false --set jiva.enabled=true --set ndm.enabled=false --set ndmOperator.enabled=false --set localProvisioner.enabled=true --set nfs-provisioner.enabled=true --set nfs-provisioner.nfsStorageClass.backendStorageClass=openebs-jiva-csi-default
openebs-kernel-nfs is the SC which got created by the above command (a rough sketch of it is included after the PVC below). The PVC used to provision the NFS volume:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: nfs-pvc
spec:
storageClassName: openebs-kernel-nfs
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5G
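For reference, a minimal sketch of what the generated openebs-kernel-nfs StorageClass typically looks like, assuming the standard dynamic-nfs-provisioner parameters (illustrative, not copied from this cluster):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-kernel-nfs
  annotations:
    openebs.io/cas-type: nfsrwx
    cas.openebs.io/config: |
      - name: NFSServerType
        value: "kernel"
      - name: BackendStorageClass
        value: "openebs-jiva-csi-default"
provisioner: openebs.io/nfsrwx
reclaimPolicy: Delete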
The NFS server pod nfs-pvc-e155220f-63b7-4882-9104-98575910d9c9-69c97df57d-wrxq2 came up with the following permissions on the share directory:
drwxr-xr-x 3 root root 4096 Nov 2 06:42 nfsshare
root@fio:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 916G 92G 778G 11% /
tmpfs 64M 0 64M 0% /dev
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
10.0.0.20:/ 4.9G 20M 4.9G 1% /datadir
and the permissions inside the application pod match the NFS volume permissions:
drwxr-xr-x 3 root root 4096 Nov 2 06:42 datadir
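For completeness, the consumer (fio) pod used for this check was presumably along the lines of the sketch below (hypothetical manifest; the actual pod spec was not posted here):
apiVersion: v1
kind: Pod
metadata:
  name: fio
spec:
  containers:
  - name: fio
    image: openebs/tests-fio   # assumed image; any image with a shell works for the ls/df checks
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /datadir
      name: nfs-vol
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: nfs-pvc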
StorageClass outputs:
kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
openebs-device openebs.io/local Delete WaitForFirstConsumer false 140m
openebs-hostpath openebs.io/local Delete WaitForFirstConsumer false 140m
openebs-jiva-csi-default jiva.csi.openebs.io Delete Immediate true 140m
openebs-kernel-nfs openebs.io/nfsrwx Delete Immediate false 140m
Did I miss anything? I'm not sure how you are getting
d--------- 17 xfs xfs 4096 Oct 29 14:15 nfsshare
i.e. these 000 permissions.
One more observation:
root@nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6-fb8f5fd66-jfz6b:/ # ls -la
d--------- 17 xfs xfs 4096 Oct 29 14:15 nfsshare
The ls -la output shows a different owner and group (xfs xfs). Were any manual edits made to the ownership? Usually it should be root root by default... Can you help with the following outputs (maybe it will help to understand further):
Hi @mittachaitu, thanks for your reply and testing process!
It looks like I'm doing the same, except I'm not using the "global" helm chart. I'll switch to the same chart as you and have a look.
I noticed that it apparently only occurs when a node is under high IO load, so that it needs to "reconnect" the mount points.
ls -la shows that different owner & user? Are there any manual edits made on ownership? Usually, it should be root root by default...
In a fresh new PVC the folder is owned by root root with r-xr-xr-x permissions. My nginx and php-fpm containers use UID and GID 33; inside the PVC container, UID 33 is the xfs user. I think I ran chown nginx:nginx -R <nfs mount> after copying my data over from the old PVC.
Thanks!
I noticed that it apparently only occurs when each node is under a high IO load, so that he needs to "reconnect" the mount points.
Hmm, the system might be going into an RO state; if the jiva volume is turning RO, then the d--------- permissions would make sense (AFAIK).
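One way to confirm that theory (a suggestion, not part of the original thread; the deployment name is taken from the outputs in this thread and <app-pod> is a placeholder) is to check the mount flags from both sides:
# From the NFS server pod: is the jiva-backed /nfsshare mount read-only?
kubectl exec -n openebs deploy/nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6 -- grep nfsshare /proc/mounts
# From the application pod: is the NFS mount itself read-only?
kubectl exec <app-pod> -- grep datadir /proc/mounts
# An "ro" among the mount options indicates the volume has flipped to read-only.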
My nginx and php-fpm containers use UID and GID 33; inside the PVC container, UID 33 is the xfs user. I think I ran chown nginx:nginx -R after copying my data over from the old PVC.
Yeah, currently the nfs-provisioner only allows setting the FSGID, but there is an open issue to support configuring the UID (which is being worked on), so that when a volume is provisioned the user does not need to run chown commands explicitly (anyway, this is a different problem).
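For anyone landing here: the FSGID knob mentioned above is set through the cas.openebs.io/config annotation on the NFS StorageClass. A minimal sketch, assuming the FSGID parameter documented for the dynamic-nfs-provisioner (the GID value 33 is only illustrative):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-kernel-nfs-gid
  annotations:
    openebs.io/cas-type: nfsrwx
    cas.openebs.io/config: |
      - name: NFSServerType
        value: "kernel"
      - name: BackendStorageClass
        value: "openebs-jiva-csi-default"
      - name: FSGID
        value: "33"
provisioner: openebs.io/nfsrwx
reclaimPolicy: Delete
The intent is that the exported directory comes up group-owned by that GID, so applications running with a matching fsGroup should not need a manual chown.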
Okay, I understand. Has no one else had this issue besides me?
You asked for more information; I forgot to add this to my last post: kubectl get sc openebs-jiva-csi-default -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"labels":{"argocd.argoproj.io/instance":"openebs"},"name":"openebs-jiva-csi-default"},"parameters":{"cas-type":"jiva","policy":"openebs-jiva-default-policy"},"provisioner":"jiva.csi.openebs.io","reclaimPolicy":"Delete","volumeBindingMode":"Immediate"}
  creationTimestamp: "2021-10-29T11:16:06Z"
  labels:
    argocd.argoproj.io/instance: openebs
  name: openebs-jiva-csi-default
  resourceVersion: "21060613"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/openebs-jiva-csi-default
  uid: f9275268-9a2b-4f20-a59b-ebdaede5b8e3
parameters:
  cas-type: jiva
  policy: openebs-jiva-default-policy
provisioner: jiva.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: Immediate
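(Side note: the policy referenced in parameters above can be dumped as well; jivavolumepolicy is the jiva-operator CRD and the namespace here is assumed to be openebs:)
kubectl get jivavolumepolicy openebs-jiva-default-policy -n openebs -o yaml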
kubectl get deploy nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6 -n openebs -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2021-10-29T12:23:27Z"
  generation: 1
  labels:
    openebs.io/nfs-server: nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6
  name: nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6
  namespace: openebs
  resourceVersion: "22196180"
  selfLink: /apis/apps/v1/namespaces/openebs/deployments/nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6
  uid: 4b9de5e8-a1b5-4ea1-b905-67ec358dc015
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      openebs.io/nfs-server: nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        openebs.io/nfs-server: nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6
    spec:
      containers:
      - env:
        - name: SHARED_DIRECTORY
          value: /nfsshare
        - name: CUSTOM_EXPORTS_CONFIG
        - name: NFS_LEASE_TIME
          value: "90"
        - name: NFS_GRACE_TIME
          value: "90"
        image: openebs/nfs-server-alpine:0.7.1
        imagePullPolicy: IfNotPresent
        name: nfs-server
        ports:
        - containerPort: 2049
          name: nfs
          protocol: TCP
        - containerPort: 111
          name: rpcbind
          protocol: TCP
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /nfsshare
          name: exports-dir
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: exports-dir
        persistentVolumeClaim:
          claimName: nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2021-10-29T12:23:27Z"
    lastUpdateTime: "2021-10-29T12:24:58Z"
    message: ReplicaSet "nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6-fb8f5fd66" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2021-10-31T10:33:15Z"
    lastUpdateTime: "2021-10-31T10:33:15Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
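As a quick way to catch the reported permission flip on this deployment, the share directory can be polled from outside (a suggestion; the deployment name is taken from the output above):
kubectl exec -n openebs deploy/nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6 -- ls -ld /nfsshare
# healthy output starts with drwxr-xr-x; the failing case reported here shows d---------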
Describe the bug: I created many RWX NFS shares with nfs-provisioner. As the backend storage class I use openebs-jiva. Sometimes every NFS share mount gets permission 000; the mount folder then shows up as d--------- (as quoted earlier in this thread) in the nfs-pvc pod. Because of that, the nginx container where the nfs-pvc is mounted also sees 000 on the folder and cannot read the files within.
The files in the mount folder themselves have the correct permissions.
Expected behaviour: Default mount permission of 755 or something similar.
Steps to reproduce the bug: Just create a new NFS RWX PVC and wait. After some time the running nginx container cannot read the folder anymore (a minimal reproduction sketch follows).
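A minimal reproduction sketch along those lines (hypothetical names; the PVC mirrors the one posted earlier in this thread and the nginx Deployment simply mounts it):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-repro-pvc
spec:
  storageClassName: openebs-kernel-nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5G
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-repro
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-repro
  template:
    metadata:
      labels:
        app: nginx-repro
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nfs-repro-pvc
Then watch ls -ld on the mount path inside the nginx pod over time and wait for the permissions to drop to 000.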
The output of the following commands will help us better understand what's going on:
kubectl get pods -n <openebs_namespace> --show-labels
https://pastebin.com/wxBrUzGy
kubectl get pvc -n <openebs_namespace>
https://pastebin.com/H4MuJs2j
kubectl get pvc -n <application_namespace>
https://pastebin.com/fnZinyN3
Anything else we need to know?: jiva and nfs were installed via the helm charts https://github.com/openebs/jiva-operator/tree/develop/deploy/helm/charts and https://github.com/openebs/dynamic-nfs-provisioner/tree/develop/deploy/helm/charts
helm config:
storage class:
Environment details:
OpenEBS version (use kubectl get po -n openebs --show-labels):
Kubernetes version (use kubectl version): v1.21.5+k3s
OS (e.g. from /etc/os-release): AlmaLinux 8.4 (Electric Cheetah)
Kernel (e.g. uname -a): 4.18.0-305.19.1.el8_4.x86_64
Do I have a misconfigured setup or is this a bug?
Thanks for the help!