kubernetes-retired / external-storage

[EOL] External storage plugins, provisioners, and helper libraries
Apache License 2.0

nfs-provisioner can't create share folder #1332

Closed GRuuuuu closed 4 years ago

GRuuuuu commented 4 years ago

Environment

Openshift version : v4.3.8
Kubernetes version : v1.16.2
Used Image : quay.io/external_storage/nfs-client-provisioner:latest

Problem

storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storageclass # IMPORTANT pvc needs to mention this name
provisioner: nfs-test-00 # name can be anything
parameters:
  archiveOnDelete: "false"
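
For completeness, the claim that binds to this class is not pasted above; a minimal PVC matching the oc get pv,pvc output further down (name, size, access mode, and namespace taken from that output, everything else assumed) would be:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc-test
  namespace: volume00
spec:
  storageClassName: nfs-storageclass # must match the StorageClass name above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi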

RBAC

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-pod-provisioner-sa
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
#  - apiGroups: [""]
#    resources: ["services", "endpoints"]
#    verbs: ["get"]
#  - apiGroups: ["extensions"]
#    resources: ["podsecuritypolicies"]
#    resourceNames: ["nfs-provisioner"]
#    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-pod-provisioner-sa
    namespace: volume00
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-pod-provisioner-sa
    namespace: volume00
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner
  apiGroup: rbac.authorization.k8s.io
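
Note that the ServiceAccount, Role, and RoleBinding above carry no namespace in their metadata, while the bindings reference volume00, so they only take effect if applied into that namespace, for example (rbac.yaml is just a placeholder filename):

oc new-project volume00              # or switch to it if it already exists
oc apply -f rbac.yaml -n volume00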

provisioner

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccount: nfs-pod-provisioner-sa
      containers:
        - name: nfs-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          env:
            - name: PROVISIONER_NAME # do not change
              value: nfs-test-00 # SAME AS THE provisioner VALUE IN THE STORAGECLASS
            - name: NFS_SERVER # do not change
              value: x.x.x.x # Ip of the NFS SERVER
            - name: NFS_PATH # do not change
              value: /share/00  # path to nfs directory setup
          imagePullPolicy: "IfNotPresent"
          volumes:
            - name: nfs-provisioner-v # same as volumemouts name
              nfs:
                server: x.x.x.x
                path: /share/00

After applying these YAML files, everything seemed fine.
The PV and PVC were created correctly, and the provisioner log showed success:

$ oc get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS       REASON   AGE
persistentvolume/pvc-3f5fa5dd-19a0-40d1-bd07-aa3b8c5e7dee   100Mi      RWX            Delete           Bound    volume00/nfs-pvc-test   nfs-storageclass            4s

NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
persistentvolumeclaim/nfs-pvc-test   Bound    pvc-3f5fa5dd-19a0-40d1-bd07-aa3b8c5e7dee   100Mi      RWX            nfs-storageclass   13s
# Provisioner log
$  oc logs nfs-provisioner-6f4df6b8dc-v9n8j
I0615 02:10:08.298618       1 leaderelection.go:185] attempting to acquire leader lease  volume00/nfs-test-00...
I0615 02:10:25.688121       1 leaderelection.go:194] successfully acquired lease volume00/nfs-test-00
I0615 02:10:25.688297       1 controller.go:631] Starting provisioner controller nfs-test-00_nfs-provisioner-6f4df6b8dc-v9n8j_55f60825-aead-11ea-86ce-0a580a810020!
I0615 02:10:25.688515       1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"volume00", Name:"nfs-test-00", UID:"c768d62e-b541-4f92-8c1c-60ce9df92f40", APIVersion:"v1", ResourceVersion:"437394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' nfs-provisioner-6f4df6b8dc-v9n8j_55f60825-aead-11ea-86ce-0a580a810020 became leader
I0615 02:10:25.788548       1 controller.go:680] Started provisioner controller nfs-test-00_nfs-provisioner-6f4df6b8dc-v9n8j_55f60825-aead-11ea-86ce-0a580a810020!
I0615 02:10:25.788603       1 controller.go:1158] delete "pvc-133a671b-69f1-4d4e-911e-67065b4b4482": started
I0615 02:10:25.788716       1 controller.go:987] provision "volume00/nfs-pvc-test" class "nfs-storageclass": started
I0615 02:10:25.790776       1 controller.go:1087] provision "volume00/nfs-pvc-test" class "nfs-storageclass": volume "pvc-3f5fa5dd-19a0-40d1-bd07-aa3b8c5e7dee" provisioned
I0615 02:10:25.790792       1 controller.go:1101] provision "volume00/nfs-pvc-test" class "nfs-storageclass": trying to save persistentvvolume "pvc-3f5fa5dd-19a0-40d1-bd07-aa3b8c5e7dee"
I0615 02:10:25.790818       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume00", Name:"nfs-pvc-test", UID:"3f5fa5dd-19a0-40d1-bd07-aa3b8c5e7dee", APIVersion:"v1", ResourceVersion:"437362", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "volume00/nfs-pvc-test"
W0615 02:10:25.792033       1 provisioner.go:104] path /persistentvolumes/volume00-nfs-pvc-test-pvc-133a671b-69f1-4d4e-911e-67065b4b4482 does not exist, deletion skipped
I0615 02:10:25.792048       1 controller.go:1186] delete "pvc-133a671b-69f1-4d4e-911e-67065b4b4482": volume deleted
I0615 02:10:25.793796       1 controller.go:1108] provision "volume00/nfs-pvc-test" class "nfs-storageclass": persistentvolume "pvc-3f5fa5dd-19a0-40d1-bd07-aa3b8c5e7dee" saved
I0615 02:10:25.793819       1 controller.go:1149] provision "volume00/nfs-pvc-test" class "nfs-storageclass": succeeded
I0615 02:10:25.793866       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume00", Name:"nfs-pvc-test", UID:"3f5fa5dd-19a0-40d1-bd07-aa3b8c5e7dee", APIVersion:"v1", ResourceVersion:"437362", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-3f5fa5dd-19a0-40d1-bd07-aa3b8c5e7dee
I0615 02:10:25.794970       1 controller.go:1196] delete "pvc-133a671b-69f1-4d4e-911e-67065b4b4482": persistentvolume deleted
I0615 02:10:25.794985       1 controller.go:1198] delete "pvc-133a671b-69f1-4d4e-911e-67065b4b4482": succeeded

It all looked good, but when I checked my NFS server, no folders had been created.

It seems nfs-client-provisioner cannot create folders on the NFS server.

By the way, I can mount the export on my worker node and create folders there by hand without any problem.

Now I'm confused by this situation. Why can't nfs-client-provisioner create the folder on the NFS server even though the provisioner log reports success?
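
(The manual check I mean is roughly the following, with x.x.x.x and /share/00 as in the manifests above; the mount point is arbitrary:)

mount -t nfs x.x.x.x:/share/00 /mnt
mkdir /mnt/manual-test   # creating a directory by hand works fine
umount /mnt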

carloreggiani commented 4 years ago

On bare-metal OpenShift 3.18 I'm trying to consume an NFS server using the Helm recipe from the readme:

helm install stable/nfs-client-provisioner --set nfs.server=x.x.x.x --set nfs.path=/exported/path

No PVC ever gets bound: the requests remain in the "Pending" state.

I also tried to replicate the setup described here https://www.chernsan.com/2020/03/07/nfs-dynamic-provisioning-for-pvc/, with no change :(

Any idea how to investigate the issue?
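
In case it is useful, these are the generic checks for a claim stuck in Pending (claim and pod names below are placeholders, not from my cluster):

oc get pvc
oc describe pvc <claim-name>        # the Events section usually explains why it stays Pending
oc get pods | grep nfs-client       # find the provisioner pod created by the chart
oc logs <provisioner-pod-name>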

Deiskos commented 4 years ago

@GRuuuuu, I had the same problem: nfs-provisioner wasn't creating folders in the NFS share.

As far as I understand, the nfs-provisioner-v volume has to be mounted at /persistentvolumes inside that container, so a (hopefully) fixed provisioner config should look like this:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccount: nfs-pod-provisioner-sa
      containers:
        - name: nfs-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-provisioner-v
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME # do not change
              value: nfs-test-00 # SAME AS THE provisioner VALUE IN THE STORAGECLASS
            - name: NFS_SERVER # do not change
              value: x.x.x.x # Ip of the NFS SERVER
            - name: NFS_PATH # do not change
              value: /share/00  # path to nfs directory setup
          imagePullPolicy: "IfNotPresent"
      volumes:
        - name: nfs-provisioner-v # same as the volumeMounts name above
          nfs:
            server: x.x.x.x
            path: /share/00
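
To verify the fix, exec into the provisioner pod and check that the export really is mounted at /persistentvolumes, then create a test claim and watch for the <namespace>-<pvc>-<pv> directory appearing on the NFS server (pod name is a placeholder):

oc exec -it <nfs-provisioner-pod> -- sh -c 'mount | grep persistentvolumes; ls /persistentvolumes'
oc get pvc nfs-pvc-test   # once Bound, a volume00-nfs-pvc-test-pvc-... directory should show up under /share/00
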
lowang-bh commented 4 years ago

You should have a volumeMounts entry with mountPath: /persistentvolumes.
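
i.e. inside the container spec, paired with the volume defined at pod level:

volumeMounts:
  - name: nfs-provisioner-v       # must match the volume name under the pod's volumes
    mountPath: /persistentvolumes # the path the client provisioner writes to (see the log above)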

nikhita commented 4 years ago

Thanks for reporting the issue!

This repo is no longer being maintained and we are in the process of archiving this repo. Please see https://github.com/kubernetes/org/issues/1563 for more details.

If your issue relates to nfs provisioners, please create a new issue in https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner or https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.

Going to close this issue in order to archive this repo. Apologies for the churn and thanks for your patience! :pray: