gluster / gluster-kubernetes

GlusterFS Native Storage Service for Kubernetes
Apache License 2.0

mountOptions enable-ino32 for gluster storageclass fail. #569

Open yoliander opened 5 years ago

yoliander commented 5 years ago

What happened: I'm trying to add a mount option (enable-ino32) to the StorageClass, but when I create a PVC, the PV is created without the enable-ino32 mount option.

What you expected to happen: The PV to be created with the enable-ino32 mount option.

How to reproduce it (as minimally and precisely as possible):

Create a storage class like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-glustermount
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
mountOptions:
  - enable-ino32
parameters:
  resturl: "http://192.168.9.2:32333"
allowVolumeExpansion: true
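
For completeness, this is how the class can be applied and its mountOptions field read back (a sketch; the file name storageclass.yaml is just an assumption):

```shell
# Apply the StorageClass and read back its mountOptions field.
kubectl apply -f storageclass.yaml
kubectl get storageclass test-glustermount -o jsonpath='{.mountOptions}'
# The field itself is accepted and should list enable-ino32.
```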

=======================================

Create a PVC like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: yoli-fs-claim
  labels:
    environment: training
  annotations:
    volume.beta.kubernetes.io/storage-class: test-glustermount
    volume.beta.kubernetes.io/mount-options: enable-ino32
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi

==================================================

After that, the PV's mount-options annotation contains only auto_unmount, and the enable-ino32 option is ignored:
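
The mismatch between the provisioner's annotation and the PV spec can be seen directly (a sketch; the claim name matches the PVC above):

```shell
# Resolve the PV bound to the claim, then compare the provisioner's
# mount-options annotation with the mountOptions field in the PV spec.
PV=$(kubectl get pvc yoli-fs-claim -o jsonpath='{.spec.volumeName}')
kubectl get pv "$PV" -o jsonpath='{.metadata.annotations.volume\.beta\.kubernetes\.io/mount-options}'
kubectl get pv "$PV" -o jsonpath='{.spec.mountOptions}'
```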

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    Description: 'Gluster-Internal: Dynamically provisioned PV'
    gluster.kubernetes.io/heketi-volume-id: 5301546e9c2029f19db6d9a95589a3ec
    gluster.org/type: file
    kubernetes.io/createdby: heketi-dynamic-provisioner
    pv.beta.kubernetes.io/gid: "2001"
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/glusterfs
    volume.beta.kubernetes.io/mount-options: auto_unmount
  creationTimestamp: "2019-03-11T01:39:15Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-71cf5208-439e-11e9-9550-0a580aedf193
  resourceVersion: "11972757"
  selfLink: /api/v1/persistentvolumes/pvc-71cf5208-439e-11e9-9550-0a580aedf193
  uid: 7af4503a-439e-11e9-9eaa-0a580aedb527
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 20Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: yoli-fs-claim
    namespace: default
    resourceVersion: "11972708"
    uid: 71cf5208-439e-11e9-9550-0a580aedf193
  glusterfs:
    endpoints: glusterfs-dynamic-yoli-fs-claim
    path: vol_5301546e9c2029f19db6d9a95589a3ec
  mountOptions:
  - enable-ino32
  persistentVolumeReclaimPolicy: Delete
  storageClassName: test-glustermount
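
One way to check what actually reaches the glusterfs fuse client is to look at the node where a pod using the claim is scheduled (a sketch; the exact output format varies by glusterfs version):

```shell
# On the node, list glusterfs fuse mounts and their mount options...
mount -t fuse.glusterfs
# ...and check whether the fuse client process was started with enable-ino32.
ps ax | grep '[g]lusterfs'
```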

Anything else we need to know?: I'm using heketi version 8.0.0

Environment:

Kubernetes version (use kubectl version): 1.13
Cloud provider or hardware configuration: Oracle Cloud
OS (e.g. cat /etc/os-release): Oracle Linux Server release 7.6
Kernel (e.g. uname -a): 4.14.35-1818.5.4.el7uek.x86_64

Gluster client version:

glusterfs-libs-3.12.2-18.el7.x86_64
glusterfs-fuse-3.12.2-18.el7.x86_64
glusterfs-3.12.2-18.el7.x86_64
glusterfs-client-xlators-3.12.2-18.el7.x86_64

Gluster Server Version:

glusterfs-libs-3.10.12-1.el7.x86_64
glusterfs-3.10.12-1.el7.x86_64
glusterfs-server-3.10.12-1.el7.x86_64
glusterfs-client-xlators-3.10.12-1.el7.x86_64
glusterfs-fuse-3.10.12-1.el7.x86_64
glusterfs-cli-3.10.12-1.el7.x86_64
glusterfs-api-3.10.12-1.el7.x86_64