xautlmx opened 3 years ago
Hello, any update on this? I ran into the same issue using a custom `NODE_HOST_PATH`. For some reason, only `/mnt/hostpath` produces the correct permissions (0777); any other path, e.g. `/mnt/mypath`, produces wrong permissions (0755).

One observation: when using the default `/mnt/hostpath` and creating a PVC, the folder is created automatically, while with a custom path it is not. This explains the different permissions: in the default case the provisioner creates the folder with umask 0, while in the custom case the pod creates the folder with the default umask 0022.
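For context, the two modes observed are exactly what umask arithmetic predicts; a minimal sketch you can run on any Linux host (paths are illustrative):

```console
# mkdir requests mode 0777, which the process umask then masks:
$ (umask 0022 && mkdir /tmp/dir-default-umask)   # 0777 & ~0022 = 0755
$ (umask 0000 && mkdir /tmp/dir-zero-umask)      # 0777 & ~0000 = 0777
$ stat -c '%a %n' /tmp/dir-default-umask /tmp/dir-zero-umask
755 /tmp/dir-default-umask
777 /tmp/dir-zero-umask
```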
Reproduction
(1) With default value

```console
# deploy provisioner
$ helm upgrade hostpath-provisioner rimusz/hostpath-provisioner --install

# deploy pvc
$ kubectl create -f https://raw.githubusercontent.com/rimusz/hostpath-provisioner/master/deploy/test-claim.yaml

# pv folder is created with correct permissions
$ ll /mnt/hostpath
drwxrwxrwx 2 root root 4096 Oct 4 05:27 pvc-df22fb40-d0b8-45fc-a373-7f7841f32ac3/
```
(2) With custom value

```console
# deploy provisioner
$ helm upgrade hostpath-provisioner rimusz/hostpath-provisioner --install --set nodeHostPath=/mnt/mypath

# deploy pvc
$ kubectl create -f https://raw.githubusercontent.com/rimusz/hostpath-provisioner/master/deploy/test-claim.yaml

# pv folder is not created
$ ll /mnt/mypath
<empty>

# deploy test-pod
$ kubectl create -f https://raw.githubusercontent.com/rimusz/hostpath-provisioner/master/deploy/test-pod.yaml

# pv folder is created with wrong permissions
$ ll /mnt/mypath
drwxr-xr-x 2 root root 4096 Oct 4 05:18 pvc-f8fe8d17-0593-474d-b0b3-4985d206e124/
```
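A hedged way to confirm where the folder actually went: if the chart still mounts the hostPath volume at its default container path regardless of `nodeHostPath` (the root cause suggested later in this thread), the provisioner will have created the pvc directory inside its own container filesystem rather than on the node. For example (the deployment name depends on your release name):

```console
# If the mountPath stayed at the chart default, the pvc dir may exist only
# inside the provisioner container, invisible on the node:
$ kubectl exec deploy/hostpath-provisioner -- ls -la /mnt/mypath
```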
Expectation
The folder should always be created by the provisioner with relaxed permissions (0777), including when a custom node hostpath is used.
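In other words, the fix amounts to creating the directory with an explicit mode rather than relying on the process umask. In shell terms (an illustrative sketch, not the provisioner's actual code; `pvc-example` is a placeholder name):

```console
# mkdir honors the umask; install -d (or mkdir followed by chmod) does not:
$ install -d -m 0777 /mnt/mypath/pvc-example
$ stat -c '%a' /mnt/mypath/pvc-example
777
```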
Hello, I have faced the same problem, and I found a solution: you must make the mount path inside the pod the same as the path outside the pod. Here is the complete YAML:
```console
$ helm template hostpath .
```

```yaml
---
# Source: hostpath-provisioner/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hostpath-hostpath-provisioner
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.13
    app.kubernetes.io/instance: hostpath
    app.kubernetes.io/managed-by: Helm
---
# Source: hostpath-provisioner/templates/storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.13
    app.kubernetes.io/instance: hostpath
    app.kubernetes.io/managed-by: Helm
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: hostpath
reclaimPolicy: Delete
---
# Source: hostpath-provisioner/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hostpath-hostpath-provisioner
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.13
    app.kubernetes.io/instance: hostpath
    app.kubernetes.io/managed-by: Helm
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
# Source: hostpath-provisioner/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hostpath-hostpath-provisioner
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.13
    app.kubernetes.io/instance: hostpath
    app.kubernetes.io/managed-by: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hostpath-hostpath-provisioner
subjects:
  - kind: ServiceAccount
    name: hostpath-hostpath-provisioner
    namespace: default
---
# Source: hostpath-provisioner/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: hostpath-hostpath-provisioner-leader-locking
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.13
    app.kubernetes.io/instance: hostpath
    app.kubernetes.io/managed-by: Helm
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["list", "watch", "create"]
---
# Source: hostpath-provisioner/templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: hostpath-hostpath-provisioner-leader-locking
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.13
    app.kubernetes.io/instance: hostpath
    app.kubernetes.io/managed-by: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: hostpath-hostpath-provisioner-leader-locking
subjects:
  - kind: ServiceAccount
    name: hostpath-hostpath-provisioner
    namespace: default
---
# Source: hostpath-provisioner/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostpath-hostpath-provisioner
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.13
    app.kubernetes.io/instance: hostpath
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: hostpath-provisioner
      app.kubernetes.io/instance: hostpath
  template:
    metadata:
      labels:
        app.kubernetes.io/name: hostpath-provisioner
        app.kubernetes.io/instance: hostpath
    spec:
      serviceAccountName: hostpath-hostpath-provisioner
      containers:
        - name: hostpath-provisioner
          image: "quay.io/rimusz/hostpath-provisioner:v0.2.5"
          imagePullPolicy: IfNotPresent
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: NODE_HOST_PATH
              value: "/Data/Volumes"
            - name: HOSTPATH_PROVISIONER_NAME
              value: "hostpath"
          volumeMounts:
            - name: pv-volume
              mountPath: /Data/Volumes
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
      volumes:
        - name: pv-volume
          hostPath:
            path: /Data/Volumes
```
To wrap up: change `volumeMounts.pv-volume.mountPath` to be the same as `volumes.pv-volume.hostPath` in `hostpath-provisioner/templates/deployment.yaml` (e.g. `/Data/Volumes`).
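If you would rather not edit the chart template, the same fix can be applied to the live object; a sketch using a JSON patch (the deployment name and path assume the install shown above):

```console
# Point both the container mountPath and the hostPath at the same directory:
$ kubectl patch deployment hostpath-hostpath-provisioner --type=json -p='[
    {"op": "replace", "path": "/spec/template/spec/containers/0/volumeMounts/0/mountPath", "value": "/Data/Volumes"},
    {"op": "replace", "path": "/spec/template/spec/volumes/0/hostPath/path", "value": "/Data/Volumes"}
  ]'
```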
The bug still exists. My solution for the custom directory `/media/default-storage`:
```console
$ kubectl patch deployment hostpath-provisioner -n kube-system --patch-file hostpath-provisioner.patch.yaml
```

```yaml
# hostpath-provisioner.patch.yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: hostpath-provisioner
          env:
            - name: PV_DIR
              value: /media/default-storage
          volumeMounts:
            - name: pv-volume
              mountPath: /media/default-storage
      volumes:
        - name: pv-volume
          hostPath:
            path: /media/default-storage
```
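A quick way to verify the patch took effect:

```console
# Confirm the env var and the mount now point at the same host directory:
$ kubectl -n kube-system get deploy hostpath-provisioner \
    -o jsonpath='{.spec.template.spec.containers[0].env}{"\n"}{.spec.template.spec.containers[0].volumeMounts}'
```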
When I use `NODE_HOST_PATH` to set a custom directory such as `/data` as my hostpath mount point, and create a PVC and pod with the YAML files below, the PVC directory is created in `/data` as expected. But when I delete the pod and the PVC, the PVC directory still exists.
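If the mountPath mismatch from the earlier comments is present, this would follow: the provisioner's Delete runs against the path inside its own container, not on the node, so the host directory survives the reclaim. A quick check (deployment name depends on your install):

```console
# If the provisioner cannot see the node directory, its Delete/reclaim
# logic cannot remove it either:
$ kubectl exec deploy/hostpath-provisioner -- ls -la /data
```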