Hello there, I've run into some weird behavior. Let me show my `values.yaml` first (chart version 0.14.2, Kubernetes: k0s 1.27):
```yaml
.global:
  shareHost: &globalShareHost storage-01.internal.place
  shareBasePath: &globalShareBasePath "/mnt/pool0/shared/kubernetes.nfs"
  controllerBasePath: &globalControllerBasePath "/mnt/kubernetes.nfs"

# Ref: https://github.com/democratic-csi/charts/blob/master/stable/democratic-csi/values.yaml
# Ref: https://kubesec.io/basics/
# Ref: https://github.com/democratic-csi/charts/blob/master/stable/democratic-csi/examples/nfs-client.yaml
democratic-csi:
  csiDriver:
    # Globally unique name for a given cluster
    name: "org.democratic-csi.nfs"

  storageClasses:
    - name: standard-nfs
      defaultClass: true
      reclaimPolicy: Delete
      volumeBindingMode: Immediate
      allowVolumeExpansion: false
      parameters:
        csi.storage.k8s.io/fstype: nfs
      mountOptions:
        - nfsvers=4.2
        - nconnect=8
        - hard
        # Completely disable access-time updates for any file (implies 'nodiratime')
        - noatime
      # secrets:
      #   provisioner-secret:
      #   controller-publish-secret:
      #   node-stage-secret:
      #   node-publish-secret:
      #   controller-expand-secret:

  driver:
    config:
      # https://github.com/democratic-csi/democratic-csi/tree/master/examples
      driver: nfs-client
      instance_id:
      nfs:
        shareHost: *globalShareHost
        shareBasePath: *globalShareBasePath
        # (shareHost:shareBasePath) should be mounted at this location in the controller container
        controllerBasePath: *globalControllerBasePath
        dirPermissionsMode: "0777"
        # Required to set the UID, not the name: (dirPermissionsUser: root) == (dirPermissionsUser: 0)
        dirPermissionsUser: 0
        # Required to set the GID, not the name: (dirPermissionsGroup: wheel) == (dirPermissionsGroup: 0)
        dirPermissionsGroup: 0

  node:
    # Ref: https://github.com/democratic-csi/democratic-csi/tree/master#a-note-on-non-standard-kubelet-paths
    kubeletHostPath: /var/lib/k0s/kubelet

  # Run the controller service separated from the node service,
  # mount the base share into the controller pod at run time
  controller:
    externalResizer:
      enabled: false
    # Use the host's network.
    # Sharing the host's network namespace permits processes in the pod
    # to communicate with processes bound to the host's loopback adapter
    hostNetwork: true
    # Use the host's IPC namespace.
    # Sharing the host's IPC namespace allows container processes
    # to communicate with processes on the host
    hostIPC: true
    # The controller driver needs to mount the NFS share to be able to create
    # the PVC base subdirectories (pvc-xxx-something) on the remote NFS server,
    # as nodes cannot do it on their own.
    # Without this section everything appears to work, but the per-PVC
    # directories must be created manually on the NFS server
    driver:
      extraEnv:
        - name: SHARE_HOST
          value: *globalShareHost
        - name: SHARE_BASE_PATH
          value: *globalShareBasePath
        - name: CONTROLLER_BASE_PATH
          value: *globalControllerBasePath
      securityContext:
        allowPrivilegeEscalation: true
        capabilities:
          add:
            - SYS_ADMIN
        privileged: true
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "mkdir -p $CONTROLLER_BASE_PATH; mount $SHARE_HOST:$SHARE_BASE_PATH $CONTROLLER_BASE_PATH"]
        preStop:
          exec:
            command: ["/bin/sh", "-c", "umount $CONTROLLER_BASE_PATH"]
```
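As an aside on the `mountOptions` above: the node service ends up passing them as a single comma-joined `-o` string on the NFS mount for each volume. A tiny sketch of roughly what that mount call looks like (server and export are the ones from my values; the PVC name and target path are hypothetical):

```shell
# Sketch only: roughly the mount the node performs for a provisioned volume.
# Server/export come from my values above; the PVC name is a made-up example.
OPTS="nfsvers=4.2,nconnect=8,hard,noatime"
SRC="storage-01.internal.place:/mnt/pool0/shared/kubernetes.nfs/v/pvc-1234-example"
echo "mount -t nfs -o $OPTS $SRC /var/lib/k0s/kubelet/pods/<pod-uid>/volumes/..."
```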
I've found it's mandatory to have this whole section for the controller's driver in your `values.yaml`. I added it because, without it, the controller was not able to create the directory `<basePath>/v/<pvc-something>` on the remote NFS server. The error reported by the controller was about the directory not being found, and it went away once I created the directory manually.
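For reference, the manual workaround boiled down to pre-creating the per-PVC directory on the NFS server. A sketch of it (simulated here under a temp dir instead of the real `/mnt/pool0/shared/kubernetes.nfs` export; the PVC name is hypothetical):

```shell
# Simulate the manual fix: the provisioner expects <shareBasePath>/v/<pvc-name>
# to already exist. On the real server this would run against the NFS export;
# here a temp dir stands in for it.
SHARE_BASE_PATH="$(mktemp -d)"                  # stands in for the NFS export
PVC_DIR="$SHARE_BASE_PATH/v/pvc-1234-example"   # hypothetical PVC name
mkdir -p "$PVC_DIR"
chmod 0777 "$PVC_DIR"                           # matches dirPermissionsMode in my values
ls "$SHARE_BASE_PATH/v"
```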
```yaml
# Run the controller service separated from the node service,
# mount the base share into the controller pod at run time
controller:
  externalResizer:
    enabled: false
  # Use the host's network.
  # Sharing the host's network namespace permits processes in the pod
  # to communicate with processes bound to the host's loopback adapter
  hostNetwork: true
  # Use the host's IPC namespace.
  # Sharing the host's IPC namespace allows container processes
  # to communicate with processes on the host
  hostIPC: true
  # The controller driver needs to mount the NFS share to be able to create
  # the PVC base subdirectories (pvc-xxx-something) on the remote NFS server,
  # as nodes cannot do it on their own.
  # Without this section everything appears to work, but the per-PVC
  # directories must be created manually on the NFS server
  driver:
    extraEnv:
      - name: SHARE_HOST
        value: *globalShareHost
      - name: SHARE_BASE_PATH
        value: *globalShareBasePath
      - name: CONTROLLER_BASE_PATH
        value: *globalControllerBasePath
    securityContext:
      allowPrivilegeEscalation: true
      capabilities:
        add:
          - SYS_ADMIN
      privileged: true
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "mkdir -p $CONTROLLER_BASE_PATH; mount $SHARE_HOST:$SHARE_BASE_PATH $CONTROLLER_BASE_PATH"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "umount $CONTROLLER_BASE_PATH"]
```
The point is that the chart already has a dedicated section for this kind of data, and I was expecting that config to be injected and used by the controller to do exactly this under the hood, without me mounting anything manually. Am I missing something?
The section I'm talking about is `.Values.driver`. But in practice, the other section was needed in my case with a generic NFS server. Once added, the directories are managed as expected. WDYT? :)
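To be concrete, this is the fragment I expected to be sufficient on its own (same values as shown above, anchors expanded). My assumption was that the chart would mount `shareHost:shareBasePath` at `controllerBasePath` inside the controller pod by itself:

```yaml
driver:
  config:
    driver: nfs-client
    nfs:
      shareHost: storage-01.internal.place
      shareBasePath: "/mnt/pool0/shared/kubernetes.nfs"
      # I assumed the chart would perform this mount in the controller pod for me
      controllerBasePath: "/mnt/kubernetes.nfs"
```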
Welcome! I’m not sure I follow the question entirely. Are you suggesting all of the options should have defaulted to what was necessary for the nfs-client driver?