Open nicman68 opened 3 years ago
It seems you're missing some RBAC components on your Kubernetes cluster to provision the necessary resources for the NFS Deployment.

- What version of Kubernetes and what platform? (i.e. OpenShift or upstream vanilla Kubernetes, etc.)
- How did you install the driver? YAML, Operator or Helm chart?
- What is the host OS on your worker nodes?

Hi Michael,

We are running Mirantis Docker EE, latest version, with Kubernetes 1.20. I installed the driver with "Advanced install". The worker nodes run Ubuntu 18.04, except one which runs CentOS 7.8.

/Nicola
Thanks! MKE has a custom admission controller that I suspect is blocking the Pod from starting. Since the error message is emitted from the Kubernetes control-plane, yet that message is nowhere to be found in the Kubernetes sources, this is an MKE-generated error message in their closed source.

There are some examples available here https://docs.mirantis.com/mke/3.4/ops/deploy-apps-k8s/pod-security-policies.html#use-the-unprivileged-policy on how to assign privileges to certain namespaces (follow the "Applying the unprivileged PSP policy to a namespace" example, but instead of "kube-system" and "monitoring", use "hpe-storage" and "hpe-nfs").
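Following that doc's pattern, granting a PSP to every service account in the two driver namespaces would look roughly like the sketch below. The ClusterRole name `privileged-psp-role` is a placeholder, not a confirmed MKE name; substitute whatever ClusterRole your MKE cluster exposes for the privileged policy.

```yaml
# Sketch only: bind a privileged-PSP ClusterRole to all service accounts
# in the hpe-nfs namespace. "privileged-psp-role" is a placeholder name.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-privileged-hpe-nfs
  namespace: hpe-nfs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: privileged-psp-role
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts:hpe-nfs
---
# Same binding, repeated for the hpe-storage namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-privileged-hpe-storage
  namespace: hpe-storage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: privileged-psp-role
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts:hpe-storage
```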
Hi Michael,

Do you know which (Cluster)RoleBindings and (Cluster)Roles are used when creating the NFS Deployment in the hpe-nfs namespace?

/Nicola
The ClusterRole is "hpe-csi-provisioner-role", the ClusterRoleBinding is "hpe-csi-provisioner-binding" and the Pod runs with the ServiceAccount name of "hpe-csi-nfs-sa". However, the Deployment is created by the ServiceAccount "hpe-csi-controller-sa".
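Pieced together from the names above, the wiring presumably looks like this (a sketch for orientation, reconstructed from the names in this thread, not the driver's exact manifest):

```yaml
# Sketch: the controller's ServiceAccount gets the provisioner
# ClusterRole cluster-wide via this binding. Reconstructed from the
# names mentioned above; the shipped manifest may differ in detail.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hpe-csi-provisioner-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hpe-csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: hpe-csi-controller-sa
    namespace: hpe-storage
```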
Hi Michael,
hpe-csi-controller-sa can create a Deployment in hpe-nfs:

```shell
$ kubectl auth can-i create deployment --namespace hpe-nfs --as system:serviceaccount:hpe-storage:hpe-csi-controller-sa
yes
```

I have also added privileged PodSecurityPolicies. Same error.
```shell
$ kubectl describe clusterrole hpe-csi-provisioner-role
Name:         hpe-csi-provisioner-role
Labels:
PolicyRule:
  Resources                                       Non-Resource URLs  Resource Names  Verbs
  ---------                                       -----------------  --------------  -----
  persistentvolumeclaims                          []                 []              [create get list watch update delete]
  services                                        []                 []              [create get list watch update delete]
  deployments.apps                                []                 []              [create get list watch update delete]
  configmaps                                      []                 []              [get create]
  namespaces                                      []                 []              [get list create]
  serviceaccounts                                 []                 []              [get list create]
  pods                                            []                 []              [get list delete]
  persistentvolumes                               []                 []              [get list watch create delete update]
  volumeattachments.storage.k8s.io                []                 []              [get list watch update patch delete]
  storageclasses.storage.k8s.io                   []                 []              [get list watch]
  nodes                                           []                 []              [get list]
  secrets                                         []                 []              [get list]
  volumesnapshotcontents.snapshot.storage.k8s.io  []                 []              [get list]
  volumesnapshots.snapshot.storage.k8s.io         []                 []              [get list]
  events                                          []                 []              [list watch create update patch]
  podsecuritypolicies.policy                      []                 [privileged]    [use]
```
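One way to narrow this down is to verify that both service accounts may `use` the privileged PSP. These commands are illustrative; the PSP name `privileged` is an assumption taken from the ClusterRole output and should match whatever policy your cluster defines:

```shell
# Can the controller SA use the privileged PSP?
kubectl auth can-i use podsecuritypolicy/privileged \
  --as system:serviceaccount:hpe-storage:hpe-csi-controller-sa

# Same check for the NFS server SA in hpe-nfs
kubectl auth can-i use podsecuritypolicy/privileged \
  --as system:serviceaccount:hpe-nfs:hpe-csi-nfs-sa
```

If either returns `no`, the corresponding RoleBinding is missing or scoped to the wrong namespace.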
Are there any securityContext settings in the NFS Deployment?
/Nicola
It's privileged.
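For context, the `[kernelcapabilities privileged]` attributes that MKE's admission controller complains about in the PVC events correspond to a Pod spec along these lines. This snippet is illustrative only; the exact securityContext and capability list are internal to the driver:

```yaml
# Illustrative: the kind of privileged container spec an admission
# controller would flag for "privileged" and "kernelcapabilities".
# Image name and capability list are placeholders, not the driver's actual values.
spec:
  containers:
    - name: hpe-nfs
      image: example/nfs-server:latest   # placeholder image
      securityContext:
        privileged: true
        capabilities:
          add: ["SYS_ADMIN"]
```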
@nicman68 did you ever resolve the NFS issue?
Hi,
I haven't worked further with it. The customer changed to another solution.
/Nicola
Ok, thanks!
I have created a new NFS StorageClass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-nfs-mtc
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/controller-expand-secret-name: mtc-3par-02-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: mtc-3par-02-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: mtc-3par-02-secret
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: mtc-3par-02-secret
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/provisioner-secret-name: mtc-3par-02-secret
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  description: "NFS volume created by the HPE CSI Driver for Kubernetes"
  accessProtocol: iscsi
  csi.storage.k8s.io/fstype: xfs
  nfsResources: "true"
  allowOverrides: nfsNamespace
  cpg: K8S_LAB
  iscsiPortalIps: x.x.x.x, y.y.y.y
reclaimPolicy: Delete
allowVolumeExpansion: true
```
After that I created a basic PVC:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-rwx-pvc
spec:
  accessModes:
```
This PVC gets stuck in status "Pending":

```
NAME         STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-rwx-pvc   Pending                                      hpe-nfs-mtc    14m
```
Lots of errors on this PVC:

```shell
$ kubectl describe pvc my-rwx-pvc
Name:          my-rwx-pvc
Namespace:     default
StorageClass:  hpe-nfs-mtc
Status:        Pending
Volume:
Labels:
Annotations:   volume.beta.kubernetes.io/storage-provisioner: csi.hpe.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:
Events:
  Type     Reason                Age                 From                                                                             Message
  ----     ------                ----                ----                                                                             -------
  Normal   Provisioning          69s (x11 over 15m)  csi.hpe.com_tsrv-dockeree-6.int.comhem.com_d6a2aa9d-fb4a-45e6-b16f-67971e2c6d34  External provisioner is provisioning volume for claim "default/my-rwx-pvc"
  Warning  ProvisionStorage      64s (x11 over 15m)  csi.hpe.com                                                                      failed to create nfs deployment hpe-nfs-91862b16-67db-49d3-9123-116eb4b01792, err deployments.apps "hpe-nfs-91862b16-67db-49d3-9123-116eb4b01792" is forbidden: non-admin user "hpe-storage:hpe-csi-controller-sa" [service account "hpe-nfs:hpe-csi-nfs-sa"]. The configured privileged attributes access for non-admin users ("[]")("[]") and for service accounts ("[]")("[]") lack required permissions to use attributes [kernelcapabilities privileged] for resource hpe-nfs-91862b16-67db-49d3-9123-116eb4b01792
  Warning  ProvisioningFailed    64s (x11 over 15m)  csi.hpe.com_tsrv-dockeree-6.int.comhem.com_d6a2aa9d-fb4a-45e6-b16f-67971e2c6d34  failed to provision volume with StorageClass "hpe-nfs-mtc": rpc error: code = Internal desc = Failed to create NFS provisioned volume pvc-91862b16-67db-49d3-9123-116eb4b01792, err failed to create nfs deployment hpe-nfs-91862b16-67db-49d3-9123-116eb4b01792, err deployments.apps "hpe-nfs-91862b16-67db-49d3-9123-116eb4b01792" is forbidden: non-admin user "hpe-storage:hpe-csi-controller-sa" [service account "hpe-nfs:hpe-csi-nfs-sa"]. The configured privileged attributes access for non-admin users ("[]")("[]") and for service accounts ("[]")("[]") lack required permissions to use attributes [kernelcapabilities privileged] for resource hpe-nfs-91862b16-67db-49d3-9123-116eb4b01792, rollback status: success
  Normal   ExternalProvisioning  26s (x63 over 15m)  persistentvolume-controller                                                      waiting for a volume to be created, either by external provisioner "csi.hpe.com" or manually created by system administrator
```
I have tried with both 2.0.0 and 1.4.0; same error on both.