I made the following changes:
Created a k8s-dev-cephfs user in Ceph with appropriate permissions:
ceph auth get-or-create client.k8s-dev-cephfs mon 'allow r' osd 'allow rw tag cephfs *=*' mgr 'allow rw' mds 'allow rw'
Changed userID and userKey to adminID and adminKey in secret csi-cephfs-secret, using the new k8s-dev-cephfs Ceph user:
kubectl edit secrets csi-cephfs-secret
Changed clusterID to our cluster UUID, and fsName to our CephFS name, in storageclass csi-cephfs-sc:
kubectl get storageclass csi-cephfs-sc -o yaml
vim csi-cephfs-sc.yaml
kubectl replace -f csi-cephfs-sc.yaml --force
I expected this to allow me to create a CephFS PVC and PV, but it continually failed:
cephfs-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc-test-4
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-cephfs-sc
outin@halt:~/k8s$ kubectl create -f cephfs-pvc.yaml -n nick
outin@halt:~/k8s$ kubectl describe pvc -n nick csi-cephfs-pvc-test-4
Name: csi-cephfs-pvc-test-4
Namespace: nick
StorageClass: csi-cephfs-sc
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-provisioner: cephfs.csi.ceph.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Provisioning 109s (x11 over 10m) cephfs.csi.ceph.com_ceph-csi-cephfs-provisioner-5f465c8b64-xnnfq_ff8d4afe-2849-4ff7-928b-b0f7a8f171dc External provisioner is provisioning volume for claim "nick/csi-cephfs-pvc-test-4"
Warning ProvisioningFailed 109s (x11 over 10m) cephfs.csi.ceph.com_ceph-csi-cephfs-provisioner-5f465c8b64-xnnfq_ff8d4afe-2849-4ff7-928b-b0f7a8f171dc failed to provision volume with StorageClass "csi-cephfs-sc": rpc error: code = InvalidArgument desc = failed to get connection: connecting failed: rados: ret=-13, Permission denied
Normal ExternalProvisioning 8s (x103 over 25m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "cephfs.csi.ceph.com" or manually created by system administrator
I tried changing the adminID and adminKey to the Ceph admin user and received the same error message, which rules out a permission caps issue in Ceph.
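For reference, a quick way to double-check which caps are actually applied to a Ceph user (not something captured in the output above) is:
ceph auth get client.k8s-dev-cephfs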
I tried restarting the three cephfs-provisioner pods, which did not help:
kubectl -n ceph-csi-cephfs delete pod ceph-csi-cephfs-provisioner-5f465c8b64-s5h7s
kubectl -n ceph-csi-cephfs delete pod ceph-csi-cephfs-provisioner-5f465c8b64-hz2zp
kubectl -n ceph-csi-cephfs delete pod ceph-csi-cephfs-provisioner-5f465c8b64-dt8m4
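An alternative to deleting the pods one by one, assuming they belong to a Deployment named ceph-csi-cephfs-provisioner as the pod names suggest, would be a rollout restart:
kubectl -n ceph-csi-cephfs rollout restart deployment/ceph-csi-cephfs-provisioner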
I started on the troubleshooting steps at https://github.com/rook/rook/blob/master/Documentation/Troubleshooting/ceph-csi-common-issues.md
I tried provisioning again and logged into the cephfs-provisioner pod currently handling the PVC provisioning attempt:
kubectl -n ceph-csi-cephfs exec -ti pod/ceph-csi-cephfs-provisioner-5f465c8b64-xnnfq -c csi-cephfsplugin -- bash
I verified with curl that I can connect to both ports on all five Ceph IPs from inside the provisioner pod.
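A connectivity check along those lines can be done with curl against each mon address (placeholder IP below; the two monitor ports are normally 6789 for the v1 messenger and 3300 for v2). A successful TCP connect is enough to show the port is reachable, even though the mons do not speak HTTP:
curl --connect-timeout 5 10.0.0.1:6789
curl --connect-timeout 5 10.0.0.1:3300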
I then copied the minimal ceph.conf configuration into the pod's /etc/ceph/ceph.conf file and ran ceph health, which is where I thought I found the issue:
[root@ceph-csi-cephfs-provisioner-5f465c8b64-xnnfq ceph]# vi ceph.conf
[root@ceph-csi-cephfs-provisioner-5f465c8b64-xnnfq ceph]# ceph health
2023-10-18T23:38:04.587+0000 7fbea5583700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2]
2023-10-18T23:38:04.591+0000 7fbea4d82700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2]
2023-10-18T23:38:04.595+0000 7fbe9ffff700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2]
[errno 13] RADOS permission denied (error connecting to the cluster)
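For reference, the minimal ceph.conf used for this kind of test needs little more than the monitor addresses; a sketch with placeholder IPs standing in for our five mons:
[global]
mon_host = 10.0.0.1,10.0.0.2,10.0.0.3,10.0.0.4,10.0.0.5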
However, this was supposedly fixed in client 16.2.1 (https://docs.ceph.com/en/quincy/security/CVE-2021-20288/), and the pod is running Ceph client 16.2.5:
[root@ceph-csi-cephfs-provisioner-5f465c8b64-xnnfq /]# ceph --version
ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
Continuing on, I copied the k8s-dev-cephfs keyring to the pod and it worked:
[root@ceph-csi-cephfs-provisioner-5f465c8b64-xnnfq /]# ceph --id k8s-dev-cephfs health
HEALTH_WARN 1 pgs not deep-scrubbed in time
[root@ceph-csi-cephfs-provisioner-5f465c8b64-xnnfq /]# ceph --id k8s-dev-cephfs fs subvolumegroup ls cephfs
[
{
"name": "arctica-test-subvol-group"
},
...
So as of now it appears that the contents of the secret csi-cephfs-secret are not being loaded in the provisioner pods. I believe this because provisioning failed with both the limited k8s-dev-cephfs user and the full admin account in the secret, yet succeeded from inside the provisioner pod with the limited k8s-dev-cephfs user account.
I tried restarting the provisioner pods, thinking they might need a restart to pick up the new secret, but that did not help.
To be continued...
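In hindsight (see the resolution further down in this issue), a quick check at this point would have been to decode the secret values and look for a stray trailing newline (a 0a byte at the end); for example, assuming the secret lives in the default namespace:
kubectl get secret csi-cephfs-secret -o jsonpath='{.data.adminID}' | base64 -d | xxd
kubectl get secret csi-cephfs-secret -o jsonpath='{.data.adminKey}' | base64 -d | xxd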
I was thinking this could be related to the credential issue reported in https://github.com/ceph/ceph-csi/issues/1818 but I tested by creating a new storageclass using ceph admin credentials without success:
fs-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-fs-secret
type: Opaque
data:
  adminID: admin_encoded_in_base64
  adminKey: password_encoded_in_base64
fs-sc.yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
  name: csi-fs-sc
mountOptions:
- debug
parameters:
  clusterID: 8aa4d4a0-a209-11ea-baf5-ffc787bfc812
  csi.storage.k8s.io/controller-expand-secret-name: csi-fs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-fs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
  csi.storage.k8s.io/provisioner-secret-name: csi-fs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  fsName: cephfs
provisioner: cephfs.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
fs-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-fs-pvc-test-7
  namespace: nick
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-fs-sc
Then create the secret, storageclass, and pvc:
outin@halt:~/k8s$ kubectl create -f fs-secret.yaml
secret/csi-fs-secret created
outin@halt:~/k8s$ kubectl create -f fs-sc.yaml
storageclass.storage.k8s.io/csi-fs-sc created
outin@halt:~/k8s$ kubectl create -f fs-pvc.yaml
persistentvolumeclaim/csi-fs-pvc-test-7 created
At this step I have also tried restarting all pod/ceph-csi-cephfs-provisioner-* and pod/ceph-csi-cephfs-csi-cephfsplugin-* pods.
I'm expecting this to create a CephFS PV, but instead it gets stuck:
outin@halt:~/k8s$ kubectl get pvc -n nick
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-fs-pvc-test-7 Pending csi-fs-sc 105s
outin@halt:~/k8s$ kubectl describe pvc -n nick csi-fs-pvc-test-7
Name: csi-fs-pvc-test-7
Namespace: nick
StorageClass: csi-fs-sc
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-provisioner: cephfs.csi.ceph.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 2s (x11 over 2m10s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "cephfs.csi.ceph.com" or manually created by system administrator
Normal Provisioning 2s (x9 over 2m10s) cephfs.csi.ceph.com_ceph-csi-cephfs-provisioner-5f465c8b64-rjndz_810af384-84c9-4737-8ae0-bdcf33b2c43e External provisioner is provisioning volume for claim "nick/csi-fs-pvc-test-7"
Warning ProvisioningFailed 2s (x9 over 2m10s) cephfs.csi.ceph.com_ceph-csi-cephfs-provisioner-5f465c8b64-rjndz_810af384-84c9-4737-8ae0-bdcf33b2c43e failed to provision volume with StorageClass "csi-fs-sc": rpc error: code = InvalidArgument desc = failed to get connection: connecting failed: rados: ret=-13, Permission denied
I can then delete these test resources with:
outin@halt:~/k8s$ kubectl delete pvc -n nick csi-fs-pvc-test-7
outin@halt:~/k8s$ kubectl delete storageclass csi-fs-sc
outin@halt:~/k8s$ kubectl delete secret csi-fs-secret
I'm using the Ceph "root" user (admin) and its key for testing, which bypasses the user caps issues listed in the issue above. And I'm able to log into the provisioner pod listed in the error message and connect to the Ceph cluster without issue, which means networking is working properly.
In the past, Peter had concluded that dynamic provisioning was not supported with the CSI CephFS driver, but did work with the CSI RBD driver. Do you have info that that support has changed (which would be very cool)...
@mbjones it says support for most of the features we want is "GA" at https://github.com/ceph/ceph-csi
Also the issues I linked to here, especially the last one, have multiple reports of successful setups using CephFS.
@nickatnceas one issue that has bitten me several times is newlines in secret credentials. Are you sure that neither your adminID nor your adminKey has a newline embedded in the string? I was naively doing echo "some-password" | base64 when I needed to do echo -n "some-password" | base64. @artntek was recently caught by this as well. Just something to check.
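To illustrate the difference with the example string above (the first encoding ends in an extra newline byte, 0a):
$ echo "some-password" | base64
c29tZS1wYXNzd29yZAo=
$ echo -n "some-password" | base64
c29tZS1wYXNzd29yZA==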
@mbjones that was it! Thanks!
outin@halt:~/k8s$ kubectl get pvc -n nick
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-fs-pvc-test-8 Bound pvc-0082bed1-27b0-4112-9881-5b7e511c4ec9 10Gi RWX csi-fs-sc 4s
I'm going to test the limited CephFS accounts now, then if that is successful, switch back to the standard storageclass name.
Dynamic provisioning of CephFS volumes is now working on k8s-dev. Here is the config:
On Ceph, create two limited users per the docs:
ceph auth get-or-create client.k8s-dev-cephfs mon 'allow r' osd 'allow rw tag cephfs metadata=*' mgr 'allow rw'
ceph auth get-or-create client.k8s-dev-cephfs-node mon 'allow r' osd 'allow rw tag cephfs *=*' mgr 'allow rw' mds 'allow rw'
Verify that no existing PVs are using the existing storageclass: kubectl get pv
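One convenient way to filter the output for this storageclass (just a convenience, not part of the original steps):
kubectl get pv | grep csi-cephfs-sc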
Delete the existing secret:
outin@halt:~/k8s$ kubectl delete secret csi-cephfs-secret
Encode the usernames and keys in base64:
outin@halt:~$ echo -n "k8s-dev-cephfs" | base64
outin@halt:~$ echo -n "secretstringhere" | base64
outin@halt:~$ echo -n "k8s-dev-cephfs-node" | base64
outin@halt:~$ echo -n "secretstringhere" | base64
Create and load new storageclass and secrets:
cephfs-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
type: Opaque
data:
  adminID: k8s-dev-cephfs_username_encoded_in_base64
  adminKey: k8s-dev-cephfs_key_encoded_in_base64
cephfs-secret-node.yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-node-secret
type: Opaque
data:
  adminID: k8s-dev-cephfs-node_username_encoded_in_base64
  adminKey: k8s-dev-cephfs-node_key_encoded_in_base64
cephfs-sc.yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
  name: csi-cephfs-sc
mountOptions:
- debug
parameters:
  clusterID: 8aa4d4a0-a209-11ea-baf5-ffc787bfc812
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-node-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  fsName: cephfs
  volumeNamePrefix: "k8s-dev-csi-vol-"
provisioner: cephfs.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
Then load into k8s-dev:
outin@halt:~/k8s$ kubectl create -f cephfs-secret.yaml
secret/csi-cephfs-secret created
outin@halt:~/k8s$ kubectl create -f cephfs-secret-node.yaml
secret/csi-cephfs-node-secret created
outin@halt:~/k8s$ kubectl replace -f cephfs-sc.yaml --force
storageclass.storage.k8s.io "csi-cephfs-sc" deleted
storageclass.storage.k8s.io/csi-cephfs-sc replaced
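As an optional sanity check, the replaced storageclass parameters can be reviewed with the same command used earlier in this issue:
kubectl get storageclass csi-cephfs-sc -o yaml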
Testing creates a PVC and PV, which is then visible on the CephFS file system in /volumes/csi/:
cephfs-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc-test-12
  namespace: nick
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-cephfs-sc
outin@halt:~/k8s$ kubectl create -f cephfs-pvc.yaml
persistentvolumeclaim/csi-cephfs-pvc-test-12 created
outin@halt:~/k8s$ kubectl get pvc -n nick
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-cephfs-pvc-test-12 Bound pvc-91690627-43bd-44c4-81d5-50b6901fda45 10Gi RWX csi-cephfs-sc 8s
outin@halt:~/k8s$ kubectl get pv pvc-91690627-43bd-44c4-81d5-50b6901fda45 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: cephfs.csi.ceph.com
  creationTimestamp: "2023-10-24T16:28:38Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-91690627-43bd-44c4-81d5-50b6901fda45
  resourceVersion: "368045407"
  uid: b5f4229b-181e-4fc8-890c-92b30f2e1cbc
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: csi-cephfs-pvc-test-12
    namespace: nick
    resourceVersion: "368045401"
    uid: 91690627-43bd-44c4-81d5-50b6901fda45
  csi:
    controllerExpandSecretRef:
      name: csi-cephfs-secret
      namespace: default
    driver: cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: csi-cephfs-node-secret
      namespace: default
    volumeAttributes:
      clusterID: 8aa4d4a0-a209-11ea-baf5-ffc787bfc812
      csi.storage.k8s.io/pv/name: pvc-91690627-43bd-44c4-81d5-50b6901fda45
      csi.storage.k8s.io/pvc/name: csi-cephfs-pvc-test-12
      csi.storage.k8s.io/pvc/namespace: nick
      fsName: cephfs
      storage.kubernetes.io/csiProvisionerIdentity: 1698156357817-8081-cephfs.csi.ceph.com
      subvolumeName: k8s-dev-csi-vol-62fa67ca-728a-11ee-aac4-bea91be78439
      subvolumePath: /volumes/csi/k8s-dev-csi-vol-62fa67ca-728a-11ee-aac4-bea91be78439/0a55f930-96ac-4a7d-bf45-49e6ac61046b
      volumeNamePrefix: k8s-dev-csi-vol-
    volumeHandle: 0001-0024-8aa4d4a0-a209-11ea-baf5-ffc787bfc812-0000000000000001-62fa67ca-728a-11ee-aac4-bea91be78439
  mountOptions:
  - debug
  persistentVolumeReclaimPolicy: Delete
  storageClassName: csi-cephfs-sc
  volumeMode: Filesystem
status:
  phase: Bound
root@merry-snail:/mnt/cephfs/volumes/csi/k8s-dev-csi-vol-62fa67ca-728a-11ee-aac4-bea91be78439/0a55f930-96ac-4a7d-bf45-49e6ac61046b# pwd
/mnt/cephfs/volumes/csi/k8s-dev-csi-vol-62fa67ca-728a-11ee-aac4-bea91be78439/0a55f930-96ac-4a7d-bf45-49e6ac61046b
root@merry-snail:/mnt/cephfs/volumes/csi/k8s-dev-csi-vol-62fa67ca-728a-11ee-aac4-bea91be78439/0a55f930-96ac-4a7d-bf45-49e6ac61046b# ls -al
total 0
drwxrwxrwx 2 root root 0 Oct 24 09:28 .
drwxrwxrwx 3 root root 2 Oct 24 09:28 ..
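As a further check (not part of the original notes), a minimal test pod that mounts the new PVC could look like the sketch below; the pod name and busybox image are assumptions:
cephfs-test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-test-pod
  namespace: nick
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sh", "-c", "touch /data/hello-from-k8s && sleep 3600"]
    volumeMounts:
    - name: cephfs-vol
      mountPath: /data
  volumes:
  - name: cephfs-vol
    persistentVolumeClaim:
      claimName: csi-cephfs-pvc-test-12
After kubectl create -f cephfs-test-pod.yaml, the hello-from-k8s file should appear under the subvolume path shown above.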
Dynamic provisioning of CephFS volumes is now working on k8s-prod. Here is the config:
On Ceph, create two limited users per the docs:
ceph auth get-or-create client.k8s-cephfs mon 'allow r' osd 'allow rw tag cephfs metadata=*' mgr 'allow rw'
ceph auth get-or-create client.k8s-cephfs-node mon 'allow r' osd 'allow rw tag cephfs *=*' mgr 'allow rw' mds 'allow rw'
Verify that no existing PVs are using the existing storageclass: kubectl get pv
There was no existing secret or storageclass.
Encode the usernames and keys in base64:
outin@halt:~$ echo -n "k8s-cephfs" | base64
outin@halt:~$ echo -n "secretstringhere" | base64
outin@halt:~$ echo -n "k8s-cephfs-node" | base64
outin@halt:~$ echo -n "secretstringhere" | base64
Create and load new storageclass and secrets:
cephfs-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
type: Opaque
data:
  adminID: k8s-cephfs_username_encoded_in_base64
  adminKey: k8s-cephfs_key_encoded_in_base64
cephfs-secret-node.yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-node-secret
type: Opaque
data:
  adminID: k8s-cephfs-node_username_encoded_in_base64
  adminKey: k8s-cephfs-node_key_encoded_in_base64
cephfs-sc.yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
  name: csi-cephfs-sc
mountOptions:
- debug
parameters:
  clusterID: 8aa4d4a0-a209-11ea-baf5-ffc787bfc812
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-node-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  fsName: cephfs
  volumeNamePrefix: "k8s-prod-csi-vol-"
provisioner: cephfs.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
Then load into k8s-prod:
root@docker-ucsb-4:/home/outin/k8s# kubectl create -f cephfs-secret.yaml
secret/csi-cephfs-secret created
root@docker-ucsb-4:/home/outin/k8s# kubectl create -f cephfs-secret-node.yaml
secret/csi-cephfs-node-secret created
root@docker-ucsb-4:/home/outin/k8s# kubectl create -f cephfs-sc.yaml
storageclass.storage.k8s.io/csi-cephfs-sc created
Testing creates a PVC and PV, which is then visible on the CephFS file system in /volumes/csi/:
cephfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc-test-1
  namespace: outin
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-cephfs-sc
root@docker-ucsb-4:/home/outin/k8s# kubectl create -f cephfs-pvc.yaml
persistentvolumeclaim/csi-cephfs-pvc-test-1 created
root@docker-ucsb-4:/home/outin/k8s# kubectl -n outin get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-cephfs-pvc-test-1 Bound pvc-292a1938-2d52-41dd-866d-1ec9c16f9469 10Gi RWX csi-cephfs-sc 9s
root@docker-ucsb-4:/home/outin/k8s# kubectl -n outin get pv pvc-292a1938-2d52-41dd-866d-1ec9c16f9469 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: cephfs.csi.ceph.com
  creationTimestamp: "2023-10-23T23:59:23Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-292a1938-2d52-41dd-866d-1ec9c16f9469
  resourceVersion: "396403681"
  uid: 3b8fdc36-3c4a-423d-9309-d4e654043d4d
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: csi-cephfs-pvc-test-1
    namespace: outin
    resourceVersion: "396403669"
    uid: 292a1938-2d52-41dd-866d-1ec9c16f9469
  csi:
    controllerExpandSecretRef:
      name: csi-cephfs-secret
      namespace: default
    driver: cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: csi-cephfs-node-secret
      namespace: default
    volumeAttributes:
      clusterID: 8aa4d4a0-a209-11ea-baf5-ffc787bfc812
      csi.storage.k8s.io/pv/name: pvc-292a1938-2d52-41dd-866d-1ec9c16f9469
      csi.storage.k8s.io/pvc/name: csi-cephfs-pvc-test-1
      csi.storage.k8s.io/pvc/namespace: outin
      fsName: cephfs
      storage.kubernetes.io/csiProvisionerIdentity: 1698095422479-8081-cephfs.csi.ceph.com
      subvolumeName: k8s-prod-csi-vol-30a4102f-7200-11ee-a2d5-1ebe8cac1bb0
      subvolumePath: /volumes/csi/k8s-prod-csi-vol-30a4102f-7200-11ee-a2d5-1ebe8cac1bb0/89046bb5-1830-4560-9fb5-1fdd9d504d56
      volumeNamePrefix: k8s-prod-csi-vol-
    volumeHandle: 0001-0024-8aa4d4a0-a209-11ea-baf5-ffc787bfc812-0000000000000001-30a4102f-7200-11ee-a2d5-1ebe8cac1bb0
  mountOptions:
  - debug
  persistentVolumeReclaimPolicy: Delete
  storageClassName: csi-cephfs-sc
  volumeMode: Filesystem
status:
  phase: Bound
root@merry-snail:/mnt/cephfs/volumes/csi/k8s-prod-csi-vol-30a4102f-7200-11ee-a2d5-1ebe8cac1bb0/89046bb5-1830-4560-9fb5-1fdd9d504d56# pwd
/mnt/cephfs/volumes/csi/k8s-prod-csi-vol-30a4102f-7200-11ee-a2d5-1ebe8cac1bb0/89046bb5-1830-4560-9fb5-1fdd9d504d56
root@merry-snail:/mnt/cephfs/volumes/csi/k8s-prod-csi-vol-30a4102f-7200-11ee-a2d5-1ebe8cac1bb0/89046bb5-1830-4560-9fb5-1fdd9d504d56# ls -al
total 0
drwxrwxrwx 2 root root 0 Oct 23 16:59 .
drwxrwxrwx 3 root root 2 Oct 23 16:59 ..
Awesome, @nickatnceas !
Can you add this documentation to the k8s-cluster docs where Peter previously documented the setup for the RBD dynamic provisioning and CephFS static provisioning? That whole storage provisioning section could probably use some editing/reorganization to clarify the stuff you do as the storage admin versus the stuff devs do to create and use PVs and PVCs.
I changed the k8s-dev storageclass to add a volumeNamePrefix to k8s-dev volumes, similar to k8s-prod. There were no dynamic CephFS PVCs/PVs created yet, so the change was not disruptive. I'll update my notes above, and will update those setup docs.
I updated the docs with CephFS dynamic provisioning, moved the storageclass sections from dynamic provisioning to the CSI setup doc, and moved dynamic provisioning to the top of the docs.
We did not previously deploy this due to the requirement that the CephFS CSI driver be granted admin access to the entire Ceph cluster: see https://github.com/ceph/ceph-csi/blob/devel/docs/deploy-cephfs.md
However, this does not appear to be the case anymore, as limited CephFS access can be granted to K8s: https://github.com/ceph/ceph-csi/blob/devel/docs/capabilities.md#cephfs
My plan is to create a new k8s-dev-cephfs user in Ceph and update the csi-cephfs-sc storage class, which is deployed but not using valid credentials, and is not in use according to kubectl get pvc -A.