Closed deadjoker closed 2 years ago
I came across the same issue. The user for CephFS is able to create subvolumegroups and subvolumes, but it fails on the provisioner. A user with full admin rights works without problems. I couldn't find where the call to RADOS is made, so I can't tell which permission is missing or which action causes the problem.
@deadjoker @sgissi these are the capabilities we require for the user in a ceph cluster for Ceph CSI to perform its actions: https://github.com/ceph/ceph-csi/blob/master/docs/capabilities.md . If you still face issues even after giving these permissions, please report back!
@humblec I followed this docs and still get this error. See my step 1
Thanks @deadjoker for confirming the setup. @yati1998, are we missing any capabilities in the doc?
Hi @deadjoker , As per the steps mentioned by you, the user creation is done as per the node plugin capabilities, and the cephFS Provisioner capabilities seem to be missing. This might be the reason why you are unable to provision a volume via the cephfs-provisioner. Unlike rbd, cephfs has separate capability requirements for node plugin and provisioner as mentioned here. For solving the issue, you can try creating separate cephfs-plugin and cephfs-provisioner secrets. Feel free to reach out if the issue still persists :)
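For reference, a sketch of creating the two separate CephFS users with the capabilities listed in capabilities.md (the user names client.csi-cephfs-provisioner and client.csi-cephfs-node are only examples; the caps match the ones quoted later in this thread):

```shell
# Provisioner user: caps per docs/capabilities.md for the CephFS provisioner
ceph auth get-or-create client.csi-cephfs-provisioner \
  mon 'allow r' \
  mgr 'allow rw' \
  osd 'allow rw tag cephfs metadata=*'

# Node-plugin user: note the broader osd cap and the extra mds cap
ceph auth get-or-create client.csi-cephfs-node \
  mon 'allow r' \
  mgr 'allow rw' \
  osd 'allow rw tag cephfs *=*' \
  mds 'allow rw'
```

Each user's key can then go into its own Kubernetes Secret so the provisioner and node plugin authenticate separately.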
Hi @Yuggupta27, here are the secrets in my cluster environment.
kubectl get secret -n ceph-csi
NAME TYPE DATA AGE
cephfs-csi-nodeplugin-token-sx9v2 kubernetes.io/service-account-token 3 97d
cephfs-csi-provisioner-token-xxnrd kubernetes.io/service-account-token 3 97d
csi-cephfs-secret Opaque 2 97d
default-token-ccmsh kubernetes.io/service-account-token 3 105d
Should I use a new ceph id with the capabilities
"mon", "allow r",
"mgr", "allow rw",
"osd", "allow rw tag cephfs metadata=*"
and create a csi-cephfs-provisioner-secret for the provisioner?
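If so, a minimal sketch of such a separate provisioner secret might look like the following (the secret name is an assumption; for CephFS, ceph-csi reads the credentials from the adminID/adminKey fields, as the secret example later in this thread shows):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-provisioner-secret
  namespace: ceph-csi
stringData:
  # credentials of the dedicated provisioner user
  adminID: csi-cephfs-provisioner
  adminKey: <output of 'ceph auth get-key client.csi-cephfs-provisioner'>
```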
@deadjoker Did you manage to get the issue resolved? I ran into a similar error as well and am not sure yet how to resolve it.
@alamsyahho I have not resolved this issue yet. I'm using the admin account instead.
Understood. Probably I will have to use the admin account for csi-cephfs as well, then. Thanks for your reply.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.
This is still very valid. I have ceph-csi installed via rook, and using the rook scripts to create the ceph clients
client.csi-cephfs-node
caps: [mds] allow rw
caps: [mgr] allow rw
caps: [mon] allow r
caps: [osd] allow rw tag cephfs *=*
client.csi-cephfs-provisioner
caps: [mgr] allow rw
caps: [mon] allow r
caps: [osd] allow rw tag cephfs metadata=*
Trying to provision a cephfs subvolumegroup doesn't work using csi-cephfs-provisioner. However, if I tell the StorageClass to use admin, it works, so either something is missing from these caps or the code does something different when admin is used.
Update: the csi-cephfs-provisioner is able to create subvolume groups when invoked directly:
[root@kw-02000cccea2b /]# ceph -n client.csi-cephfs-provisioner --key xxx== -m v2:10.3.60.25:3300 fs subvolumegroup create cephfs test cephfs_data
[root@kw-02000cccea2b /]# ceph -n client.csi-cephfs-provisioner --key xxx== -m v2:10.3.60.25:3300 fs subvolumegroup ls cephfs
[
{
"name": "test"
}
]
Weirdly enough, this still fails if I give the csi-cephfs-provisioner client the same caps as admin, but it works if I use the admin client.
[client.csi-cephfs-provisioner]
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
I still wasn't able to solve the problem, I simply worked around it using client.admin
like some other people here.
@deadjoker, the ceph capability requirements you provided from the following link have to be used in the userID section of the secret, for static provisioning only. The following example explains the meaning of the userID and adminID sections.
If you expect dynamic provisioning behaviour, you have to provide an admin user account, for reasons that are not well documented.
I've faced this issue in the past months -> only the client.admin user worked. When I created another admin user, say "client.admin123", with the same capabilities, it didn't work. A few posts are related to this problem -> this one, for example.
In the last few days, users at work asked us to provide dynamic provisioning for our K8S/Ceph environments.
So I tried again this evening with an up-to-date config:
I created an alternative admin account with the same caps as client.admin, inserted these credentials at adminID... and it works now, with an alternative admin user!
Here is the user definition and caps for information :
client.admink8s
key: AQBB4............jKSb9Kbjg==
caps: [mds] allow *
caps: [mgr] allow *
caps: [mon] allow *
caps: [osd] allow *
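For completeness, a sketch of how such an alternative full-admin account can be created (the name client.admink8s comes from the listing above; the key is generated by Ceph):

```shell
# Create an alternative account with the same caps as client.admin
ceph auth get-or-create client.admink8s \
  mds 'allow *' mgr 'allow *' mon 'allow *' osd 'allow *'

# Retrieve the key to paste into the Kubernetes secret's adminKey field
ceph auth get-key client.admink8s
```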
Very insecure... We do not want to expose an admin token in the clear in Kubernetes, as we don't use protected secrets yet. At the very least, it would be appreciated not to require write capabilities for the monitors.
Can the development team clarify in the docs directory the minimal caps for an "admin" user for dynamic provisioning? Or explain why it has to be a full admin with write caps for the Ceph mons?
@humblec ? I will also check at the code and ceph detailed caps next days
Thanks a lot,
Hi guys,
I encountered this problem too, but I have resolved it. The key point is that the adminID and adminKey in the Secret file must be admin (client.admin in the ceph cluster). Once I re-applied the Secret yaml file, csi-cephfs-sc worked!!!
I found the doc in ceph-csi/docs/capabilities.md. There seem to be some issues with the user privileges configured via ceph (using ceph auth client.xxx caps mon 'allow r' osd....mds...mgr...). It doesn't work!!!
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: ceph-csi
stringData:
  adminID: k8sfs   # <-- here should be admin (client.admin)
  adminKey: xxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
@drummerglen It's not resolved. Your "solution"/"resolved" is exactly what everyone else here did to work around the problem, nothing new. It's even written in the original post:
... but succeed if I use admin ceph user.
It's not a solution/resolution to run as admin/superuser/god-mode, it's just a temporary work-around. Privilege separation is there for a reason, mainly to reduce the risk of malicious abuse or errors made by code or humans.
@Raboo Oops, sorry, I didn't read every comment. May I ask if any version has resolved this issue?
@drummerglen No, I don't think so. It seems very hard to figure out why this is happening, and it probably doesn't affect the majority of users.
@Raboo My ceph cluster was deployed by cephadm running on docker. I have no idea if that is the problem.
Hi, as of today the issue has still not been resolved. Is there an ongoing fix still pending, or does nobody really care about this issue? It is very concerning that we need to expose our Ceph superuser credentials to the ceph-csi client; a slight human or backend error could jeopardize the whole Ceph cluster.
Hi, I am unsure if the issue is the same but you might want to look at https://github.com/ceph/ceph-csi/issues/2687.
We faced similar issues in crafting the correct caps to let the ceph provisioner use credentials with restricted access, like avoiding allow * in all caps or restricting the permissions to a path, fs, or volumes.
The caps suggested at the end of the above issue are working for us, but unfortunately the docs have not been updated yet.
Describe the bug
I deploy ceph-csi in k8s and use cephfs to provide PVCs. PVC creation fails when I use a normal ceph user but succeeds if I use the admin ceph user.
Environment details
Mounter used for mounting PVC (for cephfs it's fuse or kernel; for rbd it's krbd or rbd-nbd): kernel
Steps to reproduce
Steps to reproduce the behavior:
ceph auth caps client.k8sfs mon 'allow r' mgr 'allow rw' mds 'allow rw' osd 'allow rw tag cephfs *=*'
add secret.yaml and create from it
add storage class
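A sketch of what the secret and StorageClass in these steps might look like, following the ceph-csi examples (the resource names, clusterID, and fsName are placeholders; only the user k8sfs comes from the report above):

```yaml
# secret.yaml -- credentials the CephFS provisioner uses
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: ceph-csi
stringData:
  adminID: k8sfs
  adminKey: <output of 'ceph auth get-key client.k8sfs'>
---
# StorageClass referencing the secret above
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: <ceph cluster fsid>
  fsName: cephfs
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
reclaimPolicy: Delete
```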
Actual results
Expected behavior
PVC should be created successfully and bound to a PV.
Logs
If the issue is in PVC creation, deletion, or cloning, please attach complete logs of the containers below.
Additional context
the ceph user 'k8sfs' caps:
this user has the ability to create subvolumes and subvolumegroups as well.
the 'csi' subvolumegroup is created when I use the admin keyring in ceph-csi.
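A sketch of how the claim above can be verified manually with the restricted user, mirroring the commands shown earlier in the thread (filesystem name, group name, and keyring path are assumptions):

```shell
# Create and list a subvolumegroup directly as the restricted user
ceph -n client.k8sfs --keyring /etc/ceph/ceph.client.k8sfs.keyring \
  fs subvolumegroup create cephfs testgroup
ceph -n client.k8sfs --keyring /etc/ceph/ceph.client.k8sfs.keyring \
  fs subvolumegroup ls cephfs
```

If these succeed but PVC provisioning still fails, the missing permission is exercised somewhere other than the subvolumegroup calls.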