Closed yehaifeng closed 9 months ago
My user:
ceph auth get-or-create client.k8s_provisioner \
mon 'allow r' \
osd 'allow rw tag cephfs metadata=*' \
mgr 'allow rw'
I found a related issue, but my secret uses stringData:
https://github.com/DataONEorg/k8s-cluster/issues/42
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: ceph-csi
stringData:
  # Required for statically provisioned volumes
  userID: client.st2k8s_provisioner
  userKey: AQCdJXBlyVJ0ABAASP0QyXCqyJ1jHhBtQNkv5A==
  # Required for dynamically provisioned volumes
  adminID: client.st2k8s_node
  adminKey: AQDMJXBlqsD9GhAAJ+KNPHzEpLezdQtd4IjOyg==
  # Encryption passphrase
  encryptionPassphrase: test_passphrase
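One likely problem with this secret, assuming the standard ceph-csi behavior: the driver prepends client. to the userID/adminID taken from the secret itself, so the secret should hold the bare ID without the client. prefix. A quick sketch of what the driver ends up using with the secret above:

```shell
# ceph-csi prepends "client." to the ID from the secret, so a userID of
# "client.st2k8s_provisioner" makes the driver authenticate as the
# nonexistent entity "client.client.st2k8s_provisioner":
userID='client.st2k8s_provisioner'   # value from the secret above
echo "client.${userID}"              # prints client.client.st2k8s_provisioner
```

With userID set to the bare st2k8s_provisioner instead, the driver would resolve to the intended client.st2k8s_provisioner.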
Can you check if it's similar to this issue: https://github.com/ceph/ceph-csi/issues/2848 ?
Try executing the command manually from the csi-cephfsplugin container of the csi-cephfsplugin-provisioner pod: https://github.com/rook/rook/blob/master/Documentation/Troubleshooting/ceph-csi-common-issues.md#rbd-commands
> Can you check if it's similar to this issue #2848 ?
Yes, I checked it; my Ceph version is 17.2.5, so that bug is already fixed.
> Try executing the command manually from the csi-cephfsplugin container of the csi-cephfsplugin-provisioner pod: https://github.com/rook/rook/blob/master/Documentation/Troubleshooting/ceph-csi-common-issues.md#rbd-commands
I tried to mount manually and it failed. The secretfile was not found in /tmp/csi/keys, so I added it manually.
[root@csi-cephfsplugin-provisioner-69c48ff476-pjks5 keys]# mount -vvv -t ceph 192.168.80.3:6789,192.168.80.4:6789,192.168.80.5:6789:/volumes/cephcsi/k8ssubvol/5ce1ea09-60b9-4ce6-9e30-a06a3fbd5979 /tmp/a -o name=client.st2k8s_provisioner,mds_namespace=st2k8s,secretfile=/tmp/csi/keys/keyring,_netdev
parsing options: rw,name=client.st2k8s_provisioner,mds_namespace=st2k8s,secretfile=/tmp/csi/keys/keyring,_netdev
mount.ceph: options "name=client.st2k8s_provisioner,mds_namespace=st2k8s".
invalid new device string format
Unable to apply new capability set.
Child exited with status 1
secret is not valid base64: Invalid argument.
adding ceph secret key to kernel failed: Invalid argument
couldn't append secret option: -22
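For the record, the "secret is not valid base64" error usually means the file passed via secretfile= contained a full keyring rather than just the key: mount.ceph expects the secretfile to hold only the base64 key. A sketch of extracting the key (keyring content copied from this issue; paths are illustrative):

```shell
# secretfile= must contain ONLY the base64 key; a "[client.x] / key = ..."
# keyring file makes mount.ceph fail with "secret is not valid base64".
cat > /tmp/keyring.example <<'EOF'
[client.st2k8s_provisioner]
    key = AQCdJXBlyVJ0ABAASP0QyXCqyJ1jHhBtQNkv5A==
EOF
# Extract just the key for use with secretfile=:
awk '/key = /{print $3}' /tmp/keyring.example
```

Alternatively, ceph auth get-key client.st2k8s_provisioner prints the bare key directly, with no keyring framing.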
I tried a new CephFS, and the secret looks like this, with no client. prefix:
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: ceph-csi
stringData:
  # Required for statically provisioned volumes
  userID: csi-cephfs-provisioner
  userKey: AQCdJXBlyVJ0ABAASP0QyXCqyJ1jHhBtQNkv5A==
  # Required for dynamically provisioned volumes
  adminID: csi-cephfs-node
  adminKey: AQDMJXBlqsD9GhAAJ+KNPHzEpLezdQtd4IjOyg==
  # Encryption passphrase
  encryptionPassphrase: test_passphrase
ceph auth get-or-create client.cephfs-csi-provisioner \
mon 'allow r' \
osd 'allow rw tag cephfs metadata=*, allow rw tag cephfs data=*' \
mds 'allow rw'
ceph auth get-or-create client.cephfs-csi-node \
mon 'allow r' \
osd 'allow rw tag cephfs metadata=*, allow rw tag cephfs data=*' \
mgr 'allow rw' \
mds 'allow rw'
There are 3 CephFS filesystems in my Ceph cluster, so I need to set fsName.
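With several filesystems in the cluster, the StorageClass needs fsName so the driver knows which filesystem to provision into. A minimal sketch, assuming the filesystem name st2k8s seen in the mount command above; clusterID is a placeholder that must match the ceph-csi ConfigMap:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  # Placeholder: must match the cluster ID in the ceph-csi ConfigMap
  clusterID: <cluster-id>
  # With multiple CephFS filesystems, fsName selects which one to use
  fsName: st2k8s
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
reclaimPolicy: Delete
```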
This is not a bug, closing.
Describe the bug
Permission denied when creating a PVC using cephfs-csi.
Environment details
Mounter used for mounting PVC (for cephFS it's fuse or kernel; for rbd it's krbd or rbd-nbd): kernel
Steps to reproduce
Steps to reproduce the behavior:
Actual results
Describe what happened
Expected behavior
A clear and concise description of what you expected to happen.
Logs
If the issue is in PVC creation, deletion, cloning please attach complete logs of below containers.
csi-provisioner
csi-cephfsplugin
If the issue is in PVC resize please attach complete logs of below containers.
If the issue is in snapshot creation and deletion please attach complete logs of below containers.
If the issue is in PVC mounting please attach complete logs of below containers.
csi-rbdplugin/csi-cephfsplugin and driver-registrar container logs from plugin pod from the node where the mount is failing.
if required attach dmesg logs.
Note: if it's an rbd issue please provide only rbd-related logs; if it's a cephFS issue please provide cephFS logs.
Additional context
Add any other context about the problem here.
For example:
Any existing bug report which describes a similar issue/behavior