Looks like I get the same error when I manually try to create a subvolume:
[root@rook-ceph-tools-6f58686b5d-8rnnf /]# ceph fs subvolume create mainfs testdir
Error EINVAL: invalid value specified for ceph.dir.subvolume
@TheDJVG Could you upload ceph logs (mgr, monitor, mds logs)?
@kotreshhr sure thing. I think there's a problem in Ceph itself. My account is still pending approval on the Ceph tracker, so I can't open an issue there yet.
Logs attached: mds_debug.log, mgr.log, mon.log
I think I have found why it was failing, it's working now:
[root@rook-ceph-tools-6f58686b5d-lrq8x /]# ceph fs subvolume create mainfs testing
[root@rook-ceph-tools-6f58686b5d-lrq8x /]# ceph fs subvolume ls mainfs
[
{
"name": "testing"
}
]
It started working after I ran setfattr -n ceph.dir.subvolume -v 0 . against the filesystem root: for some reason ceph.dir.subvolume was set on /. It's unclear to me why that happened, as I've only used ceph-csi and never mounted the directories manually.
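For anyone else hitting this, a minimal sketch of checking and clearing the flag, assuming the CephFS root is mounted at /mnt/cephfs; the mount path, monitor address, and key below are placeholders, and reading the vxattr with getfattr may not be supported on older clients:
# mount the filesystem root (placeholder monitor address and key)
mount -t ceph <mon-host>:6789:/ /mnt/cephfs -o name=admin,secret=<admin-key>
# check whether the subvolume flag is set on the root
getfattr -n ceph.dir.subvolume /mnt/cephfs
# clear it so subvolumes can be created underneath it again
setfattr -n ceph.dir.subvolume -v 0 /mnt/cephfs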
@TheDJVG Thanks. I'm closing this one as it's not an issue on the cephcsi side.
Today I hit the same issue after upgrading Ceph from 17.x to 18.x. The solution was the same as above, but I have a theory about how it happened: I had an empty subvolumeGroup value provided to the CephFS CSI driver helm chart. That seems to have been an acceptable value for Ceph 17.x, but it is no longer valid for Ceph 18.x.
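For anyone with the same setup, a hedged sketch of what setting the group explicitly could look like; the csiConfig layout follows the upstream ceph-csi-cephfs chart's per-cluster config and may differ between chart versions, and the cluster ID, monitor address, release name, and namespace are placeholders:
# illustrative only: set the subvolume group explicitly instead of leaving it empty
cat > csi-values.yaml <<EOF
csiConfig:
  - clusterID: "<cluster-id>"
    monitors:
      - "<mon-host>:6789"
    cephFS:
      subvolumeGroup: "csi"
EOF
helm upgrade --reuse-values -f csi-values.yaml <release-name> ceph-csi/ceph-csi-cephfs -n <namespace>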
Describe the bug
When I try to mount or create a PVC it fails with:
It's unclear to me how I got into this situation, as the cluster was working fine with the existing claims. When I had to move some pods around, I noticed they wouldn't mount on the new hosts, and creating a new PVC fails too.
Environment details
Mounter used for mounting PVC (for cephFS its fuse or kernel, for rbd its krbd or rbd-nbd) : kernel
Steps to reproduce
Steps to reproduce the behavior:
rook-cephfs storage class with this spec (an illustrative sketch follows):
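This is only a hypothetical sketch of what a rook-cephfs StorageClass typically looks like, based on Rook's upstream CephFS example; the clusterID, fsName, pool, and secret names are placeholders, not necessarily this cluster's actual values:
# illustrative only: a typical Rook CephFS StorageClass (names are placeholders)
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: mainfs
  pool: mainfs-data0
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
EOF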
Actual results
PVC is unable to be created:
or mounting an existing PVC:
Expected behavior
The PVC would be created or mounted in the pod.
Logs
If the issue is in PVC creation, deletion, or cloning, please attach complete logs of the containers below.
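A hedged sketch of collecting those with a Rook-based deployment, assuming the default rook-ceph namespace and the usual csi-cephfsplugin workload names (adjust for your cluster):
# provisioner side: PVC create/delete/clone failures
kubectl -n rook-ceph logs deploy/csi-cephfsplugin-provisioner -c csi-provisioner
kubectl -n rook-ceph logs deploy/csi-cephfsplugin-provisioner -c csi-cephfsplugin
# node plugin side: mount failures, from the pod on the affected node
kubectl -n rook-ceph get pods -o wide | grep csi-cephfsplugin
kubectl -n rook-ceph logs <csi-cephfsplugin-pod-on-that-node> -c csi-cephfsplugin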
Additional context
This is a 4-OSD cluster across two nodes with 3 mons. The Ceph cluster is healthy and also has NFS enabled on this CephFS filesystem.