Closed: davesargrad closed this issue 4 years ago.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
@kkostin Ty. I eventually got CEPH RBD working. If I ever need to revisit CEPH FS then I am sure I'll reference this issue.
For now I'll close it.
I am trying to get CEPHFS working. The procedure I am following is the one found here; it parallels the section on RBD.
I've struggled with the process, and I have one outstanding issue. I am documenting the process here, hopefully for the benefit of others, but also because I'd like help with that final issue.
I already have a CEPH cluster and a separate K8S cluster up and running.
The steps are as follows:
I've updated the Ceph-FS-Provisioner.yaml to be consistent with K8S 1.16 (e.g. Deployment is no longer in extensions/v1beta1 and now requires a selector field)
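For reference, the 1.16 change is roughly the following (a sketch, not my exact manifest; the names, namespace, and image are placeholders):

```yaml
# Sketch of the K8S 1.16 updates: apiVersion moves from
# extensions/v1beta1 to apps/v1, and spec.selector becomes required.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner        # placeholder name
  namespace: cephfs               # placeholder namespace
spec:
  replicas: 1
  selector:                       # required in apps/v1
    matchLabels:
      app: cephfs-provisioner
  template:
    metadata:
      labels:
        app: cephfs-provisioner   # must match the selector above
    spec:
      containers:
        - name: cephfs-provisioner
          image: "quay.io/external_storage/cephfs-provisioner:latest"  # placeholder image
```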
Further details as follows. The key I am using in step 2:
The creation of the provisioner (step 3):
The creation of the storage class (step 4):
Note that this storage class is defined as follows:
Specifically it uses a claimRoot /pvc-volumes
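For context, a cephfs StorageClass with that claimRoot is shaped roughly like this (the monitor address, secret name, and namespace below are placeholders, not my actual values):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: "192.168.1.10:6789"        # placeholder monitor address
  adminId: admin
  adminSecretName: ceph-secret-admin    # placeholder secret name
  adminSecretNamespace: cephfs          # placeholder namespace
  claimRoot: /pvc-volumes
```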
When I try to create the PVC it never binds.
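A minimal PVC against that storage class would look roughly like the following (the claim name and size are illustrative only):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-claim    # illustrative name
spec:
  storageClassName: cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi      # illustrative size
```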
My provisioner pod:
Its description:
I don't quite understand the claimRoot /pvc-volumes. I have not exported this from CEPH; I am guessing that I need to, based on the comment I see here.
Do I need to export "/pvc-volumes", and if so, does someone know the command for this?
Thanks.
As an aside, here is the tail end of the log on the provisioner pod: