kubernetes-retired / external-storage

[EOL] External storage plugins, provisioners, and helper libraries
Apache License 2.0

Help getting CEPHFS working #1258

Closed · davesargrad closed this issue 4 years ago

davesargrad commented 4 years ago

I am trying to get CephFS working. The procedure I am following is the one found here; it follows the section on RBD (screenshot of the documentation omitted).

I've struggled with the process and have one outstanding problem. I am documenting the steps in this issue, hopefully for the benefit of others, but also because I'd like help with that final problem.

I already have a Ceph cluster and a separate K8S cluster up and running (screenshots omitted).
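Roughly speaking, the state shown in those screenshots can be checked with the usual status commands (a sketch, not the exact output from the images):

```sh
# On a Ceph admin node: confirm the cluster and the CephFS filesystem are healthy
ceph -s
ceph fs ls

# From a machine with kubeconfig access: confirm the K8S nodes are Ready
kubectl get nodes
```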

The steps are as follows:

  1. Create the namespace: kubectl create ns cephfs
  2. Create the secret: kubectl create secret generic ceph-secret-admin --from-literal=key="AQDJtspdXMyJLRAAZrRBzSyGR2rG5UqLHuDnAw==" -n cephfs (retrieving this key is shown in the note just after this list)
  3. Create the provisioner: kubectl create -n cephfs -f Ceph-FS-Provisioner.yaml
  4. Create the storage class: kubectl create -f Ceph-FS-StorageClass.yaml
  5. Create a PVC: kubectl create -f Ceph-FS-PVC.yaml
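For completeness, the admin key used in step 2 comes straight out of the Ceph cluster. A sketch of how to pull it (this assumes the client.admin account is the one the secret should hold, and that ceph and kubectl are reachable from the same shell):

```sh
# On a Ceph admin node: print the admin key referenced in step 2
ceph auth get-key client.admin

# Or build the secret directly from the live key (equivalent to step 2)
kubectl create secret generic ceph-secret-admin \
  --from-literal=key="$(ceph auth get-key client.admin)" -n cephfs
```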

I've updated Ceph-FS-Provisioner.yaml to be consistent with K8S 1.16 (e.g. Deployment is no longer under extensions/v1beta1 and now requires a selector field).
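For anyone making the same update, the relevant part of the change looks roughly like this (a minimal sketch, not the full provisioner manifest; the image tag and label names here are illustrative):

```yaml
# Sketch of the apps/v1 form of the provisioner Deployment (K8S 1.16+):
# extensions/v1beta1 is gone, and spec.selector is now required.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner          # must match the pod template labels below
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
        - name: cephfs-provisioner
          image: quay.io/external_storage/cephfs-provisioner:latest   # illustrative tag
```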

Further details follow. The key I am using in step 2 is the Ceph admin key (screenshots omitted).

The provisioner was created in step 3 (screenshot omitted).

The storage class was created in step 4 (screenshot omitted).

Note how this storage class is defined (screenshot omitted): specifically, it uses a claimRoot of /pvc-volumes.
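For context, a storage class for this provisioner generally looks something like the following (a sketch rather than my exact file; the monitor address is a placeholder, and the secret names match the steps above):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: ceph.com/cephfs             # name registered by the cephfs provisioner
parameters:
  monitors: 192.168.1.10:6789            # placeholder monitor address
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: cephfs
  claimRoot: /pvc-volumes                # directory on CephFS under which volumes are rooted
```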

When I try to create the PVC, it never binds (screenshots omitted).

My provisioner pod and its description are in further screenshots (also omitted).
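The omitted screenshots were gathered with commands along these lines (a sketch; the PVC and pod names are placeholders):

```sh
# PVC status and events (an unbound PVC just sits in Pending)
kubectl get pvc
kubectl describe pvc <pvc-name>

# Provisioner pod and its description in the cephfs namespace
kubectl get pods -n cephfs
kubectl describe pod <provisioner-pod-name> -n cephfs
```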

I don't quite understand the claimRoot /pvc-volumes. I have not exported this path from Ceph, and I am guessing that I need to, based on a comment I see here (screenshot omitted).

Do I need to export "/pvc-volumes", and if so, does someone know the command for this?
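From what I can tell, the claimRoot may just be a directory inside the CephFS filesystem rather than something that needs a separate export. If pre-creating it turns out to be necessary, one way to do it would be something like the following (a sketch; the monitor address is a placeholder, and it assumes the admin key can mount the CephFS root):

```sh
# Mount the CephFS root with the admin key (kernel client shown)
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
  -o name=admin,secret="$(ceph auth get-key client.admin)"

# Pre-create the claimRoot directory that the storage class points at, then unmount
sudo mkdir -p /mnt/cephfs/pvc-volumes
sudo umount /mnt/cephfs
```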

Thanks.

As an aside, here is the tail end of the log on the provisioner pod (screenshot omitted).
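(Roughly how that tail was captured; the pod name is a placeholder.)

```sh
kubectl logs <cephfs-provisioner-pod> -n cephfs --tail=20
```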

kkostin commented 4 years ago

Hi, look at this: https://github.com/kubernetes-incubator/external-storage/issues/941#issuecomment-418971539

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

davesargrad commented 4 years ago

@kkostin Ty. I eventually got Ceph RBD working. If I ever need to revisit CephFS, I am sure I'll reference this issue.

For now I'll close it.