berendt opened 2 years ago
@Nils98Ar there is a way, but it is still under testing at the moment. Manila is able to use native CephFS or NFS on top of CephFS; there are some open questions regarding security.
Okay then we will wait until there is a recommended way!
Have you tested CephFS NFS-Ganesha with OSISM? Was there any outcome?
Mathias has left our company and this task has not been continued. If there is a concrete need here, we would have to wait for the allocation of SCS VP03 so that we can cover the topic of Manila and CephFS/NFS there in the near future.
Ok, thank you for the update.
I think it is quite important for K8s rwx volumes which are still needed sometimes. Currently we are using nfs directly from K8s but Manila CSI integration would be better.
@fkr Please take up and prioritise. IMO after the major release.
@Nils98Ar there are some open issues with the production-readiness of NFS-Ganesha itself, please have a look here: https://github.com/nfs-ganesha/nfs-ganesha/issues
Maybe I will try my luck with the ceph-nfs role and manila integration. @berendt Where would you place the nfs containers? Control or storage nodes?
If I do not succeed we would wait for @fkr and SCS-VP03 ;)
On the storage nodes.
Manila with native cephfs is now working for us with manila-csi. We have created a private manila cephfs share type that is only accessible for selected (internal) projects.
Later we would create a public manila nfs-ganesha-cephfs share type as native cephfs should only be used for private cloud use cases.
@Nils98Ar Can you please share your steps?
OSISM-side or K8s or both?
I think it makes sense to have both.
**Deployment in OSISM** (see https://github.com/osism/testbed/commit/008e9e7fb2ea8c41869965c1be8d1c74df8f0e37):

- Set the `monitor_address` of the control and storage nodes
- Add to `environments/kolla/configuration.yml`:
  ```
  enable_manila: "yes"
  enable_manila_backend_cephfs_native: "yes"
  ```
- Add a `ceph.conf` to `environments/kolla/files/overlays/manila`:
  ```
  [global]
  mon host = {% for host in groups['ceph-mon'] %}{{ hostvars[host]['monitor_address'] }}{% if not loop.last %},{% endif %}{% endfor %}
  public network = {{ ceph_public_network }}
  max open files = 131072
  fsid = {{ ceph_cluster_fsid }}
  ```
- `osism apply manila`
- `osism apply loadbalancer`
- `osism apply horizon` (and maybe `osism apply skyline`)
**Configuration in OpenStack:**
- Create private CephFS share type:
  ```
  openstack share type create --description "private cephfs share type for trusted projects" --extra-specs "share_backend_name=CEPHFS1" --snapshot-support true --create-share-from-snapshot-support true --public false CephFS false
  ```
- Grant access to the private CephFS share type for each ["trusted" project](https://docs.openstack.org/manila/latest/admin/cephfs_driver.html#security-with-cephfs-native-share-backend) that needs to create shares:
  ```
  openstack share type access create CephFS <project_id>
  ```
**Usage in OpenStack:**
- Create share (protocol and type `CephFS`) + rule (type `cephx` and `access to` chosen freely) via horizon/cli etc.
- Mount e.g. via the kernel client:
  - package `ceph-common` needs to be installed
  - the client needs access to the `monitor_address` of the storage and control nodes, TCP ports `3300`, `6789`, `6800-7300`
  ```
  mount -t ceph -o 'name=<access_to>,secret=<access_key>' <monitor_address>:6789:/<share_export_path> <mountpoint>
  ```
  (`<access_to>` and `<access_key>` are the name and key of the share's cephx access rule; some warnings are expected, the command is quite verbose)
- check success e.g. via `df -h`
**Deployment in K8s:**
- install helm chart `ceph-csi-cephfs` from repo `https://ceph.github.io/csi-charts` in namespace `kube-system`
- install helm chart `openstack-manila-csi` from repo `https://kubernetes.github.io/cloud-provider-openstack` in namespace `kube-system` with values:
  ```
  fullNameOverride: "csi-manila-cephfs"
  nameOverride: "csi-manila-cephfs"
  shareProtocols:
  ```
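The `shareProtocols` value above is cut off. Based on the chart's values layout, a complete block for CephFS probably looks roughly like this; the endpoint directory is an assumption and depends on where the `ceph-csi-cephfs` node plugin registers its socket on your nodes:

```yaml
fullNameOverride: "csi-manila-cephfs"
nameOverride: "csi-manila-cephfs"
shareProtocols:
  # hand CEPHFS share mounts over to the ceph-csi-cephfs node plugin
  - protocolSelector: CEPHFS
    fwdNodePluginEndpoint:
      # assumed socket location of ceph-csi-cephfs; verify on your nodes
      dir: /var/lib/kubelet/plugins/cephfs.csi.ceph.com
      sockFile: csi.sock
```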
- Provide OpenStack application credentials for manila-csi in a secret (values are placeholders):
  ```
  apiVersion: v1
  kind: Secret
  metadata:
    name: csi-manila-secret
    namespace: kube-system
  stringData:
    os-authURL: "<keystone auth url>"
    os-region: "<region name>"
    os-applicationCredentialID: "<application credential id>"
    os-applicationCredentialSecret: "<application credential secret>"
  ```
- E.g. create secret `csi-manila-secret` based on openstack-cloud-controller-managers `cloud-config` secret (if application credential is used there):
  ```
  cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: csi-manila-secret
    namespace: kube-system
  stringData:
    os-authURL: "$(kubectl get secret -n kube-system cloud-config -o json | jq -r '.data."cloud.conf"' | base64 -d | grep auth-url | cut -d"=" -f2 | tr -d "\"")"
    os-region: "$(kubectl get secret -n kube-system cloud-config -o json | jq -r '.data."cloud.conf"' | base64 -d | grep region | cut -d"=" -f2 | tr -d "\"")"
    os-applicationCredentialID: "$(kubectl get secret -n kube-system cloud-config -o json | jq -r '.data."cloud.conf"' | base64 -d | grep application-credential-id | cut -d"=" -f2 | tr -d "\"")"
    os-applicationCredentialSecret: "$(kubectl get secret -n kube-system cloud-config -o json | jq -r '.data."cloud.conf"' | base64 -d | grep application-credential-secret | cut -d"=" -f2 | tr -d "\"")"
  EOF
  ```
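One caveat with the `grep | cut -d"=" -f2` pipeline above: it silently truncates any value that itself contains an `=` (which can happen in URLs or credential secrets). Purely as a sketch, a small awk helper (the name `cloudconf_get` is made up here, it is not part of any chart or OSISM tooling) that keeps everything after the first `=`:

```shell
# Hypothetical helper: read one key from an INI-style cloud.conf on stdin,
# taking everything after the FIRST '=' so values with '=' survive intact.
cloudconf_get() {
  awk -v key="$1" '
    index($0, "=") > 0 {
      k = substr($0, 1, index($0, "=") - 1)
      gsub(/^[ \t]+|[ \t]+$/, "", k)          # trim the key
      if (k == key) {
        v = substr($0, index($0, "=") + 1)
        gsub(/^[ \t"]+|[ \t"]+$/, "", v)      # trim spaces and quotes
        print v
      }
    }'
}

# Example: a value containing a further '=' is returned whole.
printf 'auth-url = "https://keystone.example.com/v3?x=1"\n' | cloudconf_get auth-url
# prints https://keystone.example.com/v3?x=1
```

This would slot into the secret-generation command in place of each `grep … | cut … | tr …` chain.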
- Create StorageClass:
  ```
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: "manila-cephfs"
  provisioner: "cephfs.manila.csi.openstack.org"
  parameters:
    type: "CephFS" # name of manila share type
    csi.storage.k8s.io/provisioner-secret-name: csi-manila-secret # name of manila-csi secret
    csi.storage.k8s.io/provisioner-secret-namespace: kube-system # namespace of manila-csi secret
    csi.storage.k8s.io/controller-expand-secret-name: csi-manila-secret
    csi.storage.k8s.io/controller-expand-secret-namespace: kube-system
    csi.storage.k8s.io/node-stage-secret-name: csi-manila-secret
    csi.storage.k8s.io/node-stage-secret-namespace: kube-system
    csi.storage.k8s.io/node-publish-secret-name: csi-manila-secret
    csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  ```
**Usage in K8s:**
- Create PVC:
  ```
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: manila-cepfs-test
  spec:
    storageClassName: manila-cephfs
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 1Gi # example size
  ```
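To see the RWX behaviour across pod replicas, here is a minimal sketch of a Deployment (the name and image are made up for illustration) whose replicas all mount the same claim:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: manila-cephfs-demo   # hypothetical name, for illustration only
spec:
  replicas: 3
  selector:
    matchLabels:
      app: manila-cephfs-demo
  template:
    metadata:
      labels:
        app: manila-cephfs-demo
    spec:
      containers:
        - name: writer
          image: busybox
          # each replica appends its hostname to a file on the shared volume
          command: ["sh", "-c", "echo $(hostname) >> /shared/hosts && sleep 3600"]
          volumeMounts:
            - name: shared
              mountPath: /shared
      volumes:
        - name: shared
          persistentVolumeClaim:
            claimName: manila-cepfs-test   # the PVC from above
```

If the share works, `/shared/hosts` should accumulate one line per replica.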
Sadly manila-csi seems to need its own secret and does not understand the `clouds.yaml` format used in the existing `cloud-config` secret from openstack-cloud-controller-manager, but I set my hope on this ;)
https://github.com/stfc/cloud-docs/blob/ae87f4fa768c34787b79c20e205712cf4f4d2e3a/source/Manila/manilaKubernetes.rst?plain=1#L116
With 7.0.0, are CephFS and NFS via Ganesha still supported by OSISM and the Ceph playbooks? Or should we use `enable_manila_backend_generic` instead?
We still use Ceph Ansible for Ceph Quincy. Not much has changed in this area compared to OSISM 5 and OSISM 6. If it worked before, it should still work now.
And use 7.0.1. Not 7.0.0.
I don't know whether the manila CephFS NFS-Ganesha setup was ever properly tested with OSISM. But I would be interested in it, as we currently still use manila with only CephFS.
It probably does not yet affect OSISM, but the role ceph-nfs has been removed in the ceph-ansible main branch and I have not found the mentioned separate playbook yet.
Good point. Thanks for the pointer. I had also seen the commit, but hadn't thought of it. We ourselves use neither CephFS nor NFS.
In Rook, both seem to work.
In kolla-ansible, Manila assumes that the nfs-ganesha server listens on the `api_interface`. Either I don't understand this properly, or the External Ceph Guide is not really clear in that regard.
You want to use the CephFS integration. As NFS has been kicked out of Ceph-Ansible upstream, it will soon no longer be usable here either.
I think Kolla-Ansible assumes that the NFS servers run on the same node as the Manila share services.
Whatever this means in the issue:
> nfs-ganesha support will be implemented in a separate playbook.
I would not assume that ceph-ansible is still very active or that this will really be implemented.
@matfechner Is there already a way to provide shared filesystem storage for a k8s rwx StorageClass in SCS somehow?
We would need this for shared storage between pod replicas.