Closed · matteorosani closed this 1 year ago
Hi @matteorosani. Thanks for your PR.
I am @kingmakerbot. You can interact with me by issuing a slash command in the first line of a comment. Currently, I understand the following commands:
/rebase: Rebase this PR onto the master branch
/merge: Merge this PR into the master branch
/hold: Adds hold label to prevent merging with /merge
/unhold: Removes the hold label to allow merging with /merge
/deploy-staging: Deploy a staging environment to test this PR (the build-all flag enables user environments building)
/undeploy-staging: Manually undeploy the staging environment

Make sure this PR appears in the CrownLabs changelog, adding one of the following labels:

kind/breaking: :boom: Breaking Change
kind/feature: :rocket: New Feature
kind/bug: :bug: Bug Fix
kind/cleanup: :broom: Code Refactoring
kind/docs: :memo: Documentation

Ok, now we will squash commits.
/deploy-staging
/deploy-staging
/merge
Your staging environment has been correctly torn down!
Description
Problem
In this project we aimed at replacing NextCloud as the storage solution for CrownLabs. Previously, NextCloud provided user storage for VMs, while FileBrowser was used for containers; this is suboptimal, since files stored in one are not available in the other. Moreover, NextCloud has shown reliability and performance issues, and FileBrowser is currently attached to independent volumes that are destroyed when an instance is deleted.
Proposed solution
In this project we developed a solution based on the CephNFS CRD added in Rook v1.10. With this approach, a single Ceph Filesystem is created for all users, and individual volumes (essentially folders inside that shared filesystem) are exported via NFS inside the cluster. Each user's personal volume is then mounted in both containers and VMs, yielding a unified storage solution.
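For reference, a minimal sketch of the two Rook resources this builds on is shown below; the names (`myfs`, `my-nfs`) and replica counts are illustrative placeholders, not necessarily the values used in the CrownLabs cluster:

```yaml
# Shared Ceph Filesystem holding all per-user volumes (names are assumptions).
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - name: replicated        # yields a pool named "myfs-replicated"
      replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true
---
# CephNFS (Rook v1.10+): spawns an NFS-Ganesha server that exports
# folders of the filesystem above inside the cluster.
apiVersion: ceph.rook.io/v1
kind: CephNFS
metadata:
  name: my-nfs
  namespace: rook-ceph
spec:
  server:
    active: 1
```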
Personal volumes are provisioned dynamically: a StorageClass is defined specifying the Ceph Filesystem name, the NFS service name, the NFS cluster name, and the Ceph cluster ID. With this information, when a storage request is issued (i.e., a PVC referencing this StorageClass is created), the Rook operator creates a volume inside the Ceph Filesystem and exposes it as an NFS share reachable through the in-cluster NFS service. The new volume is then linked to a PV, which is bound to the PVC that requested the storage in the first place.
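As a rough sketch, reusing the placeholder names from above (`myfs`, `my-nfs`) and the secret names from the standard Rook NFS CSI examples, the StorageClass and a per-user PVC could look like this; every name here is an assumption, not the actual CrownLabs configuration:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-nfs
# CSI driver name follows the pattern <operator-namespace>.nfs.csi.ceph.com
provisioner: rook-ceph.nfs.csi.ceph.com
parameters:
  nfsCluster: my-nfs               # NFS cluster name (the CephNFS resource)
  server: rook-ceph-nfs-my-nfs-a   # in-cluster NFS service name
  clusterID: rook-ceph             # Ceph cluster ID
  fsName: myfs                     # Ceph Filesystem name
  pool: myfs-replicated            # data pool backing the shares
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
---
# Hypothetical per-tenant claim: creating it triggers the chain described
# above (subvolume in myfs -> NFS export -> PV bound to this PVC).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: personal-storage
spec:
  storageClassName: rook-nfs
  accessModes:
    - ReadWriteMany                # the share is mounted by multiple instances
  resources:
    requests:
      storage: 5Gi
```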
Implementation details
The personal storage is owned by the user (Tenant) and managed (created and deleted) by the TenantOperator, which also creates a Secret storing the data needed to access the NFS share. This Secret is then consumed by the InstanceOperator, which mounts the NFS share in the instances: through an NFS volume mount for containers, and through cloud-init for VMs.
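A sketch of the two mount paths follows; the server address, export path, image, and mount point are hypothetical, and would in practice come from the Secret created by the TenantOperator:

```yaml
# Containers: a standard NFS volume in the instance Pod spec.
apiVersion: v1
kind: Pod
metadata:
  name: example-instance
spec:
  containers:
    - name: workspace
      image: nginx                      # stand-in for the instance image
      volumeMounts:
        - name: personal-storage
          mountPath: /media/personal
  volumes:
    - name: personal-storage
      nfs:
        server: rook-ceph-nfs-my-nfs-a.rook-ceph.svc.cluster.local
        path: /personal/mytenant        # export path from the user's Secret
```

For VMs, the same share can be mounted through the cloud-init `mounts` module, e.g.:

```yaml
#cloud-config
mounts:
  - [ "rook-ceph-nfs-my-nfs-a.rook-ceph.svc.cluster.local:/personal/mytenant",
      "/media/personal", "nfs", "defaults,_netdev", "0", "0" ]
```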