dunefro opened this issue 4 years ago
@dunefro CStorPoolAuto tries to abstract away BlockDevices, SPC & a few others. It provides a new CR called CStorClusterConfig
that deals with 'node labels', 'pool counts', 'raid type' & 'csi drivers' that support attach & detach. With this information the CStorPoolAuto operator should handle all the operations related to finding BlockDevices, creating CStorPool & so on.
Below is a sample file that does all the above:
```yaml
apiVersion: dao.mayadata.io/v1alpha1
kind: CStorClusterConfig
metadata:
  name: my-cstor-cluster
  namespace: openebs
spec:
  diskConfig:
    externalProvisioner:
      csiAttacherName: pd.csi.storage.gke.io
      storageClassName: csi-gce-pd
  poolConfig:
    raidType: mirror
```
Having said that, I will see if the above suggestion can be incorporated into the OpenEBS operators themselves. cc @sonasingh46 @mittachaitu
I am not able to understand the clear scope of the CStorClusterConfig, especially this line:

> With this information the CStorPoolAuto operator should handle all the operations related to finding BlockDevices, creating CStorPool & so on.
Q1. If I am using a basic setup of provisioning certain specific volumes for a deployment, will cstor pool be able to do that automatically, starting from analysing the BlockDevices and creating the storage pool claim based on some filters?
It would be great if we could get a simpler working example of cstorpoolauto to get an idea of how this works.
@dunefro I agree that docs & simple working demo(s) need to be showcased to explain this operator. I shall try to update these, and analyse further if you can provide your setup & your expectations from your application. (In other words, let's park the solution details, i.e. BlockDevices, label selectors on BlockDevices, NDM operator, etc., for the time being.)
E.g. can you explain how BlockDevices become available on your setup? And finally, how do you want your applications to consume these devices?
We are trying to provide a platform packed inside a tarball through Gravity, using OpenEBS as the storage provider. In this, we can pack, e.g., Elasticsearch + k8s (cluster). On unpacking this tarball, ES must get installed automatically after the k8s installation completes, without any manual interference in between. This is one of the simpler things we wish to achieve.
The above case is possible, but it would involve manual work that we wish to remove. Right now we are trying to work things out with the maxPools option, but it's still not fit for day-2 operations, as mentioned in the Slack channel.
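For reference, the maxPools route looks roughly like this (a sketch with illustrative names, not our actual manifest); the operator then auto-selects eligible BlockDevices, which is exactly the part that is hard to steer for day-2 operations:

```yaml
# Sketch of a StoragePoolClaim using maxPools (names are illustrative).
# NDM / maya-apiserver picks the eligible BlockDevices on its own here,
# so adding or replacing specific disks later is hard to control.
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
spec:
  name: cstor-disk-pool
  type: disk
  maxPools: 3
  poolSpec:
    poolType: striped
```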
@dunefro this is nice. Is Gravity meant to work with all kinds of clusters, i.e. on-prem as well as cloud? How is Gravity provisioning the storage disks? Or does it only expose existing disks that are already attached to the nodes?
@AmitKumarDas Yes, Gravity is meant to support all kinds of clusters; we are supporting on-prem. Right now I am facing an issue of the NDM pod being stuck in the ContainerCreating stage, with the following events:
```
44s   Warning FailedMount pod/openebs-ndm-kl7hm MountVolume.SetUp failed for volume "udev" : hostPath type check failed: /run/udev is not a directory
7h30m Warning FailedMount pod/openebs-ndm-kl7hm MountVolume.SetUp failed for volume "config" : configmap "openebs-ndm-config" not found
25m   Warning FailedMount pod/openebs-ndm-kl7hm Unable to attach or mount volumes: unmounted volumes=[config udev], unattached volumes=[config udev procmount sparsepath openebs-maya-operator-token-2khww]: timed out waiting for the condition
120m  Warning FailedMount pod/openebs-ndm-kl7hm Unable to attach or mount volumes: unmounted volumes=[config udev], unattached volumes=[sparsepath openebs-maya-operator-token-2khww config udev procmount]: timed out waiting for the condition
30m   Warning FailedMount pod/openebs-ndm-kl7hm Unable to attach or mount volumes: unmounted volumes=[config udev], unattached volumes=[openebs-maya-operator-token-2khww config udev procmount sparsepath]: timed out waiting for the condition
95m   Warning FailedMount pod/openebs-ndm-kl7hm Unable to attach or mount volumes: unmounted volumes=[udev config], unattached volumes=[udev procmount sparsepath openebs-maya-operator-token-2khww config]: timed out waiting for the condition
5m    Warning FailedMount pod/openebs-ndm-kl7hm Unable to attach or mount volumes: unmounted volumes=[config udev], unattached volumes=[procmount sparsepath openebs-maya-operator-token-2khww config udev]: timed out waiting for the condition
```
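Two quick checks that map to the two distinct failures above (a sketch, assuming kubectl access and that OpenEBS is installed in the `openebs` namespace):

```shell
# 1. The "udev" volume is a hostPath mount of /run/udev; on the failing node,
#    verify it exists and is a directory (it may not be on nodes where udev
#    isn't running, e.g. some Gravity-packaged environments).
ls -ld /run/udev

# 2. The "config" volume expects the openebs-ndm-config ConfigMap in the same
#    namespace as the NDM DaemonSet; verify the installer actually created it.
kubectl -n openebs get configmap openebs-ndm-config
```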
I know it has nothing to do with this issue, but I am still mentioning it here in case anyone has pointers. The complete issue post is in the Gravity forum.
Your remaining questions can be answered once we are able to install OpenEBS.
@dunefro You may ask in the openebs channel. You may refer to https://docs.google.com/document/d/16qbdrYplvk7FjYe_oL8tgL1rDiCM6Arq_Pinx0ocULk/
Simple scenario! I want to deploy Elasticsearch and I need to provision some volumes. Now I have the BlockDevices (say 5) and I need to create a StoragePoolClaim that should use (let's say) 2 of them. To do so today, I need to manually look at the BlockDevices to be included and add them to the blockDeviceList. The simpler case could be: if NDM or the NDM operator (just saying) is somehow able to put a label on each BlockDevice that matches the labels provided in the StoragePoolClaim for ES, then when both match we need not mention the block device list in the StoragePoolClaim at all.
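To make the contrast concrete, here is a sketch (device names are illustrative, and the `blockDeviceSelector` field in the second document is purely the hypothetical proposal, not an existing API field):

```yaml
# Today: the 2 devices must be enumerated by hand in blockDeviceList.
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-es-pool
spec:
  name: cstor-es-pool
  type: disk
  poolSpec:
    poolType: mirrored
  blockDevices:
    blockDeviceList:
    - blockdevice-example-1   # illustrative name
    - blockdevice-example-2   # illustrative name
---
# Proposed (hypothetical): select devices by an NDM-applied label instead,
# so no per-device list is needed in the claim.
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-es-pool
spec:
  name: cstor-es-pool
  type: disk
  poolSpec:
    poolType: mirrored
  blockDevices:
    blockDeviceSelector:      # does not exist today; this is the ask
      matchLabels:
        app: elasticsearch
```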