mayadata-io / cstorpoolauto

Data Agility Operator for cstor pool
Apache License 2.0

Migration of volume replicas from one pool to another pool #100

Open mittachaitu opened 4 years ago

mittachaitu commented 4 years ago

Description: I have a CStorPoolCluster (CSPC) resource created on top of 3 nodes (which in turn creates CSPI resources), and I deployed CSI volumes on top of the above CSPC.
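
For context, a minimal sketch of what such a CSPC might look like (apiVersion and field names may vary with the OpenEBS version in use, and the block device name is illustrative):

apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-sparse-cspc
  namespace: openebs
spec:
  pools:
  - nodeSelector:
      kubernetes.io/hostname: gke-sai-test-cluster-pool-1-8d7defe8-8471
    dataRaidGroups:
    - blockDevices:
      - blockDeviceName: sparse-blockdevice-1   # illustrative block device name
    poolConfig:
      dataRaidGroupType: stripe
  # ...plus one similar pool entry for each of the other two nodes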

Now my setup looks like:

kubectl get nodes
NAME                                        STATUS   ROLES    AGE    VERSION
gke-sai-test-cluster-pool-1-8d7defe8-37rr   Ready    <none>   102m   v1.14.8-gke.33
gke-sai-test-cluster-pool-1-8d7defe8-8471   Ready    <none>   102m   v1.14.8-gke.33
gke-sai-test-cluster-pool-1-8d7defe8-8nt0   Ready    <none>   102m   v1.14.8-gke.33
gke-sai-test-cluster-pool-1-8d7defe8-chws   Ready    <none>   102m   v1.14.8-gke.33
 kubectl get cspi -n openebs
NAME                     HOSTNAME                                    ALLOCATED   FREE    CAPACITY   STATUS   AGE
cstor-sparse-cspc-kdrs   gke-sai-test-cluster-pool-1-8d7defe8-8471   154K        9.94G   9.94G      ONLINE   13m
cstor-sparse-cspc-nb99   gke-sai-test-cluster-pool-1-8d7defe8-8nt0   158K        9.94G   9.94G      ONLINE   13m
cstor-sparse-cspc-twjx   gke-sai-test-cluster-pool-1-8d7defe8-chws   312K        9.94G   9.94G      ONLINE   13m
kubectl get cvr -n openebs
NAME                                                              USED   ALLOCATED   STATUS    AGE
pvc-3f83cac1-5f80-11ea-85dd-42010a800121-cstor-sparse-cspc-kdrs   6K     6K          Healthy   105s
pvc-3f83cac1-5f80-11ea-85dd-42010a800121-cstor-sparse-cspc-nb99   6K     6K          Healthy   105s
pvc-3f83cac1-5f80-11ea-85dd-42010a800121-cstor-sparse-cspc-twjx   6K     6K          Healthy   105s

Now I performed a horizontal scale-up of the CSPC, which created a CSPI on the new node (the corresponding CSPC change is sketched after the listing below):

 kubectl get cspi -n openebs
NAME                     HOSTNAME                                    ALLOCATED   FREE    CAPACITY   STATUS   AGE
cstor-sparse-cspc-kdrs   gke-sai-test-cluster-pool-1-8d7defe8-8471   161K        9.94G   9.94G      ONLINE   15m
cstor-sparse-cspc-kmt7   gke-sai-test-cluster-pool-1-8d7defe8-37rr   50K         9.94G   9.94G      ONLINE   42s
cstor-sparse-cspc-nb99   gke-sai-test-cluster-pool-1-8d7defe8-8nt0   161K        9.94G   9.94G      ONLINE   15m
cstor-sparse-cspc-twjx   gke-sai-test-cluster-pool-1-8d7defe8-chws   161K        9.94G   9.94G      ONLINE   15m
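
For reference, the horizontal scale-up above amounts to appending one more pool entry to the CSPC's spec.pools list, roughly like this (a sketch; the block device name is illustrative and field names may vary with the OpenEBS version):

  # appended under spec.pools of cstor-sparse-cspc
  - nodeSelector:
      kubernetes.io/hostname: gke-sai-test-cluster-pool-1-8d7defe8-37rr
    dataRaidGroups:
    - blockDevices:
      - blockDeviceName: sparse-blockdevice-4   # illustrative block device name
    poolConfig:
      dataRaidGroupType: stripe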

Scenario: I want to remove the node gke-sai-test-cluster-pool-1-8d7defe8-chws from my cluster. I will horizontally scale down the pool (i.e. remove that node's pool spec from the CSPC), but before scaling down the pool on that node I want to move the volume replicas on that pool to a different pool (the newly created one, i.e. cstor-sparse-cspc-kmt7). How can I achieve that without many manual steps?

I want the volume replicas to end up on the pools below:

kubectl get cvr -n openebs
NAME                                                              USED   ALLOCATED   STATUS    AGE
pvc-3f83cac1-5f80-11ea-85dd-42010a800121-cstor-sparse-cspc-kdrs   6K     6K          Healthy   105s
pvc-3f83cac1-5f80-11ea-85dd-42010a800121-cstor-sparse-cspc-nb99   6K     6K          Healthy   105s
pvc-3f83cac1-5f80-11ea-85dd-42010a800121-cstor-sparse-cspc-kmt7   6K     6K          Healthy   105s

In the above, the volume replica was migrated from pvc-3f83cac1-5f80-11ea-85dd-42010a800121-cstor-sparse-cspc-twjx to pvc-3f83cac1-5f80-11ea-85dd-42010a800121-cstor-sparse-cspc-kmt7.
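
For reference, the manual flow I would otherwise have to follow per volume is roughly this (a sketch only, assuming the replica scale-up/scale-down driven through the volume's CStorVolumeConfig and its spec.policy.replicaPoolInfo list; resource and field names may differ across OpenEBS versions):

# 1. Add the new pool to the volume's CStorVolumeConfig (named after the PV)
#    so that a CVR gets created on cstor-sparse-cspc-kmt7:
kubectl edit cvc pvc-3f83cac1-5f80-11ea-85dd-42010a800121 -n openebs
#    under spec.policy.replicaPoolInfo append:
#      - poolName: cstor-sparse-cspc-kmt7

# 2. Wait until the new replica becomes Healthy:
kubectl get cvr -n openebs

# 3. Remove the old pool entry (cstor-sparse-cspc-twjx) from the same
#    replicaPoolInfo list so that its replica is scaled down:
kubectl edit cvc pvc-3f83cac1-5f80-11ea-85dd-42010a800121 -n openebs

The ask is to avoid repeating these per-volume manual edits.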

AmitKumarDas commented 4 years ago

@mittachaitu can you provide the exact steps as well? In addition, can you please use some readable / dummy names in your steps? It becomes difficult to understand when the actual volume names with UIDs and so on are mentioned.

mittachaitu commented 4 years ago

I have the following pool configuration on a 4-node cluster:

kubectl get cspi -n openebs
NAME    HOSTNAME   ALLOCATED   FREE    CAPACITY   STATUS   AGE
pool1   node-1     161K        9.94G   9.94G      ONLINE   15m
pool2   node-2     50K         9.94G   9.94G      ONLINE   42s
pool3   node-3     161K        9.94G   9.94G      ONLINE   15m
pool4   node-4     161K        9.94G   9.94G      ONLINE   15m

I created a volume with three replicas on top of the above pools (the kind of StorageClass used to request it is sketched after the listing below):

kubectl get cvr -n openebs
NAME         USED   ALLOCATED   STATUS    AGE
vol1-pool1   6K     6K          Healthy   105s
vol1-pool2   6K     6K          Healthy   105s
vol1-pool3   6K     6K          Healthy   105s
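
A sketch of the kind of StorageClass that requests three replicas on this CSPC (names here are dummy, and parameter names may differ with the OpenEBS / CSI driver version):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cstor-csi-sc            # dummy name
provisioner: cstor.csi.openebs.io
allowVolumeExpansion: true
parameters:
  cas-type: cstor
  cstorPoolCluster: cspc-demo   # dummy CSPC that owns pool1..pool4
  replicaCount: "3"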

Now I am scaling down my cluster nodes from 4 to 3. To achieve that I am bringing down node-3, so the data on pool3 should migrate to pool4 (node-4) before node-3 is scaled down (the desired end state is sketched below).
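
With dummy names, the desired end state for vol1 would be roughly this in the volume's CStorVolumeConfig (a sketch, again assuming the spec.policy.replicaPoolInfo list mentioned above):

spec:
  policy:
    replicaPoolInfo:
    - poolName: pool1
    - poolName: pool2
    - poolName: pool4   # takes the place of pool3 once its replica is migrated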

OpenEBS supports achieving this via manual steps, as per the related PR.