
Failing to Create cStor storage pools #3498

Closed: odidev closed this issue 2 years ago

odidev commented 2 years ago

Hi Team,

I am facing issues while creating cStor storage pools on the AMD64 platform on an AWS instance. I used this document to install cStor and create the cStor storage pool, and I am getting an error at this step, while applying the cspi.yaml file. I have also tried the troubleshooting steps from here.
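For reference, a CSPC manifest matching the spec shown in the describe output below would look roughly like this (a sketch reconstructed from that output, so the blockdevice name and hostname are the ones reported there; field names per the cstor.openebs.io/v1 API):

kubectl apply -f - <<EOF
apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-disk-pool
  namespace: openebs
spec:
  pools:
    - nodeSelector:
        kubernetes.io/hostname: "minikube"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "blockdevice-064a6195a45b32f2f075f88a7e0e2178"
      poolConfig:
        dataRaidGroupType: "stripe"
EOF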

Below are the detailed logs of cspc and cspi.

xyz@ip-xyz:~$ kubectl describe cspc cstor-disk-pool -n openebs
Name:         cstor-disk-pool
Namespace:    openebs
Labels:       <none>
Annotations:  <none>
API Version:  cstor.openebs.io/v1
Kind:         CStorPoolCluster
Metadata:
  Creation Timestamp:  2021-12-24T08:10:00Z
  Finalizers:
    cstorpoolcluster.openebs.io/finalizer
  Generation:  6
  Managed Fields:
    API Version:  cstor.openebs.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:pools:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2021-12-24T08:10:00Z
    API Version:  cstor.openebs.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"cstorpoolcluster.openebs.io/finalizer":
      f:status:
        .:
        f:conditions:
        f:desiredInstances:
        f:provisionedInstances:
      f:versionDetails:
        .:
        f:desired:
        f:status:
          .:
          f:current:
          f:dependentsUpgraded:
          f:lastUpdateTime:
    Manager:         cspc-operator
    Operation:       Update
    Time:            2021-12-24T08:10:03Z
  Resource Version:  1054
  UID:               3a11ffa7-8a28-4d44-908b-7dddd3dcc23a
Spec:
  Pools:
    Data Raid Groups:
      Block Devices:
        Block Device Name:  blockdevice-064a6195a45b32f2f075f88a7e0e2178
    Node Selector:
      kubernetes.io/hostname:  minikube
    Pool Config:
      Data Raid Group Type:  stripe
Status:
  Conditions:
    Last Transition Time:  2021-12-24T08:10:45Z
    Last Update Time:      2021-12-24T08:10:45Z
    Message:               Pool manager(s) have minimum available pod
    Reason:                MinimumPoolManagersAvailable
    Status:                True
    Type:                  PoolManagerAvailable
  Desired Instances:       1
  Provisioned Instances:   1
Version Details:
  Desired:  3.0.0
  Status:
    Current:              3.0.0
    Dependents Upgraded:  true
    Last Update Time:     <nil>
Events:
  Type     Reason  Age   From             Message
  ----     ------  ----  ----             -------
  Warning  Create  74s   cspc-controller  Pool provisioning failed for 1/1
  Normal   Create  71s   cspc-controller  Pool Provisioned 1/1

CSPI

subham@ip-172-31-9-230:~$ kubectl describe cspi  cstor-disk-pool-jj2f -n openebs
Name:         cstor-disk-pool-jj2f
Namespace:    openebs
Labels:       kubernetes.io/hostname=minikube
              openebs.io/cas-type=cstor
              openebs.io/cstor-pool-cluster=cstor-disk-pool
              openebs.io/version=3.0.0
Annotations:  <none>
API Version:  cstor.openebs.io/v1
Kind:         CStorPoolInstance
Metadata:
  Creation Timestamp:  2021-12-24T08:10:03Z
  Finalizers:
    cstorpoolcluster.openebs.io/finalizer
    openebs.io/pool-protection
  Generation:  1
  Managed Fields:
    API Version:  cstor.openebs.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"cstorpoolcluster.openebs.io/finalizer":
        f:labels:
          .:
          f:kubernetes.io/hostname:
          f:openebs.io/cas-type:
          f:openebs.io/cstor-pool-cluster:
          f:openebs.io/version:
        f:ownerReferences:
          .:
          k:{"uid":"3a11ffa7-8a28-4d44-908b-7dddd3dcc23a"}:
      f:spec:
        .:
        f:dataRaidGroups:
        f:hostName:
        f:nodeSelector:
          .:
          f:kubernetes.io/hostname:
        f:poolConfig:
          .:
          f:dataRaidGroupType:
          f:priorityClassName:
          f:roThresholdLimit:
      f:status:
        .:
        f:capacity:
          .:
          f:free:
          f:total:
          f:used:
          f:zfs:
            .:
            f:logicalUsed:
        f:healthyReplicas:
        f:provisionedReplicas:
        f:readOnly:
      f:versionDetails:
        .:
        f:desired:
        f:status:
          .:
          f:current:
          f:dependentsUpgraded:
          f:lastUpdateTime:
    Manager:      cspc-operator
    Operation:    Update
    Time:         2021-12-24T08:10:03Z
    API Version:  cstor.openebs.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          v:"openebs.io/pool-protection":
    Manager:    pool-manager
    Operation:  Update
    Time:       2021-12-24T08:10:32Z
  Owner References:
    API Version:           cstor.openebs.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  CStorPoolCluster
    Name:                  cstor-disk-pool
    UID:                   3a11ffa7-8a28-4d44-908b-7dddd3dcc23a
  Resource Version:        1032
  UID:                     1fd64a93-ad1d-4ce2-82a2-09a6bc58bccb
Spec:
  Data Raid Groups:
    Block Devices:
      Block Device Name:  blockdevice-064a6195a45b32f2f075f88a7e0e2178
  Host Name:              minikube
  Node Selector:
    kubernetes.io/hostname:  minikube
  Pool Config:
    Data Raid Group Type:  stripe
    Priority Class Name:
    Ro Threshold Limit:    85
Status:
  Capacity:
    Free:   0
    Total:  0
    Used:   0
    Zfs:
      Logical Used:      0
  Healthy Replicas:      0
  Provisioned Replicas:  0
  Read Only:             false
Version Details:
  Desired:  3.0.0
  Status:
    Current:              3.0.0
    Dependents Upgraded:  true
    Last Update Time:     <nil>
Events:
  Type     Reason      Age                  From               Message
  ----     ------      ----                 ----               -------
  Warning  FailCreate  10s (x9 over 3m40s)  CStorPoolInstance  Failed to create pool due to 'Failed to create pool {cstor-3a11ffa7-8a28-4d44-908b-7dddd3dcc23a} : Failed to create pool.. invalid vdev specification
use '-f' to override the following errors:
/dev/xvda1 contains a filesystem of type 'ext4'
 .. exit status 1'

Could you please provide some pointers on this?

mittachaitu commented 2 years ago

Warning FailCreate 10s (x9 over 3m40s) CStorPoolInstance Failed to create pool due to 'Failed to create pool {cstor-3a11ffa7-8a28-4d44-908b-7dddd3dcc23a} : Failed to create pool.. invalid vdev specification use '-f' to override the following errors: /dev/xvda1 contains a filesystem of type 'ext4' .. exit status 1'

For a cStor pool to be created, the underlying block device shouldn't contain any filesystem, and blockdevice-064a6195a45b32f2f075f88a7e0e2178 contains an ext4 filesystem. Can you delete the CSPC, wipe the filesystem, and then create the pool again?
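A rough sketch of that cleanup (the CSPC name is the one above; the device path is whatever actually backs the blockdevice on your node, shown here as /dev/xvda1 from the error message):

# delete the CSPC so the pool manager stops retrying against the device
kubectl delete cspc cstor-disk-pool -n openebs

# on the node: wipe all filesystem signatures from the backing device.
# WARNING: destructive, and on many AWS instances /dev/xvda1 is the root
# partition; only wipe a device that is genuinely spare.
sudo wipefs -a /dev/xvda1

# wait for NDM to show the blockdevice as Unclaimed/Active, then re-apply
kubectl get bd -n openebs
kubectl apply -f cspc.yaml   # your CSPC manifest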

odidev commented 2 years ago

@mittachaitu I've tried formatting the block device and using it again, but this time I'm getting a different error: "must be a block device or a regular file". Please find the logs below.

subham@master-node:~$ kubectl describe cspi cstor-disk-pool-2gc5 -n openebs
Name:         cstor-disk-pool-2gc5
Namespace:    openebs
Labels:       kubernetes.io/hostname=minikube
              openebs.io/cas-type=cstor
              openebs.io/cstor-pool-cluster=cstor-disk-pool
              openebs.io/version=3.1.0
Annotations:  <none>
API Version:  cstor.openebs.io/v1
Kind:         CStorPoolInstance
Metadata:
  Creation Timestamp:  2022-01-24T06:40:18Z
  Finalizers:
    cstorpoolcluster.openebs.io/finalizer
    openebs.io/pool-protection
  Generation:  1
  Managed Fields:
    API Version:  cstor.openebs.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"cstorpoolcluster.openebs.io/finalizer":
        f:labels:
          .:
          f:kubernetes.io/hostname:
          f:openebs.io/cas-type:
          f:openebs.io/cstor-pool-cluster:
          f:openebs.io/version:
        f:ownerReferences:
          .:
          k:{"uid":"a88f1619-433f-419f-a44c-f6a79aa96c6a"}:
      f:spec:
        .:
        f:dataRaidGroups:
        f:hostName:
        f:nodeSelector:
          .:
          f:kubernetes.io/hostname:
        f:poolConfig:
          .:
          f:dataRaidGroupType:
          f:priorityClassName:
          f:roThresholdLimit:
      f:status:
        .:
        f:capacity:
          .:
          f:free:
          f:total:
          f:used:
          f:zfs:
            .:
            f:logicalUsed:
        f:healthyReplicas:
        f:provisionedReplicas:
        f:readOnly:
      f:versionDetails:
        .:
        f:desired:
        f:status:
          .:
          f:current:
          f:dependentsUpgraded:
          f:lastUpdateTime:
    Manager:      cspc-operator
    Operation:    Update
    Time:         2022-01-24T06:40:18Z
    API Version:  cstor.openebs.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          v:"openebs.io/pool-protection":
    Manager:    pool-manager
    Operation:  Update
    Time:       2022-01-24T06:40:43Z
  Owner References:
    API Version:           cstor.openebs.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  CStorPoolCluster
    Name:                  cstor-disk-pool
    UID:                   a88f1619-433f-419f-a44c-f6a79aa96c6a
  Resource Version:        1592
  UID:                     9599f357-2fc3-46c1-920d-301351ec2a52
Spec:
  Data Raid Groups:
    Block Devices:
      Block Device Name:  blockdevice-75ef95a18dc6980cb152d7e604b2dd15
  Host Name:              minikube
  Node Selector:
    kubernetes.io/hostname:  minikube
  Pool Config:
    Data Raid Group Type:  stripe
    Priority Class Name:
    Ro Threshold Limit:    85
Status:
  Capacity:
    Free:   0
    Total:  0
    Used:   0
    Zfs:
      Logical Used:      0
  Healthy Replicas:      0
  Provisioned Replicas:  0
  Read Only:             false
Version Details:
  Desired:  3.1.0
  Status:
    Current:              3.1.0
    Dependents Upgraded:  true
    Last Update Time:     <nil>
Events:
  Type     Reason      Age                From               Message
  ----     ------      ----               ----               -------
  Warning  FailCreate  22m (x5 over 23m)  CStorPoolInstance  Failed to create pool due to 'Failed to create pool {cstor-a88f1619-433f-419f-a44c-f6a79aa96c6a} : Failed to create pool.. cannot resolve path '/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol063b9d63f9e554c71-part15'
 .. exit status 1'
  Warning  FailCreate  15m (x13 over 21m)  CStorPoolInstance  Failed to create pool due to 'Failed to create pool {cstor-a88f1619-433f-419f-a44c-f6a79aa96c6a} : Failed to create pool.. cannot use '/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol063b9d63f9e554c71-part15': must be a block device or regular file
 .. exit status 1'
  Warning  FailCreate  3m22s (x25 over 15m)  CStorPoolInstance  Failed to create pool due to 'failed to verify if pool is importable: exit status 1'
subham@master-node:~$ lsblk
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0          7:0    0   22M  1 loop /snap/amazon-ssm-agent/4047
loop1          7:1    0   49M  1 loop /snap/core18/2252
loop2          7:2    0   49M  1 loop /snap/core18/2289
loop3          7:3    0 57.4M  1 loop /snap/core20/1244
loop4          7:4    0 57.5M  1 loop /snap/core20/1274
loop6          7:6    0 60.4M  1 loop /snap/lxd/21544
loop7          7:7    0  8.8M  1 loop /snap/kubectl/2293
loop8          7:8    0 60.7M  1 loop /snap/lxd/21843
loop9          7:9    0 37.5M  1 loop /snap/snapd/14296
loop10         7:10   0 36.5M  1 loop /snap/snapd/14063
loop11         7:11   0  8.8M  1 loop /snap/kubectl/2304
nvme0n1      259:0    0  100G  0 disk
├─nvme0n1p1  259:1    0 99.9G  0 part /
└─nvme0n1p15 259:2    0   99M  0 part /mnt/storage1
subham@master-node:~$
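For reference, the FailCreate events above point at .../nvme-Amazon_Elastic_Block_Store_vol063b9d63f9e554c71-part15, which in this lsblk output is nvme0n1p15, the 99M partition mounted at /mnt/storage1; a mounted partition can't be handed to zpool, so that mount may itself be the problem. One way to cross-check what NDM recorded for the blockdevice (bd is the short name for the blockdevices CRD):

kubectl get bd -n openebs
kubectl describe bd blockdevice-75ef95a18dc6980cb152d7e604b2dd15 -n openebs

# on the node: check whether the by-id symlink the pool manager uses exists
ls -l /dev/disk/by-id/ | grep part15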

Could you please provide some pointers on this?

github-actions[bot] commented 2 years ago

Issues go stale after 90d of inactivity. Please comment or re-open the issue if you are still interested in getting this issue fixed.