odidev closed this issue 2 years ago
Warning FailCreate 10s (x9 over 3m40s) CStorPoolInstance Failed to create pool due to 'Failed to create pool {cstor-3a11ffa7-8a28-4d44-908b-7dddd3dcc23a} : Failed to create pool.. invalid vdev specification use '-f' to override the following errors: /dev/xvda1 contains a filesystem of type 'ext4' .. exit status 1'
To create a cStor pool, the underlying block device must not contain any filesystem. blockdevice-064a6195a45b32f2f075f88a7e0e2178
contains an ext4 filesystem. Can you delete the CSPC, wipe the filesystem, and then create the pool again?
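In case a concrete sequence helps, a minimal sketch of the suggestion above might look like the following (assuming the CSPC is named cstor-disk-pool in the openebs namespace, as in the logs later in this thread, and that /dev/xvda1 is the device named in the error; adjust both to your setup):

```shell
# Delete the CSPC so the pool manager stops retrying against the dirty device
kubectl delete cspc cstor-disk-pool -n openebs

# Wipe all filesystem signatures from the device (this destroys any data on it!)
sudo wipefs -a /dev/xvda1

# Re-apply the pool spec to create the pool on the now-clean device
kubectl apply -f cspc.yaml
```

wipefs -a removes the ext4 signature that zpool create is refusing to overwrite without -f.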
@mittachaitu I've tried formatting the block device and using it again, but this time I'm getting a different error: "must be a block device or a regular file". Please find the logs below.
subham@master-node:~$ kubectl describe cspi cstor-disk-pool-2gc5 -n openebs
Name: cstor-disk-pool-2gc5
Namespace: openebs
Labels: kubernetes.io/hostname=minikube
openebs.io/cas-type=cstor
openebs.io/cstor-pool-cluster=cstor-disk-pool
openebs.io/version=3.1.0
Annotations: <none>
API Version: cstor.openebs.io/v1
Kind: CStorPoolInstance
Metadata:
Creation Timestamp: 2022-01-24T06:40:18Z
Finalizers:
cstorpoolcluster.openebs.io/finalizer
openebs.io/pool-protection
Generation: 1
Managed Fields:
API Version: cstor.openebs.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.:
v:"cstorpoolcluster.openebs.io/finalizer":
f:labels:
.:
f:kubernetes.io/hostname:
f:openebs.io/cas-type:
f:openebs.io/cstor-pool-cluster:
f:openebs.io/version:
f:ownerReferences:
.:
k:{"uid":"a88f1619-433f-419f-a44c-f6a79aa96c6a"}:
f:spec:
.:
f:dataRaidGroups:
f:hostName:
f:nodeSelector:
.:
f:kubernetes.io/hostname:
f:poolConfig:
.:
f:dataRaidGroupType:
f:priorityClassName:
f:roThresholdLimit:
f:status:
.:
f:capacity:
.:
f:free:
f:total:
f:used:
f:zfs:
.:
f:logicalUsed:
f:healthyReplicas:
f:provisionedReplicas:
f:readOnly:
f:versionDetails:
.:
f:desired:
f:status:
.:
f:current:
f:dependentsUpgraded:
f:lastUpdateTime:
Manager: cspc-operator
Operation: Update
Time: 2022-01-24T06:40:18Z
API Version: cstor.openebs.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
v:"openebs.io/pool-protection":
Manager: pool-manager
Operation: Update
Time: 2022-01-24T06:40:43Z
Owner References:
API Version: cstor.openebs.io/v1
Block Owner Deletion: true
Controller: true
Kind: CStorPoolCluster
Name: cstor-disk-pool
UID: a88f1619-433f-419f-a44c-f6a79aa96c6a
Resource Version: 1592
UID: 9599f357-2fc3-46c1-920d-301351ec2a52
Spec:
Data Raid Groups:
Block Devices:
Block Device Name: blockdevice-75ef95a18dc6980cb152d7e604b2dd15
Host Name: minikube
Node Selector:
kubernetes.io/hostname: minikube
Pool Config:
Data Raid Group Type: stripe
Priority Class Name:
Ro Threshold Limit: 85
Status:
Capacity:
Free: 0
Total: 0
Used: 0
Zfs:
Logical Used: 0
Healthy Replicas: 0
Provisioned Replicas: 0
Read Only: false
Version Details:
Desired: 3.1.0
Status:
Current: 3.1.0
Dependents Upgraded: true
Last Update Time: <nil>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailCreate 22m (x5 over 23m) CStorPoolInstance Failed to create pool due to 'Failed to create pool {cstor-a88f1619-433f-419f-a44c-f6a79aa96c6a} : Failed to create pool.. cannot resolve path '/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol063b9d63f9e554c71-part15'
.. exit status 1'
Warning FailCreate 15m (x13 over 21m) CStorPoolInstance Failed to create pool due to 'Failed to create pool {cstor-a88f1619-433f-419f-a44c-f6a79aa96c6a} : Failed to create pool.. cannot use '/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol063b9d63f9e554c71-part15': must be a block device or regular file
.. exit status 1'
Warning FailCreate 3m22s (x25 over 15m) CStorPoolInstance Failed to create pool due to 'failed to verify if pool is importable: exit status 1'
subham@master-node:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 22M 1 loop /snap/amazon-ssm-agent/4047
loop1 7:1 0 49M 1 loop /snap/core18/2252
loop2 7:2 0 49M 1 loop /snap/core18/2289
loop3 7:3 0 57.4M 1 loop /snap/core20/1244
loop4 7:4 0 57.5M 1 loop /snap/core20/1274
loop6 7:6 0 60.4M 1 loop /snap/lxd/21544
loop7 7:7 0 8.8M 1 loop /snap/kubectl/2293
loop8 7:8 0 60.7M 1 loop /snap/lxd/21843
loop9 7:9 0 37.5M 1 loop /snap/snapd/14296
loop10 7:10 0 36.5M 1 loop /snap/snapd/14063
loop11 7:11 0 8.8M 1 loop /snap/kubectl/2304
nvme0n1 259:0 0 100G 0 disk
├─nvme0n1p1 259:1 0 99.9G 0 part /
└─nvme0n1p15 259:2 0 99M 0 part /mnt/storage1
subham@master-node:~$
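One thing the lsblk output above shows is that nvme0n1p15 is mounted at /mnt/storage1, and a mounted partition cannot be claimed for a zpool. A hedged sketch of how to free it up (this wipes the partition, so only do this if its data is disposable; the by-id path is copied from the events above):

```shell
# Unmount the partition so the device is no longer busy
sudo umount /mnt/storage1

# Remove any filesystem signature left over from formatting it earlier
sudo wipefs -a /dev/nvme0n1p15

# Check that the by-id symlink the pool manager tried to use actually resolves
ls -l /dev/disk/by-id/ | grep nvme
```

If the by-id symlink is missing or stale, retriggering udev (sudo udevadm trigger) may recreate it.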
Can you please provide some pointers on the same?
Issues go stale after 90d of inactivity. Please comment or re-open the issue if you are still interested in getting this issue fixed.
Hi Team,
I am facing issues while creating cStor storage pools on the AMD64 platform on an AWS instance. I used this document to install cStor and create a cStor storage pool. I am getting an error in the step where I apply the
cspi.yaml
file. I have also tried to troubleshoot the problem from here. Below are the detailed logs of the CSPC and CSPI.
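For reference, the logs referred to here can be gathered with commands along these lines (the openebs namespace is assumed; the label selector and container name in the last command are assumptions, so adjust them to your deployment):

```shell
# Overall state of the pool cluster and its per-node instances
kubectl get cspc -n openebs
kubectl get cspi -n openebs

# Detailed events for a specific pool instance (substitute the real name)
kubectl describe cspi <cspi-name> -n openebs

# Pool-manager pod logs often show the underlying zpool error
kubectl logs -n openebs -l app=cstor-pool -c cstor-pool-mgmt
```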
CSPI
Can you please provide some pointers on the same?