Hi @Vishal-Gaur You need to have at least 3 nodes, each with a separate, unformatted disk. Also, please take a look here: https://github.com/gluster/gluster-kubernetes/issues/411 and https://github.com/heketi/heketi/issues/1046
Or you can create a Gluster volume that does not have any replicas (the default is replica-3, which needs 3 storage servers). To do so, run the volume create command like this:
heketi-cli volume create --durability=none --size=1
Obviously an environment with a single storage server is not highly available or fault tolerant. Use such an environment for testing only :-)
@nixpanic Thanks for that command, it's working. I only have that one server because it is for testing purposes.
But when I do dynamic provisioning using Heketi, the PVC goes into the Pending state. Do you have any idea about that?
The default StorageClass will try to create volumes with a replica-3 configuration. Because you only have one storage server, this does not work and the PVC will never get bound. You will need to configure a StorageClass with the volumetype set to 'none', and use that. See https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs for a few more details.
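For illustration, a minimal sketch of such a StorageClass, reusing the Heketi endpoint and secret values that appear in the files below (the class name is hypothetical); volumetype: "none" is the relevant line:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gluster-heketi-single   # hypothetical name
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://Heketi:8080"      # replace with your Heketi endpoint
  restauthenabled: "true"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "default"
  volumetype: "none"                 # distribute-only volume, no replicas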
This is my StorageClass file:
# cat gluster-heketi-external-storage-class.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gluster-heketi-external
  namespace: default
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://Heketi:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "default"
This is my PVC file:
# cat glusterfs-pvc-storageclass.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-dyn-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
  storageClassName: gluster-heketi-external
But there is only one storage server here, so should I use volumetype: "replicate:1" instead? Please give me some idea about it.
@Vishal-Gaur You have not made the change @nixpanic suggested. Please read the link he provided and update your StorageClass parameters.
@jarrpa & @nixpanic I have changed the config as you suggested, and here is the StorageClass file:
# cat gluster-heketi-external-storage-class.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gluster-heketi-external
  namespace: test
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://Heketi:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "default"
  volumetype: "none"
Is this correct? The PVC is still in the Pending state.
That looks good, as long as you replaced the rest- and secret-values with the ones for your environment.
You can check what Heketi receives by going through its logs. When a PVC is in the 'pending' state, Kubernetes often retries creating the Gluster volume, and Heketi will be quite verbose about that.
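For example, a minimal sketch of checking those logs, assuming Heketi runs as a pod in the default namespace (replace the placeholder with your actual pod name):

# List pods to find the Heketi pod name
kubectl get pods -n default | grep heketi
# Follow the Heketi log while the PVC is retried; failed volume-create
# requests and their error messages show up here
kubectl logs -f <heketi-pod-name> -n default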
@Vishal-Gaur did you get it working? If so, we can close this issue. Thanks!
@Vishal-Gaur you still have not made the suggested change: following https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs, you should add
volumetype: "replicate:1"
to your StorageClass. Without this, Kubernetes will try to create replica-3 volumes, which fails by design in a 1-node "cluster" ...
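Applied to the StorageClass you posted above, that would look something like this sketch (all other values taken from your file):

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gluster-heketi-external
  namespace: test
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://Heketi:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "default"
  volumetype: "replicate:1"   # single-replica volumes for a 1-node cluster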
I am facing a similar issue. The PVC stays in the Pending state forever. I have set up Gluster on a 3-node cluster using the following topology file:
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "worker0"
              ],
              "storage": [
                "172.18.1.20"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/nbd3"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "worker1"
              ],
              "storage": [
                "172.18.1.21"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/nbd3"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "worker2"
              ],
              "storage": [
                "172.18.1.22"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/nbd2"
          ]
        }
      ]
    }
  ]
}
and then, from the deploy dir, ran ./gk-deploy -n glusterfs -g. The command ran successfully without any errors and created the required Kubernetes Gluster resources:
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ds/glusterfs 3 3 3 3 3 storagenode=glusterfs 1h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/heketi 1 1 1 1 57m
NAME DESIRED CURRENT READY AGE
rs/heketi-64654b74d8 1 1 1 57m
NAME AGE
ds/glusterfs 1h
NAME AGE
deploy/heketi 57m
NAME AGE
rs/heketi-64654b74d8 57m
NAME READY STATUS RESTARTS AGE
po/glusterfs-lfrfj 1/1 Running 0 1h
po/glusterfs-mlcfd 1/1 Running 0 1h
po/glusterfs-xkhmz 1/1 Running 0 1h
po/heketi-64654b74d8-vw28f 1/1 Running 0 57m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/heketi ClusterIP 10.233.21.169 <none> 8080/TCP 57m
svc/heketi-storage-endpoints ClusterIP 10.233.31.223 <none> 1/TCP 58m
After that, I created the following StorageClass:
---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.233.105.146:8080"
  restuser: "admin"
  volumetype: "replicate:2"
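For reference, the claim gluster1 that appears below was presumably something like the following sketch; the annotation matches the one shown in the kubectl describe output further down, while the size and access mode are assumptions:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1
  namespace: glusterfs
  annotations:
    # older-style class selection, as seen in the describe output below
    volume.beta.kubernetes.io/storage-class: glusterfs-storage
spec:
  accessModes:
    - ReadWriteMany   # assumed access mode
  resources:
    requests:
      storage: 1Gi    # assumed size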
Here are my gluster-kubernetes setup details:
kubectl exec -i heketi-64654b74d8-vw28f heketi-cli topology info
Cluster Id: f20225646098171cf06e1be15e79733c

    File: true
    Block: true

    Volumes:

        Name: vol_1ea3c12f52d1ec356cc4935029cfc083
        Size: 1
        Id: 1ea3c12f52d1ec356cc4935029cfc083
        Cluster Id: f20225646098171cf06e1be15e79733c
        Mount: 172.18.1.22:vol_1ea3c12f52d1ec356cc4935029cfc083
        Mount Options: backup-volfile-servers=172.18.1.20,172.18.1.21
        Durability Type: replicate
        Replica: 2
        Snapshot: Disabled

            Bricks:
                Id: 54b4b6299c3fd67f259e3a49dfdb80c4
                Path: /var/lib/heketi/mounts/vg_08e5e0a99e4c31e1b5cdb5d260ed5606/brick_54b4b6299c3fd67f259e3a49dfdb80c4/brick
                Size (GiB): 1
                Node: 5cf2cb37b8bdc85c05263ed0ffc919bf
                Device: 08e5e0a99e4c31e1b5cdb5d260ed5606

                Id: 6c58a9efce790b75bac064de24537bef
                Path: /var/lib/heketi/mounts/vg_3e29444549eb2a0e0d3befd2e6ae01f0/brick_6c58a9efce790b75bac064de24537bef/brick
                Size (GiB): 1
                Node: 823437e8d18d5731cc36638ae7b34405
                Device: 3e29444549eb2a0e0d3befd2e6ae01f0

        Name: vol_4f1bdc6a95d90265545e77d76b55b7fc
        Size: 3
        Id: 4f1bdc6a95d90265545e77d76b55b7fc
        Cluster Id: f20225646098171cf06e1be15e79733c
        Mount: 172.18.1.22:vol_4f1bdc6a95d90265545e77d76b55b7fc
        Mount Options: backup-volfile-servers=172.18.1.20,172.18.1.21
        Durability Type: replicate
        Replica: 2
        Snapshot: Disabled

            Bricks:
                Id: 1ad437790e631ff12b633de0e74623cd
                Path: /var/lib/heketi/mounts/vg_08e5e0a99e4c31e1b5cdb5d260ed5606/brick_1ad437790e631ff12b633de0e74623cd/brick
                Size (GiB): 3
                Node: 5cf2cb37b8bdc85c05263ed0ffc919bf
                Device: 08e5e0a99e4c31e1b5cdb5d260ed5606

                Id: 67cf529895b793a990c74ef58007dfbd
                Path: /var/lib/heketi/mounts/vg_3e29444549eb2a0e0d3befd2e6ae01f0/brick_67cf529895b793a990c74ef58007dfbd/brick
                Size (GiB): 3
                Node: 823437e8d18d5731cc36638ae7b34405
                Device: 3e29444549eb2a0e0d3befd2e6ae01f0

        Name: vol_517208fd53df20903e635c21ab8509c8
        Size: 1
        Id: 517208fd53df20903e635c21ab8509c8
        Cluster Id: f20225646098171cf06e1be15e79733c
        Mount: 172.18.1.22:vol_517208fd53df20903e635c21ab8509c8
        Mount Options: backup-volfile-servers=172.18.1.20,172.18.1.21
        Durability Type: replicate
        Replica: 2
        Snapshot: Disabled

            Bricks:
                Id: 268f5a829cf4d9897b36727508867a54
                Path: /var/lib/heketi/mounts/vg_3e29444549eb2a0e0d3befd2e6ae01f0/brick_268f5a829cf4d9897b36727508867a54/brick
                Size (GiB): 1
                Node: 823437e8d18d5731cc36638ae7b34405
                Device: 3e29444549eb2a0e0d3befd2e6ae01f0

                Id: fc0242d2be3241c8386d095e81d1eb9d
                Path: /var/lib/heketi/mounts/vg_08e5e0a99e4c31e1b5cdb5d260ed5606/brick_fc0242d2be3241c8386d095e81d1eb9d/brick
                Size (GiB): 1
                Node: 5cf2cb37b8bdc85c05263ed0ffc919bf
                Device: 08e5e0a99e4c31e1b5cdb5d260ed5606

        Name: heketidbstorage
        Size: 2
        Id: 5423f412ae39e29833b986aba77956e9
        Cluster Id: f20225646098171cf06e1be15e79733c
        Mount: 172.18.1.22:heketidbstorage
        Mount Options: backup-volfile-servers=172.18.1.20,172.18.1.21
        Durability Type: replicate
        Replica: 3
        Snapshot: Disabled

            Bricks:
                Id: 3716a024ec9038f7e7bb9eb9067dc8d7
                Path: /var/lib/heketi/mounts/vg_f9ff6efa4a9510b006e973a7c6afddb7/brick_3716a024ec9038f7e7bb9eb9067dc8d7/brick
                Size (GiB): 2
                Node: bc9c10cdb57d16c738eceb9e5a70ba7f
                Device: f9ff6efa4a9510b006e973a7c6afddb7

                Id: 74ce2380727ebbcedae61e73a10e0418
                Path: /var/lib/heketi/mounts/vg_3e29444549eb2a0e0d3befd2e6ae01f0/brick_74ce2380727ebbcedae61e73a10e0418/brick
                Size (GiB): 2
                Node: 823437e8d18d5731cc36638ae7b34405
                Device: 3e29444549eb2a0e0d3befd2e6ae01f0

                Id: dfe2ec8c1666921557459f731bf23f59
                Path: /var/lib/heketi/mounts/vg_08e5e0a99e4c31e1b5cdb5d260ed5606/brick_dfe2ec8c1666921557459f731bf23f59/brick
                Size (GiB): 2
                Node: 5cf2cb37b8bdc85c05263ed0ffc919bf
                Device: 08e5e0a99e4c31e1b5cdb5d260ed5606

        Name: vol_c5d7e71330fb021d359975cd96c0e322
        Size: 1
        Id: c5d7e71330fb021d359975cd96c0e322
        Cluster Id: f20225646098171cf06e1be15e79733c
        Mount: 172.18.1.22:vol_c5d7e71330fb021d359975cd96c0e322
        Mount Options: backup-volfile-servers=172.18.1.20,172.18.1.21
        Durability Type: replicate
        Replica: 2
        Snapshot: Disabled

            Bricks:
                Id: 0149bd7dd9291dc98fa777b734e6918d
                Path: /var/lib/heketi/mounts/vg_3e29444549eb2a0e0d3befd2e6ae01f0/brick_0149bd7dd9291dc98fa777b734e6918d/brick
                Size (GiB): 1
                Node: 823437e8d18d5731cc36638ae7b34405
                Device: 3e29444549eb2a0e0d3befd2e6ae01f0

                Id: ec798b3a3e931ab6b88afc8efa82cc01
                Path: /var/lib/heketi/mounts/vg_f9ff6efa4a9510b006e973a7c6afddb7/brick_ec798b3a3e931ab6b88afc8efa82cc01/brick
                Size (GiB): 1
                Node: bc9c10cdb57d16c738eceb9e5a70ba7f
                Device: f9ff6efa4a9510b006e973a7c6afddb7

    Nodes:

        Node Id: 5cf2cb37b8bdc85c05263ed0ffc919bf
        State: online
        Cluster Id: f20225646098171cf06e1be15e79733c
        Zone: 1
        Management Hostnames: worker2
        Storage Hostnames: 172.18.1.22
        Devices:
            Id:08e5e0a99e4c31e1b5cdb5d260ed5606   Name:/dev/nbd2   State:online   Size (GiB):93   Used (GiB):7   Free (GiB):85
                Bricks:
                    Id:1ad437790e631ff12b633de0e74623cd   Size (GiB):3   Path: /var/lib/heketi/mounts/vg_08e5e0a99e4c31e1b5cdb5d260ed5606/brick_1ad437790e631ff12b633de0e74623cd/brick
                    Id:54b4b6299c3fd67f259e3a49dfdb80c4   Size (GiB):1   Path: /var/lib/heketi/mounts/vg_08e5e0a99e4c31e1b5cdb5d260ed5606/brick_54b4b6299c3fd67f259e3a49dfdb80c4/brick
                    Id:dfe2ec8c1666921557459f731bf23f59   Size (GiB):2   Path: /var/lib/heketi/mounts/vg_08e5e0a99e4c31e1b5cdb5d260ed5606/brick_dfe2ec8c1666921557459f731bf23f59/brick
                    Id:fc0242d2be3241c8386d095e81d1eb9d   Size (GiB):1   Path: /var/lib/heketi/mounts/vg_08e5e0a99e4c31e1b5cdb5d260ed5606/brick_fc0242d2be3241c8386d095e81d1eb9d/brick

        Node Id: 823437e8d18d5731cc36638ae7b34405
        State: online
        Cluster Id: f20225646098171cf06e1be15e79733c
        Zone: 1
        Management Hostnames: worker0
        Storage Hostnames: 172.18.1.20
        Devices:
            Id:3e29444549eb2a0e0d3befd2e6ae01f0   Name:/dev/nbd3   State:online   Size (GiB):93   Used (GiB):8   Free (GiB):84
                Bricks:
                    Id:0149bd7dd9291dc98fa777b734e6918d   Size (GiB):1   Path: /var/lib/heketi/mounts/vg_3e29444549eb2a0e0d3befd2e6ae01f0/brick_0149bd7dd9291dc98fa777b734e6918d/brick
                    Id:268f5a829cf4d9897b36727508867a54   Size (GiB):1   Path: /var/lib/heketi/mounts/vg_3e29444549eb2a0e0d3befd2e6ae01f0/brick_268f5a829cf4d9897b36727508867a54/brick
                    Id:67cf529895b793a990c74ef58007dfbd   Size (GiB):3   Path: /var/lib/heketi/mounts/vg_3e29444549eb2a0e0d3befd2e6ae01f0/brick_67cf529895b793a990c74ef58007dfbd/brick
                    Id:6c58a9efce790b75bac064de24537bef   Size (GiB):1   Path: /var/lib/heketi/mounts/vg_3e29444549eb2a0e0d3befd2e6ae01f0/brick_6c58a9efce790b75bac064de24537bef/brick
                    Id:74ce2380727ebbcedae61e73a10e0418   Size (GiB):2   Path: /var/lib/heketi/mounts/vg_3e29444549eb2a0e0d3befd2e6ae01f0/brick_74ce2380727ebbcedae61e73a10e0418/brick

        Node Id: bc9c10cdb57d16c738eceb9e5a70ba7f
        State: online
        Cluster Id: f20225646098171cf06e1be15e79733c
        Zone: 1
        Management Hostnames: worker1
        Storage Hostnames: 172.18.1.21
        Devices:
            Id:f9ff6efa4a9510b006e973a7c6afddb7   Name:/dev/nbd3   State:online   Size (GiB):93   Used (GiB):3   Free (GiB):89
                Bricks:
                    Id:3716a024ec9038f7e7bb9eb9067dc8d7   Size (GiB):2   Path: /var/lib/heketi/mounts/vg_f9ff6efa4a9510b006e973a7c6afddb7/brick_3716a024ec9038f7e7bb9eb9067dc8d7/brick
                    Id:ec798b3a3e931ab6b88afc8efa82cc01   Size (GiB):1   Path: /var/lib/heketi/mounts/vg_f9ff6efa4a9510b006e973a7c6afddb7/brick_ec798b3a3e931ab6b88afc8efa82cc01/brick
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
gluster1 Pending glusterfs-storage 21m
kubectl describe pvc gluster1
Name: gluster1
Namespace: glusterfs
StorageClass: glusterfs-storage
Status: Pending
Volume:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"glusterfs-storage"},"name":"glu...
volume.beta.kubernetes.io/storage-class=glusterfs-storage
volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/glusterfs
Capacity:
Access Modes:
Events: <none>
@humblec @jarrpa Could you help me with the above issue?
Given the silence of the OP I am closing this issue. @rtnpro, if you still need help please open a new issue with this information as well as a description of anything you've done since your first deployment attempt. Also have a look at kubectl logs <heketi_pod> and post any relevant sections.
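For example (a sketch; adjust the pod name placeholder and the namespace to your deployment):

# Dump the Heketi log and filter for the failed volume-create attempts
kubectl logs <heketi_pod> -n glusterfs | grep -i error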
Hi @jarrpa, I have one Gluster node with Heketi installed on it. Now I'm provisioning a 1 GB volume but it's showing "No space", even though I do have space available.