gluster / gluster-kubernetes

GlusterFS Native Storage Service for Kubernetes
Apache License 2.0

Can't add node to cluster #328

Closed ghost closed 7 years ago

ghost commented 7 years ago

Hi:

I'm trying to complete the example in this repo with the proper configuration, but when I load the topology the result is:

root@s-smartc2-zprei:/opt/kubernetes/glusterfs# heketi/heketi-cli topology load --json=topology.json
Creating cluster ... ID: c164bd356345d066ec9fa7350b889152
    Creating node s-smartc2-zprei ... Unable to create node: New Node doesn't have glusterd running
    Creating node s-smartc3-zprei ... Unable to create node: New Node doesn't have glusterd running
    Creating node s-smartc4-zprei ... Unable to create node: New Node doesn't have glusterd running

My topology file looks like this:

{ "clusters": [ { "nodes": [ { "node": { "hostnames": { "manage": [ "s-smartc2-zprei" ], "storage": [ "192.168.133.2" ] }, "zone": 1 }, "devices": [ "/dev/sdc" ] }, { "node": { "hostnames": { "manage": [ "s-smartc3-zprei" ], "storage": [ "192.168.133.3" ] }, "zone": 1 }, "devices": [ "/dev/sdc" ] }, { "node": { "hostnames": { "manage": [ "s-smartc4-zprei" ], "storage": [ "192.168.133.4" ] }, "zone": 1 }, "devices": [ "/dev/sdc" ] } ] } ] }


My pods:

root@s-smartc2-zprei:/opt/kubernetes/glusterfs# kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
glusterfs-9nq8c         1/1       Running   0          13m
glusterfs-r0mwj         1/1       Running   0          13m
glusterfs-vgw1n         1/1       Running   0          13m
heketi-37915784-qqh66   1/1       Running   0          4m


From one pod, testing peer connectivity with the gluster command:

[root@s-smartc3-zprei /]# gluster peer probe 192.168.133.2
peer probe: success.
[root@s-smartc3-zprei /]# gluster peer probe 192.168.133.3
peer probe: success. Probe on localhost not needed
[root@s-smartc3-zprei /]# gluster peer probe 192.168.133.4
peer probe: success.
[root@s-smartc3-zprei /]#
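
To double-check that glusterd is actually up inside each pod, something like the following should work (the pod name here is the one from my kubectl get pods output above, so substitute your own):

    # check the glusterd service inside one of the GlusterFS pods
    kubectl exec -it glusterfs-9nq8c -- systemctl status glusterd

    # list the peers that pod knows about
    kubectl exec -it glusterfs-9nq8c -- gluster peer status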


Curl to the heketi API:

root@s-smartc2-zprei:/opt/kubernetes/glusterfs# curl localhost:33660/hello
Hello from Heketi
root@s-smartc2-zprei:/opt/kubernetes/glusterfs#
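
Since heketi-cli talks to this same endpoint, it may also help to point it there explicitly; port 33660 is just the NodePort of my heketi service, so adjust it for your setup:

    # tell heketi-cli where the heketi API lives
    export HEKETI_CLI_SERVER=http://localhost:33660
    heketi-cli cluster list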


My nodes:

root@s-smartc2-zprei:/opt/kubernetes/glusterfs# kubectl get nodes
NAME              STATUS    AGE       VERSION
s-smartc2-zprei   Ready     1d        v1.7.4
s-smartc3-zprei   Ready     1d        v1.7.4
s-smartc4-zprei   Ready     1d        v1.7.4
root@s-smartc2-zprei:/opt/kubernetes/glusterfs#

So, what is the problem?

jarrpa commented 7 years ago

What's the output of your heketi logs?
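
For example, grabbing them from the heketi pod listed above would look something like this (substitute your own pod name):

    # dump the heketi pod's logs
    kubectl logs heketi-37915784-qqh66

    # or follow them while re-running the topology load
    kubectl logs -f heketi-37915784-qqh66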

panigrahis commented 7 years ago

I am facing the exact same error. It looks like it's related to the bug below: https://bugzilla.redhat.com/show_bug.cgi?id=1484217 @jarrpa could you please suggest a quick workaround... It's kind of urgent...

panigrahis commented 7 years ago

Can you please suggest whether using one of the older docker images shown in the attached screenshot would solve the issue? It's failing for the dev image right now.

jarrpa commented 7 years ago

@panigrahis Are you certain it's the exact same bug with the same error and not just the same symptom?

I'm out for the next hour or two. If you want to try and hunt something down, can you see if the heketi image you're using might have PR https://github.com/heketi/heketi/pull/778 without PR https://github.com/heketi/heketi/pull/840 ?

jarrpa commented 7 years ago

Give latest/4 a shot
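
As a rough sketch, assuming the image in question is the heketi one and that the deployment and container are both named heketi (adjust for your templates), switching tags could look like:

    # point the existing heketi deployment at the heketi/heketi:latest image
    kubectl set image deployment/heketi heketi=heketi/heketi:latest

    # watch the pod roll over to the new image
    kubectl get pods -w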

panigrahis commented 7 years ago

@jarrpa using the latest tag solved the issue... However, I have Ubuntu machines that the gluster/gluster-centos images are obviously not compatible with. I guess I have to add CentOS VMs to my kubernetes cluster.

jarrpa commented 7 years ago

@panigrahis Are they inherently incompatible? I think a few users in this community have gotten it working on Ubuntu...

ghost commented 7 years ago

Hi:

With the latest tag I can confirm that I solved the error:

Creating node s-smartc2-zprei ... Unable to create node: New Node doesn't have glusterd running
Creating node s-smartc3-zprei ... Unable to create node: New Node doesn't have glusterd running
Creating node s-smartc4-zprei ... Unable to create node: New Node doesn't have glusterd running

...but I have other errors about RBAC permissions; the error says that pod names cannot be resolved. I'm using Kubernetes 1.7.4, and since 1.6 Kubernetes has included RBAC. In order to solve the problem I created the following YAML file, which includes the correct permissions to allow heketi to read pod names.

-----------------------------------

apiVersion: v1
kind: ServiceAccount
metadata:
  name: heketi-service-account
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: heketi-service-account
rules:
  - apiGroups: [""]
    # the next two lines were cut off in the original comment; "pods/portforward"
    # is assumed, and the verbs list is simply closed where it was truncated
    resources: ["pods", "pods/exec", "pods/attach", "pods/proxy", "pods/portforward"]
    verbs: ["get", "put", "patch", "update", "list", "post", "watch", "create"]
    nonResourceURLs: []
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: heketi-secret-writer
subjects:
  - kind: ServiceAccount
    name: default
  - kind: ServiceAccount
    name: heketi-service-account
roleRef:
  kind: Role
  name: heketi-service-account
  apiGroup: rbac.authorization.k8s.io

---------------------------------------------

This workaround works OK for me. I think that glusterfs and heketi are a good solution for Kubernetes volumes, and the Kubernetes team thinks so as well, including an example of using heketi for provisioning dynamic volumes in the official Kubernetes documentation. So, please, update your deployment files according to the Kubernetes version, at least 1.6 (USE RBAC); your JSON files are currently written for Kubernetes 1.5, and these OLD files raise a lot of problems when deploying your great solution on k8s > 1.5. Consider providing the files as YAML rather than JSON.
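
For anyone following along, a rough sketch of wiring this in (the file name and deployment name are just placeholders for whatever your setup uses):

    # create the service account, role, and rolebinding from the YAML above
    kubectl apply -f heketi-rbac.yaml

    # run the heketi deployment under the new service account
    kubectl patch deployment heketi \
      -p '{"spec":{"template":{"spec":{"serviceAccountName":"heketi-service-account"}}}}'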

BR and have a nice day

ghost commented 7 years ago

Problem was solved using the latest tag.

jarrpa commented 7 years ago

@felixPG Edited your comment for proper formatting. ;)

I think that glusterfs and heketi are a good solution for Kubernetes volumes, and the Kubernetes team thinks so as well, including an example of using heketi for provisioning dynamic volumes in the official Kubernetes documentation.

Thanks! :)

So, please, update your deployment files according to the Kubernetes version, at least 1.6 (USE RBAC);

Sure, this makes total sense.

your JSON files are currently written for Kubernetes 1.5, and these OLD files raise a lot of problems when deploying your great solution on k8s > 1.5. Consider providing the files as YAML rather than JSON.

Which JSON files are you referring to? The only JSON in the deployment files are the heketi config and topology.

srflaxu40 commented 7 years ago

Do I have to install glusterd onto all my nodes to use heketi? I thought heketi takes care of this. I am getting: Unable to create node: New Node doesn't have glusterd running

jarrpa commented 7 years ago

heketi does not take care of installing anything; it is only a management service. It will configure the GlusterFS nodes once they are installed and the glusterd processes are started.
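
If you are running GlusterFS directly on the hosts rather than in the daemonset pods, a minimal sketch of getting glusterd up on each node would be something like this (package and unit names vary a bit by distro):

    # CentOS/RHEL nodes
    yum install -y glusterfs-server
    systemctl enable --now glusterd

    # Ubuntu/Debian nodes (on some versions the unit is called glusterfs-server)
    apt-get install -y glusterfs-server
    systemctl enable --now glusterd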

srflaxu40 commented 7 years ago

Thanks @jarrpa, so pretty much set up the glusterfs binaries / daemon and it should be able to run?

srflaxu40 commented 7 years ago

@jarrpa It seems I got past this issue, but I am getting this error:

Unable to create node: Failed to get list of pods

srflaxu40 commented 7 years ago

This appears to be working, and I have not installed anything, just ran the glusterfs pods. I had to bind a clusterrole to the service account in the repo:

~/heketi/extras/kubernetes$ heketi-cli topology load --json=topology-sample.json
Found node ip-10-1-1-190 on cluster b2a9130f63b735f71c9a32c3a04ee75a
    Adding device /dev/xvdb ... OK
Creating node ip-10-1-3-57 ... ID: bab7ac26a0e26cab2e23758a8d720bfe
    Adding device /dev/xvdb ... OK
Creating node ip-10-1-5-157 ... ID: 7396b0616233c28a808e075772795733
    Adding device /dev/xvdb ... OK
Creating node ip-10-1-6-243 ... ID: 5a015eb07fab82431cc37fa90f157d10
    Adding device /dev/xvdb ... OK

I followed this issue here that you helped with in the past :)

https://github.com/gluster/gluster-kubernetes/issues/142
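
For reference, the binding I used was along these lines; the edit clusterrole and the default namespace are assumptions, so use whatever fits your deployment:

    # allow the heketi service account to list and exec into pods cluster-wide
    kubectl create clusterrolebinding heketi-sa-view \
      --clusterrole=edit \
      --serviceaccount=default:heketi-service-account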

jarrpa commented 7 years ago

If you'd like continued help on this, please open a new Issue. :)