gluster / gluster-kubernetes

GlusterFS Native Storage Service for Kubernetes

Liveness & Readiness Probe is failing for the daemonSet pods #534

Closed · ghost closed this issue 5 years ago

ghost commented 5 years ago

I followed all the prereqs mentioned in the Install Doc on my Debian machines.

$ ./gk-deploy -n storage -g
Welcome to the deployment tool for GlusterFS on Kubernetes and OpenShift.

Before getting started, this script has some requirements of the execution
environment and of the container platform that you should verify.

The client machine that will run this script must have:
 * Administrative access to an existing Kubernetes or OpenShift cluster
 * Access to a python interpreter 'python'

Each of the nodes that will host GlusterFS must also have appropriate firewall
rules for the required GlusterFS ports:
 * 2222  - sshd (if running GlusterFS in a pod)
 * 24007 - GlusterFS Management
 * 24008 - GlusterFS RDMA
 * 49152 to 49251 - Each brick for every volume on the host requires its own
   port. For every new brick, one new port will be used starting at 49152. We
   recommend a default range of 49152-49251 on each host, though you can adjust
   this to fit your needs.

The following kernel modules must be loaded:
 * dm_snapshot
 * dm_mirror
 * dm_thin_pool

For systems with SELinux, the following settings need to be considered:
 * virt_sandbox_use_fusefs should be enabled on each node to allow writing to
   remote GlusterFS volumes

In addition, for an OpenShift deployment you must:
 * Have 'cluster_admin' role on the administrative account doing the deployment
 * Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
 * Have a router deployed that is configured to allow apps to access services
   running in the cluster

Do you wish to proceed with deployment?

[Y]es, [N]o? [Default: Y]: y
Using Kubernetes CLI.
Using namespace "storage".
Checking for pre-existing resources...
  GlusterFS pods ... not found.
  deploy-heketi pod ... not found.
  heketi pod ... not found.
  gluster-s3 pod ... not found.
Creating initial resources ... serviceaccount/heketi-service-account created
clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view created
clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view labeled
OK
node/kubernetes-master-agent labeled
node/kubernetes-agent-2 labeled
node/kubernetes-agent-3 labeled
daemonset.extensions/glusterfs created
Waiting for GlusterFS pods to start ... pods not found.
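
For reference, the firewall prerequisite above can be satisfied with plain iptables, roughly as follows (a sketch; firewalld users would use firewall-cmd instead, and the brick port range can be adjusted to your needs):

# sshd in the pod, GlusterFS management, and RDMA
iptables -A INPUT -p tcp --dport 2222 -j ACCEPT
iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
# one port per brick per volume, starting at 49152
iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT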
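The kernel-module prerequisite can be handled like this on a systemd-based distro (a sketch; the modules-load.d file name is arbitrary):

modprobe -a dm_snapshot dm_mirror dm_thin_pool
# load the same modules again after a reboot
printf 'dm_snapshot\ndm_mirror\ndm_thin_pool\n' > /etc/modules-load.d/gluster.conf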
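And on SELinux-enabled systems (not the case on a stock Debian install), the boolean from the prerequisites can be set persistently:

setsebool -P virt_sandbox_use_fusefs on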

Doing a describe on the pods, I get:

Events:
  Type     Reason                 Age              From                              Message
  ----     ------                 ----             ----                              -------
  Normal   SuccessfulMountVolume  1m               kubelet, kubernetes-master-agent  MountVolume.SetUp succeeded for volume "glusterfs-ssl"
  Normal   SuccessfulMountVolume  1m               kubelet, kubernetes-master-agent  MountVolume.SetUp succeeded for volume "kernel-modules"
  Normal   SuccessfulMountVolume  1m               kubelet, kubernetes-master-agent  MountVolume.SetUp succeeded for volume "glusterfs-lvm"
  Normal   SuccessfulMountVolume  1m               kubelet, kubernetes-master-agent  MountVolume.SetUp succeeded for volume "glusterfs-cgroup"
  Normal   SuccessfulMountVolume  1m               kubelet, kubernetes-master-agent  MountVolume.SetUp succeeded for volume "glusterfs-config"
  Normal   SuccessfulMountVolume  1m               kubelet, kubernetes-master-agent  MountVolume.SetUp succeeded for volume "glusterfs-etc"
  Normal   SuccessfulMountVolume  1m               kubelet, kubernetes-master-agent  MountVolume.SetUp succeeded for volume "glusterfs-misc"
  Normal   SuccessfulMountVolume  1m               kubelet, kubernetes-master-agent  MountVolume.SetUp succeeded for volume "glusterfs-logs"
  Normal   SuccessfulMountVolume  1m               kubelet, kubernetes-master-agent  MountVolume.SetUp succeeded for volume "glusterfs-dev"
  Normal   SuccessfulMountVolume  1m (x3 over 1m)  kubelet, kubernetes-master-agent  (combined from similar events): MountVolume.SetUp succeeded for volume "default-token-k8m59"
  Normal   Pulled                 1m               kubelet, kubernetes-master-agent  Container image "gluster/gluster-centos:latest" already present on machine
  Normal   Created                1m               kubelet, kubernetes-master-agent  Created container
  Normal   Started                1m               kubelet, kubernetes-master-agent  Started container
  Warning  Unhealthy              18s              kubelet, kubernetes-master-agent  Readiness probe failed: /usr/local/bin/status-probe.sh
failed check: systemctl -q is-active gluster-blockd.service
  Warning  Unhealthy  11s (x2 over 36s)  kubelet, kubernetes-master-agent  Liveness probe failed: /usr/local/bin/status-probe.sh
failed check: systemctl -q is-active gluster-blockd.service

On the nodes I only installed glusterfs-client.

Is there something else I need to deploy on the nodes?
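
For what it's worth, the failing check can be reproduced by hand inside one of the glusterfs pods (the pod name below is a placeholder):

kubectl -n storage exec <glusterfs-pod> -- systemctl is-active gluster-blockd.service
kubectl -n storage exec <glusterfs-pod> -- systemctl status gluster-blockd.service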

TommyKTheDJ commented 5 years ago

I am hitting this exact issue too. Was there a resolution? I am struggling to find any mention of a gluster-blockd.service anywhere.
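
If the unit exists anywhere it should be inside the container rather than on the host, so something like this should list it (pod name is a placeholder):

kubectl -n storage exec <glusterfs-pod> -- systemctl list-unit-files | grep -i gluster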

ericparland commented 5 years ago

I'm also experiencing the same problem. @ghost, can you share your solution?

gnreddy06 commented 5 years ago

I had faced the same problem: the glusterfs service was running on the K8s nodes as well. After stopping and disabling that service, I restarted the K8s nodes and all the glusterfs pods came up automatically.
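
Roughly, on each node (the unit name may be glusterd or glusterfs-server depending on the distro and packaging):

# stop and disable the host's own GlusterFS daemon so it no longer holds the
# GlusterFS ports, then reboot so the pod's daemon can bind them
systemctl stop glusterd
systemctl disable glusterd
reboot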