gluster / gluster-kubernetes

GlusterFS Native Storage Service for Kubernetes
Apache License 2.0

PVC stuck in Pending mode #517

Closed: dimthe closed this issue 5 years ago

dimthe commented 6 years ago

Hello, the installation using gk-deploy went fine and completed with no errors. I am on CentOS using kubectl 1.7.4, and I cannot get the PVC to bind; it is stuck at Pending.
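
For reference, a quick way to confirm the claim state (a sketch; the default namespace is assumed, matching the describe output further down):

kubectl get pvc gluster1 -n default   # STATUS shows Pending
kubectl get pv                        # no PV has been provisioned for the claim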

Logs from heketi as soon as I create the PVC:

[negroni] Completed 202 Accepted in 274.176392ms
[asynchttp] INFO 2018/09/07 11:58:52 asynchttp.go:125: Started job 0e57371802d690f1f06ac96680c71ffc
[heketi] INFO 2018/09/07 11:58:52 Started async operation: Create Volume
[negroni] Started GET /queue/0e57371802d690f1f06ac96680c71ffc
[negroni] Completed 200 OK in 17.434µs
[heketi] INFO 2018/09/07 11:58:52 Creating brick 94b4103ae88227f343512693928b1c06
[heketi] INFO 2018/09/07 11:58:52 Creating brick 9cf34dcf2468e3c3307e504c1c32b030
[heketi] INFO 2018/09/07 11:58:52 Creating brick ae9b99faba8852f507e4b1ac43e5ce10
[kubeexec] DEBUG 2018/09/07 11:58:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos03 Pod: glusterfs-njg0t Command: mkdir -p /var/lib/heketi/mounts/vg_cacb122b9866184ab3c493d918117a3d/brick_ae9b99faba8852f507e4b1ac43e5ce10
Result:
[kubeexec] DEBUG 2018/09/07 11:58:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos02 Pod: glusterfs-z1sm8 Command: mkdir -p /var/lib/heketi/mounts/vg_876cce0304a7a5eef186065b6ab0eb4e/brick_9cf34dcf2468e3c3307e504c1c32b030
Result:
[negroni] Started GET /queue/e492274674660a64011c80904d0c7d0d
[negroni] Completed 200 OK in 63.37µs
[negroni] Started GET /queue/1d578edf745481cc5d93eaab23090ab5
[negroni] Completed 200 OK in 54.852µs
[negroni] Started GET /queue/aad9c87f5e88e05d629334602a62002d
[negroni] Completed 200 OK in 70.507µs
[negroni] Started GET /queue/04ef2147024b1e9609cf386b69ab779e
[negroni] Completed 200 OK in 177.175µs
[negroni] Started GET /queue/0e57371802d690f1f06ac96680c71ffc
[negroni] Completed 200 OK in 99.859µs
[kubeexec] DEBUG 2018/09/07 11:58:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos03 Pod: glusterfs-njg0t Command: lvcreate --poolmetadatasize 8192K -c 256K -L 1048576K -T vg_cacb122b9866184ab3c493d918117a3d/tp_ae9b99faba8852f507e4b1ac43e5ce10 -V 1048576K -n brick_ae9b99faba8852f507e4b1ac43e5ce10
Result:   Using default stripesize 64.00 KiB.
  Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_ae9b99faba8852f507e4b1ac43e5ce10" created.
[kubeexec] DEBUG 2018/09/07 11:58:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos02 Pod: glusterfs-z1sm8 Command: lvcreate --poolmetadatasize 8192K -c 256K -L 1048576K -T vg_876cce0304a7a5eef186065b6ab0eb4e/tp_9cf34dcf2468e3c3307e504c1c32b030 -V 1048576K -n brick_9cf34dcf2468e3c3307e504c1c32b030
Result:   Using default stripesize 64.00 KiB.
  Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_9cf34dcf2468e3c3307e504c1c32b030" created.
[kubeexec] DEBUG 2018/09/07 11:58:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos03 Pod: glusterfs-njg0t Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_cacb122b9866184ab3c493d918117a3d-brick_ae9b99faba8852f507e4b1ac43e5ce10
Result: meta-data=/dev/mapper/vg_cacb122b9866184ab3c493d918117a3d-brick_ae9b99faba8852f507e4b1ac43e5ce10 isize=512    agcount=8, agsize=32752 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=262016, imaxpct=25
         =                       sunit=16     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=864, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[kubeexec] DEBUG 2018/09/07 11:58:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos03 Pod: glusterfs-njg0t Command: awk "BEGIN {print \"/dev/mapper/vg_cacb122b9866184ab3c493d918117a3d-brick_ae9b99faba8852f507e4b1ac43e5ce10 /var/lib/heketi/mounts/vg_cacb122b9866184ab3c493d918117a3d/brick_ae9b99faba8852f507e4b1ac43e5ce10 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/09/07 11:58:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos02 Pod: glusterfs-z1sm8 Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_876cce0304a7a5eef186065b6ab0eb4e-brick_9cf34dcf2468e3c3307e504c1c32b030
Result: meta-data=/dev/mapper/vg_876cce0304a7a5eef186065b6ab0eb4e-brick_9cf34dcf2468e3c3307e504c1c32b030 isize=512    agcount=8, agsize=32752 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=262016, imaxpct=25
         =                       sunit=16     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=864, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[kubeexec] DEBUG 2018/09/07 11:58:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos03 Pod: glusterfs-njg0t Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_cacb122b9866184ab3c493d918117a3d-brick_ae9b99faba8852f507e4b1ac43e5ce10 /var/lib/heketi/mounts/vg_cacb122b9866184ab3c493d918117a3d/brick_ae9b99faba8852f507e4b1ac43e5ce10
Result:
[kubeexec] DEBUG 2018/09/07 11:58:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos03 Pod: glusterfs-njg0t Command: mkdir /var/lib/heketi/mounts/vg_cacb122b9866184ab3c493d918117a3d/brick_ae9b99faba8852f507e4b1ac43e5ce10/brick
Result:
[kubeexec] DEBUG 2018/09/07 11:58:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos02 Pod: glusterfs-z1sm8 Command: awk "BEGIN {print \"/dev/mapper/vg_876cce0304a7a5eef186065b6ab0eb4e-brick_9cf34dcf2468e3c3307e504c1c32b030 /var/lib/heketi/mounts/vg_876cce0304a7a5eef186065b6ab0eb4e/brick_9cf34dcf2468e3c3307e504c1c32b030 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/09/07 11:58:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos03 Pod: glusterfs-njg0t Command: chown :2004 /var/lib/heketi/mounts/vg_cacb122b9866184ab3c493d918117a3d/brick_ae9b99faba8852f507e4b1ac43e5ce10/brick
Result:
[negroni] Started GET /queue/e492274674660a64011c80904d0c7d0d
[negroni] Completed 200 OK in 52.23µs
[negroni] Started GET /queue/1d578edf745481cc5d93eaab23090ab5
[negroni] Completed 200 OK in 63.651µs
[kubeexec] DEBUG 2018/09/07 11:58:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos02 Pod: glusterfs-z1sm8 Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_876cce0304a7a5eef186065b6ab0eb4e-brick_9cf34dcf2468e3c3307e504c1c32b030 /var/lib/heketi/mounts/vg_876cce0304a7a5eef186065b6ab0eb4e/brick_9cf34dcf2468e3c3307e504c1c32b030
Result:
[kubeexec] DEBUG 2018/09/07 11:58:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos03 Pod: glusterfs-njg0t Command: chmod 2775 /var/lib/heketi/mounts/vg_cacb122b9866184ab3c493d918117a3d/brick_ae9b99faba8852f507e4b1ac43e5ce10/brick
Result:
[negroni] Started GET /queue/04ef2147024b1e9609cf386b69ab779e
[negroni] Completed 200 OK in 62.525µs
[negroni] Started GET /queue/aad9c87f5e88e05d629334602a62002d
[negroni] Completed 200 OK in 245.558µs
[kubeexec] DEBUG 2018/09/07 11:58:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos02 Pod: glusterfs-z1sm8 Command: mkdir /var/lib/heketi/mounts/vg_876cce0304a7a5eef186065b6ab0eb4e/brick_9cf34dcf2468e3c3307e504c1c32b030/brick
Result:
[kubeexec] DEBUG 2018/09/07 11:58:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos02 Pod: glusterfs-z1sm8 Command: chown :2004 /var/lib/heketi/mounts/vg_876cce0304a7a5eef186065b6ab0eb4e/brick_9cf34dcf2468e3c3307e504c1c32b030/brick
Result:
[kubeexec] DEBUG 2018/09/07 11:58:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos02 Pod: glusterfs-z1sm8 Command: chmod 2775 /var/lib/heketi/mounts/vg_876cce0304a7a5eef186065b6ab0eb4e/brick_9cf34dcf2468e3c3307e504c1c32b030/brick
Result:
[negroni] Started GET /queue/0e57371802d690f1f06ac96680c71ffc
[negroni] Completed 200 OK in 65.312µs
[negroni] Started GET /queue/e492274674660a64011c80904d0c7d0d
[negroni] Completed 200 OK in 64.274µs
[negroni] Started GET /queue/1d578edf745481cc5d93eaab23090ab5
[negroni] Completed 200 OK in 56.647µs
[negroni] Started GET /queue/aad9c87f5e88e05d629334602a62002d
[negroni] Started GET /queue/04ef2147024b1e9609cf386b69ab779e
[negroni] Completed 200 OK in 90.309µs
[negroni] Completed 200 OK in 110.882µs
[negroni] Started GET /queue/0e57371802d690f1f06ac96680c71ffc
[negroni] Completed 200 OK in 79.047µs
[negroni] Started GET /queue/e492274674660a64011c80904d0c7d0d
[negroni] Completed 200 OK in 56.882µs
[negroni] Started GET /queue/1d578edf745481cc5d93eaab23090ab5
[negroni] Completed 200 OK in 58.596µs

What else can I check?
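
A few further checks that may help narrow this down (a sketch; the heketi endpoint is taken from the service output below, and the controller-manager pod name is a guess that depends on how the cluster was deployed):

# The claim's events usually carry the provisioner's error message:
kubectl get events -n default --sort-by='.lastTimestamp'

# kubernetes.io/glusterfs is an in-tree provisioner, so provisioning errors
# surface in the kube-controller-manager logs rather than in heketi's:
kubectl -n kube-system logs kube-controller-manager-centos01 | grep -i gluster

# Confirm heketi actually created the volume it was logging about:
heketi-cli --server http://10.233.103.181:8080 volume list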


kubectl describe service heketi
Name:           heketi
Namespace:      default
Labels:         glusterfs=heketi-service
            heketi=service
Annotations:        description=Exposes Heketi Service
Selector:       glusterfs=heketi-pod
Type:           ClusterIP
IP:         10.233.10.100
Port:           heketi  8080/TCP
Endpoints:      10.233.103.181:8080
Session Affinity:   None
Events:         <none>
[root@centos01 deploy]# curl 10.233.103.181:8080/hello
Hello from Heketi[root@centos01 deploy]#

[root@centos01 deploy]# kubectl describe pvc gluster1
Name:       gluster1
Namespace:  default
StorageClass:   glusterfs-storage
Status:     Pending
Volume:
Labels:     <none>
Annotations:    volume.beta.kubernetes.io/storage-class=glusterfs-storage
        volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/glusterfs
Capacity:
Access Modes:
Events:     <none>
[root@centos01 deploy]# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
[root@centos01 deploy]# cat storageclass.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: http://10.233.103.181:8080
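
One thing that stands out above: resturl points at the heketi pod endpoint (10.233.103.181) rather than the stable service address (ClusterIP 10.233.10.100 in the describe output), and pod IPs change when the pod restarts. A variant worth trying might be (an untested sketch; the ClusterIP is taken from the service output above):

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  # Service ClusterIP from `kubectl describe service heketi` above; the DNS
  # name http://heketi.default.svc.cluster.local:8080 should resolve to the
  # same service if it lives in the default namespace.
  resturl: http://10.233.10.100:8080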