Hello,

Installation using gk-deploy went fine and completed with no errors. I am on CentOS using kubectl 1.7.4. I cannot get the PVC to bind; it is stuck at Pending.

These are the heketi logs from the moment I create the PVC:
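For reference, the PVC I create looks roughly like this (the claim name and storageClassName below are placeholders, not my exact manifest; the StorageClass is whatever gk-deploy set up):

```yaml
# Placeholder names; substitute the actual StorageClass created by gk-deploy.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc                   # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi                    # matches the 1048576K bricks in the logs
  storageClassName: glusterfs-storage # assumed name; replace with the real one
```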
[negroni] Completed 202 Accepted in 274.176392ms
[asynchttp] INFO 2018/09/07 11:58:52 asynchttp.go:125: Started job 0e57371802d690f1f06ac96680c71ffc
[heketi] INFO 2018/09/07 11:58:52 Started async operation: Create Volume
[negroni] Started GET /queue/0e57371802d690f1f06ac96680c71ffc
[negroni] Completed 200 OK in 17.434µs
[heketi] INFO 2018/09/07 11:58:52 Creating brick 94b4103ae88227f343512693928b1c06
[heketi] INFO 2018/09/07 11:58:52 Creating brick 9cf34dcf2468e3c3307e504c1c32b030
[heketi] INFO 2018/09/07 11:58:52 Creating brick ae9b99faba8852f507e4b1ac43e5ce10
[kubeexec] DEBUG 2018/09/07 11:58:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos03 Pod: glusterfs-njg0t Command: mkdir -p /var/lib/heketi/mounts/vg_cacb122b9866184ab3c493d918117a3d/brick_ae9b99faba8852f507e4b1ac43e5ce10
Result:
[kubeexec] DEBUG 2018/09/07 11:58:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos02 Pod: glusterfs-z1sm8 Command: mkdir -p /var/lib/heketi/mounts/vg_876cce0304a7a5eef186065b6ab0eb4e/brick_9cf34dcf2468e3c3307e504c1c32b030
Result:
[negroni] Started GET /queue/e492274674660a64011c80904d0c7d0d
[negroni] Completed 200 OK in 63.37µs
[negroni] Started GET /queue/1d578edf745481cc5d93eaab23090ab5
[negroni] Completed 200 OK in 54.852µs
[negroni] Started GET /queue/aad9c87f5e88e05d629334602a62002d
[negroni] Completed 200 OK in 70.507µs
[negroni] Started GET /queue/04ef2147024b1e9609cf386b69ab779e
[negroni] Completed 200 OK in 177.175µs
[negroni] Started GET /queue/0e57371802d690f1f06ac96680c71ffc
[negroni] Completed 200 OK in 99.859µs
[kubeexec] DEBUG 2018/09/07 11:58:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos03 Pod: glusterfs-njg0t Command: lvcreate --poolmetadatasize 8192K -c 256K -L 1048576K -T vg_cacb122b9866184ab3c493d918117a3d/tp_ae9b99faba8852f507e4b1ac43e5ce10 -V 1048576K -n brick_ae9b99faba8852f507e4b1ac43e5ce10
Result: Using default stripesize 64.00 KiB.
Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
Logical volume "brick_ae9b99faba8852f507e4b1ac43e5ce10" created.
[kubeexec] DEBUG 2018/09/07 11:58:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos02 Pod: glusterfs-z1sm8 Command: lvcreate --poolmetadatasize 8192K -c 256K -L 1048576K -T vg_876cce0304a7a5eef186065b6ab0eb4e/tp_9cf34dcf2468e3c3307e504c1c32b030 -V 1048576K -n brick_9cf34dcf2468e3c3307e504c1c32b030
Result: Using default stripesize 64.00 KiB.
Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
Logical volume "brick_9cf34dcf2468e3c3307e504c1c32b030" created.
[kubeexec] DEBUG 2018/09/07 11:58:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos03 Pod: glusterfs-njg0t Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_cacb122b9866184ab3c493d918117a3d-brick_ae9b99faba8852f507e4b1ac43e5ce10
Result: meta-data=/dev/mapper/vg_cacb122b9866184ab3c493d918117a3d-brick_ae9b99faba8852f507e4b1ac43e5ce10 isize=512 agcount=8, agsize=32752 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=262016, imaxpct=25
= sunit=16 swidth=64 blks
naming =version 2 bsize=8192 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=864, version=2
= sectsz=512 sunit=16 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[kubeexec] DEBUG 2018/09/07 11:58:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos03 Pod: glusterfs-njg0t Command: awk "BEGIN {print \"/dev/mapper/vg_cacb122b9866184ab3c493d918117a3d-brick_ae9b99faba8852f507e4b1ac43e5ce10 /var/lib/heketi/mounts/vg_cacb122b9866184ab3c493d918117a3d/brick_ae9b99faba8852f507e4b1ac43e5ce10 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/09/07 11:58:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos02 Pod: glusterfs-z1sm8 Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_876cce0304a7a5eef186065b6ab0eb4e-brick_9cf34dcf2468e3c3307e504c1c32b030
Result: meta-data=/dev/mapper/vg_876cce0304a7a5eef186065b6ab0eb4e-brick_9cf34dcf2468e3c3307e504c1c32b030 isize=512 agcount=8, agsize=32752 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=262016, imaxpct=25
= sunit=16 swidth=64 blks
naming =version 2 bsize=8192 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=864, version=2
= sectsz=512 sunit=16 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[kubeexec] DEBUG 2018/09/07 11:58:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos03 Pod: glusterfs-njg0t Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_cacb122b9866184ab3c493d918117a3d-brick_ae9b99faba8852f507e4b1ac43e5ce10 /var/lib/heketi/mounts/vg_cacb122b9866184ab3c493d918117a3d/brick_ae9b99faba8852f507e4b1ac43e5ce10
Result:
[kubeexec] DEBUG 2018/09/07 11:58:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos03 Pod: glusterfs-njg0t Command: mkdir /var/lib/heketi/mounts/vg_cacb122b9866184ab3c493d918117a3d/brick_ae9b99faba8852f507e4b1ac43e5ce10/brick
Result:
[kubeexec] DEBUG 2018/09/07 11:58:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos02 Pod: glusterfs-z1sm8 Command: awk "BEGIN {print \"/dev/mapper/vg_876cce0304a7a5eef186065b6ab0eb4e-brick_9cf34dcf2468e3c3307e504c1c32b030 /var/lib/heketi/mounts/vg_876cce0304a7a5eef186065b6ab0eb4e/brick_9cf34dcf2468e3c3307e504c1c32b030 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/09/07 11:58:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos03 Pod: glusterfs-njg0t Command: chown :2004 /var/lib/heketi/mounts/vg_cacb122b9866184ab3c493d918117a3d/brick_ae9b99faba8852f507e4b1ac43e5ce10/brick
Result:
[negroni] Started GET /queue/e492274674660a64011c80904d0c7d0d
[negroni] Completed 200 OK in 52.23µs
[negroni] Started GET /queue/1d578edf745481cc5d93eaab23090ab5
[negroni] Completed 200 OK in 63.651µs
[kubeexec] DEBUG 2018/09/07 11:58:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos02 Pod: glusterfs-z1sm8 Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_876cce0304a7a5eef186065b6ab0eb4e-brick_9cf34dcf2468e3c3307e504c1c32b030 /var/lib/heketi/mounts/vg_876cce0304a7a5eef186065b6ab0eb4e/brick_9cf34dcf2468e3c3307e504c1c32b030
Result:
[kubeexec] DEBUG 2018/09/07 11:58:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos03 Pod: glusterfs-njg0t Command: chmod 2775 /var/lib/heketi/mounts/vg_cacb122b9866184ab3c493d918117a3d/brick_ae9b99faba8852f507e4b1ac43e5ce10/brick
Result:
[negroni] Started GET /queue/04ef2147024b1e9609cf386b69ab779e
[negroni] Completed 200 OK in 62.525µs
[negroni] Started GET /queue/aad9c87f5e88e05d629334602a62002d
[negroni] Completed 200 OK in 245.558µs
[kubeexec] DEBUG 2018/09/07 11:58:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos02 Pod: glusterfs-z1sm8 Command: mkdir /var/lib/heketi/mounts/vg_876cce0304a7a5eef186065b6ab0eb4e/brick_9cf34dcf2468e3c3307e504c1c32b030/brick
Result:
[kubeexec] DEBUG 2018/09/07 11:58:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos02 Pod: glusterfs-z1sm8 Command: chown :2004 /var/lib/heketi/mounts/vg_876cce0304a7a5eef186065b6ab0eb4e/brick_9cf34dcf2468e3c3307e504c1c32b030/brick
Result:
[kubeexec] DEBUG 2018/09/07 11:58:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: centos02 Pod: glusterfs-z1sm8 Command: chmod 2775 /var/lib/heketi/mounts/vg_876cce0304a7a5eef186065b6ab0eb4e/brick_9cf34dcf2468e3c3307e504c1c32b030/brick
Result:
[negroni] Started GET /queue/0e57371802d690f1f06ac96680c71ffc
[negroni] Completed 200 OK in 65.312µs
[negroni] Started GET /queue/e492274674660a64011c80904d0c7d0d
[negroni] Completed 200 OK in 64.274µs
[negroni] Started GET /queue/1d578edf745481cc5d93eaab23090ab5
[negroni] Completed 200 OK in 56.647µs
[negroni] Started GET /queue/aad9c87f5e88e05d629334602a62002d
[negroni] Started GET /queue/04ef2147024b1e9609cf386b69ab779e
[negroni] Completed 200 OK in 90.309µs
[negroni] Completed 200 OK in 110.882µs
[negroni] Started GET /queue/0e57371802d690f1f06ac96680c71ffc
[negroni] Completed 200 OK in 79.047µs
[negroni] Started GET /queue/e492274674660a64011c80904d0c7d0d
[negroni] Completed 200 OK in 56.882µs
[negroni] Started GET /queue/1d578edf745481cc5d93eaab23090ab5
[negroni] Completed 200 OK in 58.596µs
What else can I check?
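In case it helps, these are the checks I was planning to run next (the claim and pod names are placeholders for my actual ones; the glusterfs pod name is taken from the logs above):

```shell
# Show why the claim is still Pending; the Events section usually has the
# provisioner error message.
kubectl describe pvc gluster-pvc

# Recent cluster events often repeat the provisioning failure.
kubectl get events --sort-by=.metadata.creationTimestamp

# Confirm the StorageClass points at the heketi REST endpoint.
kubectl get storageclass -o yaml

# Check the heketi pod for errors after the brick-creation steps above.
kubectl logs heketi-pod-name   # substitute the actual heketi pod name

# Verify that gluster actually created a volume from those bricks.
kubectl exec glusterfs-njg0t -- gluster volume list
```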