kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

FR: Add disk-size configuration to addon storage-provisioner-gluster #6205

Open · renich opened 4 years ago

renich commented 4 years ago

The exact command to reproduce the issue:

minikube config set disk-size 50GiB
minikube addons enable storage-provisioner-gluster

The full output of the command that failed:

[renich@introdesk ~]$ kubectl -n storage-gluster exec -it glusterfs-k9vqb -- ls -lh /srv
total 1.1M
-rw-r--r-- 1 root root 10G Jan  3 22:13 fake-disk.img
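The 10G backing file above appears to be hard-coded in the addon rather than derived from minikube's `disk-size` setting. A minimal sketch of how the backing file could be sized from a parameter instead (the variable names `FAKE_DISK_SIZE` and `FAKE_DISK_PATH` are hypothetical, not real addon options, and the real addon writes to `/srv/fake-disk.img`; `/tmp` is used here only so the sketch runs anywhere):

```shell
#!/bin/sh
# Hypothetical sketch, not the addon's actual script: size the Gluster
# backing file from an environment variable instead of hard-coding 10G.
FAKE_DISK_SIZE="${FAKE_DISK_SIZE:-10G}"
# The addon itself uses /srv/fake-disk.img; /tmp keeps the sketch runnable.
FAKE_DISK_PATH="${FAKE_DISK_PATH:-/tmp/fake-disk.img}"

# truncate creates a sparse file, so even a large size allocates no
# real blocks until data is actually written to it.
truncate -s "$FAKE_DISK_SIZE" "$FAKE_DISK_PATH"
```

Feeding this from the value of `minikube config get disk-size` (or a dedicated addon option) would let the addon honor the size requested in the repro commands above.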

The output of the minikube logs command:

==> Docker <==
-- Logs begin at Fri 2020-01-03 20:31:48 UTC, end at Fri 2020-01-03 22:17:48 UTC. --
Jan 03 20:34:08 minikube dockerd[2157]: time="2020-01-03T20:34:08.944695874Z" level=warning msg="Published ports are discarded when using host network mode"
Jan 03 20:34:08 minikube dockerd[2157]: time="2020-01-03T20:34:08.992021986Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4d66440bc7de6617099132f6360f83088c0abe208d1abe790ba5b5157ecd6010/shim.sock" debug=false pid=5499
Jan 03 20:34:09 minikube dockerd[2157]: time="2020-01-03T20:34:09.346781594Z" level=warning msg="Published ports are discarded when using host network mode"
Jan 03 20:34:10 minikube dockerd[2157]: time="2020-01-03T20:34:10.357044144Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bea9789d935adbb0de1e42bb28358bb1358162dfde0315ca9ba1287efe134cf0/shim.sock" debug=false pid=5623
Jan 03 20:34:10 minikube dockerd[2157]: time="2020-01-03T20:34:10.537118417Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/96788271f1077e7502940f87037737e8dc86a6180bdba8695212a2a6e4d46974/shim.sock" debug=false pid=5667
Jan 03 20:34:10 minikube dockerd[2157]: time="2020-01-03T20:34:10.583109672Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/245ad6ccae4da5211c5bad515973a284f76d9c7a0190caad2ccb3f45febb668b/shim.sock" debug=false pid=5699
Jan 03 20:34:11 minikube dockerd[2157]: time="2020-01-03T20:34:11.035756703Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bcf1e28e2e6d502e86b40875abab22be5f0ff6b9580421d2e987c1c20b36bd65/shim.sock" debug=false pid=5836
Jan 03 20:34:11 minikube dockerd[2157]: time="2020-01-03T20:34:11.204514439Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fb9586808b19bd1ffc31f85b1903c73cb4abca16502703239802df67de0a26bd/shim.sock" debug=false pid=5880
Jan 03 20:34:11 minikube dockerd[2157]: time="2020-01-03T20:34:11.351156480Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/78aec03ba16422e04185978ec61a82ba1640155b9fd0b6a2d4f3b6b08e09defd/shim.sock" debug=false pid=5925
Jan 03 20:34:11 minikube dockerd[2157]: time="2020-01-03T20:34:11.842439532Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6ffde22a160e827e56d9993b4d0780737e6ae133630ac64e4e40daf9b4bb570a/shim.sock" debug=false pid=6042
Jan 03 20:34:11 minikube dockerd[2157]: time="2020-01-03T20:34:11.956721474Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/af68be523a9d6e9afc05e39f0bc20d66f9960df8c3d0249eb07ea3ab631ce0a7/shim.sock" debug=false pid=6085
Jan 03 20:34:12 minikube dockerd[2157]: time="2020-01-03T20:34:12.459501921Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/573be02720b69af074ed00e6904f1ea1e45c5d8595c60bf20689b6545966c4ae/shim.sock" debug=false pid=6198
Jan 03 20:34:12 minikube dockerd[2157]: time="2020-01-03T20:34:12.678636891Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6bbfd1b6bd836e7b9eaab50393d8e99ca87e6744271e57645add21c883dc774f/shim.sock" debug=false pid=6240
Jan 03 20:34:12 minikube dockerd[2157]: time="2020-01-03T20:34:12.820309516Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8db287dd4bd92bd03957531482bc602e5f2d6c0b2423df58c894e59931f32ee3/shim.sock" debug=false pid=6297
Jan 03 20:34:13 minikube dockerd[2157]: time="2020-01-03T20:34:13.601923701Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f71e516f6819487805e1910863e10935e334b88199de41f1a37cea67fd574855/shim.sock" debug=false pid=6382
Jan 03 20:34:13 minikube dockerd[2157]: time="2020-01-03T20:34:13.740616867Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/84706ad811d52152231328909800201af4579f2cae2b855a96def9d8db1ddd51/shim.sock" debug=false pid=6428
Jan 03 20:34:14 minikube dockerd[2157]: time="2020-01-03T20:34:14.528096726Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cd67465120d4b339900601d13a2c1ec14e15239c39c2276177ca1f3f939a66b5/shim.sock" debug=false pid=6511
Jan 03 20:34:14 minikube dockerd[2157]: time="2020-01-03T20:34:14.596592275Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7bc8ee281dd23c72530f7bc8f434c1f9fd62a4c31d6dbe43109a9620ed48e020/shim.sock" debug=false pid=6535
Jan 03 20:34:37 minikube dockerd[2157]: time="2020-01-03T20:34:37.024438491Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8742d867d47dd946ad8e8dc465044fddbb1d6a02cb7e12d57858ab0b542f1f2f/shim.sock" debug=false pid=7061
Jan 03 20:34:51 minikube dockerd[2157]: time="2020-01-03T20:34:51.389056042Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/49ac42ed8fb29892850f01eaafdb60e1a9786b28d76152236b38318c11e84bf8/shim.sock" debug=false pid=7388
Jan 03 20:35:21 minikube dockerd[2157]: time="2020-01-03T20:35:21.313621633Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6a1d5a5c02361b962a87d13727bff7cc30748fe3a39fc530d884e32b6c0827f0/shim.sock" debug=false pid=7946
Jan 03 20:35:23 minikube dockerd[2157]: time="2020-01-03T20:35:23.258545919Z" level=info msg="shim reaped" id=6a1d5a5c02361b962a87d13727bff7cc30748fe3a39fc530d884e32b6c0827f0
Jan 03 20:35:23 minikube dockerd[2157]: time="2020-01-03T20:35:23.268627455Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 03 20:35:23 minikube dockerd[2157]: time="2020-01-03T20:35:23.268798509Z" level=warning msg="6a1d5a5c02361b962a87d13727bff7cc30748fe3a39fc530d884e32b6c0827f0 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/6a1d5a5c02361b962a87d13727bff7cc30748fe3a39fc530d884e32b6c0827f0/mounts/shm, flags: 0x2: no such file or directory"
Jan 03 20:35:24 minikube dockerd[2157]: time="2020-01-03T20:35:24.328083344Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/21217338bb132d8b3e6522572baee07ab4ead0a1ca5ac6c9681b5003d92b24c5/shim.sock" debug=false pid=8393
Jan 03 20:36:00 minikube dockerd[2157]: time="2020-01-03T20:36:00.281891917Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b321b304a6133ad1fba41be1174d30fd630c8887dca2783a06b4e4a7fdc57c98/shim.sock" debug=false pid=9094
Jan 03 20:36:03 minikube dockerd[2157]: time="2020-01-03T20:36:03.861584507Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/87c6662d74223153bf0f4bb6bf7290b3bb71733472aea1b6ae3f5478a45621f0/shim.sock" debug=false pid=9454
Jan 03 20:36:22 minikube dockerd[2157]: time="2020-01-03T20:36:22.867520112Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0155c2f70bd3772b489cf8defce14d1ae021009120489a0053e17cbe75fd39e5/shim.sock" debug=false pid=9780
Jan 03 20:39:41 minikube dockerd[2157]: time="2020-01-03T20:39:41.265379462Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b58b67a8837180d6723a9bc9e326c2f2f146c65c7ec0757b4689ceb3e773dd40/shim.sock" debug=false pid=14180
Jan 03 20:40:00 minikube dockerd[2157]: time="2020-01-03T20:40:00.046619755Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c031bbf1969222b2e862ec49e57fb6dd5cbf802c172dc91d518a8f6142406918/shim.sock" debug=false pid=14629
Jan 03 20:44:37 minikube dockerd[2157]: time="2020-01-03T20:44:37.619526230Z" level=info msg="shim reaped" id=c031bbf1969222b2e862ec49e57fb6dd5cbf802c172dc91d518a8f6142406918
Jan 03 20:44:37 minikube dockerd[2157]: time="2020-01-03T20:44:37.629711581Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 03 20:44:37 minikube dockerd[2157]: time="2020-01-03T20:44:37.629855564Z" level=warning msg="c031bbf1969222b2e862ec49e57fb6dd5cbf802c172dc91d518a8f6142406918 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c031bbf1969222b2e862ec49e57fb6dd5cbf802c172dc91d518a8f6142406918/mounts/shm, flags: 0x2: no such file or directory"
Jan 03 20:44:37 minikube dockerd[2157]: time="2020-01-03T20:44:37.921078184Z" level=info msg="shim reaped" id=b58b67a8837180d6723a9bc9e326c2f2f146c65c7ec0757b4689ceb3e773dd40
Jan 03 20:44:37 minikube dockerd[2157]: time="2020-01-03T20:44:37.933662142Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 03 22:07:03 minikube dockerd[2157]: time="2020-01-03T22:07:03.836033073Z" level=info msg="Container 8742d867d47dd946ad8e8dc465044fddbb1d6a02cb7e12d57858ab0b542f1f2f failed to exit within 30 seconds of signal 15 - using the force"
Jan 03 22:07:03 minikube dockerd[2157]: time="2020-01-03T22:07:03.979501645Z" level=info msg="shim reaped" id=8742d867d47dd946ad8e8dc465044fddbb1d6a02cb7e12d57858ab0b542f1f2f
Jan 03 22:07:03 minikube dockerd[2157]: time="2020-01-03T22:07:03.992267107Z" level=warning msg="8742d867d47dd946ad8e8dc465044fddbb1d6a02cb7e12d57858ab0b542f1f2f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/8742d867d47dd946ad8e8dc465044fddbb1d6a02cb7e12d57858ab0b542f1f2f/mounts/shm, flags: 0x2: no such file or directory"
Jan 03 22:07:03 minikube dockerd[2157]: time="2020-01-03T22:07:03.992450260Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 03 22:07:04 minikube dockerd[2157]: time="2020-01-03T22:07:04.273618870Z" level=info msg="shim reaped" id=4d66440bc7de6617099132f6360f83088c0abe208d1abe790ba5b5157ecd6010
Jan 03 22:07:04 minikube dockerd[2157]: time="2020-01-03T22:07:04.283862994Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 03 22:07:17 minikube dockerd[2157]: time="2020-01-03T22:07:17.300909454Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ca43ede32e87189e0ad80e2495a82e6e081d12b89772a3bead181858c65daca7/shim.sock" debug=false pid=2923
Jan 03 22:07:17 minikube dockerd[2157]: time="2020-01-03T22:07:17.803189424Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bba9a97407d481254eb866f943c839ee18b613e74db34c196fb508ad6db39c42/shim.sock" debug=false pid=2973
Jan 03 22:11:06 minikube dockerd[2157]: time="2020-01-03T22:11:06.408879423Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4680e63ef0c16378fc589ee17e8adac80673badf3af1c4b672ffcb59aa5a7756/shim.sock" debug=false pid=8458
Jan 03 22:11:07 minikube dockerd[2157]: time="2020-01-03T22:11:07.071210291Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2238eebe789adaee7c106480b00204b65ba54ac0c929477d97537ced524ef782/shim.sock" debug=false pid=8535
Jan 03 22:13:25 minikube dockerd[2157]: time="2020-01-03T22:13:25.610109800Z" level=info msg="Container bba9a97407d481254eb866f943c839ee18b613e74db34c196fb508ad6db39c42 failed to exit within 30 seconds of signal 15 - using the force"
Jan 03 22:13:25 minikube dockerd[2157]: time="2020-01-03T22:13:25.768667004Z" level=info msg="shim reaped" id=bba9a97407d481254eb866f943c839ee18b613e74db34c196fb508ad6db39c42
Jan 03 22:13:25 minikube dockerd[2157]: time="2020-01-03T22:13:25.778715051Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 03 22:13:25 minikube dockerd[2157]: time="2020-01-03T22:13:25.778788142Z" level=warning msg="bba9a97407d481254eb866f943c839ee18b613e74db34c196fb508ad6db39c42 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/bba9a97407d481254eb866f943c839ee18b613e74db34c196fb508ad6db39c42/mounts/shm, flags: 0x2: no such file or directory"
Jan 03 22:13:26 minikube dockerd[2157]: time="2020-01-03T22:13:26.034852318Z" level=info msg="shim reaped" id=ca43ede32e87189e0ad80e2495a82e6e081d12b89772a3bead181858c65daca7
Jan 03 22:13:26 minikube dockerd[2157]: time="2020-01-03T22:13:26.045000276Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 03 22:13:37 minikube dockerd[2157]: time="2020-01-03T22:13:37.444217197Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b067107ed52102434057e7b46122b8b2311dedb5263f1fe230e4801962105d8d/shim.sock" debug=false pid=12536
Jan 03 22:13:38 minikube dockerd[2157]: time="2020-01-03T22:13:38.025064694Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/72fef55da20f1d1396fcb4b7ad5dadfa07a66efe7a6e23b498a9f4191765190f/shim.sock" debug=false pid=12603
Jan 03 22:16:02 minikube dockerd[2157]: time="2020-01-03T22:16:02.827981128Z" level=info msg="shim reaped" id=2238eebe789adaee7c106480b00204b65ba54ac0c929477d97537ced524ef782
Jan 03 22:16:02 minikube dockerd[2157]: time="2020-01-03T22:16:02.838108563Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 03 22:16:02 minikube dockerd[2157]: time="2020-01-03T22:16:02.838301844Z" level=warning msg="2238eebe789adaee7c106480b00204b65ba54ac0c929477d97537ced524ef782 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/2238eebe789adaee7c106480b00204b65ba54ac0c929477d97537ced524ef782/mounts/shm, flags: 0x2: no such file or directory"
Jan 03 22:16:03 minikube dockerd[2157]: time="2020-01-03T22:16:03.097401973Z" level=info msg="shim reaped" id=4680e63ef0c16378fc589ee17e8adac80673badf3af1c4b672ffcb59aa5a7756
Jan 03 22:16:03 minikube dockerd[2157]: time="2020-01-03T22:16:03.107546250Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 03 22:16:19 minikube dockerd[2157]: time="2020-01-03T22:16:19.073487743Z" level=error msg="Handler for POST /exec/1078761deb1975731a79519b837330d365e58eb9694b09acaab92f1f7d6a0754/resize returned error: cannot resize a stopped container: unknown"
Jan 03 22:16:23 minikube dockerd[2157]: time="2020-01-03T22:16:23.489210533Z" level=error msg="Handler for POST /exec/e3cc30b5c9b59d2da474249f301d7b2c7721f56c27d11198cdb56a647f8b3f85/resize returned error: cannot resize a stopped container: unknown"

==> container status <==
CONTAINER           IMAGE                                                                                                                                    CREATED             STATE               NAME                        ATTEMPT             POD ID
72fef55da20f1       497f54bbc45ef                                                                                                                            4 minutes ago       Running             glusterfs                   0                   b067107ed5210
0155c2f70bd37       gluster/glusterfile-provisioner@sha256:9961a35cb3f06701958e202324141c30024b195579e5eb1704599659ddea5223                                  2 hours ago         Running             glusterfile-provisioner     0                   6bbfd1b6bd836
87c6662d74223       k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892                                  2 hours ago         Running             metrics-server              0                   af68be523a9d6
b321b304a6133       quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:d0b22f715fcea5598ef7f869d308b55289a3daaa12922fa52a1abf17703c88e7   2 hours ago         Running             nginx-ingress-controller    0                   fb9586808b19b
21217338bb132       3ad9ca98f56c4                                                                                                                            2 hours ago         Running             heketi                      1                   245ad6ccae4da
6a1d5a5c02361       heketi/heketi@sha256:829150e31bd4af27019dd4c2893519d5aabee9c688eaa9360108db1e0affd79b                                                    2 hours ago         Exited              heketi                      0                   245ad6ccae4da
49ac42ed8fb29       cryptexlabs/minikube-ingress-dns@sha256:d07dfd1b882d8ee70d71514434c10fdd8c54d347b5a883323154d6096f1e8c67                                 2 hours ago         Running             minikube-ingress-dns        0                   bea9789d935ad
7bc8ee281dd23       70f311871ae12                                                                                                                            2 hours ago         Running             coredns                     0                   84706ad811d52
cd67465120d4b       70f311871ae12                                                                                                                            2 hours ago         Running             coredns                     0                   f71e516f68194
8db287dd4bd92       eb51a35975256                                                                                                                            2 hours ago         Running             kubernetes-dashboard        0                   6ffde22a160e8
573be02720b69       3b08661dc379d                                                                                                                            2 hours ago         Running             dashboard-metrics-scraper   0                   bcf1e28e2e6d5
78aec03ba1642       4689081edb103                                                                                                                            2 hours ago         Running             storage-provisioner         0                   96788271f1077
ebf5e040f1912       7d54289267dc5                                                                                                                            2 hours ago         Running             kube-proxy                  0                   13cd15a5291d4
5157bbfdef0dc       0cae8d5cc64c7                                                                                                                            2 hours ago         Running             kube-apiserver              0                   8fe2a5960531e
a4cef92ccb237       303ce5db0e90d                                                                                                                            2 hours ago         Running             etcd                        0                   f4a721d4cbd6e
01a6d327f48b4       78c190f736b11                                                                                                                            2 hours ago         Running             kube-scheduler              0                   7ded2e48dac4a
0e3813578da1d       5eb3b74868724                                                                                                                            2 hours ago         Running             kube-controller-manager     0                   f25ebd37dc51f
88334102eeb72       bd12a212f9dcb                                                                                                                            2 hours ago         Running             kube-addon-manager          0                   c4c16aa055686

==> coredns ["7bc8ee281dd2"] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

==> coredns ["cd67465120d4"] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

==> dmesg <==
[Jan 3 20:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[  +0.032466] Decoding supported only on Scalable MCA processors.
[  +0.000065]  #2
[  +0.000629]  #3
[ +17.753504] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +0.852962] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[  +0.004189] systemd-fstab-generator[1167]: Ignoring "noauto" for root device
[  +0.003259] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[  +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  +0.774816] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[  +0.237120] vboxguest: loading out-of-tree module taints kernel.
[  +0.002421] vboxguest: PCI device not found, probably running on physical hardware.
[  +5.863225] systemd-fstab-generator[2057]: Ignoring "noauto" for root device
[Jan 3 20:33] systemd-fstab-generator[2958]: Ignoring "noauto" for root device
[  +8.185462] systemd-fstab-generator[3338]: Ignoring "noauto" for root device
[ +22.765128] kauditd_printk_skb: 68 callbacks suppressed
[ +11.699030] systemd-fstab-generator[4840]: Ignoring "noauto" for root device
[  +5.889503] NFSD: Unable to end grace period: -110
[Jan 3 20:34] kauditd_printk_skb: 29 callbacks suppressed
[  +9.135181] kauditd_printk_skb: 89 callbacks suppressed
[  +5.237990] kauditd_printk_skb: 8 callbacks suppressed
[Jan 3 20:35] kauditd_printk_skb: 14 callbacks suppressed
[Jan 3 20:39] kauditd_printk_skb: 27 callbacks suppressed
[Jan 3 20:45] kauditd_printk_skb: 5 callbacks suppressed
[Jan 3 20:46] kauditd_printk_skb: 2 callbacks suppressed
[Jan 3 22:06] systemd-sysv-generator[3229]: Failed to create unit file /run/systemd/generator.late/netconsole.service: File exists
[  +0.000011] systemd-sysv-generator[3229]: Failed to create unit file /run/systemd/generator.late/network.service: File exists
[Jan 3 22:08] kauditd_printk_skb: 10 callbacks suppressed
[Jan 3 22:11] kauditd_printk_skb: 27 callbacks suppressed
[Jan 3 22:12] systemd-sysv-generator[447]: Failed to create unit file /run/systemd/generator.late/netconsole.service: File exists
[  +0.000010] systemd-sysv-generator[447]: Failed to create unit file /run/systemd/generator.late/network.service: File exists
[Jan 3 22:14] kauditd_printk_skb: 11 callbacks suppressed

==> kernel <==
 22:17:48 up  1:46,  0 users,  load average: 0.22, 0.39, 0.46
Linux minikube 4.19.81 #1 SMP Tue Dec 10 16:09:50 PST 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.7"

==> kube-addon-manager ["88334102eeb7"] <==
error: no objects passed to apply
serviceaccount/nginx-ingress unchanged
clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged
role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged
deployment.apps/metrics-server unchanged
service/metrics-server unchanged
namespace/storage-gluster unchanged
clusterrole.rbac.authorization.k8s.io/glusterfile-provisioner-runner unchanged
serviceaccount/glusterfile-provisioner unchanged
clusterrolebinding.rbac.authorization.k8s.io/glusterfile-provisioner unchanged
deployment.apps/glusterfile-provisioner unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-03T22:17:39+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-03T22:17:39+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
error: no objects passed to apply
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/heketi-service-account unchanged
clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view unchanged
service/heketi unchanged
configmap/heketi-topology unchanged
deployment.apps/heketi unchanged
serviceaccount/minikube-ingress-dns unchanged
clusterrole.rbac.authorization.k8s.io/minikube-ingress-dns unchanged
clusterrolebinding.rbac.authorization.k8s.io/minikube-ingress-dns unchanged
pod/kube-ingress-dns-minikube unchanged
deployment.apps/nginx-ingress-controller unchanged
serviceaccount/nginx-ingress unchanged
clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged
role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged
deployment.apps/metrics-server unchanged
service/metrics-server unchanged
namespace/storage-gluster unchanged
clusterrole.rbac.authorization.k8s.io/glusterfile-provisioner-runner unchanged
serviceaccount/glusterfile-provisioner unchanged
clusterrolebinding.rbac.authorization.k8s.io/glusterfile-provisioner unchanged
deployment.apps/glusterfile-provisioner unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-03T22:17:47+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-03T22:17:47+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==

==> kube-apiserver ["5157bbfdef0d"] <==
I0103 21:44:42.230820       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0103 21:45:20.034164       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
I0103 21:46:42.230954       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 21:46:42.233293       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 21:46:42.233310       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0103 21:48:42.227883       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 21:48:42.230114       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 21:48:42.230136       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0103 21:49:42.230297       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 21:49:42.232978       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 21:49:42.233138       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0103 21:51:42.233348       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 21:51:42.235945       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 21:51:42.235970       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0103 21:53:42.230299       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 21:53:42.232872       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 21:53:42.232888       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0103 21:54:42.233069       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 21:54:42.235374       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 21:54:42.235400       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0103 21:56:42.235616       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 21:56:42.238248       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 21:56:42.238331       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0103 21:58:42.231717       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 21:58:42.235084       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 21:58:42.235166       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0103 21:59:21.061908       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
I0103 21:59:42.235351       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 21:59:42.241414       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 21:59:42.241435       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0103 22:01:42.241684       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 22:01:42.245060       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 22:01:42.245084       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0103 22:03:42.234424       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 22:03:42.237200       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 22:03:42.237274       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0103 22:04:42.237595       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 22:04:42.240285       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 22:04:42.240304       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0103 22:06:42.240517       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 22:06:42.243653       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 22:06:42.243675       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0103 22:08:42.236908       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 22:08:42.240053       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 22:08:42.240077       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0103 22:09:42.240278       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 22:09:42.243066       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 22:09:42.243083       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0103 22:11:42.243314       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 22:11:42.246358       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 22:11:42.246381       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0103 22:13:42.238821       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 22:13:42.241558       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 22:13:42.241600       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0103 22:14:42.241893       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 22:14:42.244646       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 22:14:42.244666       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0103 22:16:42.244855       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0103 22:16:42.247846       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0103 22:16:42.247861       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.

==> kube-controller-manager ["0e3813578da1"] <==
I0103 22:13:07.161999       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"14778", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:13:08.130836       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"14849", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:13:10.137807       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"14856", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:13:12.145834       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"14862", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:13:14.159589       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"14868", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:13:16.166216       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"14875", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:13:18.173541       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"14881", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:13:20.180023       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"14887", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:13:22.162148       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"14887", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:13:22.186729       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"14892", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:13:24.194903       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"14900", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:13:36.519401       1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"storage-gluster", Name:"glusterfs", UID:"0f034140-d228-4237-8881-8e64581c7f77", APIVersion:"apps/v1", ResourceVersion:"14908", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: glusterfs-k9vqb
I0103 22:13:37.162638       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"14900", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:13:52.162644       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"14900", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:13:53.132793       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"14983", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:13:55.141508       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"14990", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:13:57.147238       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"14996", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:13:59.153884       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15003", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:01.161200       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15009", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:03.167092       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15015", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:05.173326       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15021", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:07.162783       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15021", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:07.179711       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15026", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:09.189108       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15033", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:11.197181       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15038", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:13.204845       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15044", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:15.209558       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15051", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:17.218248       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15056", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:19.224364       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15062", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:21.230591       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15068", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:22.162971       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15068", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:23.240794       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15075", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:37.163147       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15075", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:52.163389       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15075", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:53.130363       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15150", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:55.138814       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15156", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:57.160527       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15161", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:14:59.168254       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15167", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:01.174681       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15173", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:03.179985       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15178", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:05.189216       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15184", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:07.163590       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15184", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:07.196305       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15189", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:09.202933       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15196", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:11.208340       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15201", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:13.214139       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15206", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:15.219664       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15213", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:17.225369       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15218", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:19.230682       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15224", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:21.236733       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15229", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:22.163798       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15229", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:23.243644       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15235", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:37.164240       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15235", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:52.164302       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15235", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:53.133705       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15303", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:55.140295       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15309", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:57.145290       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15314", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:15:59.152640       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15320", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
I0103 22:16:00.883964       1 stateful_set.go:420] StatefulSet has been deleted wp-test/wp-test-mariadb
I0103 22:16:00.914433       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"wp-test", Name:"wp-test-wordpress", UID:"cfaae698-a3e2-42f3-b1e6-d3dd27c172bf", APIVersion:"v1", ResourceVersion:"15339", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator

==> kube-proxy ["ebf5e040f191"] <==
W0103 20:34:09.117106       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I0103 20:34:09.124973       1 node.go:135] Successfully retrieved node IP: 192.168.39.175
I0103 20:34:09.125002       1 server_others.go:145] Using iptables Proxier.
W0103 20:34:09.125095       1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0103 20:34:09.125331       1 server.go:571] Version: v1.17.0
I0103 20:34:09.125752       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0103 20:34:09.125777       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0103 20:34:09.126088       1 conntrack.go:83] Setting conntrack hashsize to 32768
I0103 20:34:09.132663       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0103 20:34:09.132729       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0103 20:34:09.132876       1 config.go:131] Starting endpoints config controller
I0103 20:34:09.133089       1 config.go:313] Starting service config controller
I0103 20:34:09.133107       1 shared_informer.go:197] Waiting for caches to sync for service config
I0103 20:34:09.132912       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0103 20:34:09.233477       1 shared_informer.go:204] Caches are synced for service config 
I0103 20:34:09.233775       1 shared_informer.go:204] Caches are synced for endpoints config 

==> kube-scheduler ["01a6d327f48b"] <==
E0103 22:13:16.323718       1 framework.go:411] error while running "VolumeBinding" filter plugin for pod "wp-test-wordpress-75bcf4685f-zp4d2": pod has unbound immediate PersistentVolumeClaims
E0103 22:13:16.323788       1 factory.go:469] Error scheduling wp-test/wp-test-wordpress-75bcf4685f-zp4d2: error while running "VolumeBinding" filter plugin for pod "wp-test-wordpress-75bcf4685f-zp4d2": pod has unbound immediate PersistentVolumeClaims; retrying
E0103 22:13:16.324384       1 scheduler.go:638] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "wp-test-wordpress-75bcf4685f-zp4d2": pod has unbound immediate PersistentVolumeClaims
E0103 22:13:25.325086       1 framework.go:411] error while running "VolumeBinding" filter plugin for pod "wp-test-wordpress-75bcf4685f-zp4d2": pod has unbound immediate PersistentVolumeClaims
E0103 22:13:25.325150       1 factory.go:469] Error scheduling wp-test/wp-test-wordpress-75bcf4685f-zp4d2: error while running "VolumeBinding" filter plugin for pod "wp-test-wordpress-75bcf4685f-zp4d2": pod has unbound immediate PersistentVolumeClaims; retrying
E0103 22:13:25.325307       1 scheduler.go:638] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "wp-test-wordpress-75bcf4685f-zp4d2": pod has unbound immediate PersistentVolumeClaims
E0103 22:13:36.482428       1 framework.go:411] error while running "VolumeBinding" filter plugin for pod "wp-test-wordpress-75bcf4685f-zp4d2": pod has unbound immediate PersistentVolumeClaims
E0103 22:13:36.482499       1 factory.go:469] Error scheduling wp-test/wp-test-wordpress-75bcf4685f-zp4d2: error while running "VolumeBinding" filter plugin for pod "wp-test-wordpress-75bcf4685f-zp4d2": pod has unbound immediate PersistentVolumeClaims; retrying
E0103 22:13:36.482540       1 scheduler.go:638] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "wp-test-wordpress-75bcf4685f-zp4d2": pod has unbound immediate PersistentVolumeClaims
E0103 22:13:53.132025       1 framework.go:411] error while running "VolumeBinding" filter plugin for pod "wp-test-wordpress-75bcf4685f-zp4d2": pod has unbound immediate PersistentVolumeClaims
E0103 22:13:53.132099       1 factory.go:469] Error scheduling wp-test/wp-test-wordpress-75bcf4685f-zp4d2: error while running "VolumeBinding" filter plugin for pod "wp-test-wordpress-75bcf4685f-zp4d2": pod has unbound immediate PersistentVolumeClaims; retrying
E0103 22:13:53.132128       1 scheduler.go:638] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "wp-test-wordpress-75bcf4685f-zp4d2": pod has unbound immediate PersistentVolumeClaims
[... the same three "VolumeBinding" / "pod has unbound immediate PersistentVolumeClaims" errors repeat every few seconds until 22:15:57 ...]

==> kubelet <==
-- Logs begin at Fri 2020-01-03 20:31:48 UTC, end at Fri 2020-01-03 22:17:48 UTC. --
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.751560    4849 reconciler.go:183] operationExecutor.UnmountVolume started for volume "default-token-n6n6c" (UniqueName: "kubernetes.io/secret/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-default-token-n6n6c") pod "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21" (UID: "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21")
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.751627    4849 reconciler.go:183] operationExecutor.UnmountVolume started for volume "glusterfs-heketi" (UniqueName: "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-heketi") pod "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21" (UID: "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21")
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.751655    4849 reconciler.go:183] operationExecutor.UnmountVolume started for volume "fake-disk" (UniqueName: "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-fake-disk") pod "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21" (UID: "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21")
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.751691    4849 reconciler.go:183] operationExecutor.UnmountVolume started for volume "glusterfs-misc" (UniqueName: "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-misc") pod "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21" (UID: "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21")
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.751780    4849 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-misc" (OuterVolumeSpecName: "glusterfs-misc") pod "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21" (UID: "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21"). InnerVolumeSpecName "glusterfs-misc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.751781    4849 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-config" (OuterVolumeSpecName: "glusterfs-config") pod "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21" (UID: "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21"). InnerVolumeSpecName "glusterfs-config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.751824    4849 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-logs" (OuterVolumeSpecName: "glusterfs-logs") pod "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21" (UID: "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21"). InnerVolumeSpecName "glusterfs-logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.751833    4849 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-kernel-modules" (OuterVolumeSpecName: "kernel-modules") pod "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21" (UID: "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21"). InnerVolumeSpecName "kernel-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.751849    4849 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-lvm" (OuterVolumeSpecName: "glusterfs-lvm") pod "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21" (UID: "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21"). InnerVolumeSpecName "glusterfs-lvm". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.751858    4849 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-cgroup" (OuterVolumeSpecName: "glusterfs-cgroup") pod "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21" (UID: "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21"). InnerVolumeSpecName "glusterfs-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.751884    4849 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-dev" (OuterVolumeSpecName: "glusterfs-dev") pod "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21" (UID: "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21"). InnerVolumeSpecName "glusterfs-dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 22:13:26 minikube kubelet[4849]: W0103 22:13:26.751931    4849 empty_dir.go:418] Warning: Failed to clear quota on /var/lib/kubelet/pods/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21/volumes/kubernetes.io~empty-dir/glusterfs-run: ClearQuota called, but quotas disabled
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.752353    4849 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-heketi" (OuterVolumeSpecName: "glusterfs-heketi") pod "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21" (UID: "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21"). InnerVolumeSpecName "glusterfs-heketi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.752405    4849 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-fake-disk" (OuterVolumeSpecName: "fake-disk") pod "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21" (UID: "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21"). InnerVolumeSpecName "fake-disk". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.752448    4849 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-ssl" (OuterVolumeSpecName: "glusterfs-ssl") pod "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21" (UID: "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21"). InnerVolumeSpecName "glusterfs-ssl". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.756406    4849 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-run" (OuterVolumeSpecName: "glusterfs-run") pod "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21" (UID: "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21"). InnerVolumeSpecName "glusterfs-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.766203    4849 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-default-token-n6n6c" (OuterVolumeSpecName: "default-token-n6n6c") pod "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21" (UID: "92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21"). InnerVolumeSpecName "default-token-n6n6c". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 22:13:26 minikube kubelet[4849]: E0103 22:13:26.766477    4849 remote_runtime.go:295] ContainerStatus "bba9a97407d481254eb866f943c839ee18b613e74db34c196fb508ad6db39c42" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: bba9a97407d481254eb866f943c839ee18b613e74db34c196fb508ad6db39c42
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.851984    4849 reconciler.go:303] Volume detached for volume "glusterfs-cgroup" (UniqueName: "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-cgroup") on node "minikube" DevicePath ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.852172    4849 reconciler.go:303] Volume detached for volume "glusterfs-dev" (UniqueName: "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-dev") on node "minikube" DevicePath ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.852275    4849 reconciler.go:303] Volume detached for volume "glusterfs-ssl" (UniqueName: "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-ssl") on node "minikube" DevicePath ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.852376    4849 reconciler.go:303] Volume detached for volume "glusterfs-run" (UniqueName: "kubernetes.io/empty-dir/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-run") on node "minikube" DevicePath ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.852470    4849 reconciler.go:303] Volume detached for volume "glusterfs-config" (UniqueName: "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-config") on node "minikube" DevicePath ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.852594    4849 reconciler.go:303] Volume detached for volume "glusterfs-logs" (UniqueName: "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-logs") on node "minikube" DevicePath ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.852696    4849 reconciler.go:303] Volume detached for volume "glusterfs-lvm" (UniqueName: "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-lvm") on node "minikube" DevicePath ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.852795    4849 reconciler.go:303] Volume detached for volume "default-token-n6n6c" (UniqueName: "kubernetes.io/secret/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-default-token-n6n6c") on node "minikube" DevicePath ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.852892    4849 reconciler.go:303] Volume detached for volume "glusterfs-heketi" (UniqueName: "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-heketi") on node "minikube" DevicePath ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.852984    4849 reconciler.go:303] Volume detached for volume "fake-disk" (UniqueName: "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-fake-disk") on node "minikube" DevicePath ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.853073    4849 reconciler.go:303] Volume detached for volume "glusterfs-misc" (UniqueName: "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-glusterfs-misc") on node "minikube" DevicePath ""
Jan 03 22:13:26 minikube kubelet[4849]: I0103 22:13:26.853161    4849 reconciler.go:303] Volume detached for volume "kernel-modules" (UniqueName: "kubernetes.io/host-path/92df17d9-8d85-4fe3-8fbd-9dfcfd9f5a21-kernel-modules") on node "minikube" DevicePath ""
Jan 03 22:13:36 minikube kubelet[4849]: I0103 22:13:36.578600    4849 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-heketi" (UniqueName: "kubernetes.io/host-path/e747e706-ab94-448d-9bfc-3401a9b2a8b5-glusterfs-heketi") pod "glusterfs-k9vqb" (UID: "e747e706-ab94-448d-9bfc-3401a9b2a8b5")
Jan 03 22:13:36 minikube kubelet[4849]: I0103 22:13:36.578654    4849 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-config" (UniqueName: "kubernetes.io/host-path/e747e706-ab94-448d-9bfc-3401a9b2a8b5-glusterfs-config") pod "glusterfs-k9vqb" (UID: "e747e706-ab94-448d-9bfc-3401a9b2a8b5")
Jan 03 22:13:36 minikube kubelet[4849]: I0103 22:13:36.578693    4849 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "fake-disk" (UniqueName: "kubernetes.io/host-path/e747e706-ab94-448d-9bfc-3401a9b2a8b5-fake-disk") pod "glusterfs-k9vqb" (UID: "e747e706-ab94-448d-9bfc-3401a9b2a8b5")
Jan 03 22:13:36 minikube kubelet[4849]: I0103 22:13:36.578722    4849 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-cgroup" (UniqueName: "kubernetes.io/host-path/e747e706-ab94-448d-9bfc-3401a9b2a8b5-glusterfs-cgroup") pod "glusterfs-k9vqb" (UID: "e747e706-ab94-448d-9bfc-3401a9b2a8b5")
Jan 03 22:13:36 minikube kubelet[4849]: I0103 22:13:36.578749    4849 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-misc" (UniqueName: "kubernetes.io/host-path/e747e706-ab94-448d-9bfc-3401a9b2a8b5-glusterfs-misc") pod "glusterfs-k9vqb" (UID: "e747e706-ab94-448d-9bfc-3401a9b2a8b5")
Jan 03 22:13:36 minikube kubelet[4849]: I0103 22:13:36.578774    4849 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-ssl" (UniqueName: "kubernetes.io/host-path/e747e706-ab94-448d-9bfc-3401a9b2a8b5-glusterfs-ssl") pod "glusterfs-k9vqb" (UID: "e747e706-ab94-448d-9bfc-3401a9b2a8b5")
Jan 03 22:13:36 minikube kubelet[4849]: I0103 22:13:36.578802    4849 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-run" (UniqueName: "kubernetes.io/empty-dir/e747e706-ab94-448d-9bfc-3401a9b2a8b5-glusterfs-run") pod "glusterfs-k9vqb" (UID: "e747e706-ab94-448d-9bfc-3401a9b2a8b5")
Jan 03 22:13:36 minikube kubelet[4849]: I0103 22:13:36.578830    4849 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-lvm" (UniqueName: "kubernetes.io/host-path/e747e706-ab94-448d-9bfc-3401a9b2a8b5-glusterfs-lvm") pod "glusterfs-k9vqb" (UID: "e747e706-ab94-448d-9bfc-3401a9b2a8b5")
Jan 03 22:13:36 minikube kubelet[4849]: I0103 22:13:36.578900    4849 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kernel-modules" (UniqueName: "kubernetes.io/host-path/e747e706-ab94-448d-9bfc-3401a9b2a8b5-kernel-modules") pod "glusterfs-k9vqb" (UID: "e747e706-ab94-448d-9bfc-3401a9b2a8b5")
Jan 03 22:13:36 minikube kubelet[4849]: I0103 22:13:36.578955    4849 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-logs" (UniqueName: "kubernetes.io/host-path/e747e706-ab94-448d-9bfc-3401a9b2a8b5-glusterfs-logs") pod "glusterfs-k9vqb" (UID: "e747e706-ab94-448d-9bfc-3401a9b2a8b5")
Jan 03 22:13:36 minikube kubelet[4849]: I0103 22:13:36.579007    4849 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-dev" (UniqueName: "kubernetes.io/host-path/e747e706-ab94-448d-9bfc-3401a9b2a8b5-glusterfs-dev") pod "glusterfs-k9vqb" (UID: "e747e706-ab94-448d-9bfc-3401a9b2a8b5")
Jan 03 22:13:36 minikube kubelet[4849]: I0103 22:13:36.579056    4849 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-n6n6c" (UniqueName: "kubernetes.io/secret/e747e706-ab94-448d-9bfc-3401a9b2a8b5-default-token-n6n6c") pod "glusterfs-k9vqb" (UID: "e747e706-ab94-448d-9bfc-3401a9b2a8b5")
Jan 03 22:13:37 minikube kubelet[4849]: E0103 22:13:37.809944    4849 remote_runtime.go:295] ContainerStatus "72fef55da20f1d1396fcb4b7ad5dadfa07a66efe7a6e23b498a9f4191765190f" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 72fef55da20f1d1396fcb4b7ad5dadfa07a66efe7a6e23b498a9f4191765190f
Jan 03 22:13:37 minikube kubelet[4849]: E0103 22:13:37.809996    4849 kuberuntime_manager.go:955] getPodContainerStatuses for pod "glusterfs-k9vqb_storage-gluster(e747e706-ab94-448d-9bfc-3401a9b2a8b5)" failed: rpc error: code = Unknown desc = Error: No such container: 72fef55da20f1d1396fcb4b7ad5dadfa07a66efe7a6e23b498a9f4191765190f
Jan 03 22:13:50 minikube kubelet[4849]: W0103 22:13:50.521041    4849 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/b2cf435c-6dbb-4d7a-9931-813eeb33eb28/volumes/kubernetes.io~configmap/config and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Jan 03 22:13:50 minikube kubelet[4849]: W0103 22:13:50.521633    4849 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/b2cf435c-6dbb-4d7a-9931-813eeb33eb28/volumes/kubernetes.io~secret/default-token-svxzp and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Jan 03 22:15:02 minikube kubelet[4849]: W0103 22:15:02.516676    4849 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/b2cf435c-6dbb-4d7a-9931-813eeb33eb28/volumes/kubernetes.io~secret/default-token-svxzp and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Jan 03 22:15:02 minikube kubelet[4849]: W0103 22:15:02.516889    4849 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/b2cf435c-6dbb-4d7a-9931-813eeb33eb28/volumes/kubernetes.io~configmap/config and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Jan 03 22:16:03 minikube kubelet[4849]: I0103 22:16:03.788632    4849 reconciler.go:183] operationExecutor.UnmountVolume started for volume "default-token-svxzp" (UniqueName: "kubernetes.io/secret/b2cf435c-6dbb-4d7a-9931-813eeb33eb28-default-token-svxzp") pod "b2cf435c-6dbb-4d7a-9931-813eeb33eb28" (UID: "b2cf435c-6dbb-4d7a-9931-813eeb33eb28")
Jan 03 22:16:03 minikube kubelet[4849]: I0103 22:16:03.789537    4849 reconciler.go:183] operationExecutor.UnmountVolume started for volume "data" (UniqueName: "kubernetes.io/glusterfs/b2cf435c-6dbb-4d7a-9931-813eeb33eb28-pvc-003cd546-1e8c-422f-acc0-e74e8f7fb932") pod "b2cf435c-6dbb-4d7a-9931-813eeb33eb28" (UID: "b2cf435c-6dbb-4d7a-9931-813eeb33eb28")
Jan 03 22:16:03 minikube kubelet[4849]: I0103 22:16:03.789930    4849 reconciler.go:183] operationExecutor.UnmountVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/b2cf435c-6dbb-4d7a-9931-813eeb33eb28-config") pod "b2cf435c-6dbb-4d7a-9931-813eeb33eb28" (UID: "b2cf435c-6dbb-4d7a-9931-813eeb33eb28")
Jan 03 22:16:03 minikube kubelet[4849]: I0103 22:16:03.798330    4849 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/glusterfs/b2cf435c-6dbb-4d7a-9931-813eeb33eb28-pvc-003cd546-1e8c-422f-acc0-e74e8f7fb932" (OuterVolumeSpecName: "data") pod "b2cf435c-6dbb-4d7a-9931-813eeb33eb28" (UID: "b2cf435c-6dbb-4d7a-9931-813eeb33eb28"). InnerVolumeSpecName "pvc-003cd546-1e8c-422f-acc0-e74e8f7fb932". PluginName "kubernetes.io/glusterfs", VolumeGidValue "2010"
Jan 03 22:16:03 minikube kubelet[4849]: I0103 22:16:03.799766    4849 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2cf435c-6dbb-4d7a-9931-813eeb33eb28-default-token-svxzp" (OuterVolumeSpecName: "default-token-svxzp") pod "b2cf435c-6dbb-4d7a-9931-813eeb33eb28" (UID: "b2cf435c-6dbb-4d7a-9931-813eeb33eb28"). InnerVolumeSpecName "default-token-svxzp". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 22:16:03 minikube kubelet[4849]: W0103 22:16:03.803176    4849 empty_dir.go:418] Warning: Failed to clear quota on /var/lib/kubelet/pods/b2cf435c-6dbb-4d7a-9931-813eeb33eb28/volumes/kubernetes.io~configmap/config: ClearQuota called, but quotas disabled
Jan 03 22:16:03 minikube kubelet[4849]: I0103 22:16:03.803439    4849 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2cf435c-6dbb-4d7a-9931-813eeb33eb28-config" (OuterVolumeSpecName: "config") pod "b2cf435c-6dbb-4d7a-9931-813eeb33eb28" (UID: "b2cf435c-6dbb-4d7a-9931-813eeb33eb28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 22:16:03 minikube kubelet[4849]: E0103 22:16:03.810704    4849 remote_runtime.go:295] ContainerStatus "2238eebe789adaee7c106480b00204b65ba54ac0c929477d97537ced524ef782" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 2238eebe789adaee7c106480b00204b65ba54ac0c929477d97537ced524ef782
Jan 03 22:16:03 minikube kubelet[4849]: I0103 22:16:03.890350    4849 reconciler.go:303] Volume detached for volume "default-token-svxzp" (UniqueName: "kubernetes.io/secret/b2cf435c-6dbb-4d7a-9931-813eeb33eb28-default-token-svxzp") on node "minikube" DevicePath ""
Jan 03 22:16:03 minikube kubelet[4849]: I0103 22:16:03.890391    4849 reconciler.go:303] Volume detached for volume "pvc-003cd546-1e8c-422f-acc0-e74e8f7fb932" (UniqueName: "kubernetes.io/glusterfs/b2cf435c-6dbb-4d7a-9931-813eeb33eb28-pvc-003cd546-1e8c-422f-acc0-e74e8f7fb932") on node "minikube" DevicePath ""
Jan 03 22:16:03 minikube kubelet[4849]: I0103 22:16:03.890403    4849 reconciler.go:303] Volume detached for volume "config" (UniqueName: "kubernetes.io/configmap/b2cf435c-6dbb-4d7a-9931-813eeb33eb28-config") on node "minikube" DevicePath ""
Jan 03 22:16:23 minikube kubelet[4849]: E0103 22:16:23.425909    4849 upgradeaware.go:357] Error proxying data from client to backend: readfrom tcp 127.0.0.1:56788->127.0.0.1:42851: write tcp 127.0.0.1:56788->127.0.0.1:42851: write: broken pipe

==> kubernetes-dashboard ["8db287dd4bd9"] <==
2020/01/03 22:09:17 [2020-01-03T22:09:17Z] Outcoming response to 172.17.0.1:43358 with 200 status code
2020/01/03 22:09:19 [2020-01-03T22:09:19Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:43358: 
2020/01/03 22:09:19 Getting list of namespaces
2020/01/03 22:09:19 [2020-01-03T22:09:19Z] Outcoming response to 172.17.0.1:43358 with 200 status code
2020/01/03 22:09:21 [2020-01-03T22:09:21Z] Incoming HTTP/1.1 GET /api/v1/daemonset/storage-gluster/glusterfs request from 172.17.0.1:43358: 
2020/01/03 22:09:21 Getting details of glusterfs daemon set in storage-gluster namespace
2020/01/03 22:09:21 [2020-01-03T22:09:21Z] Outcoming response to 172.17.0.1:43358 with 200 status code
2020/01/03 22:09:21 [2020-01-03T22:09:21Z] Incoming HTTP/1.1 GET /api/v1/daemonset/storage-gluster/glusterfs/pod?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:43358: 
2020/01/03 22:09:21 Getting replication controller glusterfs pods in namespace storage-gluster
2020/01/03 22:09:21 Getting pod metrics
2020/01/03 22:09:21 [2020-01-03T22:09:21Z] Outcoming response to 172.17.0.1:43358 with 200 status code
2020/01/03 22:09:22 [2020-01-03T22:09:22Z] Incoming HTTP/1.1 GET /api/v1/daemonset/storage-gluster/glusterfs/service?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:43358: 
2020/01/03 22:09:22 [2020-01-03T22:09:22Z] Incoming HTTP/1.1 GET /api/v1/daemonset/storage-gluster/glusterfs/event?itemsPerPage=10&page=1 request from 172.17.0.1:43186: 
2020/01/03 22:09:22 [2020-01-03T22:09:22Z] Outcoming response to 172.17.0.1:43358 with 200 status code
2020/01/03 22:09:22 [2020-01-03T22:09:22Z] Outcoming response to 172.17.0.1:43186 with 200 status code
2020/01/03 22:09:24 [2020-01-03T22:09:24Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:43186: 
2020/01/03 22:09:24 Getting list of namespaces
2020/01/03 22:09:24 [2020-01-03T22:09:24Z] Outcoming response to 172.17.0.1:43186 with 200 status code
2020/01/03 22:09:26 [2020-01-03T22:09:26Z] Incoming HTTP/1.1 GET /api/v1/daemonset/storage-gluster/glusterfs request from 172.17.0.1:43186: 
2020/01/03 22:09:26 Getting details of glusterfs daemon set in storage-gluster namespace
2020/01/03 22:09:26 [2020-01-03T22:09:26Z] Outcoming response to 172.17.0.1:43186 with 200 status code
2020/01/03 22:09:26 [2020-01-03T22:09:26Z] Incoming HTTP/1.1 GET /api/v1/daemonset/storage-gluster/glusterfs/pod?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:43186: 
2020/01/03 22:09:26 Getting replication controller glusterfs pods in namespace storage-gluster
2020/01/03 22:09:26 Getting pod metrics
2020/01/03 22:09:26 [2020-01-03T22:09:26Z] Outcoming response to 172.17.0.1:43186 with 200 status code
2020/01/03 22:09:27 [2020-01-03T22:09:27Z] Incoming HTTP/1.1 GET /api/v1/daemonset/storage-gluster/glusterfs/service?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:43186: 
2020/01/03 22:09:27 [2020-01-03T22:09:27Z] Outcoming response to 172.17.0.1:43186 with 200 status code
2020/01/03 22:09:27 [2020-01-03T22:09:27Z] Incoming HTTP/1.1 GET /api/v1/daemonset/storage-gluster/glusterfs/event?itemsPerPage=10&page=1 request from 172.17.0.1:43186: 
2020/01/03 22:09:27 [2020-01-03T22:09:27Z] Outcoming response to 172.17.0.1:43186 with 200 status code
2020/01/03 22:09:29 [2020-01-03T22:09:29Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:43186: 
2020/01/03 22:09:29 Getting list of namespaces
2020/01/03 22:09:29 [2020-01-03T22:09:29Z] Outcoming response to 172.17.0.1:43186 with 200 status code
2020/01/03 22:09:31 [2020-01-03T22:09:31Z] Incoming HTTP/1.1 GET /api/v1/daemonset/storage-gluster/glusterfs request from 172.17.0.1:43186: 
2020/01/03 22:09:31 Getting details of glusterfs daemon set in storage-gluster namespace
2020/01/03 22:09:31 [2020-01-03T22:09:31Z] Outcoming response to 172.17.0.1:43186 with 200 status code
2020/01/03 22:09:31 [2020-01-03T22:09:31Z] Incoming HTTP/1.1 GET /api/v1/daemonset/storage-gluster/glusterfs/pod?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:43186: 
2020/01/03 22:09:31 Getting replication controller glusterfs pods in namespace storage-gluster
2020/01/03 22:09:31 Getting pod metrics
2020/01/03 22:09:31 [2020-01-03T22:09:31Z] Outcoming response to 172.17.0.1:43186 with 200 status code
2020/01/03 22:09:32 [2020-01-03T22:09:32Z] Incoming HTTP/1.1 GET /api/v1/daemonset/storage-gluster/glusterfs/service?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:43186: 
2020/01/03 22:09:32 [2020-01-03T22:09:32Z] Outcoming response to 172.17.0.1:43186 with 200 status code
2020/01/03 22:09:32 [2020-01-03T22:09:32Z] Incoming HTTP/1.1 GET /api/v1/daemonset/storage-gluster/glusterfs/event?itemsPerPage=10&page=1 request from 172.17.0.1:43186: 
2020/01/03 22:09:32 [2020-01-03T22:09:32Z] Outcoming response to 172.17.0.1:43186 with 200 status code
2020/01/03 22:09:34 [2020-01-03T22:09:34Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:43186: 
2020/01/03 22:09:34 Getting list of namespaces
2020/01/03 22:09:34 [2020-01-03T22:09:34Z] Outcoming response to 172.17.0.1:43186 with 200 status code
2020/01/03 22:09:36 [2020-01-03T22:09:36Z] Incoming HTTP/1.1 GET /api/v1/daemonset/storage-gluster/glusterfs request from 172.17.0.1:43186: 
2020/01/03 22:09:36 Getting details of glusterfs daemon set in storage-gluster namespace
2020/01/03 22:09:36 [2020-01-03T22:09:36Z] Outcoming response to 172.17.0.1:43186 with 200 status code
2020/01/03 22:09:36 [2020-01-03T22:09:36Z] Incoming HTTP/1.1 GET /api/v1/daemonset/storage-gluster/glusterfs/pod?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:43186: 
2020/01/03 22:09:36 Getting replication controller glusterfs pods in namespace storage-gluster
2020/01/03 22:09:36 Getting pod metrics
2020/01/03 22:09:36 [2020-01-03T22:09:36Z] Outcoming response to 172.17.0.1:43186 with 200 status code
2020/01/03 22:09:37 [2020-01-03T22:09:37Z] Incoming HTTP/1.1 GET /api/v1/daemonset/storage-gluster/glusterfs/service?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:43186: 
2020/01/03 22:09:37 [2020-01-03T22:09:37Z] Outcoming response to 172.17.0.1:43186 with 200 status code
2020/01/03 22:09:37 [2020-01-03T22:09:37Z] Incoming HTTP/1.1 GET /api/v1/daemonset/storage-gluster/glusterfs/event?itemsPerPage=10&page=1 request from 172.17.0.1:43186: 
2020/01/03 22:09:37 [2020-01-03T22:09:37Z] Outcoming response to 172.17.0.1:43186 with 200 status code
2020/01/03 22:09:39 [2020-01-03T22:09:39Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:43186: 
2020/01/03 22:09:39 Getting list of namespaces
2020/01/03 22:09:39 [2020-01-03T22:09:39Z] Outcoming response to 172.17.0.1:43186 with 200 status code

==> storage-provisioner ["78aec03ba164"] <==

The operating system version: Fedora 31 x86_64

I managed to fix this by:

renich commented 4 years ago

OK, that didn't fix it. The new 50 GiB .img file was created, but I still don't have enough space; Gluster still thinks it's 10 G. I'll dig into it more.

That said, I think it's just better to have the addon honor the disk-size setting.

afbjorklund commented 4 years ago

The disk-size in the minikube config is for the VM disk, and that seems to have worked, as you say.

In order to change the glusterfs deployment, you need to change the storage provisioner.

There seems to be an environment variable, USE_FAKE_SIZE, that you can use (currently 10G)?

      containers:
      - image: quay.io/nixpanic/glusterfs-server:pr_fake-disk
        imagePullPolicy: IfNotPresent
        name: glusterfs
        env:
        - name: USE_FAKE_DISK
          value: "enabled"
        #- name: USE_FAKE_FILE
        #  value: "/srv/fake-disk.img"
        #- name: USE_FAKE_SIZE
        #  value: "10G"
        #- name: USE_FAKE_DEV
        #  value: "/dev/fake"

https://quay.io/repository/nixpanic/glusterfs-server?tag=pr_fake-disk

afbjorklund commented 4 years ago

It seems like the correct variable name is FAKE_DISK_SIZE (not "USE_FAKE_SIZE")

https://github.com/gluster/gluster-containers/blob/master/CentOS/Dockerfile

renich commented 4 years ago

OK, how do I set this up with minikube? I can't use glusterfs for now given the small size of the disk.

afbjorklund commented 4 years ago

I don't think it's a minikube-specific thing; it's how the Kubernetes packaging for glusterfs works?

Most likely you would `kubectl edit` the glusterfs DaemonSet (or make a similar edit in the YAML file).
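Until the addon grows a configuration option, the DaemonSet edit described above can also be done with a one-off `kubectl patch`. A minimal sketch, assuming the names visible in this thread (namespace `storage-gluster`, DaemonSet `glusterfs`, env var `FAKE_DISK_SIZE` from the gluster-containers Dockerfile); the `50G` value and the patch path (first container, appended env entry) are illustrative:

```shell
# JSON patch that appends FAKE_DISK_SIZE to the glusterfs container's env.
PATCH='[{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"FAKE_DISK_SIZE","value":"50G"}}]'

# Sanity-check that the patch is valid JSON before touching the cluster.
echo "$PATCH" | python3 -m json.tool >/dev/null && echo "patch ok"

# Apply only if a cluster is reachable; the addon must already be enabled.
# Note: the fake disk image is created on first start, so the old
# /srv/fake-disk.img has to be removed (or the pod recreated) before the
# new size takes effect.
if kubectl version >/dev/null 2>&1; then
  kubectl -n storage-gluster patch daemonset glusterfs --type=json -p "$PATCH"
fi
```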

renich commented 4 years ago

@afbjorklund I disagree. You can set the CPU and RAM amounts for your minikube k8s cluster. Why not enable setting the size of your storage layer?

afbjorklund commented 4 years ago

It would be fine to add a setting to the gluster addon (somehow), but the disk-size setting is for the size of the VM. Alternatively, one could take up a discussion with the glusterfs packaging for Kubernetes and avoid having such a hard-coded value for /srv/fake-disk.img in the first place, for example by making it a percentage of the disk size. Either way, this isn't exactly a minikube issue.

Currently we are still using the hostpath storage provider as the default for minikube.

afbjorklund commented 4 years ago

But we should change the name of those variables, commented out or not. Maybe even uncomment the size, just to make it perfectly clear where the 10G value can be edited if desired.

renich commented 4 years ago

@afbjorklund yes, you're right.

afbjorklund commented 4 years ago

Actually we do have an addons configuration possibility, but it's not implemented for this one:

❌ glusterfs has no available configuration options

In fact, I think the only addon that has any configuration at the moment is registry-creds.

So maybe we'll just go for the Kubernetes configuration here.


We will need to revisit the storage options in minikube sometime in the future anyway. We need to rewrite the old hostpath provisioner and introduce proper support for CSI.

And when the multinode option becomes available, it will need proper multinode storage... Probably using Ganesha would be the easiest, but GlusterFS could be considered as well.

renich commented 4 years ago

Thank you for the update.

medyagh commented 4 years ago

@renich is this issue solved, or is there anything that we need to fix on our end? (Admittedly, I didn't read the whole discussion.)

renich commented 4 years ago

@medyagh well, @afbjorklund said you would revisit the storage options. I dunno if it can be closed and referenced later.

tstromberg commented 4 years ago

FYI - minikube config set disk-size sets the disk size for minikube - not for gluster.

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

renich commented 4 years ago

Is this ticket gonna be automatically closed?

medyagh commented 4 years ago

I would be happy to accept a PR that adds a setting to the gluster addon to set the internal disk size for Gluster.

Would you be interested in contributing that, @renich?

renich commented 4 years ago

@medyagh I can try. I have no idea where to start, but it'll be fun.

priyawadhwa commented 3 years ago

Hey @renich are you still interested in working on this?

renich commented 3 years ago

> Hey @renich are you still interested in working on this?

I just don't know where to start...

sharifelgamal commented 2 years ago

So currently the gluster daemonset yaml file has a bit that looks like:

      - image: {{.CustomRegistries.GlusterfsServer  | default .ImageRepository | default .Registries.GlusterfsServer }}{{.Images.GlusterfsServer}}
        imagePullPolicy: IfNotPresent
        name: glusterfs
        env:
        - name: USE_FAKE_DISK
          value: "enabled"
        #- name: USE_FAKE_FILE
        #  value: "/srv/fake-disk.img"
        #- name: USE_FAKE_SIZE
        #  value: "10G"
        #- name: USE_FAKE_DEV
        #  value: "/dev/fake"

It looks like we should (a) change these variable names to be correct, and then (b) add an option to minikube addons configure to be able to uncomment/change these. Adding a configure option would involve adding a case to https://github.com/kubernetes/minikube/blob/master/cmd/minikube/cmd/config/configure.go to do that work in the yaml file.
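Once such a configure case exists, the user-facing flow might mirror the existing registry-creds one. This UX is hypothetical — the addon does not accept configuration today, and the prompt text and default are assumptions — shown only to illustrate the proposal:

```shell
# Hypothetical flow once a configure case exists for this addon; the prompt
# and the "10G" default are assumptions, not current minikube behavior.
minikube addons configure storage-provisioner-gluster
# -- Enter the fake disk size (default "10G"): 50G

# Re-enable the addon so the DaemonSet template is re-rendered with the
# new FAKE_DISK_SIZE value.
minikube addons disable storage-provisioner-gluster
minikube addons enable storage-provisioner-gluster
```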