kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

storage-provisioner-gluster fails to provision volume due to Invalid JWT token #8846

Open mikemybytes opened 4 years ago

mikemybytes commented 4 years ago

Steps to reproduce the issue, following the instructions in the storage-provisioner-gluster docs:

  1. minikube start --driver=virtualbox --kubernetes-version=v1.18.3 (fails on k8s v1.15.5 as well)
  2. minikube addons enable storage-provisioner-gluster
  3. Create a pvc.yaml file with the following content (copied from here):
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: website
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 2Mi
      storageClassName: glusterfile
  4. minikube kubectl -- apply -f pvc.yaml
  5. minikube kubectl -- get pvc -A and observe that the website PVC is stuck in the Pending state
  6. minikube kubectl -- describe pvc website

Full output of failed command:

$ minikube kubectl -- describe pvc website 
Name:          website
Namespace:     default
StorageClass:  glusterfile
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   control-plane.alpha.kubernetes.io/leader:
                 {"holderIdentity":"1f877e9d-cff1-11ea-9caa-0242ac110005","leaseDurationSeconds":15,"acquireTime":"2020-07-27T10:10:11Z","renewTime":"2020-...
               volume.beta.kubernetes.io/storage-provisioner: gluster.org/glusterfile
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type     Reason                Age                  From                                                                                                   Message
  ----     ------                ----                 ----                                                                                                   -------
  Normal   Provisioning          17s (x7 over 112s)   gluster.org/glusterfile glusterfile-provisioner-86d86cd7db-v9dbv 1f877e9d-cff1-11ea-9caa-0242ac110005  External provisioner is provisioning volume for claim "default/website"
  Warning  ProvisioningFailed    17s (x7 over 112s)   gluster.org/glusterfile glusterfile-provisioner-86d86cd7db-v9dbv 1f877e9d-cff1-11ea-9caa-0242ac110005  Failed to provision volume with StorageClass "glusterfile": failed to create volume: failed to create gluster volume: Invalid JWT token: signature is invalid (client and server secrets may not match)
  Normal   ExternalProvisioning  11s (x23 over 112s)  persistentvolume-controller                                                                            waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
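The ProvisioningFailed event points at the likely root cause: heketi rejects the JWT that the provisioner signs with its client-side key, which suggests the admin key heketi was started with and the key stored in the Kubernetes Secret consumed by glusterfile-provisioner have diverged. A minimal sketch of the check (all key values below are hypothetical placeholders, not taken from the addon):

```shell
#!/bin/sh
# Sketch: the key heketi verifies JWTs with (jwt.admin.key in heketi.json)
# must equal the key the provisioner reads from its Kubernetes Secret.
# Both values here are placeholders for illustration only.
server_key='My Secret'                            # as configured on the heketi side
client_key=$(printf 'TXkgU2VjcmV0' | base64 -d)   # Secret data is base64-encoded
if [ "$server_key" = "$client_key" ]; then
  echo "secrets match"
else
  echo "secrets differ: provisioner JWTs will be rejected as 'Invalid JWT token'"
fi
```

In a live cluster the two sides could be compared by decoding the relevant Secret in the storage-gluster namespace (e.g. via kubectl get secret -o jsonpath) and reading the heketi pod's configuration; the exact Secret name used by the addon is not shown in this report.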

Full output of minikube start command used:

πŸ˜„  minikube v1.11.0 on Arch 20.0.3
✨  Using the virtualbox driver based on user configuration
πŸ‘  Starting control plane node minikube in cluster minikube
πŸ”₯  Creating virtualbox VM (CPUs=4, Memory=12000MB, Disk=80000MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
πŸ”Ž  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
πŸ„  Done! kubectl is now configured to use "minikube"

Full output of minikube logs command:

minikube logs
==> Docker <==
-- Logs begin at Mon 2020-07-27 10:05:21 UTC, end at Mon 2020-07-27 10:13:19 UTC. --
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.900199349Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.900208960Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.900217122Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.900224828Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.900232272Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.900240444Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.900248266Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.900256037Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.900263656Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.900287844Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.900297115Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.900304948Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.900313728Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.900408301Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.900468960Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.900479359Z" level=info msg="containerd successfully booted in 0.003893s"
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.907059975Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.907142350Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.907159452Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  }] }" module=grpc
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.907167972Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.907903295Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.907929533Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.907962307Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  }] }" module=grpc
Jul 27 10:05:39 minikube dockerd[2663]: time="2020-07-27T10:05:39.908003080Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 27 10:05:40 minikube dockerd[2663]: time="2020-07-27T10:05:40.110673341Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jul 27 10:05:40 minikube dockerd[2663]: time="2020-07-27T10:05:40.110705561Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jul 27 10:05:40 minikube dockerd[2663]: time="2020-07-27T10:05:40.110738978Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Jul 27 10:05:40 minikube dockerd[2663]: time="2020-07-27T10:05:40.110745289Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Jul 27 10:05:40 minikube dockerd[2663]: time="2020-07-27T10:05:40.111006593Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Jul 27 10:05:40 minikube dockerd[2663]: time="2020-07-27T10:05:40.111032816Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Jul 27 10:05:40 minikube dockerd[2663]: time="2020-07-27T10:05:40.111315423Z" level=info msg="Loading containers: start."
Jul 27 10:05:40 minikube dockerd[2663]: time="2020-07-27T10:05:40.194062730Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 27 10:05:40 minikube dockerd[2663]: time="2020-07-27T10:05:40.228466846Z" level=info msg="Loading containers: done."
Jul 27 10:05:40 minikube dockerd[2663]: time="2020-07-27T10:05:40.249679056Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Jul 27 10:05:40 minikube dockerd[2663]: time="2020-07-27T10:05:40.249781989Z" level=info msg="Daemon has completed initialization"
Jul 27 10:05:40 minikube dockerd[2663]: time="2020-07-27T10:05:40.265846214Z" level=info msg="API listen on /var/run/docker.sock"
Jul 27 10:05:40 minikube systemd[1]: Started Docker Application Container Engine.
Jul 27 10:05:40 minikube dockerd[2663]: time="2020-07-27T10:05:40.266039486Z" level=info msg="API listen on [::]:2376"
Jul 27 10:05:52 minikube dockerd[2663]: time="2020-07-27T10:05:52.199414049Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e3189e24b22f7554976de4d58ec8d5b5ab757d3fc55ef639928260f5c0877fd6/shim.sock" debug=false pid=3656
Jul 27 10:05:52 minikube dockerd[2663]: time="2020-07-27T10:05:52.215632664Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6062d80b72c03dbe46fde751475d2072ee7ead6a1c2b7095f31648d8ee4a8de1/shim.sock" debug=false pid=3668
Jul 27 10:05:52 minikube dockerd[2663]: time="2020-07-27T10:05:52.225748533Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/009c9e73e658006a9bc187db985370b693a8aa54056c4650a9bd7a7cc0ec01e6/shim.sock" debug=false pid=3674
Jul 27 10:05:52 minikube dockerd[2663]: time="2020-07-27T10:05:52.226832236Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b0b23aef3a51062f4fab67521a8ba2c1f6563111745429a0c5a4c0016a5b902a/shim.sock" debug=false pid=3682
Jul 27 10:05:52 minikube dockerd[2663]: time="2020-07-27T10:05:52.445100892Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/82d49e386a3eb5e432102700f094b3884f6eb79be0a74850aa68a3e832999d22/shim.sock" debug=false pid=3838
Jul 27 10:05:52 minikube dockerd[2663]: time="2020-07-27T10:05:52.460490935Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/663a8f7210a88ebf87cdc4ce490119f66dfebf264663e76d864ae3761aa96153/shim.sock" debug=false pid=3852
Jul 27 10:05:52 minikube dockerd[2663]: time="2020-07-27T10:05:52.461544661Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/92c84704f028d6995f27d0d3181d2662cbf6258e098907afd9e9330c1108fbaf/shim.sock" debug=false pid=3855
Jul 27 10:05:52 minikube dockerd[2663]: time="2020-07-27T10:05:52.463259850Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/64a48eda5dcba5edd13861184431095c83239cf6944d3f87b384e6862e449b55/shim.sock" debug=false pid=3858
Jul 27 10:06:07 minikube dockerd[2663]: time="2020-07-27T10:06:07.588537453Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/790e884f25db2978046bac416c37aa6905ed0a6079500731c28f3ce8ea46151b/shim.sock" debug=false pid=4628
Jul 27 10:06:07 minikube dockerd[2663]: time="2020-07-27T10:06:07.804137191Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/28703a8f931a5260af0689c8ac2c25d6b6317f05dc29bad8e531f3424e4c4a24/shim.sock" debug=false pid=4673
Jul 27 10:06:07 minikube dockerd[2663]: time="2020-07-27T10:06:07.912524168Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c7c7273d88b8ed8022c40af86790e28fff1eba2ab5467d736503310290111dd7/shim.sock" debug=false pid=4705
Jul 27 10:06:08 minikube dockerd[2663]: time="2020-07-27T10:06:08.118232799Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b4147a2451beb84cfb956989d84a20656b77c5954c93420c104032140ff4ba4f/shim.sock" debug=false pid=4779
Jul 27 10:06:08 minikube dockerd[2663]: time="2020-07-27T10:06:08.368041166Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/05d0f21ed8b4f8dd54254f8363ed596ef835b88a86b4225d9e7e7b7279464de5/shim.sock" debug=false pid=4862
Jul 27 10:06:08 minikube dockerd[2663]: time="2020-07-27T10:06:08.380463078Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c376229e967ccfd3d5fb615ee57b9c606ddf4c85bf8814dd29950420cf99acee/shim.sock" debug=false pid=4877
Jul 27 10:06:08 minikube dockerd[2663]: time="2020-07-27T10:06:08.664820255Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4c6eeff46a3a6bb8725bba5fec4ea4643cc59fbbf251e088968bcffc9fdc83c8/shim.sock" debug=false pid=4977
Jul 27 10:06:08 minikube dockerd[2663]: time="2020-07-27T10:06:08.683547952Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/12c33593c866eefc961508fb08516c4a3152beeaeb16ecaced2b80b0a6b9d4a6/shim.sock" debug=false pid=4993
Jul 27 10:08:07 minikube dockerd[2663]: time="2020-07-27T10:08:07.184082567Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6e7f31d3c7f2f8ad1a37fd2e0b269b61dced9399dd2dda0c055700724fc5ccc4/shim.sock" debug=false pid=5697
Jul 27 10:08:07 minikube dockerd[2663]: time="2020-07-27T10:08:07.237486023Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e63a6b7f5eef58c228217297d7e6cba2631f28f60d6b51d45840592f8c4bfbdd/shim.sock" debug=false pid=5723
Jul 27 10:08:08 minikube dockerd[2663]: time="2020-07-27T10:08:08.125547862Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fbd8a52be66e4208065859302a26210824f0846b5cf60144c0f27a52fcdb3f3c/shim.sock" debug=false pid=5817
Jul 27 10:08:22 minikube dockerd[2663]: time="2020-07-27T10:08:22.326854458Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/306ce1376113f1a47788a15791cbe06c3abb0ec0b1fb9fc3591a01e718384ad4/shim.sock" debug=false pid=5927
Jul 27 10:08:30 minikube dockerd[2663]: time="2020-07-27T10:08:30.967527909Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dc2c1d0388727aee5ddbe0fb518a532b2b3615055d1d0139fbfe94bb8b2a6cec/shim.sock" debug=false pid=6048
Jul 27 10:08:46 minikube dockerd[2663]: time="2020-07-27T10:08:46.548433338Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/099959d7846c2d7c7d7e2d7a6767d7dd2499ad5468d560cd26c4bb269f8477b5/shim.sock" debug=false pid=6300

==> container status <==
CONTAINER           IMAGE                                                                                                       CREATED             STATE               NAME                      ATTEMPT             POD ID
099959d7846c2       quay.io/nixpanic/glusterfs-server@sha256:3c58ae9d4e2007758954879d3f4095533831eb757c64ca6a0e32d1fc53fb6034   4 minutes ago       Running             glusterfs                 0                   fbd8a52be66e4
dc2c1d0388727       gluster/glusterfile-provisioner@sha256:9961a35cb3f06701958e202324141c30024b195579e5eb1704599659ddea5223     4 minutes ago       Running             glusterfile-provisioner   0                   e63a6b7f5eef5
306ce1376113f       heketi/heketi@sha256:7b9e9a11e47a8b45c79b2d7df6fc8e29d036bedf27da3edce7e2bda88e48812e                       4 minutes ago       Running             heketi                    0                   6e7f31d3c7f2f
12c33593c866e       67da37a9a360e                                                                                               7 minutes ago       Running             coredns                   0                   05d0f21ed8b4f
4c6eeff46a3a6       67da37a9a360e                                                                                               7 minutes ago       Running             coredns                   0                   c376229e967cc
b4147a2451beb       4689081edb103                                                                                               7 minutes ago       Running             storage-provisioner       0                   c7c7273d88b8e
28703a8f931a5       3439b7546f29b                                                                                               7 minutes ago       Running             kube-proxy                0                   790e884f25db2
64a48eda5dcba       303ce5db0e90d                                                                                               7 minutes ago       Running             etcd                      0                   009c9e73e6580
663a8f7210a88       76216c34ed0c7                                                                                               7 minutes ago       Running             kube-scheduler            0                   b0b23aef3a510
82d49e386a3eb       da26705ccb4b5                                                                                               7 minutes ago       Running             kube-controller-manager   0                   6062d80b72c03
92c84704f028d       7e28efa976bd1                                                                                               7 minutes ago       Running             kube-apiserver            0                   e3189e24b22f7

==> coredns [12c33593c866] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

==> coredns [4c6eeff46a3a] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

==> describe nodes <==
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=57e2f55f47effe9ce396cea42a1e0eb4f611ebbd
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_07_27T12_05_59_0700
                    minikube.k8s.io/version=v1.11.0
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 27 Jul 2020 10:05:56 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Mon, 27 Jul 2020 10:13:16 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 27 Jul 2020 10:09:06 +0000   Mon, 27 Jul 2020 10:05:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 27 Jul 2020 10:09:06 +0000   Mon, 27 Jul 2020 10:05:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 27 Jul 2020 10:09:06 +0000   Mon, 27 Jul 2020 10:05:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 27 Jul 2020 10:09:06 +0000   Mon, 27 Jul 2020 10:05:56 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.99.204
  Hostname:    minikube
Capacity:
  cpu:                4
  ephemeral-storage:  71143408Ki
  hugepages-2Mi:      0
  memory:             12001492Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  71143408Ki
  hugepages-2Mi:      0
  memory:             12001492Ki
  pods:               110
System Info:
  Machine ID:                 336f0a50e586483c9655f9c96a310706
  System UUID:                cc8156f3-956d-c04e-9c80-aa1d568f7d7f
  Boot ID:                    787b5940-f153-4b1a-b8e6-a2622f5f621f
  Kernel Version:             4.19.107
  OS Image:                   Buildroot 2019.02.10
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.8
  Kubelet Version:            v1.18.3
  Kube-Proxy Version:         v1.18.3
Non-terminated Pods:          (11 in total)
  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-66bff467f8-8cx7l                    100m (2%)     0 (0%)      70Mi (0%)        170Mi (1%)     7m13s
  kube-system                 coredns-66bff467f8-qf9s8                    100m (2%)     0 (0%)      70Mi (0%)        170Mi (1%)     7m13s
  kube-system                 etcd-minikube                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
  kube-system                 kube-apiserver-minikube                     250m (6%)     0 (0%)      0 (0%)           0 (0%)         7m14s
  kube-system                 kube-controller-manager-minikube            200m (5%)     0 (0%)      0 (0%)           0 (0%)         7m14s
  kube-system                 kube-proxy-mdccl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
  kube-system                 kube-scheduler-minikube                     100m (2%)     0 (0%)      0 (0%)           0 (0%)         7m14s
  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
  storage-gluster             glusterfile-provisioner-86d86cd7db-v9dbv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
  storage-gluster             glusterfs-h5ct2                             100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         5m13s
  storage-gluster             heketi-686d48d874-ft4b4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (21%)  0 (0%)
  memory             240Mi (2%)  340Mi (2%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                    From                  Message
  ----    ------                   ----                   ----                  -------
  Normal  NodeHasSufficientMemory  7m29s (x5 over 7m29s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    7m29s (x5 over 7m29s)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     7m29s (x4 over 7m29s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 7m15s                  kubelet, minikube     Starting kubelet.
  Normal  NodeHasSufficientMemory  7m15s                  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    7m15s                  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     7m15s                  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  7m15s                  kubelet, minikube     Updated Node Allocatable limit across pods
  Normal  Starting                 7m12s                  kube-proxy, minikube  Starting kube-proxy.

==> dmesg <==
[  +5.000538] hpet1: lost 318 rtc interrupts
[  +5.004092] hpet1: lost 318 rtc interrupts
[  +4.997810] hpet1: lost 318 rtc interrupts
[  +5.002615] hpet1: lost 318 rtc interrupts
[  +5.000331] hpet_rtc_timer_reinit: 11 callbacks suppressed
[  +0.000009] hpet1: lost 318 rtc interrupts
[  +5.003604] hpet1: lost 318 rtc interrupts
[  +5.001623] hpet1: lost 318 rtc interrupts
[Jul27 10:09] hpet1: lost 318 rtc interrupts
[  +5.005983] hpet1: lost 319 rtc interrupts
[  +5.001636] hpet1: lost 318 rtc interrupts
[  +5.000617] hpet1: lost 318 rtc interrupts
[  +5.001207] hpet1: lost 318 rtc interrupts
[  +5.001530] hpet1: lost 318 rtc interrupts
[  +5.001196] hpet1: lost 318 rtc interrupts
[  +4.999839] hpet1: lost 318 rtc interrupts
[  +5.004485] hpet1: lost 318 rtc interrupts
[  +5.003934] hpet1: lost 319 rtc interrupts
[  +5.006293] hpet1: lost 318 rtc interrupts
[  +5.006851] hpet1: lost 319 rtc interrupts
[Jul27 10:10] hpet1: lost 318 rtc interrupts
[  +5.002754] hpet1: lost 318 rtc interrupts
[  +5.004365] hpet1: lost 318 rtc interrupts
[  +5.004810] hpet1: lost 318 rtc interrupts
[  +5.004511] hpet1: lost 319 rtc interrupts
[  +5.002412] hpet1: lost 319 rtc interrupts
[  +5.004823] hpet1: lost 319 rtc interrupts
[  +5.005339] hpet1: lost 318 rtc interrupts
[  +5.004539] hpet1: lost 319 rtc interrupts
[  +5.006321] hpet1: lost 319 rtc interrupts
[  +5.003436] hpet1: lost 318 rtc interrupts
[  +5.009520] hpet1: lost 319 rtc interrupts
[Jul27 10:11] hpet1: lost 318 rtc interrupts
[  +5.005889] hpet1: lost 319 rtc interrupts
[  +5.004962] hpet1: lost 318 rtc interrupts
[  +5.004840] hpet1: lost 318 rtc interrupts
[  +5.005185] hpet1: lost 319 rtc interrupts
[  +5.004553] hpet1: lost 318 rtc interrupts
[  +5.008944] hpet1: lost 319 rtc interrupts
[  +5.005026] hpet1: lost 319 rtc interrupts
[  +5.005612] hpet1: lost 318 rtc interrupts
[  +5.005218] hpet1: lost 318 rtc interrupts
[  +5.004896] hpet1: lost 319 rtc interrupts
[  +5.005201] hpet1: lost 318 rtc interrupts
[Jul27 10:12] hpet1: lost 318 rtc interrupts
[  +5.004512] hpet1: lost 319 rtc interrupts
[  +5.006522] hpet1: lost 318 rtc interrupts
[  +5.005424] hpet1: lost 319 rtc interrupts
[  +5.004520] hpet1: lost 318 rtc interrupts
[  +5.004982] hpet1: lost 318 rtc interrupts
[  +5.004195] hpet1: lost 319 rtc interrupts
[  +5.005412] hpet1: lost 319 rtc interrupts
[  +5.004708] hpet1: lost 318 rtc interrupts
[  +5.004846] hpet1: lost 318 rtc interrupts
[  +5.000959] hpet1: lost 318 rtc interrupts
[  +5.000240] hpet1: lost 318 rtc interrupts
[Jul27 10:13] hpet1: lost 319 rtc interrupts
[  +5.000505] hpet1: lost 318 rtc interrupts
[  +5.003772] hpet1: lost 318 rtc interrupts
[  +5.000521] hpet1: lost 318 rtc interrupts

==> etcd [64a48eda5dcb] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-07-27 10:05:52.618516 I | etcdmain: etcd Version: 3.4.3
2020-07-27 10:05:52.618637 I | etcdmain: Git SHA: 3cf2f69b5
2020-07-27 10:05:52.618641 I | etcdmain: Go Version: go1.12.12
2020-07-27 10:05:52.618644 I | etcdmain: Go OS/Arch: linux/amd64
2020-07-27 10:05:52.618647 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-07-27 10:05:52.618713 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-07-27 10:05:52.619555 I | embed: name = minikube
2020-07-27 10:05:52.619567 I | embed: data dir = /var/lib/minikube/etcd
2020-07-27 10:05:52.619573 I | embed: member dir = /var/lib/minikube/etcd/member
2020-07-27 10:05:52.619577 I | embed: heartbeat = 100ms
2020-07-27 10:05:52.619581 I | embed: election = 1000ms
2020-07-27 10:05:52.619583 I | embed: snapshot count = 10000
2020-07-27 10:05:52.619593 I | embed: advertise client URLs = https://192.168.99.204:2379
2020-07-27 10:05:52.624429 I | etcdserver: starting member 28a26828a554136c in cluster 9e7f0180172d94d1
raft2020/07/27 10:05:52 INFO: 28a26828a554136c switched to configuration voters=()
raft2020/07/27 10:05:52 INFO: 28a26828a554136c became follower at term 0
raft2020/07/27 10:05:52 INFO: newRaft 28a26828a554136c [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/07/27 10:05:52 INFO: 28a26828a554136c became follower at term 1
raft2020/07/27 10:05:52 INFO: 28a26828a554136c switched to configuration voters=(2928017231525974892)
2020-07-27 10:05:52.628761 W | auth: simple token is not cryptographically signed
2020-07-27 10:05:52.630616 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-07-27 10:05:52.631161 I | etcdserver: 28a26828a554136c as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2020/07/27 10:05:52 INFO: 28a26828a554136c switched to configuration voters=(2928017231525974892)
2020-07-27 10:05:52.634315 I | etcdserver/membership: added member 28a26828a554136c [https://192.168.99.204:2380] to cluster 9e7f0180172d94d1
2020-07-27 10:05:52.637382 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-07-27 10:05:52.637533 I | embed: listening for metrics on http://127.0.0.1:2381
2020-07-27 10:05:52.637596 I | embed: listening for peers on 192.168.99.204:2380
raft2020/07/27 10:05:53 INFO: 28a26828a554136c is starting a new election at term 1
raft2020/07/27 10:05:53 INFO: 28a26828a554136c became candidate at term 2
raft2020/07/27 10:05:53 INFO: 28a26828a554136c received MsgVoteResp from 28a26828a554136c at term 2
raft2020/07/27 10:05:53 INFO: 28a26828a554136c became leader at term 2
raft2020/07/27 10:05:53 INFO: raft.node: 28a26828a554136c elected leader 28a26828a554136c at term 2
2020-07-27 10:05:53.326685 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.99.204:2379]} to cluster 9e7f0180172d94d1
2020-07-27 10:05:53.330856 I | embed: ready to serve client requests
2020-07-27 10:05:53.331531 I | embed: ready to serve client requests
2020-07-27 10:05:53.336199 I | etcdserver: setting up the initial cluster version to 3.4
2020-07-27 10:05:53.338697 I | embed: serving client requests on 127.0.0.1:2379
2020-07-27 10:05:53.340104 I | embed: serving client requests on 192.168.99.204:2379
2020-07-27 10:05:53.340473 N | etcdserver/membership: set the initial cluster version to 3.4
2020-07-27 10:05:53.340783 I | etcdserver/api: enabled capabilities for version 3.4

==> kernel <==
 10:13:20 up 8 min,  0 users,  load average: 0.06, 0.33, 0.25
Linux minikube 4.19.107 #1 SMP Thu May 28 15:07:17 PDT 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.10"

==> kube-apiserver [92c84704f028] <==
I0727 10:05:54.377598       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379   0 }]
W0727 10:05:54.386891       1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0727 10:05:54.401610       1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0727 10:05:54.404314       1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0727 10:05:54.415578       1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0727 10:05:54.431068       1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0727 10:05:54.431093       1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0727 10:05:54.438224       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0727 10:05:54.438248       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0727 10:05:54.439770       1 client.go:361] parsed scheme: "endpoint"
I0727 10:05:54.439846       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379   0 }]
I0727 10:05:54.448535       1 client.go:361] parsed scheme: "endpoint"
I0727 10:05:54.448636       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379   0 }]
I0727 10:05:56.089332       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0727 10:05:56.089373       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0727 10:05:56.089706       1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0727 10:05:56.090083       1 secure_serving.go:178] Serving securely on [::]:8443
I0727 10:05:56.090119       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0727 10:05:56.090154       1 autoregister_controller.go:141] Starting autoregister controller
I0727 10:05:56.090159       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0727 10:05:56.090205       1 crd_finalizer.go:266] Starting CRDFinalizer
I0727 10:05:56.090217       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0727 10:05:56.090221       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0727 10:05:56.090236       1 controller.go:81] Starting OpenAPI AggregationController
I0727 10:05:56.090378       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0727 10:05:56.090504       1 establishing_controller.go:76] Starting EstablishingController
I0727 10:05:56.090428       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0727 10:05:56.090440       1 controller.go:86] Starting OpenAPI controller
I0727 10:05:56.090480       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0727 10:05:56.090491       1 naming_controller.go:291] Starting NamingConditionController
I0727 10:05:56.091311       1 available_controller.go:387] Starting AvailableConditionController
I0727 10:05:56.091407       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0727 10:05:56.091830       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0727 10:05:56.092010       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0727 10:05:56.092109       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0727 10:05:56.092157       1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0727 10:05:56.093763       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0727 10:05:56.093786       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
E0727 10:05:56.109243       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.204, ResourceVersion: 0, AdditionalErrorMsg: 
I0727 10:05:56.190449       1 cache.go:39] Caches are synced for autoregister controller
I0727 10:05:56.190514       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0727 10:05:56.191740       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0727 10:05:56.192329       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
I0727 10:05:56.193913       1 shared_informer.go:230] Caches are synced for crd-autoregister 
I0727 10:05:57.090656       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0727 10:05:57.091047       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0727 10:05:57.101611       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0727 10:05:57.108849       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0727 10:05:57.109114       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0727 10:05:57.472111       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0727 10:05:57.509042       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0727 10:05:57.583570       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.99.204]
I0727 10:05:57.584365       1 controller.go:606] quota admission added evaluator for: endpoints
I0727 10:05:57.590937       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0727 10:05:59.252124       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0727 10:05:59.266275       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0727 10:05:59.450378       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0727 10:05:59.494496       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0727 10:06:06.891813       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0727 10:06:07.017468       1 controller.go:606] quota admission added evaluator for: replicasets.apps

==> kube-controller-manager [82d49e386a3e] <==
I0727 10:06:07.111528       1 shared_informer.go:230] Caches are synced for attach detach 
I0727 10:06:07.111551       1 shared_informer.go:230] Caches are synced for persistent volume 
I0727 10:06:07.111745       1 shared_informer.go:230] Caches are synced for PV protection 
I0727 10:06:07.119942       1 shared_informer.go:230] Caches are synced for garbage collector 
I0727 10:06:07.119976       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0727 10:06:07.161355       1 shared_informer.go:230] Caches are synced for expand 
I0727 10:08:06.762733       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"storage-gluster", Name:"heketi", UID:"6861e624-6f3c-4d28-9f5d-1e1f2e443185", APIVersion:"apps/v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set heketi-686d48d874 to 1
I0727 10:08:06.775823       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"storage-gluster", Name:"heketi-686d48d874", UID:"9532804e-5529-4dd5-b814-1baf1b6a8432", APIVersion:"apps/v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: heketi-686d48d874-ft4b4
I0727 10:08:06.840017       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"storage-gluster", Name:"glusterfile-provisioner", UID:"da8ce1bc-de3b-41f0-94c5-5561c69ee895", APIVersion:"apps/v1", ResourceVersion:"711", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set glusterfile-provisioner-86d86cd7db to 1
I0727 10:08:06.844527       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"storage-gluster", Name:"glusterfile-provisioner-86d86cd7db", UID:"9a9553e3-0a59-438e-891f-a1cc1434bdfc", APIVersion:"apps/v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: glusterfile-provisioner-86d86cd7db-v9dbv
I0727 10:08:07.726466       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"storage-gluster", Name:"glusterfs", UID:"f71a8f28-0f26-4947-8488-c19f2a052b38", APIVersion:"apps/v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: glusterfs-h5ct2
E0727 10:08:07.753374       1 daemon_controller.go:292] storage-gluster/glusterfs failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"glusterfs", GenerateName:"", Namespace:"storage-gluster", SelfLink:"/apis/apps/v1/namespaces/storage-gluster/daemonsets/glusterfs", UID:"f71a8f28-0f26-4947-8488-c19f2a052b38", ResourceVersion:"676", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731441286, loc:(*time.Location)(0x6d09200)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "glusterfs":"daemonset", "k8s-app":"storage-provisioner-gluster", "kubernetes.io/minikube-addons":"storage-provisioner-gluster"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "description":"GlusterFS DaemonSet", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"description\":\"GlusterFS DaemonSet\",\"tags\":\"glusterfs\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"glusterfs\":\"daemonset\",\"k8s-app\":\"storage-provisioner-gluster\",\"kubernetes.io/minikube-addons\":\"storage-provisioner-gluster\"},\"name\":\"glusterfs\",\"namespace\":\"storage-gluster\"},\"spec\":{\"selector\":{\"matchLabels\":{\"glusterfs\":\"pod\",\"glusterfs-node\":\"pod\",\"k8s-app\":\"storage-provisioner-gluster\"}},\"template\":{\"metadata\":{\"labels\":{\"glusterfs\":\"pod\",\"glusterfs-node\":\"pod\",\"k8s-app\":\"storage-provisioner-gluster\"},\"name\":\"glusterfs\",\"namespace\":\"storage-gluster\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"USE_FAKE_DISK\",\"value\":\"enabled\"}],\"image\":\"quay.io/nixpanic/glusterfs-server:pr_fake-disk\",\"imagePullPolicy\":\"IfNotPresent\",\"livenessProbe\":{\"exec\":{\"command\":[\"/bin/bash\",\"-c\",\"systemctl status 
glusterd.service\"]},\"failureThreshold\":50,\"initialDelaySeconds\":40,\"periodSeconds\":25,\"successThreshold\":1,\"timeoutSeconds\":3},\"name\":\"glusterfs\",\"readinessProbe\":{\"exec\":{\"command\":[\"/bin/bash\",\"-c\",\"systemctl status glusterd.service\"]},\"failureThreshold\":50,\"initialDelaySeconds\":40,\"periodSeconds\":25,\"successThreshold\":1,\"timeoutSeconds\":3},\"resources\":{\"requests\":{\"cpu\":\"100m\",\"memory\":\"100Mi\"}},\"securityContext\":{\"capabilities\":{},\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/srv\",\"name\":\"fake-disk\"},{\"mountPath\":\"/var/lib/heketi\",\"name\":\"glusterfs-heketi\"},{\"mountPath\":\"/run\",\"name\":\"glusterfs-run\"},{\"mountPath\":\"/run/lvm\",\"name\":\"glusterfs-lvm\"},{\"mountPath\":\"/var/log/glusterfs\",\"name\":\"glusterfs-logs\"},{\"mountPath\":\"/var/lib/glusterd\",\"name\":\"glusterfs-config\"},{\"mountPath\":\"/dev\",\"name\":\"glusterfs-dev\"},{\"mountPath\":\"/var/lib/misc/glusterfsd\",\"name\":\"glusterfs-misc\"},{\"mountPath\":\"/sys/fs/cgroup\",\"name\":\"glusterfs-cgroup\",\"readOnly\":true},{\"mountPath\":\"/etc/ssl\",\"name\":\"glusterfs-ssl\",\"readOnly\":true},{\"mountPath\":\"/usr/lib/modules\",\"name\":\"kernel-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"volumes\":[{\"hostPath\":{\"path\":\"/srv\"},\"name\":\"fake-disk\"},{\"hostPath\":{\"path\":\"/var/lib/heketi\"},\"name\":\"glusterfs-heketi\"},{\"name\":\"glusterfs-run\"},{\"hostPath\":{\"path\":\"/run/lvm\"},\"name\":\"glusterfs-lvm\"},{\"hostPath\":{\"path\":\"/etc/glusterfs\"},\"name\":\"glusterfs-etc\"},{\"hostPath\":{\"path\":\"/var/log/glusterfs\"},\"name\":\"glusterfs-logs\"},{\"hostPath\":{\"path\":\"/var/lib/glusterd\"},\"name\":\"glusterfs-config\"},{\"hostPath\":{\"path\":\"/dev\"},\"name\":\"glusterfs-dev\"},{\"hostPath\":{\"path\":\"/var/lib/misc/glusterfsd\"},\"name\":\"glusterfs-misc\"},{\"hostPath\":{\"path\":\"/sys/fs/cgroup\"},\"name\":\"glusterfs-cgroup\"},{\"hostPath\":{\"path\":\"/etc/ssl
\"},\"name\":\"glusterfs-ssl\"},{\"hostPath\":{\"path\":\"/usr/lib/modules\"},\"name\":\"kernel-modules\"}]}}}}\n", "tags":"glusterfs"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001ed8420), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001ed8440)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001ed8460), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"glusterfs", GenerateName:"", Namespace:"storage-gluster", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"glusterfs":"pod", "glusterfs-node":"pod", "k8s-app":"storage-provisioner-gluster"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"fake-disk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ed8480), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"glusterfs-heketi", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ed84a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"glusterfs-run", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc001ed84c0), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), 
AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"glusterfs-lvm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ed84e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"glusterfs-etc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ed8500), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"glusterfs-logs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ed8520), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), 
AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"glusterfs-config", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ed8540), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"glusterfs-dev", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ed8560), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"glusterfs-misc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ed8580), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), 
AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"glusterfs-cgroup", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ed85a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"glusterfs-ssl", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ed85c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"kernel-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ed85e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), 
AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"glusterfs", Image:"quay.io/nixpanic/glusterfs-server:pr_fake-disk", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"USE_FAKE_DISK", Value:"enabled", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"fake-disk", ReadOnly:false, MountPath:"/srv", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"glusterfs-heketi", ReadOnly:false, MountPath:"/var/lib/heketi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"glusterfs-run", ReadOnly:false, MountPath:"/run", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"glusterfs-lvm", ReadOnly:false, MountPath:"/run/lvm", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"glusterfs-logs", ReadOnly:false, MountPath:"/var/log/glusterfs", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"glusterfs-config", ReadOnly:false, MountPath:"/var/lib/glusterd", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"glusterfs-dev", ReadOnly:false, MountPath:"/dev", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"glusterfs-misc", ReadOnly:false, MountPath:"/var/lib/misc/glusterfsd", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"glusterfs-cgroup", ReadOnly:true, MountPath:"/sys/fs/cgroup", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"glusterfs-ssl", ReadOnly:true, MountPath:"/etc/ssl", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kernel-modules", ReadOnly:true, MountPath:"/usr/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc001f74660), ReadinessProbe:(*v1.Probe)(0xc001f74690), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(0xc001e1e0f0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001e607a0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00022a070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0001080e0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001e607a8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "glusterfs": the object has been modified; please apply your changes to the latest version and try again
I0727 10:10:11.030058       1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"website", UID:"f035df06-342b-4c45-9fb5-643c97a5b6e3", APIVersion:"v1", ResourceVersion:"1030", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "gluster.org/glusterfile" or manually created by system administrator
[... 46 further near-identical 'ExternalProvisioning' events for claim "default/website" (10:10:11–10:13:19) omitted ...]

==> kube-proxy [28703a8f931a] <==
W0727 10:06:07.993773       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
I0727 10:06:08.000044       1 node.go:136] Successfully retrieved node IP: 192.168.99.204
I0727 10:06:08.000177       1 server_others.go:186] Using iptables Proxier.
W0727 10:06:08.000253       1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0727 10:06:08.000302       1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I0727 10:06:08.000605       1 server.go:583] Version: v1.18.3
I0727 10:06:08.001204       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0727 10:06:08.001261       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0727 10:06:08.001393       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0727 10:06:08.001529       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0727 10:06:08.001838       1 config.go:315] Starting service config controller
I0727 10:06:08.001872       1 shared_informer.go:223] Waiting for caches to sync for service config
I0727 10:06:08.002016       1 config.go:133] Starting endpoints config controller
I0727 10:06:08.002080       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0727 10:06:08.102270       1 shared_informer.go:230] Caches are synced for service config 
I0727 10:06:08.102271       1 shared_informer.go:230] Caches are synced for endpoints config 

==> kube-scheduler [663a8f7210a8] <==
I0727 10:05:52.710784       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0727 10:05:52.710837       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0727 10:05:53.174680       1 serving.go:313] Generated self-signed cert in-memory
W0727 10:05:56.111376       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0727 10:05:56.111401       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0727 10:05:56.111408       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0727 10:05:56.111412       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0727 10:05:56.130620       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0727 10:05:56.130732       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0727 10:05:56.131795       1 authorization.go:47] Authorization is disabled
W0727 10:05:56.131817       1 authentication.go:40] Authentication is disabled
I0727 10:05:56.131826       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0727 10:05:56.139396       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0727 10:05:56.139512       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0727 10:05:56.139913       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0727 10:05:56.140078       1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0727 10:05:56.141484       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0727 10:05:56.145000       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0727 10:05:56.145113       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0727 10:05:56.145278       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0727 10:05:56.148021       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0727 10:05:56.148850       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0727 10:05:56.148921       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0727 10:05:56.152541       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0727 10:05:56.153955       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0727 10:05:56.986558       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0727 10:05:57.018801       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0727 10:05:57.058024       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0727 10:05:57.150882       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0727 10:05:57.196487       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0727 10:05:57.442043       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0727 10:06:00.039871       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I0727 10:06:00.040445       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
I0727 10:06:00.048777       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
E0727 10:06:00.070818       1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue

==> kubelet <==
-- Logs begin at Mon 2020-07-27 10:05:21 UTC, end at Mon 2020-07-27 10:13:19 UTC. --
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.075717    4305 topology_manager.go:233] [topologymanager] Topology Admit Handler
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.077969    4305 topology_manager.go:233] [topologymanager] Topology Admit Handler
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.080111    4305 topology_manager.go:233] [topologymanager] Topology Admit Handler
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.082233    4305 topology_manager.go:233] [topologymanager] Topology Admit Handler
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.158489    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/9efece3e99e04b36d087dc04baeb2b45-etcd-data") pod "etcd-minikube" (UID: "9efece3e99e04b36d087dc04baeb2b45")
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.158567    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/72290c5e9de5454e964116b38c8e5cb7-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "72290c5e9de5454e964116b38c8e5cb7")
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.158599    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/a8caea92c80c24c844216eb1d68fe417-kubeconfig") pod "kube-scheduler-minikube" (UID: "a8caea92c80c24c844216eb1d68fe417")
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.158671    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/6188fbbe64e28a0413e239e610f71669-ca-certs") pod "kube-controller-manager-minikube" (UID: "6188fbbe64e28a0413e239e610f71669")
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.158700    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/6188fbbe64e28a0413e239e610f71669-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "6188fbbe64e28a0413e239e610f71669")
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.158830    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/6188fbbe64e28a0413e239e610f71669-k8s-certs") pod "kube-controller-manager-minikube" (UID: "6188fbbe64e28a0413e239e610f71669")
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.158999    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6188fbbe64e28a0413e239e610f71669-kubeconfig") pod "kube-controller-manager-minikube" (UID: "6188fbbe64e28a0413e239e610f71669")
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.159152    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/6188fbbe64e28a0413e239e610f71669-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "6188fbbe64e28a0413e239e610f71669")
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.159319    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/9efece3e99e04b36d087dc04baeb2b45-etcd-certs") pod "etcd-minikube" (UID: "9efece3e99e04b36d087dc04baeb2b45")
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.159389    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/72290c5e9de5454e964116b38c8e5cb7-ca-certs") pod "kube-apiserver-minikube" (UID: "72290c5e9de5454e964116b38c8e5cb7")
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.159418    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/72290c5e9de5454e964116b38c8e5cb7-k8s-certs") pod "kube-apiserver-minikube" (UID: "72290c5e9de5454e964116b38c8e5cb7")
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.159456    4305 reconciler.go:157] Reconciler: start to sync state
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.911068    4305 topology_manager.go:233] [topologymanager] Topology Admit Handler
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.932263    4305 topology_manager.go:233] [topologymanager] Topology Admit Handler
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.963094    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/098d3e03-9613-4377-a252-c05cb9532986-xtables-lock") pod "kube-proxy-mdccl" (UID: "098d3e03-9613-4377-a252-c05cb9532986")
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.963162    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-f6ttb" (UniqueName: "kubernetes.io/secret/098d3e03-9613-4377-a252-c05cb9532986-kube-proxy-token-f6ttb") pod "kube-proxy-mdccl" (UID: "098d3e03-9613-4377-a252-c05cb9532986")
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.963188    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/0b04c589-6ca8-4096-91de-65121f11359c-tmp") pod "storage-provisioner" (UID: "0b04c589-6ca8-4096-91de-65121f11359c")
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.963208    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-kvbsp" (UniqueName: "kubernetes.io/secret/0b04c589-6ca8-4096-91de-65121f11359c-storage-provisioner-token-kvbsp") pod "storage-provisioner" (UID: "0b04c589-6ca8-4096-91de-65121f11359c")
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.963226    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/098d3e03-9613-4377-a252-c05cb9532986-kube-proxy") pod "kube-proxy-mdccl" (UID: "098d3e03-9613-4377-a252-c05cb9532986")
Jul 27 10:06:06 minikube kubelet[4305]: I0727 10:06:06.963246    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/098d3e03-9613-4377-a252-c05cb9532986-lib-modules") pod "kube-proxy-mdccl" (UID: "098d3e03-9613-4377-a252-c05cb9532986")
Jul 27 10:06:07 minikube kubelet[4305]: I0727 10:06:07.041126    4305 topology_manager.go:233] [topologymanager] Topology Admit Handler
Jul 27 10:06:07 minikube kubelet[4305]: I0727 10:06:07.058785    4305 topology_manager.go:233] [topologymanager] Topology Admit Handler
Jul 27 10:06:07 minikube kubelet[4305]: I0727 10:06:07.064221    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e35fe81a-03d1-409f-8790-99e8abe264de-config-volume") pod "coredns-66bff467f8-qf9s8" (UID: "e35fe81a-03d1-409f-8790-99e8abe264de")
Jul 27 10:06:07 minikube kubelet[4305]: I0727 10:06:07.064352    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-q9pn9" (UniqueName: "kubernetes.io/secret/e35fe81a-03d1-409f-8790-99e8abe264de-coredns-token-q9pn9") pod "coredns-66bff467f8-qf9s8" (UID: "e35fe81a-03d1-409f-8790-99e8abe264de")
Jul 27 10:06:07 minikube kubelet[4305]: I0727 10:06:07.164752    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-q9pn9" (UniqueName: "kubernetes.io/secret/2b6cf860-bf3f-46d6-ab17-3faefda05434-coredns-token-q9pn9") pod "coredns-66bff467f8-8cx7l" (UID: "2b6cf860-bf3f-46d6-ab17-3faefda05434")
Jul 27 10:06:07 minikube kubelet[4305]: I0727 10:06:07.164896    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2b6cf860-bf3f-46d6-ab17-3faefda05434-config-volume") pod "coredns-66bff467f8-8cx7l" (UID: "2b6cf860-bf3f-46d6-ab17-3faefda05434")
Jul 27 10:06:08 minikube kubelet[4305]: I0727 10:06:08.142970    4305 request.go:621] Throttling request took 1.101552177s, request: GET:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dcoredns-token-q9pn9&limit=500&resourceVersion=0
Jul 27 10:06:08 minikube kubelet[4305]: W0727 10:06:08.613339    4305 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-8cx7l through plugin: invalid network status for
Jul 27 10:06:08 minikube kubelet[4305]: W0727 10:06:08.642086    4305 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-qf9s8 through plugin: invalid network status for
Jul 27 10:06:08 minikube kubelet[4305]: W0727 10:06:08.986317    4305 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-8cx7l through plugin: invalid network status for
Jul 27 10:06:09 minikube kubelet[4305]: W0727 10:06:09.000770    4305 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-qf9s8 through plugin: invalid network status for
Jul 27 10:08:06 minikube kubelet[4305]: I0727 10:08:06.781797    4305 topology_manager.go:233] [topologymanager] Topology Admit Handler
Jul 27 10:08:06 minikube kubelet[4305]: I0727 10:08:06.855634    4305 topology_manager.go:233] [topologymanager] Topology Admit Handler
Jul 27 10:08:06 minikube kubelet[4305]: I0727 10:08:06.860461    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "initial-topology" (UniqueName: "kubernetes.io/configmap/e84f1ab5-18de-4d4b-83e3-1472edee0ef9-initial-topology") pod "heketi-686d48d874-ft4b4" (UID: "e84f1ab5-18de-4d4b-83e3-1472edee0ef9")
Jul 27 10:08:06 minikube kubelet[4305]: I0727 10:08:06.860582    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "heketi-service-account-token-gptnt" (UniqueName: "kubernetes.io/secret/e84f1ab5-18de-4d4b-83e3-1472edee0ef9-heketi-service-account-token-gptnt") pod "heketi-686d48d874-ft4b4" (UID: "e84f1ab5-18de-4d4b-83e3-1472edee0ef9")
Jul 27 10:08:06 minikube kubelet[4305]: I0727 10:08:06.860648    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "db" (UniqueName: "kubernetes.io/host-path/e84f1ab5-18de-4d4b-83e3-1472edee0ef9-db") pod "heketi-686d48d874-ft4b4" (UID: "e84f1ab5-18de-4d4b-83e3-1472edee0ef9")
Jul 27 10:08:06 minikube kubelet[4305]: I0727 10:08:06.962185    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfile-provisioner-token-8hnqm" (UniqueName: "kubernetes.io/secret/a7a2bc3e-178d-48e3-9c6d-77943bd88ef6-glusterfile-provisioner-token-8hnqm") pod "glusterfile-provisioner-86d86cd7db-v9dbv" (UID: "a7a2bc3e-178d-48e3-9c6d-77943bd88ef6")
Jul 27 10:08:07 minikube kubelet[4305]: W0727 10:08:07.424118    4305 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for storage-gluster/heketi-686d48d874-ft4b4 through plugin: invalid network status for
Jul 27 10:08:07 minikube kubelet[4305]: W0727 10:08:07.450261    4305 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for storage-gluster/glusterfile-provisioner-86d86cd7db-v9dbv through plugin: invalid network status for
Jul 27 10:08:07 minikube kubelet[4305]: I0727 10:08:07.743802    4305 topology_manager.go:233] [topologymanager] Topology Admit Handler
Jul 27 10:08:07 minikube kubelet[4305]: I0727 10:08:07.772719    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-heketi" (UniqueName: "kubernetes.io/host-path/8c99678c-db46-4f6e-a061-a6a4a4a5088a-glusterfs-heketi") pod "glusterfs-h5ct2" (UID: "8c99678c-db46-4f6e-a061-a6a4a4a5088a")
Jul 27 10:08:07 minikube kubelet[4305]: I0727 10:08:07.772820    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-run" (UniqueName: "kubernetes.io/empty-dir/8c99678c-db46-4f6e-a061-a6a4a4a5088a-glusterfs-run") pod "glusterfs-h5ct2" (UID: "8c99678c-db46-4f6e-a061-a6a4a4a5088a")
Jul 27 10:08:07 minikube kubelet[4305]: I0727 10:08:07.772930    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-dev" (UniqueName: "kubernetes.io/host-path/8c99678c-db46-4f6e-a061-a6a4a4a5088a-glusterfs-dev") pod "glusterfs-h5ct2" (UID: "8c99678c-db46-4f6e-a061-a6a4a4a5088a")
Jul 27 10:08:07 minikube kubelet[4305]: I0727 10:08:07.772954    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-misc" (UniqueName: "kubernetes.io/host-path/8c99678c-db46-4f6e-a061-a6a4a4a5088a-glusterfs-misc") pod "glusterfs-h5ct2" (UID: "8c99678c-db46-4f6e-a061-a6a4a4a5088a")
Jul 27 10:08:07 minikube kubelet[4305]: I0727 10:08:07.772980    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "fake-disk" (UniqueName: "kubernetes.io/host-path/8c99678c-db46-4f6e-a061-a6a4a4a5088a-fake-disk") pod "glusterfs-h5ct2" (UID: "8c99678c-db46-4f6e-a061-a6a4a4a5088a")
Jul 27 10:08:07 minikube kubelet[4305]: I0727 10:08:07.773003    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-logs" (UniqueName: "kubernetes.io/host-path/8c99678c-db46-4f6e-a061-a6a4a4a5088a-glusterfs-logs") pod "glusterfs-h5ct2" (UID: "8c99678c-db46-4f6e-a061-a6a4a4a5088a")
Jul 27 10:08:07 minikube kubelet[4305]: I0727 10:08:07.773027    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-7kkbc" (UniqueName: "kubernetes.io/secret/8c99678c-db46-4f6e-a061-a6a4a4a5088a-default-token-7kkbc") pod "glusterfs-h5ct2" (UID: "8c99678c-db46-4f6e-a061-a6a4a4a5088a")
Jul 27 10:08:07 minikube kubelet[4305]: I0727 10:08:07.773047    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-lvm" (UniqueName: "kubernetes.io/host-path/8c99678c-db46-4f6e-a061-a6a4a4a5088a-glusterfs-lvm") pod "glusterfs-h5ct2" (UID: "8c99678c-db46-4f6e-a061-a6a4a4a5088a")
Jul 27 10:08:07 minikube kubelet[4305]: I0727 10:08:07.773070    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-config" (UniqueName: "kubernetes.io/host-path/8c99678c-db46-4f6e-a061-a6a4a4a5088a-glusterfs-config") pod "glusterfs-h5ct2" (UID: "8c99678c-db46-4f6e-a061-a6a4a4a5088a")
Jul 27 10:08:07 minikube kubelet[4305]: I0727 10:08:07.773091    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-cgroup" (UniqueName: "kubernetes.io/host-path/8c99678c-db46-4f6e-a061-a6a4a4a5088a-glusterfs-cgroup") pod "glusterfs-h5ct2" (UID: "8c99678c-db46-4f6e-a061-a6a4a4a5088a")
Jul 27 10:08:07 minikube kubelet[4305]: I0727 10:08:07.773115    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "glusterfs-ssl" (UniqueName: "kubernetes.io/host-path/8c99678c-db46-4f6e-a061-a6a4a4a5088a-glusterfs-ssl") pod "glusterfs-h5ct2" (UID: "8c99678c-db46-4f6e-a061-a6a4a4a5088a")
Jul 27 10:08:07 minikube kubelet[4305]: I0727 10:08:07.773136    4305 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kernel-modules" (UniqueName: "kubernetes.io/host-path/8c99678c-db46-4f6e-a061-a6a4a4a5088a-kernel-modules") pod "glusterfs-h5ct2" (UID: "8c99678c-db46-4f6e-a061-a6a4a4a5088a")
Jul 27 10:08:08 minikube kubelet[4305]: W0727 10:08:08.004407    4305 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for storage-gluster/heketi-686d48d874-ft4b4 through plugin: invalid network status for
Jul 27 10:08:08 minikube kubelet[4305]: W0727 10:08:08.015024    4305 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for storage-gluster/glusterfile-provisioner-86d86cd7db-v9dbv through plugin: invalid network status for
Jul 27 10:08:23 minikube kubelet[4305]: W0727 10:08:23.167051    4305 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for storage-gluster/heketi-686d48d874-ft4b4 through plugin: invalid network status for
Jul 27 10:08:31 minikube kubelet[4305]: W0727 10:08:31.230531    4305 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for storage-gluster/glusterfile-provisioner-86d86cd7db-v9dbv through plugin: invalid network status for

==> storage-provisioner [b4147a2451be] <==
tbox1911 commented 4 years ago

Hi, same issue here. I'll try to fight with the JWT token.

I followed the doc here: https://github.com/gluster/gluster-kubernetes

Even when I set `"use_auth": false` in heketi.json, I still get the "Invalid JWT token" error.
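For reference, the auth-related part of heketi.json looks roughly like this (keys abbreviated to the relevant section; the exact file shipped by the minikube addon may differ):

```json
{
  "use_auth": false,
  "jwt": {
    "admin": { "key": "ADMIN_SECRET_HERE" },
    "user": { "key": "USER_SECRET_HERE" }
  }
}
```

With `use_auth` set to `false`, heketi should accept unauthenticated requests, which is why still seeing "Invalid JWT token" is surprising.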

Edit: I found a 'workaround': edit the heketi deployment and add the declaration below in the container spec:

```yaml
command: ["/usr/bin/heketi"]
args: ["--config=/etc/heketi/heketi.json", "--disable-auth"]
```
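In context, the override sits in the heketi Deployment's pod template, roughly like this (a sketch — the container name and image here are assumptions and may differ from what the addon actually deploys):

```yaml
spec:
  template:
    spec:
      containers:
      - name: heketi                  # container name as deployed by the addon (assumed)
        image: heketi/heketi:latest   # image used by the addon (assumed)
        command: ["/usr/bin/heketi"]
        args: ["--config=/etc/heketi/heketi.json", "--disable-auth"]
```

The `--disable-auth` flag turns off heketi's JWT authentication entirely, which sidesteps the token validation rather than fixing it.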

priyawadhwa commented 4 years ago

Hey @mjkowalski, did that workaround fix your issue?

bmangesh commented 4 years ago

Facing the same issue with GlusterFS on minikube.

xuchengli commented 4 years ago

@tbox1911 I'm hitting the same issue, but how do I edit the heketi deployment in minikube? Thanks.

tbox1911 commented 4 years ago

Hi,

Scale the heketi deployment down and open it for editing:

```shell
kubectl -n storage-gluster scale deployment heketi --replicas=0
kubectl -n storage-gluster edit deployment heketi
```

Add the `command`/`args` override under `spec: containers:` (see my earlier comment), then scale it back up:

```shell
kubectl -n storage-gluster scale deployment heketi --replicas=1
```

You may have to reload the topology from the heketi pod:

```shell
heketi-cli topology load --json=/etc/heketi/topology/minikube.json
```

mikemybytes commented 4 years ago

@priyawadhwa The proposed workaround allows the PVC to be bound successfully. However, I'd say it does not solve the issue.

IMO, the main question is: is this expected behavior? If so, the minikube Gluster addon README has to be updated accordingly (I think I could even create a PR for that πŸ˜‰). If it's not, then there is still something wrong with the default configuration of the heketi deployment, and the issue stands.

BTW, the procedure described by @tbox1911 works like a charm (and yes, the topology reload seems to be required).

tstromberg commented 4 years ago

I'm afraid the minikube maintainer team doesn't have anyone knowledgeable about gluster at the moment. Is anyone willing to help us improve the gluster add-on so that this is no longer an issue?

I don't think there is much to it beyond the YAML that's provisioned at the moment: https://github.com/kubernetes/minikube/tree/master/deploy/addons/storage-provisioner-gluster

Help wanted!

tbox1911 commented 4 years ago

Hi, I'll be glad to help. I have been working with GlusterFS and minikube for a while, and I think I can fix this issue with the addon. See you :)

tbox1911 commented 4 years ago

/assign