carlosedp / cluster-monitoring

Cluster monitoring stack for clusters based on Prometheus Operator
MIT License

running "VolumeBinding" filter plugin for pod "prometheus-k8s-0": pod has unbound immediate PersistentVolumeClaims` #52

Closed: aafishman closed this issue 4 years ago

aafishman commented 4 years ago

Hello,

Thanks for getting all of these images together! It's been a lot more challenging to track down all the right images for my cluster than I initially thought.

I tried following the non-k3s quickstart guide to deploy the monitoring stack on my cluster (Pi 4s running Raspbian Buster Lite, full k8s set up with kubeadm) and receive the following PVC-related errors for both the prometheus-k8s-0 pod and the grafana pod:

running "VolumeBinding" filter plugin for pod "prometheus-k8s-0": pod has unbound immediate PersistentVolumeClaims

running "VolumeBinding" filter plugin for pod "grafana-759f594549-5mrsj": pod has unbound immediate PersistentVolumeClaims

I tried setting up the volumes manually but am not able to get around the PVC issue described. Both pods stay in Pending until they can attach to their volumes. Is there a way to bypass the plugin used to create the PV and do it manually? Is it possible to run the plugin on its own to attempt to create the required volumes? I haven't used the filter plugins described in the log before, so there could be something simple I'm missing as well.

I re-made and re-deployed the manifests once; I'm not sure yet what else to try. I also don't see the PVs for either pod initialized in the cluster.

I could be missing something obvious in the setup, so let me know if there are any commonly missed items that could lead to this.

Please let me know if there is any other information I can provide that would be helpful.

Thanks!

aafishman commented 4 years ago

Some more info: it looks like the PVC just can't find a volume. How can I try setting up the correct volume manually?

$ kubectl describe pvc prometheus-k8s-db-prometheus-k8s-0
Name:          prometheus-k8s-db-prometheus-k8s-0
Namespace:     monitoring
StorageClass:
Status:        Pending
Volume:
Labels:        app=prometheus
               prometheus=k8s
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    prometheus-k8s-0
Events:
  Type    Reason         Age                    From                         Message
  ----    ------         ----                   ----                         -------
  Normal  FailedBinding  3m6s (x4643 over 19h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
carlosedp commented 4 years ago

I advise having a dynamic provisioner with a StorageClass. This way the PVCs will ask the StorageClass to create the PVs for you on demand.
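
For example, assuming a provisioner (NFS client, local-path, etc.) is already installed and exposes a StorageClass, you can mark that class as the cluster default so PVCs that don't set a storageClassName (like yours, per the describe output above) bind to it. This is only a sketch; the class name nfs-client below is a placeholder, and since the default class is applied when a claim is created, you may need to delete the pending PVCs or redeploy the stack afterwards:

# List the StorageClasses your provisioner created
kubectl get storageclass

# Mark one as the default ("nfs-client" is a placeholder name)
kubectl patch storageclass nfs-client -p \
  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'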

Another way is creating PVs with the same size as (or bigger than) the requested PVCs before deploying the stack. Then, when the stack comes up, each PVC binds to one of the pre-created PVs.
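
A minimal sketch of that approach for the Prometheus claim (the PV name, size, and hostPath are assumptions; match the storage size to what kubectl get pvc -n monitoring reports, and create a similar PV for the Grafana claim):

# Hypothetical static PV; note that a hostPath volume ties the data to one node.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-k8s-db-pv
spec:
  capacity:
    storage: 20Gi              # must be >= the size the PVC requests
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/prometheus     # placeholder path on the node
EOF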

carlosedp commented 4 years ago

Any news on this?

aafishman commented 4 years ago

I was not able to get it working with manually created volumes, either before or after deploying the stack. I did, however, have success with dynamic volumes using an NFS-client provisioner as described in this article.
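
For reference, the setup was roughly along these lines (a sketch rather than the exact commands from the article; the chart, its values, and the NFS server address/export path are placeholders):

# Install an NFS client provisioner via Helm and make its StorageClass the default
helm install nfs-client stable/nfs-client-provisioner \
  --set nfs.server=192.168.1.100 \
  --set nfs.path=/srv/nfs/kubernetes \
  --set storageClass.defaultClass=true

# Verify the class exists and that the monitoring PVCs bind
kubectl get storageclass
kubectl get pvc -n monitoring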

Working well now.

carlosedp commented 4 years ago

Great to know!