[prometheus] pod has unbound immediate PersistentVolumeClaims #4040

Open parthokunda opened 9 months ago

parthokunda commented 9 months ago

The Alertmanager and Prometheus server pods are stuck in Pending status.

k get pods
NAME                                                 READY   STATUS    RESTARTS   AGE
prometheus-alertmanager-0                            0/1     Pending   0          4m8s
prometheus-kube-state-metrics-6b464f5b88-h8b29       1/1     Running   0          4m8s
prometheus-prometheus-node-exporter-dq8pc            1/1     Running   0          4m8s
prometheus-prometheus-node-exporter-wrn6g            1/1     Running   0          4m8s
prometheus-prometheus-node-exporter-zsm9d            1/1     Running   0          4m8s
prometheus-prometheus-pushgateway-7857c44f49-jgqhj   1/1     Running   0          4m8s
prometheus-server-8fffdb69d-xfj4p                    0/2     Pending   0          4m8s
kubectl describe pod prometheus-server-8fffdb69d-xfj4p
Name:             prometheus-server-8fffdb69d-xfj4p
Namespace:        default
Priority:         0
Service Account:  prometheus-server
Node:             <none>
Labels:           app.kubernetes.io/component=server
                  app.kubernetes.io/instance=prometheus
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=prometheus
                  app.kubernetes.io/part-of=prometheus
                  app.kubernetes.io/version=v2.48.0
                  helm.sh/chart=prometheus-25.8.0
                  pod-template-hash=8fffdb69d
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/prometheus-server-8fffdb69d
Containers:
  prometheus-server-configmap-reload:
    Image:      quay.io/prometheus-operator/prometheus-config-reloader:v0.67.0
    Port:       <none>
    Host Port:  <none>
    Args:
      --watched-dir=/etc/config
      --reload-url=http://127.0.0.1:9090/-/reload
    Environment:  <none>
    Mounts:
      /etc/config from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7m85w (ro)
  prometheus-server:
    Image:      quay.io/prometheus/prometheus:v2.48.0
    Port:       9090/TCP
    Host Port:  0/TCP
    Args:
      --storage.tsdb.retention.time=15d
      --config.file=/etc/config/prometheus.yml
      --storage.tsdb.path=/data
      --web.console.libraries=/etc/prometheus/console_libraries
      --web.console.templates=/etc/prometheus/consoles
      --web.enable-lifecycle
    Liveness:     http-get http://:9090/-/healthy delay=30s timeout=10s period=15s #success=1 #failure=3
    Readiness:    http-get http://:9090/-/ready delay=30s timeout=4s period=5s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /data from storage-volume (rw)
      /etc/config from config-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7m85w (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      prometheus-server
    Optional:  false
  storage-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  prometheus-server
    ReadOnly:   false
  kube-api-access-7m85w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  2m18s  default-scheduler  0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..

The PVCs stay in Pending status.

k get pvc
NAME                                STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
prometheus-server                   Pending                                                     5m6s
storage-prometheus-alertmanager-0   Pending                                                     5m6s
k get pv
No resources found
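
One thing not checked above is whether the cluster has any StorageClass at all. If the command below returns nothing, or no class is annotated as the default, dynamic provisioning cannot create volumes and every claim without an explicit storageClassName stays Pending:

kubectl get storageclass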

Even after the helm uninstall [REPO NAME] command, one of my PVCs is left in Pending status (the Alertmanager PVC is created by the StatefulSet's volumeClaimTemplate, which Helm does not delete). All pods related to Prometheus were removed, though.

k get pv
No resources found
k describe pvc storage-prometheus-alertmanager-0
Name:          storage-prometheus-alertmanager-0
Namespace:     default
StorageClass:  
Status:        Pending
Volume:        
Labels:        app.kubernetes.io/instance=prometheus
               app.kubernetes.io/name=alertmanager
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason         Age                 From                         Message
  ----    ------         ----                ----                         -------
  Normal  FailedBinding  98s (x42 over 11m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
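
The event spells out the root cause: the claim has no StorageClass and there is no pre-created PersistentVolume for it to bind to. If installing a dynamic provisioner is not an option, a hand-made PV can satisfy the claim. A minimal sketch, in which the PV name, hostPath, and 8Gi size are assumptions matching the chart's defaults rather than anything from this issue:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-server-pv
spec:
  capacity:
    storage: 8Gi                        # must cover the PVC's request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data/prometheus-server   # assumed path; must exist on the node

Since neither the claim nor this volume names a StorageClass, the persistentvolume-controller can bind them directly; storage-prometheus-alertmanager-0 needs a second PV of the same shape.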
adityagoel-mata commented 7 months ago

I'm facing the same issue. Did it get resolved?

gsmx64 commented 1 month ago

For Prometheus:

Install openebs:

helm repo add openebs https://openebs.github.io/openebs
helm repo update
helm install openebs --namespace openebs openebs/openebs --set engines.replicated.mayastor.enabled=false --create-namespace
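
Before touching the PVCs, it is worth confirming that the class the next steps rely on was actually created by the install:

kubectl get storageclass openebs-hostpath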

Save the PVC:

kubectl get pvc prometheus-server -n default -o yaml > prometheus-server-pvc.yaml

Open the PVC yaml file:

sudo vi prometheus-server-pvc.yaml

Edit storageClassName so it reads:

storageClassName: openebs-hostpath
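
After the edit, the relevant part of the file should look roughly like this (an illustrative sketch; the 8Gi size is the chart's default, so keep whatever your dump shows):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-server
  namespace: default
spec:
  storageClassName: openebs-hostpath   # the line added in this step
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

It may also help to strip server-generated fields from the dump (uid, resourceVersion, creationTimestamp, and the status block) so the re-apply creates a clean object.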

Delete the old PVC:

kubectl delete pvc/prometheus-server -n default

Apply the edited PVC:

kubectl apply -f prometheus-server-pvc.yaml

NOTE: Replace "-n default" with your current namespace. The same steps apply to Grafana.
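
An alternative to patching the PVCs after the fact is to point the chart at the storage class at install time. A sketch, assuming the prometheus chart's persistence values are named as below (verify with helm show values prometheus-community/prometheus for your chart version):

helm upgrade --install prometheus prometheus-community/prometheus \
  --set server.persistentVolume.storageClass=openebs-hostpath \
  --set alertmanager.persistence.storageClass=openebs-hostpath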