Open rbo opened 7 months ago
https://access.redhat.com/solutions/6992399
$ oc logs -n openshift-user-workload-monitoring deployment/prometheus-operator -c prometheus-operator | grep "it accesses file system via bearer token" | head
level=warn ts=2024-04-05T18:12:32.218813744Z caller=resource_selector.go:171 component=prometheusoperator msg="skipping servicemonitor" error="it accesses file system via bearer token file which Prometheus specification prohibits" servicemonitor=openshift-operators/ramen-hub-operator-metrics-monitor namespace=openshift-user-workload-monitoring prometheus=user-workload
$ oc get servicemonitor ramen-hub-operator-metrics-monitor -o yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  creationTimestamp: "2024-04-05T18:12:32Z"
  generation: 1
  labels:
    control-plane: controller-manager
    olm.managed: "true"
  name: ramen-hub-operator-metrics-monitor
  namespace: openshift-operators
  ownerReferences:
  - apiVersion: operators.coreos.com/v1alpha1
    blockOwnerDeletion: false
    controller: false
    kind: ClusterServiceVersion
    name: odr-hub-operator.v4.15.0-rhodf
    uid: c198ece5-952e-4aa6-9809-dc428a13e2c2
  resourceVersion: "635683986"
  uid: a6b5be5c-cea6-4919-b3e1-f81195dec1d3
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    path: /metrics
    port: https
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  selector:
    matchLabels:
      control-plane: controller-manager
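User-workload Prometheus rejects `bearerTokenFile` (a file-system path) for security reasons; the replacement in the ServiceMonitor API is the `authorization` field, which reads the token from a Secret instead. A sketch of what a compliant endpoint could look like — the Secret name and key below are placeholders, not taken from this cluster:

```yaml
# Sketch of a compliant endpoint: instead of pointing at a token file
# on disk, reference a Secret via the `authorization` field.
# `metrics-token` / `token` are hypothetical Secret name and key.
spec:
  endpoints:
  - path: /metrics
    port: https
    scheme: https
    authorization:
      type: Bearer
      credentials:
        name: metrics-token   # hypothetical Secret holding the token
        key: token
    tlsConfig:
      insecureSkipVerify: true
```

Since this ServiceMonitor is OLM-managed (`olm.managed: "true"`), the change would need to land in the operator bundle rather than be patched in place.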
$ oc get pods -l control-plane=controller-manager
NAME READY STATUS RESTARTS AGE
external-secrets-operator-controller-manager-65f56c8654-5csxz 1/1 Running 0 13d
ramen-hub-operator-5d7bd796d5-xjg2d 2/2 Running 4 (13d ago) 13d
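The ServiceMonitor has no `namespaceSelector`, so it matches every Service carrying `control-plane: controller-manager` in its own namespace — and the pod listing above shows the label is shared by the unrelated external-secrets operator. A quick way to check how wide the selector really is (commands assume cluster access; output differs per cluster):

```shell
# Services the ServiceMonitor actually selects in its namespace:
oc get services -n openshift-operators -l control-plane=controller-manager

# How widely the generic label is used across the whole cluster:
oc get pods -A -l control-plane=controller-manager
```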
Silenced the alert for now; it looks like the ServiceMonitor selector is too wide.
This appears to be related to ODF disaster recovery monitoring, which is what comes up when googling for ramen-hub-operator-metrics-monitor.
If we haven't already, we should file a bug.
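One way to silence the alert until the owning operator ships a fix is `amtool` against the platform Alertmanager; the same silence can be created from the OpenShift console's Silences UI. A sketch — the alert name and matchers assume the standard `PrometheusOperatorRejectedResources` alert, and the Alertmanager route requires authentication (e.g. via amtool's `--http.config.file`), which is omitted here:

```shell
# Sketch: create a 7-day silence for the rejected-resources alert.
# Auth against the route is not shown; adjust matchers to your cluster.
amtool silence add \
  --alertmanager.url="https://$(oc get route alertmanager-main -n openshift-monitoring -o jsonpath='{.spec.host}')" \
  --author="rbo" \
  --comment="ramen-hub-operator-metrics-monitor uses bearerTokenFile; bug to be filed" \
  --duration="168h" \
  alertname=PrometheusOperatorRejectedResources \
  namespace=openshift-user-workload-monitoring
```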
Alert message: "Prometheus operator in openshift-user-workload-monitoring namespace rejected 1 prometheus/ServiceMonitor resources."