Hey @ServerNinja,
Thanks for using Logging-operator. Here's how our monitoring setup works: when `metrics.serviceMonitor` is enabled, the operator creates a Kubernetes Service with the suffix "-metrics" that exposes the configured metrics endpoint of your SyslogNG/Fluentd/Fluentbit pods. The Prometheus ServiceMonitor can then scrape the metrics because the port numbers and the endpoint names match.
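A rough sketch of the generated objects follows; the "-metrics" Service name suffix comes from the behavior described above, but the labels and port numbers here are illustrative assumptions, not the operator's actual templates:

```yaml
# Illustrative sketch only: labels and port numbers are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: logging-fluentd-metrics    # "-metrics" suffix added by the operator
spec:
  selector:
    app.kubernetes.io/name: fluentd
  ports:
    - name: http-metrics
      port: 24231
      targetPort: 24231
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: logging-fluentd-metrics
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: fluentd
  endpoints:
    - port: http-metrics   # matches the Service port name, which is what makes the scrape work
```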
Problem description
Using the "logging" CRD, I'm noticing that the fluentd and fluentbit configurations are not respecting the `metrics.serviceMonitor: true` configuration. Interestingly, it does configure the buffer metrics for fluentd. What I've noticed is that with metrics enabled, the operator configures the ServiceMonitor objects in K8s but does not configure the metrics endpoints on the DaemonSet for fluentbit or the StatefulSet for fluentd.
Versions:
- Logging-operator Helm chart version: 4.9.0
- Logging-operator Docker image: ghcr.io/kube-logging/logging-operator:4.9.0
"logging" manifest":
Screenshots: The ServiceMonitor is created as expected.
However, the http-metrics port is not exposed in the StatefulSet configuration.
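For reference, this is the kind of container port stanza one would expect the operator to add to the fluentd StatefulSet pod template when metrics are enabled; 24231 is assumed here as the default fluentd metrics port, so treat the exact number as an assumption:

```yaml
# Illustrative only: the containerPort stanza the reporter expected to
# see on the fluentd StatefulSet pod template, but which is missing.
ports:
  - name: http-metrics
    containerPort: 24231   # assumed default fluentd metrics port
    protocol: TCP
```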
Any help or advice is much appreciated