Open · pyo-counting opened this issue 2 months ago
I want to drop the default ServiceMonitor target labels, so I configured the component as below.
```alloy
prometheus.operator.servicemonitors "service_monitor" {
  forward_to = [prometheus.relabel.kps.receiver]

  clustering {
    enabled = true
  }

  scrape {
    default_scrape_interval = "10s"
    default_scrape_timeout  = "10s"
  }

  // drop default ServiceMonitor labels
  // ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/running-exporters.md#default-labels
  rule {
    action = "labeldrop"
    regex  = "node|namespace|service|pod|container|endpoint"
  }
}
```
But with the configuration above, the Prometheus scrape config that is actually generated looks like this:
```yaml
- job_name: serviceMonitor/kube-system/kps-dev-all-helm-karpenter/0
  honor_timestamps: true
  track_timestamps_staleness: false
  scrape_interval: 10s
  scrape_timeout: 10s
  scrape_protocols:
  - OpenMetricsText1.0.0
  - OpenMetricsText0.0.1
  - PrometheusText0.0.4
  metrics_path: /metrics
  scheme: http
  enable_compression: true
  follow_redirects: true
  enable_http2: true
  relabel_configs:
  - separator: ;
    regex: node|namespace|service|pod|container|endpoint
    replacement: $1
    action: labeldrop
  - source_labels: [job]
    separator: ;
    regex: (.*)
    target_label: __tmp_prometheus_job_name
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_instance, __meta_kubernetes_service_labelpresent_app_kubernetes_io_instance]
    separator: ;
    regex: (kps-dev-all-helm-karpenter);true
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name, __meta_kubernetes_service_labelpresent_app_kubernetes_io_name]
    separator: ;
    regex: (karpenter);true
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: http-metrics
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Node;(.*)
    target_label: node
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Pod;(.*)
    target_label: pod
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_container_name]
    separator: ;
    regex: (.*)
    target_label: container
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_phase]
    separator: ;
    regex: (Failed|Succeeded)
    replacement: $1
    action: drop
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: job
    replacement: ${1}
    action: replace
  - separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: http-metrics
    action: replace
  kubernetes_sd_configs:
  - role: endpoints
    kubeconfig_file: ""
    follow_redirects: false
    enable_http2: false
    namespaces:
      own_namespace: false
      names:
      - kube-system
```
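The ordering problem can be reproduced with a small simulation of the relabel pipeline. This is a hypothetical Python sketch with deliberately simplified relabel semantics (only `labeldrop` and `replace`); `operator_rules` stands in for one of the generated rules above:

```python
import re

def relabel(labels, rules):
    """Tiny, illustrative subset of Prometheus relabelling:
    only 'labeldrop' and 'replace', with fully anchored regexes
    as Prometheus uses them."""
    labels = dict(labels)
    for rule in rules:
        if rule["action"] == "labeldrop":
            pat = re.compile(rule["regex"])
            labels = {k: v for k, v in labels.items() if not pat.fullmatch(k)}
        elif rule["action"] == "replace":
            src = ";".join(labels.get(s, "") for s in rule.get("source_labels", []))
            m = re.fullmatch(rule["regex"], src)
            if m:
                # translate Prometheus's $1 / ${1} into Python's \g<1>
                repl = rule["replacement"].replace("${1}", r"\g<1>").replace("$1", r"\g<1>")
                labels[rule["target_label"]] = m.expand(repl)
    return labels

# One operator-generated rule that creates the "namespace" target label.
operator_rules = [{
    "action": "replace",
    "source_labels": ["__meta_kubernetes_namespace"],
    "regex": "(.*)",
    "target_label": "namespace",
    "replacement": "$1",
}]
user_drop = [{"action": "labeldrop",
              "regex": "node|namespace|service|pod|container|endpoint"}]

discovered = {"__meta_kubernetes_namespace": "kube-system"}

# Current behaviour: the user rule is prepended, so the drop runs
# before "namespace" even exists and is a no-op.
drop_first = relabel(discovered, user_drop + operator_rules)
# Proposed behaviour: the user rule is appended and actually drops it.
drop_last = relabel(discovered, operator_rules + user_drop)

print("namespace" in drop_first, "namespace" in drop_last)  # True False
```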
I think user-defined `rule` blocks should be applied after the ServiceMonitor labelling operations. As the generated config shows, my `labeldrop` is placed first in `relabel_configs`, so it runs before the `node`, `namespace`, `service`, `pod`, `container`, and `endpoint` labels are even created, and the drop has no effect.
It would be great if this were possible for both ServiceMonitors and PodMonitors.
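In the meantime, a possible workaround (untested sketch, and note it applies to every series flowing through the component rather than per ServiceMonitor) is to drop these labels at sample level in the downstream `prometheus.relabel` component that the scrape already forwards to:

```alloy
// Workaround sketch: a labeldrop in the downstream prometheus.relabel
// component runs on scraped samples, i.e. after the ServiceMonitor
// target labelling has already happened.
prometheus.relabel "kps" {
  forward_to = [/* existing downstream receiver(s) */]

  rule {
    action = "labeldrop"
    regex  = "node|namespace|service|pod|container|endpoint"
  }
}
```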