prometheus / prometheus

The Prometheus monitoring system and time series database.
https://prometheus.io/
Apache License 2.0

High RAM and CPU usage with prometheus-operator #6090

Closed · syst0m closed this issue 4 years ago

syst0m commented 4 years ago

Bug Report

We deployed the prometheus-operator helm chart to an EKS cluster. The Prometheus instance is used for monitoring both the Kubernetes workloads and the CI/CD agents. Memory usage has been slowly increasing over time and is currently around 19 GB, and CPU usage has also grown, from roughly 0.1 to 0.6 on average.
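Assuming the chart's default self-monitoring ServiceMonitor is enabled, this kind of series growth can also be confirmed from Prometheus's own TSDB metrics; the queries below are illustrative:

```promql
# Number of series currently held in the TSDB head block
prometheus_tsdb_head_series

# Rate at which new series are being created, a rough churn indicator
rate(prometheus_tsdb_head_series_created_total[5m])
```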

[Screenshots prom1 to prom5: graphs of Prometheus memory usage, CPU usage, and query activity over time.]

I used the tsdb tool to analyze the Prometheus database; it looks like the ephemeral nature of the CI/CD agents is causing high churn:

Label pairs most involved in churning:
31569 job=buildkite-agents
27172 Queue=beefy
4480 instance=172.21.3.173:9100
Label names most involved in churning:
39311 __name__
39193 instance
39134 job
31567 Queue
30481 device
7564 service

High cardinality labels don't seem to be an issue:

Highest cardinality labels:
1315 __name__
1058 address
989 mountpoint
985 device
757 id
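The report above comes from the standalone tsdb analysis tool; a sketch of the invocation is shown below, assuming the data directory is mounted at /prometheus/data (newer Prometheus releases expose the same report as `promtool tsdb analyze`):

```console
# Analyze the most recent block in the TSDB data directory (path is an example)
$ tsdb analyze /prometheus/data
```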

What did you expect to see? Lower resource usage.

What did you see instead? Under which circumstances? High resource usage, with high churn, probably caused by ephemeral CI/CD agents.

Environment

Linux 4.14.97-90.72.amzn2.x86_64 x86_64
prometheus, version 2.7.1 (branch: HEAD, revision: 62e591f928ddf6b3468308b7ac1de1c63aa7fcf3)
  build user:       root@f9f82868fc43
  build date:       20190131-11:16:59
  go version:       go1.11.5

* Helm chart values (prometheus-operator):

## Create default rules for monitoring the cluster
##
defaultRules:
  create: true
  rules:
    alertmanager: true
    etcd: true
    general: true
    k8s: true
    kubeApiserver: true
    kubePrometheusNodeAlerting: true
    kubePrometheusNodeRecording: true
    kubeScheduler: true
    kubernetesAbsent: true
    kubernetesApps: true
    kubernetesResources: true
    kubernetesStorage: true
    kubernetesSystem: true
    node: true
    prometheusOperator: true
    prometheus: true

global:
  rbac:
    create: true

## Configuration for alertmanager
## ref: https://prometheus.io/docs/alerting/alertmanager/
##
alertmanager:

  ## Deploy alertmanager
  ##
  enabled: true

  ## Service account for Alertmanager to use.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  ##
  serviceAccount:
    create: true
    name: "alertmanager"

  ## Alertmanager configuration directives
  ## ref: https://prometheus.io/docs/alerting/configuration/#configuration-file
  ##      https://prometheus.io/webtools/alerting/routing-tree-editor/
  ##
  config:
    global:
      resolve_timeout: 5m
    route:
      group_by: ['job']
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      receiver: 'null'
      routes:

## Using default values from https://github.com/helm/charts/blob/master/stable/grafana/values.yaml
##
grafana:
  enabled: true

  adminPassword: "JgxzUa9ZpJsMOFQHKXu5"

  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true

  grafana.ini:
    users:
      viewers_can_edit: false
    auth:
      disable_login_form: false
      disable_signout_menu: false
    auth.anonymous:
      enabled: true
      org_role: Viewer
    security:
      allow_embedding: true

  ## List of datasources to insert/update depending on
  ## what's available in the database
  ## ref: https://grafana.com/docs/features/datasources/cloudwatch/#configure-the-datasource-with-provisioning
  ##
  datasources:
    datasources.yaml:  # The name is important, it seems...
      apiVersion: 1
      datasources:

## Component scraping the kube api server
##
kubeApiServer:
  enabled: true
  tlsConfig:
    serverName: kubernetes
    insecureSkipVerify: false

  ## If your API endpoint address is not reachable (as in AKS) you can replace it with the kubernetes service
  relabelings: []
  # - sourceLabels:
  #     - __meta_kubernetes_namespace
  #     - __meta_kubernetes_service_name
  #     - __meta_kubernetes_endpoint_port_name
  #   action: keep
  #   regex: default;kubernetes;https
  # - targetLabel: __address__
  #   replacement: kubernetes.default.svc:443

  serviceMonitor:
    jobLabel: component
    selector:
      matchLabels:
        component: apiserver
        provider: kubernetes

## Component scraping the kubelet and kubelet-hosted cAdvisor
##
kubelet:
  enabled: true
  namespace: kube-system

  serviceMonitor:
    ## Enable scraping the kubelet over https. For requirements to enable this see
    ## https://github.com/coreos/prometheus-operator/issues/926
    ##
    https: true
    # cAdvisorMetricRelabelings:
    # - sourceLabels: [__name__, image]
    #   separator: ;
    #   regex: container_([a-z_]+);
    #   replacement: $1
    #   action: drop
    # - sourceLabels: [__name__]
    #   separator: ;
    #   regex: container_(network_tcp_usage_total|network_udp_usage_total|tasks_state|cpu_load_average_10s)
    #   replacement: $1
    #   action: drop

## Component scraping the kube controller manager
##
kubeControllerManager:
  enabled: true

  ## If your kube controller manager is not deployed as a pod, specify IPs it can be found on
  ##
  endpoints: []
  # - 10.141.4.22
  # - 10.141.4.23
  # - 10.141.4.24

  ## If using kubeControllerManager.endpoints only the port and targetPort are used
  ##
  service:
    port: 10252
    targetPort: 10252
    selector:
      k8s-app: kube-controller-manager

  serviceMonitor:
    ## Enable scraping kube-controller-manager over https.
    ## Requires proper certs (not self-signed) and delegated authentication/authorization checks
    ##
    https: false

## Component scraping coreDns. Use either this or kubeDns
##
coreDns:
  enabled: true
  service:
    port: 9153
    targetPort: 9153
    selector:
      k8s-app: coredns

## Component scraping kubeDns. Use either this or coreDns
##
kubeDns:
  enabled: false
  service:
    selector:
      k8s-app: kube-dns

## Component scraping etcd
##
kubeEtcd:
  enabled: true

  ## If your etcd is not deployed as a pod, specify IPs it can be found on
  ##
  endpoints: []
  # - 10.141.4.22
  # - 10.141.4.23
  # - 10.141.4.24

  ## Etcd service. If using kubeEtcd.endpoints only the port and targetPort are used
  ##
  service:
    port: 4001
    targetPort: 4001
    selector:
      k8s-app: etcd-server

  ## Configure secure access to the etcd cluster by loading a secret into prometheus and
  ## specifying security configuration below. For example, with a secret named etcd-client-cert
  ##
  ## serviceMonitor:
  ##   scheme: https
  ##   insecureSkipVerify: false
  ##   serverName: localhost
  ##   caFile: /etc/prometheus/secrets/etcd-client-cert/etcd-ca
  ##   certFile: /etc/prometheus/secrets/etcd-client-cert/etcd-client
  ##   keyFile: /etc/prometheus/secrets/etcd-client-cert/etcd-client-key
  ##
  serviceMonitor:
    scheme: http
    insecureSkipVerify: false
    serverName: ""
    caFile: ""
    certFile: ""
    keyFile: ""

## Component scraping kube scheduler
##
kubeScheduler:
  enabled: true

  ## If your kube scheduler is not deployed as a pod, specify IPs it can be found on
  ##
  endpoints: []
  # - 10.141.4.22
  # - 10.141.4.23
  # - 10.141.4.24

  ## If using kubeScheduler.endpoints only the port and targetPort are used
  ##
  service:
    port: 10251
    targetPort: 10251
    selector:
      k8s-app: kube-scheduler

  serviceMonitor:
    ## Enable scraping kube-scheduler over https.
    ## Requires proper certs (not self-signed) and delegated authentication/authorization checks
    ##
    https: false

## Component scraping kube state metrics
##
kubeStateMetrics:
  enabled: true

## Configuration for kube-state-metrics subchart
##
kube-state-metrics:
  rbac:
    create: true
  podSecurityPolicy:
    enabled: true

## Deploy node exporter as a daemonset to all nodes
##
nodeExporter:
  enabled: true

  ## Use the value configured in prometheus-node-exporter.podLabels
  ##
  jobLabel: jobLabel

  serviceMonitor: {}
    ## metric relabel configs to apply to samples before ingestion.
    ##
    # metricRelabelings:
    # - sourceLabels: [__name__]
    #   separator: ;
    #   regex: ^node_mountstats_nfs_(event|operations|transport)_.+
    #   replacement: $1
    #   action: drop

## Configuration for prometheus-node-exporter subchart
##
prometheus-node-exporter:
  podLabels:
    ## Add the 'node-exporter' label to be used by serviceMonitor to match standard common usage in rules and grafana dashboards
    ##
    jobLabel: node-exporter

  extraArgs:

## Manages Prometheus and Alertmanager components
##
prometheusOperator:
  enabled: true

  ## Service account for the Prometheus Operator to use.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  ##
  serviceAccount:
    create: true
    name: ""

  ## Configuration for Prometheus operator service
  ##
  service:
    annotations: {}
    labels: {}
    clusterIP: ""

    ## Port to expose on each node
    ## Only used if service.type is 'NodePort'
    ##
    nodePort: 38080

    ## Additional ports to open for Prometheus service
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
    ##
    additionalPorts: []
    #  - name: thanos-cluster
    #    port: 10900
    #    nodePort: 30111

    ## Loadbalancer IP
    ## Only use if service.type is "loadbalancer"
    ##
    loadBalancerIP: ""
    loadBalancerSourceRanges: []

    ## Service type: NodePort, ClusterIP or LoadBalancer
    ##
    type: ClusterIP

    ## List of IP addresses at which the Prometheus server service is available
    ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
    ##
    externalIPs: []

  ## Deploy CRDs used by Prometheus Operator.
  ##
  createCustomResource: true

  ## Customize CRDs API Group
  crdApiGroup: monitoring.coreos.com

  ## Attempt to clean up CRDs created by Prometheus Operator.
  ##
  cleanupCustomResource: false

  ## Labels to add to the operator pod
  ##
  podLabels: {}

  ## Assign a PriorityClassName to pods if set
  ##
  priorityClassName: ""

  ## Define Log Format
  ## Use logfmt (default) or json-formatted logging
  ##
  logFormat: logfmt

  ## Decrease log verbosity to errors only
  ##
  logLevel: error

  ## If true, the operator will create and maintain a service for scraping kubelets
  ## ref: https://github.com/coreos/prometheus-operator/blob/master/helm/prometheus-operator/README.md
  ##
  kubeletService:
    enabled: true
    namespace: kube-system

  ## Create a servicemonitor for the operator
  ##
  serviceMonitor:
    selfMonitor: true

  ## Resource limits & requests
  ##
  resources: {}
  # limits:
  #   cpu: 200m
  #   memory: 200Mi
  # requests:
  #   cpu: 100m
  #   memory: 100Mi

  ## Define which Nodes the Pods are scheduled on.
  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}

  ## Tolerations for use with node taints
  ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: []
  # - key: "key"
  #   operator: "Equal"
  #   value: "value"
  #   effect: "NoSchedule"

  ## Assign the prometheus operator to run on specific nodes
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  ##
  affinity: {}
  # requiredDuringSchedulingIgnoredDuringExecution:
  #   nodeSelectorTerms:
  #   - matchExpressions:
  #     - key: kubernetes.io/e2e-az-name
  #       operator: In
  #       values:
  #       - e2e-az1
  #       - e2e-az2

  securityContext:
    runAsNonRoot: true
    runAsUser: 65534

  ## Prometheus-operator image
  ##
  image:
    repository: quay.io/coreos/prometheus-operator
    tag: v0.30.1
    pullPolicy: IfNotPresent

  ## Configmap-reload image to use for reloading configmaps
  ##
  configmapReloadImage:
    repository: quay.io/coreos/configmap-reload
    tag: v0.0.1

  ## Prometheus-config-reloader image to use for config and rule reloading
  ##
  prometheusConfigReloaderImage:
    repository: quay.io/coreos/prometheus-config-reloader
    tag: v0.30.1

  ## Set the prometheus config reloader side-car CPU limit. If unset, uses the prometheus-operator project default
  ##
  configReloaderCpu: 100m

  ## Set the prometheus config reloader side-car memory limit. If unset, uses the prometheus-operator project default
  ##
  configReloaderMemory: 25Mi

  ## Hyperkube image to use when cleaning up
  ##
  hyperkubeImage:
    repository: k8s.gcr.io/hyperkube
    tag: v1.12.1
    pullPolicy: IfNotPresent

## Deploy a Prometheus instance
##
prometheus:

  enabled: true

  ## Service account for Prometheuses to use.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  ##
  serviceAccount:
    create: true
    name: ""

  ## Configuration for Prometheus service
  ##
  service:
    annotations: {}
    labels: {}
    clusterIP: ""

    ## To be used with a proxy extraContainer port
    targetPort: 9090

    ## List of IP addresses at which the Prometheus server service is available
    ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
    ##
    externalIPs: []

    ## Port to expose on each node
    ## Only used if service.type is 'NodePort'
    ##
    nodePort: 39090

    ## Loadbalancer IP
    ## Only use if service.type is "loadbalancer"
    loadBalancerIP: ""
    loadBalancerSourceRanges: []

    ## Service type
    ##
    type: ClusterIP

    sessionAffinity: ""

  rbac:
    ## Create role bindings in the specified namespaces, to allow Prometheus monitoring;
    ## a role binding in the release namespace will always be created.
    ##
    roleNamespaces:
      - kube-system

  ## Configure pod disruption budgets for Prometheus
  ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
  ## This configuration is immutable once created and will require the PDB to be deleted to be changed
  ## https://github.com/kubernetes/kubernetes/issues/45398
  ##
  podDisruptionBudget:
    enabled: false
    minAvailable: 1
    maxUnavailable: ""

  ingress:
    enabled: false
    annotations: {}
    labels: {}

    ## Hostnames.
    ## Must be provided if Ingress is enabled.
    ##
    # hosts:
    #   - prometheus.domain.com
    hosts: []

    ## TLS configuration for Prometheus Ingress
    ## Secret must be manually created in the namespace
    ##
    tls: []
    # - secretName: prometheus-general-tls
    #   hosts:
    #     - prometheus.example.com

  serviceMonitor:
    selfMonitor: true

  ## Settings affecting prometheusSpec
  ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
  ##
  prometheusSpec:

## Interval between consecutive scrapes.
##
scrapeInterval: ""

## Interval between consecutive evaluations.
##
evaluationInterval: ""

## ListenLocal makes the Prometheus server listen on loopback, so that it does not bind against the Pod IP.
##
listenLocal: false

## Image of Prometheus.
##
image:
  repository: quay.io/prometheus/prometheus
  tag: v2.7.1

## Tolerations for use with node taints
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
#  - key: "key"
#    operator: "Equal"
#    value: "value"
#    effect: "NoSchedule"

## Alertmanagers to which alerts will be sent
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#alertmanagerendpoints
##
## Default configuration will connect to the alertmanager deployed as part of this release
##
alertingEndpoints: []
# - name: ""
#   namespace: ""
#   port: http
#   scheme: http

## External labels to add to any time series or alerts when communicating with external systems
##
externalLabels: {}

## External URL at which Prometheus will be reachable.
##
externalUrl: ""

## Define which Nodes the Pods are scheduled on.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Secrets is a list of Secrets in the same namespace as the Prometheus object, which shall be mounted into the Prometheus Pods.
## The Secrets are mounted into /etc/prometheus/secrets/. Secrets changes after initial creation of a Prometheus object are not
## reflected in the running Pods. To change the secrets mounted into the Prometheus Pods, the object must be deleted and recreated
## with the new list of secrets.
##
secrets: ["infra-aws-credentials"]

## ConfigMaps is a list of ConfigMaps in the same namespace as the Prometheus object, which shall be mounted into the Prometheus Pods.
## The ConfigMaps are mounted into /etc/prometheus/configmaps/.
##
configMaps: []

## QuerySpec defines the query command line flags when starting Prometheus.
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#queryspec
##
query: {}

## Namespaces to be selected for PrometheusRules discovery.
## If nil, select own namespace. Namespaces to be selected for ServiceMonitor discovery.
## See https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#namespaceselector for usage
##
ruleNamespaceSelector: {}

## If true, a nil or {} value for prometheus.prometheusSpec.ruleSelector will cause the
## prometheus resource to be created with selectors based on values in the helm deployment,
## which will also match the PrometheusRule resources created
##
ruleSelectorNilUsesHelmValues: true

## PrometheusRules to be selected for target discovery.
## If {}, select all ServiceMonitors
##
ruleSelector: {}
## Example which select all prometheusrules resources
## with label "prometheus" with values any of "example-rules" or "example-rules-2"
# ruleSelector:
#   matchExpressions:
#     - key: prometheus
#       operator: In
#       values:
#         - example-rules
#         - example-rules-2
#
## Example which select all prometheusrules resources with label "role" set to "example-rules"
# ruleSelector:
#   matchLabels:
#     role: example-rules

## If true, a nil or {} value for prometheus.prometheusSpec.serviceMonitorSelector will cause the
## prometheus resource to be created with selectors based on values in the helm deployment,
## which will also match the servicemonitors created
##
serviceMonitorSelectorNilUsesHelmValues: true

## ServiceMonitors to be selected for target discovery.
## If {}, select all ServiceMonitors
##
serviceMonitorSelector: {}
## Example which selects ServiceMonitors with label "prometheus" set to "somelabel"
# serviceMonitorSelector:
#   matchLabels:
#     prometheus: somelabel

## Namespaces to be selected for ServiceMonitor discovery.
## See https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#namespaceselector for usage
##
serviceMonitorNamespaceSelector: {}

## How long to retain metrics
##
retention: 30d

## If true, the Operator won't process any Prometheus configuration changes
##
paused: false

## Number of Prometheus replicas desired
##
replicas: 1

## Log level for Prometheus to be configured with
##
logLevel: info

## Prefix used to register routes, overriding externalUrl route.
## Useful for proxies that rewrite URLs.
##
routePrefix: /

## Standard object’s metadata. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#metadata
## Metadata Labels and Annotations gets propagated to the prometheus pods.
##
podMetadata: {}
# labels:
#   app: prometheus
#   k8s-app: prometheus

## Pod anti-affinity can prevent the scheduler from placing Prometheus replicas on the same node.
## The default value "soft" means that the scheduler should *prefer* to not schedule two replica pods onto the same node but no guarantee is provided.
## The value "hard" means that the scheduler is *required* to not schedule two replica pods onto the same node.
## The value "" will disable pod anti-affinity so that no anti-affinity rules will be configured.
podAntiAffinity: ""

## If anti-affinity is enabled sets the topologyKey to use for anti-affinity.
## This can be changed to, for example, failure-domain.beta.kubernetes.io/zone
##
podAntiAffinityTopologyKey: kubernetes.io/hostname

## The remote_read spec configuration for Prometheus.
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#remotereadspec
remoteRead: {}
# - url: http://remote1/read

## The remote_write spec configuration for Prometheus.
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#remotewritespec
remoteWrite: {}
  # remoteWrite:
  #   - url: http://remote1/push

## Resource limits & requests
##
resources: {}
# requests:
#   memory: 400Mi

## Prometheus StorageSpec for persistent data
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/storage.md
##
storageSpec:
  volumeClaimTemplate:
    spec:
      storageClassName: gp2
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 100Gi
    selector: {}
## AdditionalScrapeConfigs allows specifying additional Prometheus scrape configurations. Scrape configurations
## are appended to the configurations generated by the Prometheus Operator. Job configurations must have the form
## as specified in the official Prometheus documentation:
## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#<scrape_config>. As scrape configs are
## appended, the user is responsible to make sure it is valid. Note that using this feature may expose the possibility
## to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible
## scrape configs are going to break Prometheus after the upgrade.
##
## The scrape configuration example below will find master nodes, provided they have the name .*mst.*, relabel the
## port to 2379 and allow etcd scraping provided it is running on all Kubernetes master nodes
##
additionalScrapeConfigs:
  - job_name: 'buildkite-agents'
    ec2_sd_configs:
      - region: eu-west-1
        port: 9100
    relabel_configs:
    - source_labels:  [__meta_ec2_tag_BuildkiteQueue]
      target_label: Queue

#     # Only monitor instances with a Name starting with "SD Demo"
#   - source_labels: [__meta_ec2_tag_Name]
#     regex: SD Demo.*
#     action: keep
#     # Use the instance ID as the instance label
#   - source_labels: [__meta_ec2_instance_id]
#     target_label: instance
# - job_name: kube-etcd
#   kubernetes_sd_configs:
#     - role: node
#   scheme: https
#   tls_config:
#     ca_file:   /etc/prometheus/secrets/etcd-client-cert/etcd-ca
#     cert_file: /etc/prometheus/secrets/etcd-client-cert/etcd-client
#     key_file:  /etc/prometheus/secrets/etcd-client-cert/etcd-client-key
#   relabel_configs:
#   - action: labelmap
#     regex: __meta_kubernetes_node_label_(.+)
#   - source_labels: [__address__]
#     action: replace
#     target_label: __address__
#     regex: ([^:;]+):(\d+)
#     replacement: ${1}:2379
#   - source_labels: [__meta_kubernetes_node_name]
#     action: keep
#     regex: .*mst.*
#   - source_labels: [__meta_kubernetes_node_name]
#     action: replace
#     target_label: node
#     regex: (.*)
#     replacement: ${1}
#   metric_relabel_configs:
#   - regex: (kubernetes_io_hostname|failure_domain_beta_kubernetes_io_region|beta_kubernetes_io_os|beta_kubernetes_io_arch|beta_kubernetes_io_instance_type|failure_domain_beta_kubernetes_io_zone)
#     action: labeldrop
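# Illustrative sketch (not part of the original values): the churn reported in this issue comes
# from the short-lived buildkite agents, so one option would be to add metric_relabel_configs to
# the 'buildkite-agents' job above and drop per-device series before ingestion. The metric-name
# regex below is only a hypothetical example, not a chart default.
#     metric_relabel_configs:
#     - source_labels: [__name__]
#       regex: node_(filesystem|mountstats_nfs|network)_.*
#       action: drop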

## AdditionalAlertManagerConfigs allows for manual configuration of alertmanager jobs in the form as specified
## in the official Prometheus documentation https://prometheus.io/docs/prometheus/latest/configuration/configuration/#<alertmanager_config>.
## AlertManager configurations specified are appended to the configurations generated by the Prometheus Operator.
## As AlertManager configs are appended, the user is responsible to make sure it is valid. Note that using this
## feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release
## notes to ensure that no incompatible AlertManager configs are going to break Prometheus after the upgrade.
##
additionalAlertManagerConfigs: []
# - consul_sd_configs:
#   - server: consul.dev.test:8500
#     scheme: http
#     datacenter: dev
#     tag_separator: ','
#     services:
#       - metrics-prometheus-alertmanager

## AdditionalAlertRelabelConfigs allows specifying Prometheus alert relabel configurations. Alert relabel configurations specified are appended
## to the configurations generated by the Prometheus Operator. Alert relabel configurations specified must have the form as specified in the
## official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alert_relabel_configs.
## As alert relabel configs are appended, the user is responsible to make sure it is valid. Note that using this feature may expose the
## possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible alert relabel
## configs are going to break Prometheus after the upgrade.
##
additionalAlertRelabelConfigs: []
# - separator: ;
#   regex: prometheus_replica
#   replacement: $1
#   action: labeldrop

## SecurityContext holds pod-level security attributes and common container settings.
## This defaults to non root user with uid 1000 and gid 2000.
## https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md
##
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 2000

##  Priority class assigned to the Pods
##
priorityClassName: ""

## Thanos configuration allows configuring various aspects of a Prometheus server in a Thanos environment.
## This section is experimental, it may change significantly without deprecation notice in any release.
## This is experimental and may change significantly without backward compatibility in any release.
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#thanosspec
##
thanos: {}

## Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to a Prometheus pod.
##  if using proxy extraContainer  update targetPort with proxy container port
containers:
  - name: "prometheus"
    env:
    - name: AWS_ACCESS_KEY_ID
      value: ${prometheus_aws_access_key}
    - name: AWS_SECRET_ACCESS_KEY
      value: ${prometheus_aws_secret_key}

## Enable additional scrape configs that are managed externally to this chart. Note that the prometheus
## will fail to provision if the correct secret does not exist.
##
additionalScrapeConfigsExternal: false

additionalServiceMonitors: []
## Name of the ServiceMonitor to create
##
# - name: ""

## Additional labels to set used for the ServiceMonitorSelector. Together with standard labels from
## the chart
##
# additionalLabels: {}

## Service label for use in assembling a job name of the form <label value>-<port>
## If no label is specified, the service name is used.
##
# jobLabel: ""

## Label selector for services to which this ServiceMonitor applies
##
# selector: {}

## Namespaces from which services are selected
##
# namespaceSelector:
  ## Match any namespace
  ##
  # any: false

  ## Explicit list of namespace names to select
  ##
  # matchNames: []

## Endpoints of the selected service to be monitored
##
# endpoints: []
  ## Name of the endpoint's service port
  ## Mutually exclusive with targetPort
  # - port: ""

  ## Name or number of the endpoint's target port
  ## Mutually exclusive with port
  # - targetPort: ""

  ## File containing bearer token to be used when scraping targets
  ##
  #   bearerTokenFile: ""

  ## Interval at which metrics should be scraped
  ##
  #   interval: 30s

  ## HTTP path to scrape for metrics
  ##
  #   path: /metrics

  ## HTTP scheme to use for scraping
  ##
  #   scheme: http

  ## TLS configuration to use when scraping the endpoint
  ##
  #   tlsConfig:

      ## Path to the CA file
      ##
      # caFile: ""

      ## Path to client certificate file
      ##
      # certFile: ""

      ## Skip certificate verification
      ##
      # insecureSkipVerify: false

      ## Path to client key file
      ##
      # keyFile: ""

      ## Server name used to verify host name
      ##
      # serverName: ""

* Logs:

level=warn ts=2019-10-02T09:55:41.617255426Z caller=scrape.go:835 component="scrape manager" scrape_pool=back-office/task-manager-accounts-api/0 target=http://10.0.2.188:9090/metrics msg="append failed" err="invalid metric type \"manager-accounts_requests_total counter\""
level=warn ts=2019-10-02T09:55:49.124152611Z caller=scrape.go:835 component="scrape manager" scrape_pool=back-office/submission-tracker-api/0 target=http://10.0.2.241:9090/metrics msg="append failed" err="invalid metric type \"tracker_requests_total counter\""
level=warn ts=2019-10-02T09:55:49.67682979Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:300: watch of *v1.Endpoints ended with: too old resource version: 74574738 (74575856)"
level=warn ts=2019-10-02T09:55:57.619654811Z caller=scrape.go:835 component="scrape manager" scrape_pool=back-office/task-manager-accounts-api/0 target=http://10.0.1.212:9090/metrics msg="append failed" err="invalid metric type \"manager-accounts_requests_total counter\""
level=warn ts=2019-10-02T09:56:03.09292573Z caller=scrape.go:835 component="scrape manager" scrape_pool=back-office/submission-tracker-api/0 target=http://10.0.3.194:9090/metrics msg="append failed" err="invalid metric type \"tracker_requests_total counter\""
level=warn ts=2019-10-02T09:56:11.617093048Z caller=scrape.go:835 component="scrape manager" scrape_pool=back-office/task-manager-accounts-api/0 target=http://10.0.2.188:9090/metrics msg="append failed" err="invalid metric type \"manager-accounts_requests_total counter\""
level=warn ts=2019-10-02T09:56:19.124170247Z caller=scrape.go:835 component="scrape manager" scrape_pool=back-office/submission-tracker-api/0 target=http://10.0.2.241:9090/metrics msg="append failed" err="invalid metric type \"tracker_requests_total counter\""
level=warn ts=2019-10-02T09:56:27.620129768Z caller=scrape.go:835 component="scrape manager" scrape_pool=back-office/task-manager-accounts-api/0 target=http://10.0.1.212:9090/metrics msg="append failed" err="invalid metric type \"manager-accounts_requests_total counter\""
level=warn ts=2019-10-02T09:56:33.093527932Z caller=scrape.go:835 component="scrape manager" scrape_pool=back-office/submission-tracker-api/0 target=http://10.0.3.194:9090/metrics msg="append failed" err="invalid metric type \"tracker_requests_total counter\""
level=warn ts=2019-10-02T09:56:41.61707015Z caller=scrape.go:835 component="scrape manager" scrape_pool=back-office/task-manager-accounts-api/0 target=http://10.0.2.188:9090/metrics msg="append failed" err="invalid metric type \"manager-accounts_requests_total counter\""
level=warn ts=2019-10-02T09:56:49.123975587Z caller=scrape.go:835 component="scrape manager" scrape_pool=back-office/submission-tracker-api/0 target=http://10.0.2.241:9090/metrics msg="append failed" err="invalid metric type \"tracker_requests_total counter\""
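The repeated "invalid metric type" warnings indicate that those back-office services expose `# TYPE` lines the Prometheus text parser cannot read, most likely because the exposed metric names contain characters (such as dashes) that are invalid in the exposition format; they are scrape parsing errors rather than a cause of the TSDB resource usage. For comparison, a well-formed exposition looks like this (the metric name and labels here are made up):

```
# HELP task_manager_accounts_requests_total Total HTTP requests handled.
# TYPE task_manager_accounts_requests_total counter
task_manager_accounts_requests_total{method="GET",code="200"} 1027
```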

simonpasquier commented 4 years ago

I would encourage you to update to a recent version of Prometheus, as the TSDB code has been improved since v2.7.1. Also, the last screenshot shows an increase in queries, which can explain the increased CPU.
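For reference, the increase in query load can be cross-checked against Prometheus's own engine metrics; an illustrative query, assuming the default self-monitoring is scraped:

```promql
# Approximate rate of PromQL query evaluations over the last 5 minutes
sum(rate(prometheus_engine_query_duration_seconds_count{slice="inner_eval"}[5m]))
```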

I'm closing it for now. If you have further questions, please use our user mailing list, which you can also search.

metalmatze commented 4 years ago

This seems to be about the Prometheus Operator itself, and there have also been lots of improvements there. One of those was a fix for a bug that caused high CPU usage. Please update the Prometheus Operator to v0.30+ too.

LiamGoodacre commented 4 years ago

Thanks for the kitten video whoever used the accidentally posted slack hook URL :sweat_smile: