sourcegraph / sourcegraph-public-snapshot

Code AI platform with Code Search & Cody
https://sourcegraph.com

cAdvisor metrics and problematic identification #17365

Open bobheadxi opened 3 years ago

bobheadxi commented 3 years ago

Background

Prometheus generally attaches useful labels based on the target it is scraping. For example, when scraping frontend, Prometheus reaches out to frontend, knows certain things about it (e.g. service name, pod, instance), and can attach those labels onto the metrics frontend exports.

cAdvisor exports metrics for other containers. So despite all cAdvisor metrics looking like they are coming from cAdvisor, they are actually for other containers.
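
To illustrate the difference, here are two made-up sample lines (metric names and label values are examples, not from a real deployment):

    # A metric scraped directly from frontend: Prometheus attaches labels it
    # derives from the scrape target itself.
    src_http_request_duration_seconds_count{job="sourcegraph-frontend", instance="sourcegraph-frontend-0:6060"} 42
    # A metric scraped from cAdvisor: the target labels describe cAdvisor, and
    # the container the sample is actually about only shows up in labels that
    # cAdvisor itself exports.
    container_cpu_usage_seconds_total{job="cadvisor", name="...", container_label_io_kubernetes_pod_namespace="ns-sourcegraph"} 17.3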

On some systems, cAdvisor generates a name label that combines several fields intended to make each container it monitors unique. This worked alright for a while, right up until we discovered it didn't: https://github.com/sourcegraph/sourcegraph/issues/17069, https://github.com/sourcegraph/sourcegraph/issues/17072

Problem

We need an effective way to identify Sourcegraph services inside cAdvisor metrics. The current strategy is outlined in our docs, but the approach is not perfect.

We are a bit hamstrung in that whatever name-labelling convention we use must also work across our deployment methods.

Docker Compose deployments aren't as much of an issue, since those machines generally don't run anything other than Sourcegraph, but in Kubernetes there's no telling what else is on the nodes.

One approach attempted was to filter on namespace via metric_relabel_configs in k8s (https://github.com/sourcegraph/deploy-sourcegraph/pull/1644), e.g.:

      metric_relabel_configs:
      # cAdvisor-specific customization. Drop container metrics exported by cAdvisor
      # not in the same namespace as Sourcegraph.
      # Uncomment this if you have problems with certain dashboards or cAdvisor itself
      # picking up non-Sourcegraph services. Ensure all Sourcegraph services are running
      # within the Sourcegraph namespace you have defined.
      # The regex must keep matches on '^$' (empty string) to ensure other metrics do not
      # get dropped.
      - source_labels: [container_label_io_kubernetes_pod_namespace]
        regex: ^$|ns-sourcegraph # ensure this matches with namespace declarations
        action: keep

but due to the various ways a namespace can be applied when deploying, there's no guarantee that a customer won't forget to update the Prometheus relabel rule (more discussion in https://github.com/sourcegraph/deploy-sourcegraph/pull/1578). Regardless, this is the change currently applied in Cloud and k8s.sgdev.org to resolve our issue.

bobheadxi commented 3 years ago

Possible solutions

⭐ Dhall saves us

Provide the metric relabel config in the generated configuration (example above and in https://github.com/sourcegraph/deploy-sourcegraph/pull/1644), based on namespace customization. Could be part of https://github.com/sourcegraph/sourcegraph/issues/17640.
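
Roughly, the generated output could look like the snippet below, with the namespace value coming from the existing namespace customization rather than being hand-edited (the namespace shown is just an example):

    metric_relabel_configs:
      # Generated: the namespace below is substituted from the deployment's
      # namespace customization, so it cannot drift from the actual namespace.
      - source_labels: [container_label_io_kubernetes_pod_namespace]
        # '^$' must stay in the regex so metrics without this label are kept.
        regex: ^$|ns-sourcegraph
        action: keep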

Monitoring generator generates exceptions

The monitoring generator could generate regex excludes on a case-by-case basis. This is really gnarly and probably not a great idea, since we could just as easily break things accidentally on another deployment method (which is what happened in https://github.com/sourcegraph/sourcegraph/issues/17072).

Namespace checking in metric_relabel_configs

Use a relabel config that tries to ensure container_label_io_kubernetes_pod_namespace and ns are the same (i.e. only keep metrics for containers in the same namespace as cAdvisor). This would be the dream solution.

Unfortunately, after much trawling of internet forums, I am pretty sure Prometheus does not allow a comparison like this without knowing some information about the namespace up front, a la the solution applied in https://github.com/sourcegraph/deploy-sourcegraph/pull/1644. Solutions like this one did not seem to work.
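
For reference, a sketch of the kind of config that gets suggested for this (assuming the target's namespace is exposed as an ns label). Prometheus relabelling regexes use RE2, which has no backreferences, so the \1 below is not supported and the two label values cannot be compared without hard-coding one of them:

    metric_relabel_configs:
      # Does NOT work: RE2 has no backreferences, so '\1' cannot be used to
      # assert that the two joined label values are equal.
      - source_labels: [ns, container_label_io_kubernetes_pod_namespace]
        separator: ";"
        regex: (.+);\1
        action: keep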

Fork cAdvisor to allow filtering by Kubernetes namespace

If cAdvisor can be configured to filter containers by Kubernetes namespace, we can use the Kubernetes Downward API to ensure the correct namespace is always used. There is currently no such capability.
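
If such a flag existed, wiring it up via the Downward API would look roughly like this; the --kubernetes_namespace_filter flag below is hypothetical, and only the namespace injection itself is standard Kubernetes:

    # Sketch: '--kubernetes_namespace_filter' is a hypothetical flag that
    # cAdvisor does not have today; the Downward API env injection is standard.
    containers:
      - name: cadvisor
        image: gcr.io/cadvisor/cadvisor
        env:
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        args:
          - --kubernetes_namespace_filter=$(POD_NAMESPACE)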

Something configures Kubernetes Prometheus rule

Similar to the above solution, we use the Kubernetes Downward API to configure the Prometheus rule for nodes (which currently picks up cAdvisor metrics; also see https://github.com/sourcegraph/deploy-sourcegraph/pull/1644). prom-wrapper, which currently only manages Alertmanager configuration, could do this. An init container might be able to handle this as well (similar to what is described in https://github.com/sourcegraph/deploy-sourcegraph/pull/1578#discussion_r558250748; a rough sketch is below).

This could be difficult, because our Prometheus config is quite involved.
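
As a very rough sketch of the init-container variant (paths, placeholder name, and the sed-based rendering are all assumptions for illustration; this is not what prom-wrapper does today):

    # Assumes the shipped Prometheus config contains a NAMESPACE_PLACEHOLDER
    # token; an init container rewrites it with the pod's actual namespace
    # (from the Downward API) before Prometheus starts.
    initContainers:
      - name: render-prometheus-config
        image: alpine:3.18   # anything with a shell and sed
        env:
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        command:
          - sh
          - -c
          - sed "s/NAMESPACE_PLACEHOLDER/$POD_NAMESPACE/g" /config-template/prometheus.yml > /config/prometheus.yml
        volumeMounts:
          - name: config-template
            mountPath: /config-template
          - name: rendered-config
            mountPath: /config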

Do not use cAdvisor

cAdvisor has been a bit painful. Maybe all container monitoring tools are painful, but maybe there are some more flexible ones out there that we can consider. https://github.com/google/cadvisor/issues/2215 seems to indicate a shift away from relying on cAdvisor as well.

github-actions[bot] commented 2 years ago

Heads up @davejrt @ggilmore @dan-mckean @caugustus-sourcegraph @stephanx - the "team/delivery" label was applied to this issue.