bobheadxi opened this issue 3 years ago
Provide the metric relabel config in generated configuration (example above and in https://github.com/sourcegraph/deploy-sourcegraph/pull/1644) based on namespace customization. This could be done as part of https://github.com/sourcegraph/sourcegraph/issues/17640
The monitoring generator could generate regex excludes on a case-by-case basis. This is really gnarly and probably not a great idea, since we could just as easily accidentally break things on another deployment method (which is what happened in https://github.com/sourcegraph/sourcegraph/issues/17072)
Using a `metric_relabel_configs` rule that tries to ensure `container_label_io_kubernetes_pod_namespace` and `ns` are the same (i.e. only collect metrics within the same namespace as cAdvisor). This would be the dream solution.
Unfortunately, after much trawling of internet forums I am pretty sure Prometheus does not allow a comparison like this without knowing some information about the namespace ahead of time, a la the solution applied in https://github.com/sourcegraph/deploy-sourcegraph/pull/1644. Solutions like this one did not seem to work.
If cAdvisor can be configured to filter containers by Kubernetes namespace, we can use the Kubernetes Downward API to ensure the correct namespace is always used. There is currently no such capability.
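If that capability ever lands, the wiring on our side would be straightforward; the Downward API can already expose the pod's namespace to a container with no user configuration. A sketch of the relevant pod spec fragment (container name and image tag are illustrative):

```yaml
# Illustrative fragment: expose the pod's own namespace to the cAdvisor
# container as an environment variable via the Downward API.
containers:
  - name: cadvisor
    image: gcr.io/cadvisor/cadvisor:v0.47.0  # version is illustrative
    env:
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
```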
Similar to the above solution, we could use the Kubernetes Downward API to configure the Prometheus rule for nodes (which currently picks up cAdvisor metrics; also see https://github.com/sourcegraph/deploy-sourcegraph/pull/1644). prom-wrapper, which currently only manages Alertmanager configuration, could do this. An init container might be able to handle this as well (similar to what is described in https://github.com/sourcegraph/deploy-sourcegraph/pull/1578#discussion_r558250748).
This could be difficult, because our Prometheus config is quite involved.
cAdvisor has been a bit painful. Maybe all container monitoring tools are painful, but there may be more flexible ones out there that we can consider. https://github.com/google/cadvisor/issues/2215 seems to indicate a shift away from relying on cAdvisor as well.
Heads up @davejrt @ggilmore @dan-mckean @caugustus-sourcegraph @stephanx - the "team/delivery" label was applied to this issue.
Background
Prometheus generally attaches useful labels based on the target it is scraping. For example, when scraping `frontend`, Prometheus reaches out to `frontend`, knows certain things about `frontend` (e.g. service name, pod, instance, etc.), and can attach those labels onto metrics exported by `frontend`.
cAdvisor exports metrics for other containers. So despite all cAdvisor metrics looking like they are coming from cAdvisor, they are actually for other containers.
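For example, a scraped cAdvisor metric looks roughly like this; the interesting identity lives in the labels, not in the scrape target (label values here are illustrative):

```text
container_cpu_usage_seconds_total{name="k8s_frontend_sourcegraph-frontend-abc123_ns-sourcegraph_...",container_label_io_kubernetes_container_name="frontend",container_label_io_kubernetes_pod_namespace="ns-sourcegraph"} 1234.5
```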
On some systems cAdvisor generates a `name` that is some combination of fields that might make a target monitored by cAdvisor unique. This worked alright for a while, right up until we discovered it didn't: https://github.com/sourcegraph/sourcegraph/issues/17069, https://github.com/sourcegraph/sourcegraph/issues/17072
Problem
We need an effective way to identify Sourcegraph services inside cAdvisor metrics. The current strategy is outlined in our docs, but the approach is not perfect:
- `prometheus-to-*` exporters get picked up by the `prometheus` matcher.

We are a bit hamstrung in that whatever name-labelling convention we have must also:
- work with the container labels cAdvisor attaches (`io.kubernetes.container.name`, `io.kubernetes.pod.name`, `io.kubernetes.pod.namespace`, `io.kubernetes.pod.uid`)
- account for service name variations across deployment methods (`sourcegraph-frontend` vs `frontend` vs `sourcegraph-frontend-internal`)

Docker-compose doesn't seem to be as much of an issue, since those deployments generally run on machines that do nothing other than serve Sourcegraph, but in Kubernetes there's no telling what else is on the nodes.
One approach attempted was to filter on namespace via `metric_relabel_configs` in k8s (https://github.com/sourcegraph/deploy-sourcegraph/pull/1644), but due to the various ways a namespace can be applied when deploying, there's no guarantee that a customer won't forget to update the Prometheus relabel rule; more discussion in https://github.com/sourcegraph/deploy-sourcegraph/pull/1578. Regardless, this is the change currently applied in Cloud and k8s.sgdev.org to resolve our issue.
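The rule in that PR is along these lines (a sketch; the hardcoded namespace value is exactly the part customers must remember to keep in sync with their deployment):

```yaml
# Keep only cAdvisor metrics from our own namespace.
metric_relabel_configs:
  - source_labels: [container_label_io_kubernetes_pod_namespace]
    regex: ns-sourcegraph  # must match the deployment's actual namespace
    action: keep
```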