Open · ptodev opened this issue 1 month ago
So far I can think of two ways to do this.

One option is to add a new component. It would have two types of inputs:

* `targets` from `discovery` components
* metrics from `prometheus` components

It'd add extra labels to the metrics, and then send the metrics downstream to other `prometheus` components. It'd work similarly to `otelcol.processor.k8sattributes` and its `pod_association` block. `otelcol.processor.discovery` also does something similar, and it is a very unique Alloy/Agent component which is not in the Collector. This option is being discussed in a dedicated proposal PR. I'm not sure how realistic this option is, because it could become quite a bad footgun.
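As a rough illustration of this option (the component name `prometheus.enrich` and its arguments are invented here purely for the sketch; no design has been agreed), the wiring might look something like:

```river
// Discover pods so we have k8s labels available.
discovery.kubernetes "pods" {
  role = "pod"
}

prometheus.exporter.kafka "default" {
  kafka_uris = ["kafka-0.kafka:9092"]
}

prometheus.scrape "kafka" {
  targets    = prometheus.exporter.kafka.default.targets
  forward_to = [prometheus.enrich.k8s.receiver]
}

// HYPOTHETICAL component: joins incoming metrics with discovery
// targets and attaches the extra labels, similar in spirit to
// otelcol.processor.k8sattributes and its pod_association block.
prometheus.enrich "k8s" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
  endpoint {
    url = "https://example.com/api/v1/push" // placeholder endpoint
  }
}
```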
Ideally we should come up with a solution which can also be reused for Loki components - #810.
## Request

`prometheus.exporter` components tend not to take `targets` as "input" attributes. Some do (e.g. `prometheus.exporter.snmp` and `prometheus.exporter.blackbox`), but others such as `prometheus.exporter.kafka` simply have an attribute for the URL of the system they are monitoring (`kafka_uris`). It would be nice to leverage the `discovery` components when working with `prometheus.exporter`, so that the metrics can have extra labels.

The simplest way I can think of is to have an additional argument for every `prometheus.exporter` component that takes in `targets`. For example, `prometheus.exporter.kafka` could have a `kafka_targets` argument. `kafka_targets` must contain a special `__address__` label with the Kafka API URL, and it may contain various additional labels which can be passed down to `prometheus.scrape`.
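Assuming the proposed `kafka_targets` argument existed (it does not today; this is a sketch of the idea, not a final design), usage might look like:

```river
prometheus.exporter.kafka "default" {
  // PROPOSED argument: targets with a special __address__ label holding
  // the Kafka API URL, plus arbitrary extra labels.
  kafka_targets = [
    {"__address__" = "kafka-0.kafka:9092", "env" = "prod"},
  ]
}

prometheus.scrape "kafka" {
  // The extra labels ("env" here) would flow through to the scraped metrics.
  targets    = prometheus.exporter.kafka.default.targets
  forward_to = [prometheus.remote_write.default.receiver]
}
```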
An alternative approach for `prometheus.exporter.kafka` would be to leave the existing `kafka_uris` attribute as the only attribute which can contain Kafka API URLs, and instead have some sort of `additional_labels` attribute. However, it may be harder to associate the right labels with the right Kafka instance.

## Use case
You may want to run a `prometheus.exporter` component to monitor a Kafka or Redis instance running on Kubernetes, and you may want Kubernetes labels such as the pod name on the resulting metrics.
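For this use case, a sketch using the hypothetical `kafka_targets` argument from above might look like the following (the relabel rules and names are illustrative only):

```river
// Discover Kafka pods on Kubernetes.
discovery.kubernetes "kafka_pods" {
  role = "pod"
}

// Keep the pod name as a regular "pod" label on each target.
discovery.relabel "kafka" {
  targets = discovery.kubernetes.kafka_pods.targets

  rule {
    source_labels = ["__meta_kubernetes_pod_name"]
    target_label  = "pod"
  }
}

prometheus.exporter.kafka "default" {
  // HYPOTHETICAL argument: feed discovered targets (with their extra
  // labels) straight into the exporter.
  kafka_targets = discovery.relabel.kafka.output
}

prometheus.scrape "kafka" {
  targets    = prometheus.exporter.kafka.default.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
  endpoint {
    url = "https://example.com/api/v1/push" // placeholder endpoint
  }
}
```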