toporek3112 closed this issue 2 months ago
Is each cluster getting its own collector?
Yes.
One option is to add a resourceprocessor or transformprocessor to add the attribute to all data it receives. Does that solve the problem?
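As a sketch of that suggestion (the pipeline, receiver, and exporter names here are placeholders, not taken from this thread), a `resource` processor adding the attribute could look like:

```yaml
processors:
  resource:
    attributes:
      - key: cluster
        value: my-cluster
        action: upsert

service:
  pipelines:
    metrics:
      receivers: [otlp]          # placeholder receiver
      processors: [resource]
      exporters: [prometheusremotewrite]  # placeholder exporter
```

This stamps the `cluster` attribute on all data flowing through the pipeline, but not on the collector's own telemetry.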
Not really, since Prometheus is scraping the otel-collector's metrics from the /metrics endpoint directly.
Config:

```yaml
telemetry:
  logs:
    level: "debug"
    encoding: "json"
  metrics:
    level: detailed
    address: ${env:MY_POD_IP}:8888
```
Oh, I misunderstood; this is about the collector's own telemetry?
Yes, exactly
In that case you should be able to do

```yaml
service:
  telemetry:
    resource:
      cluster_name: my-cluster
```

in your collector config, and all the telemetry it emits should include that resource attribute.
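Combined with the telemetry settings shown earlier in the thread, the full block might look like this (a sketch; the logs and metrics settings are copied from the earlier config snippet):

```yaml
service:
  telemetry:
    resource:
      cluster_name: my-cluster
    logs:
      level: "debug"
      encoding: "json"
    metrics:
      level: detailed
      address: ${env:MY_POD_IP}:8888
```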
Personally I don't have an issue with that workaround for the collector, but I'm having trouble understanding why we wouldn't want to support all ServiceMonitor options if support for ServiceMonitors is already included in the helm chart, especially in the case of the operator, where you can't change that.
So this is working fine for most of the metrics:
> In that case you should be able to do
>
> ```yaml
> service:
>   telemetry:
>     resource:
>       cluster_name: my-cluster
> ```
>
> in your collector config and all the telemetry it emits should include that resource attribute.
But there are still some metrics left where the `cluster` label is not applied. The affected metrics are:
This is not that critical, but it is another point in favor of allowing `relabelings` to be configured in the serviceMonitor.
Since we're already supporting the ability to configure the installed service monitor, adding this feature does make sense.
Awesome! PRs have been updated to resolve conflicts and bump the chart version. Let me know if there's anything I need to do
So I tested the new helm chart version 0.97.2 with the `helm template` command, and the result gives me a ServiceMonitor that appears to have wrong indentation for `relabelings` and `metricRelabelings`:
```shell
k apply -f - <<-EOT
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: otel
  namespace: default
  labels:
    helm.sh/chart: opentelemetry-collector-0.97.2
    app.kubernetes.io/name: otel-metrics
    app.kubernetes.io/instance: opentelemetry-collector
    app.kubernetes.io/version: "0.104.0"
    app.kubernetes.io/managed-by: Helm
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: otel-metrics
      app.kubernetes.io/instance: opentelemetry-collector
      component: standalone-collector
  endpoints:
    - port: metrics
  relabelings:
    - action: replace
      replacement: my-cluster
      targetLabel: cluster
  metricRelabelings:
    - action: keep
      regex: otelcol_process_memory_rss|target_info
      sourceLabels:
        - __name__
EOT
```
When I try to apply this manually I get: `Error from server (BadRequest): error when creating "STDIN": ServiceMonitor in version "v1" cannot be handled as a ServiceMonitor: strict decoding error: unknown field "spec.metricRelabelings", unknown field "spec.relabelings"`
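For reference, that error matches the indentation problem: in the Prometheus Operator's ServiceMonitor CRD, `relabelings` and `metricRelabelings` are per-endpoint fields, not top-level `spec` fields. A correctly indented manifest would nest them under the endpoint entry:

```yaml
spec:
  endpoints:
    - port: metrics
      relabelings:
        - action: replace
          replacement: my-cluster
          targetLabel: cluster
      metricRelabelings:
        - action: keep
          regex: otelcol_process_memory_rss|target_info
          sourceLabels:
            - __name__
```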
Am I the only one having this problem?
Hi, my team and I are importing metrics from multiple clusters with the help of the otel-collector. Because of that we need an overview of which cluster these metrics come from, so all metrics should carry a `cluster` label. This works well for the imported metrics.
Unfortunately, I currently don't see a way to append the cluster label to the otel-collector's own metrics. Basically, I would like to be able to add a `relabelings` config to the serviceMonitor in the helm chart. Currently there is no option for this in either the collector or the operator helm chart. Is this a valid request?