Hi CX team,

I would like to seek help understanding otel-integration 0.0.65. I am using a minimal https://github.com/coralogix/telemetry-shippers/blob/master/otel-integration/k8s-helm/values.yaml and only changing the global section.

I also deployed a collector with one Prometheus receiver job named c1, configured to scrape pods in all namespaces with a specific prefix. Pods in these namespaces are Prometheus-annotated.

When I inspect the ingested metrics, I expected a metric such as jvm_buffer_count_buffers to carry job="c1", telling me that it was scraped by my custom c1 collector. However, the same metric (e.g. jvm_buffer_count_buffers) also exists with the label cx_otel_integration_name="coralogix-integration-helm", which comes from otel-integration. This is the part that confuses me. How does this work? I confirmed that these app pods are not pushing metrics to any collector.

Also, for the sample metric jvm_buffer_count_buffers with the label cx_otel_integration_name="coralogix-integration-helm", I see three time series with exactly the same labels, even the same timestamp, differing only by the host_id label.
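For reference, this is roughly how my c1 Prometheus receiver job is set up (a sketch, not my exact config: the namespace prefix team- and the relabeling rules here are illustrative assumptions):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: c1
          kubernetes_sd_configs:
            # discover pods cluster-wide
            - role: pod
          relabel_configs:
            # keep only pods in namespaces with the given prefix
            # (prefix value is illustrative)
            - source_labels: [__meta_kubernetes_namespace]
              regex: team-.*
              action: keep
            # keep only pods carrying the prometheus.io/scrape=true annotation
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
              regex: "true"
              action: keep
```

With a config like this I would expect every sample scraped by this receiver to get job="c1", which is why the extra cx_otel_integration_name-labeled copies surprised me.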