Closed mkielar closed 1 year ago
I think the reason for this may be a difference in implementation. See this fragment of the `prometheusreceiver` implementation:

https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/prometheusreceiver/internal/transaction.go#L201-L202

vs. this implementation in `statsdreceiver`:

https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/statsdreceiver/protocol/statsd_parser.go#L249-L252
You can see that the objects/types on which the `Name` and `Version` attributes are set differ (`pcommon.InstrumentationScope` for `statsdreceiver` vs. `pmetric.NewMetrics` -> `ResourceMetrics` -> `ScopeMetrics` -> `Scope` for `prometheusreceiver`). It seems the latter makes `awsemfexporter` use the receiver name as a metric dimension, and the former does not.
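To make the suspected behaviour concrete, here is a purely hypothetical sketch (not the actual `awsemfexporter` source; the `withOTelLib` function is invented) of how an exporter could end up emitting an `OTelLib` dimension only for metrics whose instrumentation scope has a non-empty name:

```go
package main

import "fmt"

// Hypothetical sketch, NOT the real awsemfexporter code: if a metric's
// instrumentation scope carries a non-empty Name, copy it into the label
// set as "OTelLib", which would later surface as a CloudWatch dimension.
func withOTelLib(scopeName string, labels map[string]string) map[string]string {
	out := make(map[string]string, len(labels)+1)
	for k, v := range labels {
		out[k] = v
	}
	if scopeName != "" {
		// Only added when the receiver actually set a scope name.
		out["OTelLib"] = scopeName
	}
	return out
}

func main() {
	// Prometheus-style pipeline: the receiver populated the scope name.
	prom := withOTelLib("otelcol/prometheusreceiver", map[string]string{"service": "java-app"})
	fmt.Println(prom["OTelLib"]) // otelcol/prometheusreceiver

	// StatsD-style pipeline where the scope name was left empty.
	statsd := withOTelLib("", map[string]string{"service": "envoy"})
	_, present := statsd["OTelLib"]
	fmt.Println(present) // false
}
```

Under that assumption, the observed asymmetry would come purely from which receivers populate the scope name, not from any per-receiver logic in the exporter.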
@paologallinaharbur, you seem to be the author of both of those implementations, can you please take a look and/or comment on the issue?
We're also testing the behaviour of https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver/otlpreceiver with `awsemfexporter`; I should have some results later this week.
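The two code paths compared above can be sketched with simplified stand-ins for the pdata types (these are illustrative structs, not the real `pcommon`/`pmetric` API): one sets `Name`/`Version` directly on a scope value, the other navigates `Metrics -> ResourceMetrics -> ScopeMetrics -> Scope` first.

```go
package main

import "fmt"

// Simplified stand-ins for pcommon.InstrumentationScope and the
// pmetric.Metrics -> ResourceMetrics -> ScopeMetrics hierarchy.
type InstrumentationScope struct{ Name, Version string }

type ScopeMetrics struct{ Scope InstrumentationScope }
type ResourceMetrics struct{ ScopeMetrics []ScopeMetrics }
type Metrics struct{ ResourceMetrics []ResourceMetrics }

// statsd-style path: Name/Version are set directly on a scope value.
func statsdStyleScope() InstrumentationScope {
	var s InstrumentationScope
	s.Name = "otelcol/statsdreceiver"
	s.Version = "0.78.0"
	return s
}

// prometheus-style path: navigate down to the Scope and set it there.
func prometheusStyleScope() InstrumentationScope {
	m := Metrics{ResourceMetrics: []ResourceMetrics{{ScopeMetrics: []ScopeMetrics{{}}}}}
	scope := &m.ResourceMetrics[0].ScopeMetrics[0].Scope
	scope.Name = "otelcol/prometheusreceiver"
	scope.Version = "0.78.0"
	return *scope
}

func main() {
	// Both paths end with the same type populated the same way.
	fmt.Println(statsdStyleScope())
	fmt.Println(prometheusStyleScope())
}
```

Either way the result is a populated scope of the same type, which is consistent with the later finding in this thread that the two implementations behave the same.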
I've just realized that `aws-otel-collector` 0.30.0 uses the 0.78.0 release of `opentelemetry-collector-contrib`, and the changes introduced by #23563 were only merged in 0.81.0. I'm going to close this ticket and wait for `aws-otel-collector` to catch up with the latest changes, then test again. Apologies for the noise...
@mkielar I did some investigation that I'll dump here in case you need it (otherwise ignore it).

> You can see the objects/types on which the Name and Version attributes are set, differ (pcommon.InstrumentationScope for statsdreceiver vs. pmetric.NewMetrics -> ResourceMetrics -> ScopeMetrics -> Scope for prometheusreceiver). It seems the latter makes awsemfexporter use the receiver name as Metric Dimension, and the former does not.
In `prometheusreceiver`, `SetName` acts as well on the `pcommon.InstrumentationScope` returned by `scope()` in the line you mentioned.

I ran the tests, and how the scope is added seems exactly the same (I would say that `SetName` and `SetVersion` are good safety nets). Moreover, you mentioned that:
@paologallinaharbur, I managed to set up a local workspace and debug the tests, and I saw exactly what you're showing in the screenshots, which led me to the conclusion that it's not the implementation, but simply an older version of the dependency in `aws-otel-collector`. As I said, I'll wait for AWS to catch up and try upgrading again in a month or two.

Anyway, thanks a lot for looking into this (and apologies for wasting your time).
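For completeness: the issue report below proposes a switch to control whether the `OTelLib` dimension is added. Nothing in this thread establishes that such an option exists in `awsemfexporter`; the following is a purely hypothetical sketch of what it could look like (the `DisableOTelLibDimension` field name is invented):

```go
package main

import "fmt"

// Purely hypothetical sketch of the switch requested in the issue below;
// DisableOTelLibDimension is an invented option, not a real awsemfexporter setting.
type emfConfig struct {
	DisableOTelLibDimension bool
}

// dimensionSet returns the dimensions a metric would be exported with:
// the base dimensions, plus OTelLib when a scope name is present and the
// hypothetical switch is off.
func dimensionSet(cfg emfConfig, scopeName string, base []string) []string {
	dims := append([]string(nil), base...)
	if scopeName != "" && !cfg.DisableOTelLibDimension {
		dims = append(dims, "OTelLib")
	}
	return dims
}

func main() {
	base := []string{"service"}
	fmt.Println(dimensionSet(emfConfig{}, "otelcol/prometheusreceiver", base))                              // [service OTelLib]
	fmt.Println(dimensionSet(emfConfig{DisableOTelLibDimension: true}, "otelcol/prometheusreceiver", base)) // [service]
}
```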
Component(s)
exporter/awsemf, receiver/prometheus, receiver/statsd
What happened?
Description
We have `aws-otel-collector` 0.30.0 running alongside a Java app (which exposes Prometheus metrics) and an AWS/Envoy sidecar (which exposes StatsD metrics). `aws-otel-collector` is configured to process both of those sources using separate pipelines, and to push the metrics to AWS CloudWatch using `awsemfexporter`. We previously used version 0.16.1 of `aws-otel-collector` and are only now upgrading.

Previously, metrics from both sources were stored in CloudWatch "as-is". After the upgrade, however, we noticed that the Prometheus metrics gained a new dimension: `OTelLib`, with value `otelcol/prometheusreceiver`. This, obviously, broke a few things on our end (like CloudWatch Alarms).

After digging a bit, I found these two tickets, which were supposed to get both of these receivers to the same place in terms of populating `otel.library.name`:

Unfortunately, I was not able to grasp how that translates to the `OTelLib` metric dimension set in `awsemfexporter`, but it seems somehow related at this point. My understanding is that it's a de-facto standard for receivers to add the name and version of the library to processed metrics, but I do not understand how, or why at all, that information is being added as a dimension. I also do not understand whether that's an expected outcome; thus it's hard for me to figure out whether it's a bug in `prometheusreceiver` (that it adds that as a dimension), in `statsdreceiver` (that it doesn't add it as a dimension), or in `awsemfexporter`. I'd be grateful for any guidance on this matter.

Steps to Reproduce
Expected Result
I would expect the following:

- `awsemfexporter` would add the new `OTelLib` dimension regardless of where the metrics come from, or would not add it at all. I'm not sure what is considered the "correct" behaviour here; I would expect it to be consistent across receivers, however.
- Looking at the `awsemfexporter` configuration, it has dedicated logic to handle that `OTelLib` dimension. I think it would be a good idea to implement a switch that would control whether the `OTelLib` dimension is being added or not. In our case, forcefully adding this new dimension to all collected metrics will break A LOT of things around our observability solution.

Actual Result
- Metrics received by `prometheusreceiver` are stored by `awsemfexporter` with an additional `OTelLib` dimension set to `otelcol/prometheusreceiver`.
- Metrics received by `statsdreceiver` are stored by an identical configuration of `awsemfexporter` without the `OTelLib` dimension.
- There seems to be no way to configure `awsemfexporter` in a way that it would not add the `OTelLib` dimension.

Collector version
v0.78.0 (according to: https://github.com/aws-observability/aws-otel-collector/releases/tag/v0.30.0)
Environment information
Environment
OS: AWS ECS / Fargate. We're running a custom-built Docker image, based on `amazonlinux:2`, with a Dockerfile looking like below:

OpenTelemetry Collector configuration
Log output
Additional context
N/A