Open architgarg95 opened 3 days ago
Using Node auto-instrumentation, with the versions specified above as well. Any help would be appreciated.
It looks like there's a feature enabled on the Python and Java Prometheus exporters that makes resource attributes appear on the metrics. This is an optional feature that we have not implemented yet, so I'm changing this to a feature request.
Specification: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/sdk_exporters/prometheus.md#configuration
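For context, the linked spec describes exposing resource attributes via a target_info metric on the Prometheus endpoint. With the Python or Java exporters, the scrape output looks roughly like this (the attribute values below are made-up placeholders, not from this issue):

```text
# HELP target_info Target metadata
# TYPE target_info gauge
target_info{service_name="my-service",service_instance_id="0af7651a-6f8f-4c8e-9e0a-000000000000"} 1
```

A metric like this, carrying the service.instance.id resource attribute, is what lets Prometheus-side queries distinguish series coming from different instances of the same service.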
What version of OpenTelemetry are you using?
"@opentelemetry/api": "1.9.0", "@opentelemetry/auto-instrumentations-node": "0.49.1",
What version of Node are you using?
Node 14.17.6 and Node 16.19.0 (both versions tested).
What did you do?
I have multiple instances of a service running on two AWS EC2 instances. Because of this, the counter values maintained for http_server_duration_milliseconds_count differ between instances, which leads to Grafana Alloy (based on the OpenTelemetry Collector) showing a zig-zag pattern (due to the time difference between the two instances' start). In Java and Python, metrics from multiple instances get an instance_id attribute associated with them, but in Node.js that was not the case. I thought it might be due to the Docker container or the k8s cluster, so I tried the same with this application, but I am still not getting any container_id or instance_id to distinguish the metrics of the different instances.
What did you expect to see?
I expected behaviour similar to the Java and Python OpenTelemetry SDKs, where an instance_id attribute is attached to the metrics.
What did you see instead?
No instance_id attribute on the metrics for Node.js.
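As a workaround sketch (not the built-in exporter behaviour this issue asks for): the service.instance.id resource attribute can be set explicitly when initializing the Node SDK, so each instance's telemetry carries a distinguishing resource attribute. The package names are the standard OpenTelemetry JS ones; the service name is a placeholder, and crypto.randomUUID is available from Node 14.17.0 onward:

```javascript
// Sketch: attach a per-instance id to the SDK's Resource.
// Assumes @opentelemetry/sdk-node and @opentelemetry/resources are installed.
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { Resource } = require('@opentelemetry/resources');
const { randomUUID } = require('crypto');

const sdk = new NodeSDK({
  resource: new Resource({
    'service.name': 'my-service', // placeholder name
    // Prefer the hostname; fall back to a random UUID per process.
    'service.instance.id': process.env.HOSTNAME || randomUUID(),
  }),
});

sdk.start();
```

Whether the attribute then shows up on the Prometheus endpoint still depends on the exporter implementing the optional resource-attribute feature from the spec, so this only guarantees the attribute exists on the Resource.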
Additional context
This really affects the graphs in Grafana Alloy: rate graphs show much higher values where they should show zero, which really put us off.