VijayPatil872 opened 3 months ago
Pinging code owners:
connector/servicegraph: @jpkrohling @mapno @JaredTan95
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Any update on this issue?
Can you provide more information on why the metrics are incorrect? A test or test data that reproduces the behaviour would be very helpful.
@mapno If we consider the traces_service_graph_request_total or traces_service_graph_request_failed_total metrics, these should be counters, but they are seen fluctuating up and down. Similarly, the calls_total metric from spanmetrics should be a counter, but its graph also goes up and down at times. Also, can you explain what kind of test or test data you need, given the configuration applied above? Let me know if additional details are required.
Component(s)
connector/servicegraph
What happened?
Description
I am using the servicegraph connector to generate a service graph and metrics from spans, but the metrics emitted by the connector fluctuate up and down. We have deployed a layer of Collectors containing the load-balancing exporter in front of the traces Collectors that do the span metrics and service graph connector processing. The load-balancing exporter hashes the trace ID consistently to determine which backend Collector should receive the spans for that trace. The service graph metrics are exported to Grafana Mimir with the prometheusremotewrite exporter; the two-tier layout is sketched under Steps to Reproduce below.
Steps to Reproduce
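A minimal sketch of the two-tier layout described above (assumed for illustration: OTLP ingest on both tiers, a DNS resolver for the backend Collectors, and placeholder endpoints for the backend service and the Mimir remote-write URL; this is not the exact configuration in use):

```yaml
# Two separate Collector configs, shown together for brevity.
# Tier 1: load-balancing layer. Hashes on trace ID so that all spans of a
# trace are routed to the same backend Collector.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  loadbalancing:
    routing_key: traceID
    protocol:
      otlp:
        tls:
          insecure: true
    resolver:
      dns:
        hostname: otelcol-traces.example.svc.cluster.local  # placeholder backend service

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [loadbalancing]

---
# Tier 2: traces Collectors. Run the servicegraph and spanmetrics connectors
# and remote-write the generated metrics to Grafana Mimir.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

connectors:
  servicegraph:
  spanmetrics:

exporters:
  prometheusremotewrite:
    endpoint: https://mimir.example.local/api/v1/push  # placeholder Mimir endpoint

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [servicegraph, spanmetrics]
    metrics:
      receivers: [servicegraph, spanmetrics]
      exporters: [prometheusremotewrite]
```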
Expected Result
The metrics emitted by the connector should be correct; counter metrics should only ever increase.
Actual Result
The counter metrics emitted by the connector fluctuate up and down instead of increasing monotonically.
Collector version
0.104.0
Environment information
No response
OpenTelemetry Collector configuration
Log output
No response
Additional context
No response