The reason I think this is important is that atoms and strings show up the same in the exporter, since they serialize identically. The end result is two metrics with the same labels in your Prometheus output, which causes Prometheus to fail to scrape the endpoint due to bad data (duplicated entries).

This was pretty hard to track down, but I eventually noticed the difference by checking the internal aggregation tables with `TelemetryMetricsPrometheus.Core.Aggregator.get_time_series`, filtering for the metric I was looking for, and seeing how it was being stored.

To resolve this, this PR makes atoms and strings fundamentally the same to the aggregator's internal ETS table.
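To illustrate the collision (a minimal sketch, not the actual aggregator code; the label names and `normalize` helper are hypothetical): two label maps that differ only in atom vs. string values are distinct ETS/map keys, yet render to the same Prometheus text line, and normalizing values with `to_string/1` collapses them into one series.

```elixir
# Two time series whose labels differ only by atom vs. string value.
labels_a = %{region: :east}
labels_b = %{region: "east"}

# They are distinct keys in a map (or ETS table)...
series = %{labels_a => 1, labels_b => 2}
2 = map_size(series)

# ...but both serialize to the same exposition line, which Prometheus
# rejects as duplicated data:
#   my_metric{region="east"} 1
#   my_metric{region="east"} 2

# Normalizing label values to strings makes them one series
# (hypothetical helper, sketching the idea behind this PR):
normalize = fn labels -> Map.new(labels, fn {k, v} -> {k, to_string(v)} end) end
true = normalize.(labels_a) == normalize.(labels_b)
```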