open-telemetry / opentelemetry-collector-contrib

Contrib repository for the OpenTelemetry Collector
https://opentelemetry.io
Apache License 2.0

New component: Metrics DeDuplicator processor #20921

Open nicolastakashi opened 1 year ago

nicolastakashi commented 1 year ago

The purpose and use-cases of the new component

Engineers may want to set up a highly available environment for metrics collection by having pairs of data sources scrape the same set of targets and send the samples to the OpenTelemetry Collector, which enriches and exports the data to the metrics backend (e.g., Thanos).

Although many metrics backends support deduplication at query time, this requires customers to store duplicate samples, which consumes more computing resources such as storage, or may increase their bills when using a vendor backend (depending on the vendor's pricing strategy). With that in mind, it would be great if the OpenTelemetry Collector provided a deduplication processor.

This processor could be modeled on an existing implementation such as Thanos, Grafana Mimir, Cortex, or any other proven approach.

Example configuration for the component

metricsdeduplicator:
  replica_label: 
    - prometheus_replica
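
For illustration, here is a minimal sketch of the deduplication logic such a processor could apply (simplified types and names, not the collector's actual pdata API): build a series identity that excludes the configured replica label, elect the first replica seen per identity, and drop samples from every other replica.

package main

import (
	"fmt"
	"sort"
	"strings"
)

// series is a simplified stand-in for a metric stream; a real processor
// would operate on the collector's pdata types instead.
type series struct {
	name   string
	labels map[string]string
}

// identity builds a stable key from the metric name and all labels except
// the replica label, so samples from different replicas collide.
func identity(s series, replicaLabel string) string {
	keys := make([]string, 0, len(s.labels))
	for k := range s.labels {
		if k != replicaLabel {
			keys = append(keys, k)
		}
	}
	sort.Strings(keys)
	var b strings.Builder
	b.WriteString(s.name)
	for _, k := range keys {
		fmt.Fprintf(&b, ",%s=%s", k, s.labels[k])
	}
	return b.String()
}

// dedup keeps the first replica seen per series identity and drops samples
// from every other replica. In practice the `owner` map would need a TTL so
// collection fails over when the elected replica stops reporting.
func dedup(in []series, replicaLabel string) []series {
	owner := map[string]string{} // series identity -> elected replica value
	var out []series
	for _, s := range in {
		id := identity(s, replicaLabel)
		replica := s.labels[replicaLabel]
		if r, ok := owner[id]; !ok {
			owner[id] = replica
		} else if r != replica {
			continue // duplicate sample from a non-elected replica
		}
		delete(s.labels, replicaLabel) // drop the replica label on export
		out = append(out, s)
	}
	return out
}

func main() {
	in := []series{
		{"http_requests_total", map[string]string{"job": "api", "prometheus_replica": "r0"}},
		{"http_requests_total", map[string]string{"job": "api", "prometheus_replica": "r1"}},
	}
	for _, s := range dedup(in, "prometheus_replica") {
		fmt.Println(s.name, s.labels) // only the r0 sample survives
	}
}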

Telemetry data types supported

Metrics

Is this a vendor-specific component?

Sponsor (optional)

No response

Additional context

No response

nicolastakashi commented 1 year ago

There's an existing issue, #17874, asking for a processor to deduplicate traces; maybe we can join efforts.

atoulme commented 1 year ago

Would you please expand on the use case? I am not sure you can safely scrape a Prometheus metrics endpoint in parallel, since it is stateful. The approach I have heard folks tend to take is to use a leader-election mechanism to pick a specific collector to perform collection, with failover.
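
For reference, the leader-election approach described above is commonly built on Kubernetes Lease objects via client-go. Below is a generic sketch of that pattern, not an existing collector extension; the lease name, namespace, and identity are placeholders.

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Each collector replica competes for the same Lease; only the holder scrapes.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "otel-scraper", Namespace: "monitoring"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("POD_NAME")},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader: start scraping targets")
				<-ctx.Done() // scrape until leadership is lost
			},
			OnStoppedLeading: func() {
				log.Println("lost leadership: stop scraping, fail over")
			},
		},
	})
}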

nicolastakashi commented 1 year ago

Indeed, most of the current implementations rely on a single instance to deduplicate metrics. That could be the initial state here as well, e.g., ensuring this processor does not run in multiple replicas.

But looking at future implementations, we could take a strategy similar to the existing one that load-balances traces based on the trace ID, and discuss how to load-balance metrics between collectors (sketched below).

This processor is better suited to run on central collectors that receive metrics from other agents, so it's easier to ensure only one replica is running, or to add a load-balancing implementation later.
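
As a rough sketch of that idea (hypothetical, not an existing component): hash each series' identity, excluding the replica label, and route it to a consistent downstream collector, analogous to how the loadbalancing exporter routes spans by trace ID. All replicas of a series then land on the same collector, which can deduplicate locally.

package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// routeKey hashes the metric name plus sorted labels (excluding the replica
// label) so that duplicate samples from different replicas map to the same
// downstream collector.
func routeKey(name string, labels map[string]string, replicaLabel string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(name))
	keys := make([]string, 0, len(labels))
	for k := range labels {
		if k != replicaLabel {
			keys = append(keys, k)
		}
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Fprintf(h, "|%s=%s", k, labels[k])
	}
	return h.Sum32()
}

func main() {
	backends := []string{"collector-0:4317", "collector-1:4317", "collector-2:4317"}
	labels := map[string]string{"job": "api", "prometheus_replica": "r1"}
	// Both replicas of this series hash to the same backend index.
	idx := routeKey("http_requests_total", labels, "prometheus_replica") % uint32(len(backends))
	fmt.Println("route to", backends[idx])
}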

github-actions[bot] commented 1 year ago

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

nicolastakashi commented 1 year ago

I still think it's relevant

github-actions[bot] commented 1 year ago

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

github-actions[bot] commented 10 months ago

This issue has been closed as inactive because it has been stale for 120 days with no activity.

nicolastakashi commented 1 month ago

Bringing this subject back! I've been testing an implementation, available in this repo: https://github.com/nicolastakashi/metricsdedupprocessor

I'd like to get feedback on it. Can we reopen the issue, @atoulme?