open-telemetry / opentelemetry-collector-contrib

Contrib repository for the OpenTelemetry Collector
https://opentelemetry.io
Apache License 2.0

[processors/cumulativetodelta] getting UNSUPPORTED_METRIC_TYPE_MONOTONIC_CUMULATIVE_SUM in otel logs #33673

Open vaibhhavv opened 4 months ago

vaibhhavv commented 4 months ago

Component(s)

processor/cumulativetodelta

Describe the issue you're reporting

We have a use case where we receive data from our producers in our OpenTelemetry Collector and then export it to Dynatrace; specifically, we use the 'otlphttp/export-to-dynatrace' exporter. We are seeing the warnings below in our Collector logs and can also see that some data is being dropped.

2024-06-20T12:17:11.450Z    warn    otlphttpexporter@v0.102.1/otlp.go:358   Partial success response    {"kind": "exporter", "data_type": "metrics", "name": "otlphttp/export-to-dynatrace", "message": "The following issues were encountered while ingesting OTLP metrics:\nErrors:\nUnsupported metric: 'traces_request_total' - Reason: UNSUPPORTED_METRIC_TYPE_MONOTONIC_CUMULATIVE_SUM\n", "dropped_data_points": 58}

2024-06-20T12:17:11.680Z    warn    otlphttpexporter@v0.102.1/otlp.go:358   Partial success response    {"kind": "exporter", "data_type": "metrics", "name": "otlphttp/export-to-dynatrace", "message": "The following issues were encountered while ingesting OTLP metrics:\nErrors:\nUnsupported metric: 'system.disk.io' - Reason: UNSUPPORTED_METRIC_TYPE_MONOTONIC_CUMULATIVE_SUM\n", "dropped_data_points": 318}

2024-06-20T12:17:11.749Z    warn    otlphttpexporter@v0.102.1/otlp.go:358   Partial success response    {"kind": "exporter", "data_type": "metrics", "name": "otlphttp/export-to-dynatrace", "message": "The following issues were encountered while ingesting OTLP metrics:\nErrors:\nUnsupported metric: 'prometheus_tsdb_total' - Reason: UNSUPPORTED_METRIC_TYPE_MONOTONIC_CUMULATIVE_SUM\n", "dropped_data_points": 541}

Then we got to know about the cumulativetodelta processor: since Dynatrace's OTLP ingest rejects monotonic cumulative sums (as the warnings above show), converting them to delta temporality resolves the errors. We quickly tried it in our use case with the config below, and after adding it to the pipeline the errors were gone.

processors:
  cumulativetodelta/export-to-dynatrace:
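
For context, here is a sketch of how the processor sits in our metrics pipeline. The 'otlp' receiver name is an assumption for illustration; only the exporter name comes from our actual setup.

service:
  pipelines:
    metrics:
      receivers: [otlp]  # assumed receiver; ours may differ
      processors: [cumulativetodelta/export-to-dynatrace]
      exporters: [otlphttp/export-to-dynatrace]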

But with this configuration, the Collector spends resources running the cumulativetodelta processor on every metric, while we receive the "UNSUPPORTED_METRIC_TYPE_MONOTONIC_CUMULATIVE_SUM" warning only for a (large) subset of them. Listing every affected metric in the processor's "include" parameter does not look practical, and the metric names share no common string that a regex could match. If producers add new metrics in the future and we hit the same errors, we would have to keep adding those metrics to the "include" parameter manually, which is again a headache. A name-based "include" config, the approach we want to avoid, is sketched below.
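
For illustration, a name-based "include" config would look roughly like this (metric names taken from the warnings above); every newly affected metric would have to be appended by hand:

processors:
  cumulativetodelta/export-to-dynatrace:
    include:
      match_type: strict
      metrics:
        - traces_request_total
        - system.disk.io
        - prometheus_tsdb_total
        # ...and every future cumulative metric, added manually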

The question is: do we have some way to include metrics based on their type, like the filterprocessor does in the example below?

processors:
  filter/ottl:
    metrics:
      metric:
        - 'type == METRIC_DATA_TYPE_HISTOGRAM'

If there were some way to "include" metrics based on type, apart from exact strings and regexes, it would be great. Any other production-ready solution for this use case from the experts is also appreciated.
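
Something along these lines is what I have in mind. The "metric_types" key below is hypothetical, shown only to sketch the requested feature, and is not an existing processor option:

processors:
  cumulativetodelta/export-to-dynatrace:
    include:
      match_type: strict
      metric_types:  # hypothetical parameter, not currently supported
        - sum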

github-actions[bot] commented 4 months ago

Pinging code owners:

vaibhhavv commented 4 months ago

Hi @TylerHelmuth, could you please share your expertise here?

TylerHelmuth commented 4 months ago

Unfortunately the cumulativetodelta processor doesn't have metric type as a selection option. It could be made to have that feature.

vaibhhavv commented 4 months ago

@TylerHelmuth If that's the case, then as a user of OpenTelemetry I want to recommend this as a feature. In many production use cases at ground level, it would benefit the community and save a lot of manual work. It would also save compute resources and, eventually, reduce the data centre's carbon footprint.

bacherfl commented 3 months ago

@TylerHelmuth if this is a feature that should be added, I would be happy to work on that - CC @evan-bradley

jdespatis commented 2 months ago

I have the same problem while pushing some metrics to Dynatrace through my ADOT collector

@TylerHelmuth, are you OK with the PR provided by @bacherfl? It would be awesome to merge it and fix this issue ;)

github-actions[bot] commented 2 weeks ago

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

vaibhhavv commented 2 weeks ago

Hi @TylerHelmuth, can we bring this requested feature to the processor? @bacherfl already made some amazing commits in the PR to enable this feature.