open-telemetry / opentelemetry-collector-contrib

Contrib repository for the OpenTelemetry Collector
https://opentelemetry.io
Apache License 2.0

aws resource detectors are executed even if not configured #24072

Closed cforce closed 1 year ago

cforce commented 1 year ago

Component(s)

exporter/datadog

What happened?

Description

Although it is not configured, at least the aws detector (and maybe others as well) is executed anyway.

Steps to Reproduce

Configure the resource detection processor without the aws detector; warnings still appear because the scan of the AWS cloud API is not successful.
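
For illustration, a minimal sketch of the relevant processor block (it mirrors the full configuration further below; the aws detector is deliberately absent from the detectors list):

processors:
  resourcedetection:
    # aws is intentionally not listed here, yet the EC2 metadata endpoint is still probed (see the log output below)
    detectors:
      - env
      - system
      - docker
      - azure
    timeout: 10s
    override: false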

Expected Result

Only the configured detectors should be executed.

Actual Result

The aws detector (and maybe others) is still invoked.

Collector version

0.81.0

Environment information

Environment

OS: (e.g., "Ubuntu 20.04")
Compiler (if manually compiled): (e.g., "go 1.20.5")

https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor

OpenTelemetry Collector configuration

extensions:
  zpages:
    endpoint: '0.0.0.0:55679'
  health_check:
    endpoint: '0.0.0.0:8081'
  memory_ballast:
    size_mib: 512
receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      paging:
        metrics:
          system.paging.utilization:
            enabled: true
      cpu:
        metrics:
          system.cpu.utilization:
            enabled: true
      memory: null
      load:
        cpu_average: true
      network: null
      process:
        mute_process_name_error: false
        mute_process_exe_error: false
        mute_process_io_error: false
  hostmetrics/disk:
    collection_interval: 3m
    scrapers:
      disk: null
      filesystem:
        metrics:
          system.filesystem.utilization:
            enabled: true
  otlp:
    protocols:
      grpc:
        endpoint: '0.0.0.0:4317'
      http:
        endpoint: '0.0.0.0:4318'
  prometheus/otelcol:
    config:
      scrape_configs:
        - job_name: otelcol
          scrape_interval: 10s
          static_configs:
            - targets:
                - '0.0.0.0:8888'
processors:
  resourcedetection:
    detectors:
      - env
      - system
      - docker
      - azure
    timeout: 10s
    override: false
  cumulativetodelta: null
  batch/metrics:
    send_batch_max_size: 1000
    send_batch_size: 100
    timeout: 10s
  batch/traces:
    send_batch_max_size: 1000
    send_batch_size: 100
    timeout: 5s
  batch/logs:
    send_batch_max_size: 1000
    send_batch_size: 100
    timeout: 30s
  attributes:
    actions:
      - key: tags
        value:
          - 'DD_ENV:${env:ENVIRONMENT}'
          - 'geo:${env:GEO}'
        action: upsert
  resource:
    attributes:
      - key: DD_ENV
        value: '${env:ENVIRONMENT}'
        action: insert
      - key: env
        value: '${env:ENVIRONMENT}'
        action: insert
      - key: geo
        value: '${env:GEO}'
        action: insert
      - key: region
        value: '${env:REGION}'
        action: insert
exporters:
  datadog:
    api:
      site: datadoghq.com
      key: '${env:DATADOG_API_KEY}'
    metrics:
      resource_attributes_as_tags: true
    host_metadata:
      enabled: true
      tags:
        - 'DD_ENV:${env:ENVIRONMENT}'
        - 'geo:${env:GEO}'
        - 'region:${env:REGION}'
service:
  extensions:
    - zpages
    - health_check
    - memory_ballast
  telemetry:
    metrics:
      address: '0.0.0.0:8888'
    logs:
      level: ${env:LOG_LEVEL || 'info'}
  pipelines:
    traces:
      receivers:
        - otlp
      processors:
        - batch/traces
      exporters:
        - datadog
    metrics/hostmetrics:
      receivers:
        - otlp
      processors:
        - batch/metrics
      exporters:
        - datadog
    metrics:
      receivers:
        - otlp
      processors:
        - batch/metrics
      exporters:
        - datadog

Log output

2023-07-10T16:37:50.605Z        info    service/telemetry.go:81 Setting up own telemetry...
2023-07-10T16:37:50.605Z        info    service/telemetry.go:104        Serving Prometheus metrics      {"address": "0.0.0.0:8888", "level": "Basic"}
2023/07/10 16:37:50 WARN: failed to get session token, falling back to IMDSv1: 403 connecting to 169.254.169.254:80: connecting to 169.254.169.254:80: dial tcp 169.254.169.254:80: connectex: A socket operation was attempted to an unreachable network.: Forbidden
        status code: 403, request id:
caused by: EC2MetadataError: failed to make EC2Metadata request
connecting to 169.254.169.254:80: connecting to 169.254.169.254:80: dial tcp 169.254.169.254:80: connectex: A socket operation was attempted to an unreachable network.
        status code: 403, request id:
2023-07-10T16:37:50.684Z        info    provider/provider.go:30 Resolved source {"kind": "exporter", "data_type": "metrics", "name": "datadog", "provider": "system", "source": {"Kind":"host","Identifier":"b53bd9e04e85"}}

Additional context

No response

github-actions[bot] commented 1 year ago

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

mx-psi commented 1 year ago

This log likely comes from the Datadog exporter and not from the resource detection processor. The Datadog exporter calls the AWS EC2 endpoint to determine the cloud provider the Collector is running on. This is not configurable at the moment; one way to avoid the call is to set the hostname option. See #16442 for more details.
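
For example, a minimal sketch of how the hostname option might be set on the Datadog exporter to skip that lookup; the value my-collector-host is a placeholder and not taken from this issue:

exporters:
  datadog:
    # An explicit hostname is meant to stop the exporter from probing
    # cloud-provider metadata endpoints (such as EC2 IMDS) to resolve the host name.
    hostname: my-collector-host
    api:
      site: datadoghq.com
      key: '${env:DATADOG_API_KEY}'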

mx-psi commented 1 year ago

I am going to close this as a duplicate of #22807. Let's continue the discussion over there.