open-telemetry / opentelemetry-helm-charts

OpenTelemetry Helm Charts
https://opentelemetry.io
Apache License 2.0

opentelemetry helm charts is not straightforward for Airflow[otel] integration #1087

Open dthauvin opened 5 months ago

dthauvin commented 5 months ago

Hello, I'm trying to set up the OpenTelemetry Collector with the apache-airflow[otel] metrics integration. I have tried many configurations without success.

Airflow 2.8.2 (installed with pip install apache-airflow[otel]), Helm chart version 0.84.0, app version 0.96.0.

My Airflow metrics configuration looks like:

[metrics]
otel_on = True
otel_host = kube-opentelemetry-collector.open-telemetry.svc.cluster.local
otel_port = 4318
otel_interval_milliseconds = 30000
otel_ssl_active = False

When following the Airflow Breeze configuration, http://localhost:8889/metrics is up and running but does not display anything.

My Helm chart values.yaml looks like:

mode: deployment
resources:
  limits:
    cpu: 250m
    memory: 512Mi
config:
  receivers:
    otlp:
      protocols:
        http: 
          endpoint: 0.0.0.0:4318
  processors:
    batch: {}
  exporters:
    debug:
      verbosity: detailed
    prometheus:
      endpoint: 0.0.0.0:8889
  service:
    pipelines:
      traces:
        receivers: [otlp]
        processors: [batch]
        exporters: [debug]
      metrics:
        receivers: [otlp]
        processors: [batch]
        exporters: [debug, prometheus]
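As far as I understand, the chart only exposes ports that are declared under ports:, so to reach the prometheus exporter on 8889 through the Service, something along these lines would also be needed (a rough sketch; the port key name prometheus-exporter is arbitrary):

ports:
  # Sketch: expose the prometheus exporter endpoint (0.0.0.0:8889 above) via the Service
  prometheus-exporter:
    enabled: true
    containerPort: 8889
    servicePort: 8889
    protocol: TCP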

When using the default Helm chart values.yaml, I only get the Collector's own default metrics on http://localhost:8888/metrics, something like:

# HELP otelcol_exporter_send_failed_metric_points Number of metric points in failed attempts to send to destination.
# TYPE otelcol_exporter_send_failed_metric_points counter
otelcol_exporter_send_failed_metric_points{exporter="debug",service_instance_id="668b72b5-0850-4043-b9b3-53e266900ac4",service_name="otelcol-contrib",service_version="0.96.0"} 0
# HELP otelcol_exporter_sent_metric_points Number of metric points successfully sent to destination.
# TYPE otelcol_exporter_sent_metric_points counter

My Helm chart values look like:

.
.
mode: deployment
config:
  exporters:
    debug: {}
    logging: {}
  extensions:
    health_check:
      endpoint: '0.0.0.0:13133'
    memory_ballast: {}
  processors:
    batch: {}
    memory_limiter: null
  receivers:
    jaeger:
      protocols:
        grpc:
          endpoint: '0.0.0.0:14250'
        thrift_http:
          endpoint: '0.0.0.0:14268'
        thrift_compact:
          endpoint: '0.0.0.0:6831'
    otlp:
      protocols:
        grpc:
          endpoint: '0.0.0.0:4317'
        http:
          endpoint: '0.0.0.0:4318'
    prometheus:
      config:
        scrape_configs:
          - job_name: opentelemetry-collector
            scrape_interval: 10s
            static_configs:
              - targets:
                  - '0.0.0.0:8888'
    zipkin:
      endpoint: '0.0.0.0:9411'
  service:
    telemetry:
      metrics:
        address: '0.0.0.0:8888'
    extensions:
      - health_check
      - memory_ballast
    pipelines:
      logs:
        exporters:
          - debug
        processors:
          - memory_limiter
          - batch
        receivers:
          - otlp
      metrics:
        exporters:
          - debug
        processors:
          - memory_limiter
          - batch
        receivers:
          - otlp
          - prometheus
      traces:
        exporters:
          - debug
        processors:
          - memory_limiter
          - batch
        receivers:
          - otlp
          - jaeger
          - zipkin
.
.
ports:
  otlp:
  metrics:
    enabled: true
    containerPort: 8888
    servicePort: 8888
    protocol: TCP
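
Since Airflow is configured to send OTLP over HTTP to port 4318, I assume the OTLP HTTP port also has to be enabled on the Service; my understanding is that the chart default is roughly:

ports:
  # Sketch of the otlp-http port entry; values mirror the receiver endpoint 0.0.0.0:4318 above
  otlp-http:
    enabled: true
    containerPort: 4318
    servicePort: 4318
    protocol: TCP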

Network connectivity between the Airflow containers and the OpenTelemetry Collector deployment is also OK.

I could use your help. Any suggestions or thoughts?

astanishevskyi-gl commented 5 months ago

I have the same problem.

TylerHelmuth commented 5 months ago

I am not familiar with Airflow; is it the source of the metrics, and is it sending them over OTLP?

If you don't want the otel default metrics, remove the prometheus receiver.
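
Roughly, that would mean a metrics pipeline like this (a sketch based on the values you posted):

config:
  service:
    pipelines:
      metrics:
        # Without the prometheus receiver, the Collector's own otelcol_* metrics are
        # no longer scraped into the pipeline; only OTLP data flows through.
        receivers: [otlp]
        processors: [memory_limiter, batch]
        exporters: [debug]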