opensearch-project / dashboards-observability

Visualize and explore your logs, traces and metrics data in OpenSearch Dashboards
https://opensearch.org/docs/latest/observability-plugin/index/

How do OpenTelemetry metrics get into OpenSearch Dashboards? #2172

Open Ommkwn2001 opened 2 months ago

Ommkwn2001 commented 2 months ago

Describe the bug

I installed OpenSearch Dashboards, Data Prepper, and the OpenTelemetry Collector using Helm charts.

First of all I installed OpenSearch and OpenSearch Dashboards. Then, so that the index gets generated automatically in OpenSearch, I installed Data Prepper with the following configuration in its values.yaml.

Data Prepper configuration in values.yaml:

```yaml
  config:
    otel-metrics-pipeline:
      workers: 8
      delay: 3000
      source:
        otel_metrics_source:
          health_check_service: true
          ssl: false
      processor:
        - otel_metrics:
            calculate_histogram_buckets: true
            calculate_exponential_histogram_buckets: true
            exponential_histogram_max_allowed_scale: 10
            flatten_attributes: false
      sink:
        - opensearch:
            hosts: ["https://opensearch-cluster-master.default.svc.cluster.local:9200"]
            username: "admin"
            password: "TadhakDev01"
            insecure: true
            index_type: custom
            index: ss4o_metrics-otel-%{yyyy.MM.dd}
            bulk_size: 4
```

This automatically creates an index in OpenSearch with the name "ss4o_metrics-otel-%{yyyy.MM.dd}".

Then I installed the OpenTelemetry Collector with the following configuration in its values.yaml.

OpenTelemetry Collector configuration in values.yaml:

```yaml
 config:
    exporters:
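      # Data Prepper's otel_metrics_source listens for OTLP over gRPC on port 21891
      # by default, which is the port used in the endpoint below.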
      otlp/data-prepper:
        endpoint: my-data-prepper-release.default.svc.cluster.local:21891
        tls:
          insecure: true

      debug: {}
    extensions:
      # The health_check extension is mandatory for this chart.
      # Without the health_check extension the collector will fail the readiness and liveliness probes.
      # The health_check extension can be modified, but should never be removed.
      health_check:
        endpoint: ${env:MY_POD_IP}:13133
    processors:
      batch: {}
      # Default memory limiter configuration for the collector based on k8s resource limits.
      memory_limiter:
        check_interval: 5s
        limit_mib: 512 
        spike_limit_percentage: 25
    receivers:
      kubeletstats:
        insecure_skip_verify: true
        # collection_interval: 30s
        metrics:
          container.cpu.time:
            enabled: true
          container.cpu.utilization:
            enabled: true
          container.memory.available:
            enabled: true
          container.memory.usage:
            enabled: true
          k8s.node.cpu.time:
            enabled: true
          k8s.node.cpu.usage:
            enabled: true
          k8s.node.memory.available:
            enabled: true
          k8s.node.memory.usage:
            enabled: true
          k8s.pod.cpu.time:
            enabled: true
          k8s.pod.cpu.usage:
            enabled: true
          k8s.pod.memory.available:
            enabled: true
          k8s.pod.memory.usage:
            enabled: true

      k8s_cluster:
        node_conditions_to_report: [Ready, MemoryPressure]
        allocatable_types_to_report: [cpu, memory]
        metrics:
          k8s.container.cpu_limit:
            enabled: true
          k8s.container.cpu_request:
            enabled: true
          k8s.container.memory_limit:
            enabled: true
          k8s.container.memory_request:
            enabled: true
      zipkin:
        endpoint: ${env:MY_POD_IP}:9411
    service:
      telemetry:
        metrics:
          address: ${env:MY_POD_IP}:8888
      extensions:
        - health_check
      pipelines:
        logs:
          exporters:
            - debug
          processors:
            - memory_limiter
            - batch
          receivers:
            - otlp
        metrics:
          receivers: [hostmetrics]
          processors: []
          exporters: [otlp/data-prepper]
```

After I installed the OpenTelemetry Collector, the total size of my index "ss4o_metrics-otel-%{yyyy.MM.dd}" started growing automatically. This is the index image: index

All of the metrics show up in Discover in OpenSearch Dashboards. This is the Discover image: discover

But on the OpenSearch Dashboards Metrics page, no metrics are available. This is the Metrics image: metrics

Expected behavior

I want all of the metrics in this index to show up on the OpenSearch Dashboards Metrics page.

OpenSearch Version

Please list the version of OpenSearch being used.

Dashboards Version

OpenSearch Dashboards version: 2.16.0

Host/Environment (please complete the following information):

LDrago27 commented 1 month ago

@opensearch-project/admin move it to observability-dashboards

andrross commented 1 month ago

[Catch All Triage - 1, 2, 3]

glelarge commented 1 week ago

Hi, I hit the same issue: the index is displayed in the select box, but no metrics are displayed.

Cause

Checking the browser developer console reveals an HTTP 500 response with this message:

Fetch Document Names Error:Error: Fetch Otel Metrics Error:[illegal_argument_exception] Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [name] in order to load field data by uninverting the inverted index. Note that this can use significant memory.

Screenshot from 2024-10-29 12-02-39

Temp fix

Following this post, the mapping API helped change the type of the name field to the correct type, keyword, and now the metrics appear correctly:

Screenshot from 2024-10-29 15-24-35

Expected behavior

The mapping field type of name should be set to keyword.
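In other words, the relevant fragment of the index mapping should look roughly like this (a sketch written as YAML for readability; the actual catalog mapping is JSON and contains many more fields):

```yaml
# Sketch only: the part of the ss4o metrics index mapping that matters for this error.
mappings:
  properties:
    name:
      type: keyword   # keyword, not text, so the Metrics page can aggregate and sort on it
```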

I didn't find exactly what provides the default mapping used to create the index, or what actually creates this mapping on the index: Data Prepper, the dashboard, or the plugin... Another curious thing is the mapping creation flow:

So I'm asking whether the mapping relies only on the provided template or also on the received data.

Permanent workaround

To fix this error permanently, we configured the Data Prepper deployment with a custom mapping downloaded directly from the OpenSearch catalog: https://github.com/opensearch-project/opensearch-catalog/blob/main/schema/observability/metrics/metrics-1.0.0.mapping

I attached the YAML configuration for the Data Prepper metrics pipeline; a sketch of the relevant sink settings is shown below.
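Roughly, this amounts to pointing the opensearch sink at the downloaded mapping via template_file. A minimal sketch, not the exact attached file: it assumes the catalog file metrics-1.0.0.mapping has been mounted into the Data Prepper container at a hypothetical path, and the template_type value may need adjusting to the file's format.

```yaml
# Sketch only: the otel-metrics-pipeline sink extended so the index is created
# from the SS4O metrics mapping. The mount path below and the template_type
# value are assumptions for illustration.
      sink:
        - opensearch:
            hosts: ["https://opensearch-cluster-master.default.svc.cluster.local:9200"]
            # ...same credentials and TLS settings as in the original pipeline...
            index_type: custom
            index: ss4o_metrics-otel-%{yyyy.MM.dd}
            # Create the index from the catalog mapping so that fields such as
            # `name` are mapped as keyword instead of being dynamically mapped as text.
            template_type: index-template
            template_file: /usr/share/data-prepper/templates/metrics-1.0.0.mapping
            bulk_size: 4
```

The idea is that new daily indices then get name mapped as keyword up front, instead of being dynamically mapped as text from the incoming documents.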

Next

That's not finished: only HISTOGRAM metrics are displayed, and I don't know why, so I created #2236