signalfx / splunk-otel-collector


OpenTelemetry - Prometheus metrics not visible in Splunk Enterprise #2426

Closed: Davinder2609 closed this issue 1 year ago

atoulme commented 1 year ago

Please provide more information: steps to reproduce, versions of Splunk and the collector, actual vs expected results.

Davinder2609 commented 1 year ago

@atoulme thanks! These are the configs I am using:

docker-compose.yml

version: "3"
services:
  # Prometheus server scraping the Confluent Cloud metrics endpoint.
  prometheus:
    image: prom/prometheus:v2.29.2
    ports:
      - 9090:9090

    volumes:
       - ./config:/etc

    hostname: prometheus
    container_name: prometheus

  # OpenTelemetry Collector
  otelcollector:
    image: quay.io/signalfx/splunk-otel-collector:0.29.0
    container_name: otelcollector

    command: ["--config=/etc/otel-collector-config.yml", "--log-level=INFO"]
    volumes:
      - ./config:/etc
    depends_on:
      - prometheus
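
For reference, the Prometheus config path can also be passed to the container explicitly; a minimal sketch, assuming the scrape config ends up at /etc/prometheus.yml via the volume above (the stock prom/prometheus image otherwise looks for /etc/prometheus/prometheus.yml):

  prometheus:
    image: prom/prometheus:v2.29.2
    # Point Prometheus at the mounted config file.
    # The path /etc/prometheus.yml is an assumption based on the ./config:/etc mount.
    command:
      - --config.file=/etc/prometheus.yml
    ports:
      - 9090:9090
    volumes:
      - ./config:/etc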

prometheus.yml

scrape_configs:
  - job_name: Confluent Cloud
    scrape_interval: 60s
    honor_labels: true
    honor_timestamps: true

    static_configs:
      - targets:
          - api.telemetry.confluent.cloud

    scheme: https
    basic_auth:
      username: username
      password: password

    metrics_path: /v2/metrics/cloud/export

    params:
      resource.kafka.id:
        - kafka_id
      resource.schema_registry.id:
        - schema_id

    tls_config:
      insecure_skip_verify: true

otel-collector-config.yml

receivers:
    prometheus_simple:
       collection_interval: 60s
       endpoint: prometheus:9090

       metrics_path: /federate

       params:
        match[]:
          - '{job="ccloud_metrics"}'
          - '{__name__=~"job:.*"}'


    otlp:
        protocols:
            grpc:
              endpoint: 0.0.0.0:4317
            http:
              endpoint: 0.0.0.0:4318
exporters:
    splunk_hec/metrics:
        # Splunk HTTP Event Collector token.
        token: "*******************************"

        # URL to a Splunk instance to send data to.

        #endpoint: "https://*********splunkcloud.com/services/collector"

        # Optional Splunk source: https://docs.splunk.com/Splexicon:Source
        source: "app:metrics"
        #Optional Splunk source type: https://docs.splunk.com/Splexicon:Sourcetype
        sourcetype: "Kafka"
        # Splunk index, optional name of the Splunk index targeted.
        index: "main"

        # Maximum HTTP connections to use simultaneously when sending data. Defaults to 100.
        max_connections: 20

        # Whether to disable gzip compression over HTTP. Defaults to false.
        disable_compression: false
        # HTTP timeout when sending data. Defaults to 10s.
        timeout: 10s
        # Whether to skip checking the certificate of the HEC endpoint when sending data over HTTPS. Defaults to false.
        # For this demo, we use a self-signed certificate on the Splunk docker instance, so this flag is set to true.      
        insecure_skip_verify: true      

processors:
    batch:

extensions:
    health_check:
      endpoint: 0.0.0.0:13133
    pprof:
      endpoint: :1888
    zpages:
      endpoint: :55679

service:
    extensions: [pprof, zpages, health_check]
    pipelines:
      metrics:
        receivers: [prometheus_simple]
        processors: [batch]
        exporters: [splunk_hec/metrics]

Davinder2609 commented 1 year ago

I can see the metrics in Prometheus (screenshot: Screen Shot 2022-12-29 at 8.34.43 AM)

Davinder2609 commented 1 year ago

but they are not visible in Splunk (screenshot: Screen Shot 2022-12-29 at 8.35.41 AM)

atoulme commented 1 year ago

Metrics are not events. You are sending metrics to an event index. You should use a metrics index instead. Further, you should use the metrics workspace to navigate metrics in Splunk Enterprise. Please read more here: https://docs.splunk.com/Documentation/SMW/1.1.9/Use/Navigate
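
For example, a minimal sketch of the HEC exporter pointed at a metrics-type index (otel_metrics is just a placeholder name; the HEC token must be allowed to write to it, and the endpoint stays your own HEC URL):

exporters:
    splunk_hec/metrics:
        token: "<your-hec-token>"
        # URL of your Splunk HEC endpoint (8088 is the default HEC port).
        endpoint: "https://<your-splunk-host>:8088/services/collector"
        # Must be an index of type "metrics" in Splunk, not an event index.
        index: "otel_metrics"
        source: "app:metrics"
        sourcetype: "Kafka"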

Closing, as this is not a bug in the collector; please reopen if more work is needed.

Davinder2609 commented 1 year ago

(screenshot attached)

Davinder2609 commented 1 year ago

@atoulme I changed the index, but I am only getting default metrics in Splunk. Getting this:

otelcollector | 2023-01-03T15:28:49.993Z warn internal/metricsbuilder.go:121 Failed to scrape Prometheus endpoint {"kind": "receiver", "name": "prometheus_simple", "scrape_timestamp": 1672759679802, "target_labels": "map[name:up exported_instance:api.telemetry.confluent.cloud:443 exported_job:Confluent Cloud instance:prometheus:9090 job:prometheus_simple/prometheus:9090]"}
otelcollector | 2023-01-03T15:29:50.028Z warn internal/metricsbuilder.go:121 Failed to scrape Prometheus endpoint {"kind": "receiver", "name": "prometheus_simple", "scrape_timestamp": 1672759739786, "target_labels": "map[name:up exported_instance:api.telemetry.confluent.cloud:443 exported_job:Confluent Cloud instance:prometheus:9090 job:prometheus_simple/prometheus:9090]"}

atoulme commented 1 year ago

It seems like you have connectivity issues reaching your endpoint. Please verify that the endpoint is available and accessible from your collector.
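
One way to narrow it down is to add the logging exporter next to the HEC exporter and check the collector's stdout for scraped metrics; a minimal sketch, reusing the pipeline from your config:

exporters:
    logging:
        # Print received metrics to the collector's log output for debugging.
        loglevel: debug

service:
    pipelines:
      metrics:
        receivers: [prometheus_simple]
        processors: [batch]
        exporters: [logging, splunk_hec/metrics]

If nothing shows up there either, the problem is between the collector and the Prometheus /federate endpoint rather than between the collector and Splunk.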