
Docker image: Server listens to wrong ip #1469

Open snathanail opened 1 month ago

snathanail commented 1 month ago

What's wrong?

When spinning up a new container for Alloy, the server listens on 127.0.0.1:12345 rather than 0.0.0.0:12345. This causes requests to http://localhost:12345 from outside the container to fail (ERR_EMPTY_RESPONSE in Chrome, error 52 from curl).

This has an easy fix: bind the Alloy server's listening address to 0.0.0.0:12345 rather than 127.0.0.1:12345. I mitigated it by adding the line below to my docker-compose.yml, but it should be set like that by default: entrypoint: alloy run --server.http.listen-addr=0.0.0.0:12345 --storage.path=/var/lib/alloy/data /etc/alloy/config.alloy
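For context, this is roughly how the workaround looks when placed in the compose service definition (a sketch only; the storage and config paths are the ones from the entrypoint above, and the service name matches the reproduction below):

grafana.alloy:
  image: grafana/alloy:latest
  # Workaround: override the entrypoint so the built-in HTTP server binds on all
  # interfaces instead of loopback only.
  entrypoint: alloy run --server.http.listen-addr=0.0.0.0:12345 --storage.path=/var/lib/alloy/data /etc/alloy/config.alloy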

Steps to reproduce

  1. Create a docker-compose.yml as below:
grafana.alloy:
  image: grafana/alloy:latest
  volumes:
    - ./my-path-blah/config.alloy:/etc/alloy/config.alloy
  environment:
    OTEL_SERVICE_NAME: "blah"
    OTEL_RESOURCE_ATTRIBUTES: "deployment.environment=development,service.namespace=blah-blah,service.version=1"
  ports:
    - 12345:12345
    - 4317:4317 # OTLP gRPC receiver
    - 4318:4318 # OTLP http receiver

Also create a config.alloy with the contents shown in the Configuration section below and save it under ./my-path-blah.

  2. Run docker-compose up
  3. Access http://localhost:12345
  4. Observe ERR_EMPTY_RESPONSE in the browser (a quick curl check is sketched below).
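A quick way to verify the failure from the host is with curl (a sketch; it assumes curl is available on the host and nothing else is bound to port 12345):

# Without the workaround the request fails with curl error 52, "Empty reply from server"
curl -v http://localhost:12345/

# With the entrypoint override (--server.http.listen-addr=0.0.0.0:12345) the Alloy UI
# should answer, e.g. with an HTTP 200 status
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:12345/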

System information

Windows 11 with Docker Desktop 4.33.1

Software version

Grafana Alloy v1.3

Configuration

otelcol.receiver.otlp "default" {
    // configures the default grpc endpoint "0.0.0.0:4317"
    grpc { }
    // configures the default http/protobuf endpoint "0.0.0.0:4318"
    http { }

    output {
        metrics = [otelcol.processor.resourcedetection.default.input]
        logs    = [otelcol.processor.resourcedetection.default.input]
        traces  = [otelcol.processor.resourcedetection.default.input]
    }
}

otelcol.processor.resourcedetection "default" {
    detectors = ["env", "system"] // add "gcp", "ec2", "ecs", "elastic_beanstalk", "eks", "lambda", "azure", "aks", "consul", "heroku"  if you want to use cloud resource detection

    system {
        hostname_sources = ["os"]
    }

    output {
        metrics = [otelcol.processor.transform.add_resource_attributes_as_metric_attributes.input]
        logs    = [otelcol.processor.batch.default.input]
        traces  = [
            otelcol.processor.batch.default.input,
            otelcol.connector.host_info.default.input,
        ]
    }
}

otelcol.connector.host_info "default" {
    host_identifiers = ["host.name"]

    output {
        metrics = [otelcol.processor.batch.default.input]
    }
}

otelcol.processor.transform "add_resource_attributes_as_metric_attributes" {
    error_mode = "ignore"

    metric_statements {
        context    = "datapoint"
        statements = [
            "set(attributes[\"deployment.environment\"], resource.attributes[\"deployment.environment\"])",
            "set(attributes[\"service.version\"], resource.attributes[\"service.version\"])",
        ]
    }

    output {
        metrics = [otelcol.processor.batch.default.input]
    }
}

otelcol.processor.batch "default" {
    output {
        metrics = [otelcol.exporter.otlphttp.grafana_cloud.input]
        logs    = [otelcol.exporter.otlphttp.grafana_cloud.input]
        traces  = [otelcol.exporter.otlphttp.grafana_cloud.input]
    }
}

otelcol.exporter.otlphttp "grafana_cloud" {
    client {
        endpoint = "https://otlp-gateway-prod-eu-west-3.grafana.net/otlp"
        auth     = otelcol.auth.basic.grafana_cloud.handler
    }
}

otelcol.auth.basic "grafana_cloud" {
    username = "xxxxxx"
    password = "yyyyyy"
}

Logs

...
2024-08-14 08:09:44 ts=2024-08-14T05:09:44.6795315Z level=info msg="finished complete graph evaluation" controller_path=/ controller_id="" trace_id=f891f5239e4d53d4798c6b55279ace51 duration=7.456871ms
2024-08-14 08:09:44 ts=2024-08-14T05:09:44.679799052Z level=info msg="scheduling loaded components and services"
2024-08-14 08:09:44 ts=2024-08-14T05:09:44.680297987Z level=info msg="starting cluster node" service=cluster peers_count=0 peers="" advertise_addr=127.0.0.1:12345
2024-08-14 08:09:44 ts=2024-08-14T05:09:44.680580219Z level=info msg="peers changed" service=cluster peers_count=1 peers=01d9f3a2af2b
2024-08-14 08:09:44 ts=2024-08-14T05:09:44.680634703Z level=info msg="now listening for http traffic" service=http addr=127.0.0.1:12345
2024-08-14 08:09:44 ts=2024-08-14T05:09:44.680949941Z level=info msg="began detecting resource information" component_path=/ component_id=otelcol.processor.resourcedetection.default
2024-08-14 08:09:44 ts=2024-08-14T05:09:44.681093243Z level=info msg="Starting host_info connector" component_path=/ component_id=otelcol.connector.host_info.default
2024-08-14 08:09:44 ts=2024-08-14T05:09:44.681190634Z level=info msg="Starting GRPC server" component_path=/ component_id=otelcol.receiver.otlp.default endpoint=0.0.0.0:4317
2024-08-14 08:09:44 ts=2024-08-14T05:09:44.681325674Z level=info msg="detected resource information" component_path=/ component_id=otelcol.processor.resourcedetection.default resource="map[deployment.environment:local host.name:01d9f3a2af2b os.type:linux service.namespace:xxxxx service.version:1]"
2024-08-14 08:09:44 ts=2024-08-14T05:09:44.681659156Z level=info msg="Starting HTTP server" component_path=/ component_id=otelcol.receiver.otlp.default endpoint=0.0.0.0:4318
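The "now listening for http traffic" line above (addr=127.0.0.1:12345) is what confirms the loopback-only binding. One way to double-check from inside the running container (a sketch; it assumes a shell and grep are available in the image) is to look for port 12345, hex 3039, in /proc/net/tcp:

# A local_address of 0100007F:3039 means 127.0.0.1:12345;
# 00000000:3039 would mean the server is bound to 0.0.0.0:12345.
docker compose exec grafana.alloy sh -c 'grep :3039 /proc/net/tcp'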
github-actions[bot] commented 5 days ago

This issue has not had any activity in the past 30 days, so the needs-attention label has been added to it. If the opened issue is a bug, check to see if a newer release fixed your issue. If it is no longer relevant, please feel free to close this issue. The needs-attention label signals to maintainers that something has fallen through the cracks. No action is needed by you; your issue will be kept open and you do not have to respond to this comment. The label will be removed the next time this job runs if there is new activity. Thank you for your contributions!