open-telemetry / opentelemetry-collector-contrib

Contrib repository for the OpenTelemetry Collector
https://opentelemetry.io
Apache License 2.0

OTEL pod crashes when the Kafka broker has connectivity issues #24029

Closed rupeshnemade closed 3 months ago

rupeshnemade commented 1 year ago

Component(s)

exporter/kafka

Describe the issue you're reporting

During pod initialisation, the OTEL Collector normally checks whether it can connect to the Kafka broker, and it only starts running once the connection is established. Sometimes, when the Kafka broker has an issue (for example with networking or storage) and the Collector cannot connect to it, the pod goes into a CrashLoopBackOff state, which breaks all log forwarding. Ideally, an issue with one Kafka broker shouldn't take down the whole OTEL setup and stop log forwarding.

Can we remove this hard dependency on establishing the connection at startup and instead just emit a warning?
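
For reference, a configuration along these lines is enough to trigger that startup check (a minimal sketch; the broker address is illustrative, and it matches the exporter's default of localhost:9092 anyway):

exporters:
  kafka:
    brokers:
      - localhost:9092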

Here's the console output from the collector when the Kafka broker is down:

$ ./otelcol-sumo-0.80.0-sumo-0-linux_amd64 --config ./config.yaml 
2023-07-07T10:44:35.305Z        info    service/telemetry.go:81 Setting up own telemetry...
2023-07-07T10:44:35.305Z        info    service/telemetry.go:104        Serving Prometheus metrics      {"address": ":8888", "level": "Basic"}
Error: failed to build pipelines: failed to create "kafka" exporter for data type "metrics": kafka: client has run out of available brokers to talk to: dial tcp 127.0.0.1:9092: connect: connection refused
2023/07/07 10:44:36 collector server run finished with error: failed to build pipelines: failed to create "kafka" exporter for data type "metrics": kafka: client has run out of available brokers to talk to: dial tcp 127.0.0.1:9092: connect: connection refused
github-actions[bot] commented 1 year ago

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

andrzej-stencel commented 1 year ago

Here's another description of the issue, rephrasing what Rupesh wrote above.

Steps to reproduce, assuming Kafka is installed on the host with the broker at the default localhost:9092 endpoint, and the following Otelcol configuration:

exporters:
  kafka:
  logging:

receivers:
  hostmetrics:
    scrapers:
      memory:

service:
  pipelines:
    metrics:
      exporters:
      - kafka
      - logging
      receivers:
      - hostmetrics

Scenario A: Start collector when Kafka is up

  1. Make sure the Kafka broker is running at localhost:9092.
  2. Start the collector with the above config.
  3. Observe that the collector starts correctly.
  4. Shut down the Kafka broker.
  5. Observe that the collector continues to run, logging errors about not being able to reach the broker.
  6. Start the Kafka broker back up.
  7. Observe that the collector picks up the connection to the broker again and resumes sending data.

Scenario B: Start collector when Kafka is down

  1. Make sure the Kafka broker is NOT running at localhost:9092.
  2. Start the collector with the above config.

Actual behavior:

The collector fails to start:

$ ./otelcol-sumo-0.80.0-sumo-0-linux_amd64 --config ./config.yaml 
2023-07-07T10:44:35.305Z        info    service/telemetry.go:81 Setting up own telemetry...
2023-07-07T10:44:35.305Z        info    service/telemetry.go:104        Serving Prometheus metrics      {"address": ":8888", "level": "Basic"}
Error: failed to build pipelines: failed to create "kafka" exporter for data type "metrics": kafka: client has run out of available brokers to talk to: dial tcp 127.0.0.1:9092: connect: connection refused
2023/07/07 10:44:36 collector server run finished with error: failed to build pipelines: failed to create "kafka" exporter for data type "metrics": kafka: client has run out of available brokers to talk to: dial tcp 127.0.0.1:9092: connect: connection refused

Expected behavior:

The collector starts correctly and writes error logs to the console until the endpoint becomes available.

MovieStoreGuy commented 1 year ago

I disagree that the Kafka component should only warn if communicating with the brokers is an issue.

The last thing I would want is for data to be silently discarded, but I don't know of a reasonable approach that surfaces Kafka errors while ensuring the data in transit still makes it to the endpoint.
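
One way to keep Kafka errors visible without silently losing data in transit could be to back the exporter's sending queue with the file_storage extension, so queued batches survive a broker outage while failures still show up in the logs and in the collector's own metrics. A minimal sketch, assuming the storage directory below exists and is writable:

extensions:
  file_storage:
    directory: /var/lib/otelcol/storage   # assumed path; must exist and be writable by the collector

exporters:
  kafka:
    brokers:
      - localhost:9092
    retry_on_failure:
      enabled: true                 # keep retrying failed batches with backoff
    sending_queue:
      enabled: true
      storage: file_storage         # spill the queue to disk instead of keeping it only in memory

service:
  extensions:
  - file_storage

This doesn't change the startup behaviour discussed in this issue, but it limits how much data is lost while the broker is unreachable.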

github-actions[bot] commented 1 year ago

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

EOjeah commented 11 months ago

I agree that the collector should not crash when Kafka is down. It could work like some other exporters, such as zipkin: even if the zipkin host you specify in the exporters config is unreachable, the collector starts and logs messages to the console when/if spans are sent. It can optionally retry for some time but eventually drop the data; alerts and monitoring dashboards can easily be built on the "Exporting failed" log messages, or on the metrics exposed by the collector, such as otelcol_exporter_send_failed_spans.

For example, when the collector is configured with an invalid zipkin endpoint, it still starts, but when you try sending a trace the output looks like this:

opentelemetry-collector_1  | 2023-10-06T12:45:50.546Z   info    exporterhelper/queued_retry.go:426      Exporting failed. Will retry the request after interval.       {"kind": "exporter", "data_type": "traces", "name": "zipkin", "error": "failed to push trace data via Zipkin exporter: Post \"http://zipkins:9411/api/v2/spans\": dial tcp: lookup zipkins on 127.0.0.11:53: no such host", "interval": "30.173935436s"}
opentelemetry-collector_1  | 2023-10-06T12:46:20.731Z   info    exporterhelper/queued_retry.go:426      Exporting failed. Will retry the request after interval.       {"kind": "exporter", "data_type": "traces", "name": "zipkin", "error": "failed to push trace data via Zipkin exporter: Post \"http://zipkins:9411/api/v2/spans\": dial tcp: lookup zipkins on 127.0.0.11:53: no such host", "interval": "37.873234376s"}
opentelemetry-collector_1  | 2023-10-06T12:46:58.616Z   error   exporterhelper/queued_retry.go:175      Exporting failed. No more retries left. Dropping data. {"kind": "exporter", "data_type": "traces", "name": "zipkin", "error": "max elapsed time expired failed to push trace data via Zipkin exporter: Post \"http://zipkins:9411/api/v2/spans\": dial tcp: lookup zipkins on 127.0.0.11:53: no such host", "dropped_items": 1}
opentelemetry-collector_1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).onTemporaryFailure
opentelemetry-collector_1  |    go.opentelemetry.io/collector@v0.69.0/exporter/exporterhelper/queued_retry.go:175
opentelemetry-collector_1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
opentelemetry-collector_1  |    go.opentelemetry.io/collector@v0.69.0/exporter/exporterhelper/queued_retry.go:410
opentelemetry-collector_1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*tracesExporterWithObservability).send
opentelemetry-collector_1  |    go.opentelemetry.io/collector@v0.69.0/exporter/exporterhelper/traces.go:137
opentelemetry-collector_1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1
opentelemetry-collector_1  |    go.opentelemetry.io/collector@v0.69.0/exporter/exporterhelper/queued_retry.go:205
opentelemetry-collector_1  | go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func1
opentelemetry-collector_1  |    go.opentelemetry.io/collector@v0.69.0/exporter/exporterhelper/internal/bounded_memory_queue.go:61

After some retries, it fails to send, but it does not crash the collector, which sounds like acceptable behaviour. @MovieStoreGuy, what do you think?
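
For reference, the retry behaviour in those zipkin logs comes from the shared exporterhelper settings, and the kafka exporter accepts the same retry_on_failure block. A sketch with illustrative values (these mirror what I understand the defaults to be):

exporters:
  kafka:
    brokers:
      - localhost:9092
    retry_on_failure:
      enabled: true
      initial_interval: 5s      # first backoff after a failed send
      max_interval: 30s         # upper bound on the backoff between retries
      max_elapsed_time: 300s    # give up and drop the batch after this long, as in the log above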

djaglowski commented 11 months ago

Ideally, an issue with one Kafka broker shouldn't take down the whole OTEL setup and stop log forwarding.

This is the design principle we follow broadly in the collector. We should only fail to start if the problem is clearly permanent. Otherwise, we should keep running and retry where possible.

While this can cause situations where errors go unnoticed, that should motivate us to improve the observability of the collector itself. We're close to adding a notion of component status, which will give us an obvious signal that something is wrong. Aside from that, custom metrics describing failed connection attempts, dropped data, etc. will be useful.
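
For the time being, the collector's internal metrics (served on :8888 in the logs above) already cover part of this; a hedged sketch of a Prometheus scrape job plus an alert on failed exports (the job name, target, and threshold are assumptions):

# prometheus.yml (fragment)
scrape_configs:
  - job_name: otel-collector
    static_configs:
      - targets: ['otel-collector:8888']

# rules.yml (fragment)
groups:
  - name: otel-collector
    rules:
      - alert: OtelExporterSendFailures
        # similar counters exist for metric points and log records
        expr: rate(otelcol_exporter_send_failed_spans[5m]) > 0
        for: 10m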

EOjeah commented 11 months ago

@djaglowski Just realised there's an option in the kafka exporter to handle intermittent metadata failures. Setting metadata.full to false fixes the issue where the pod fails to start if the brokers are unavailable 🤦. This way, it acts like zipkin/jaeger and will drop the traces (after some retries).
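
Concretely, that looks roughly like this (a sketch based on the exporter's metadata settings; the broker address and retry values are illustrative):

exporters:
  kafka:
    brokers:
      - localhost:9092
    metadata:
      full: false      # don't maintain full cluster metadata; per the comment above, an unreachable broker no longer aborts startup
      retry:
        max: 3         # metadata request retries
        backoff: 250ms # wait between metadata retries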

github-actions[bot] commented 9 months ago

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

rupeshnemade commented 7 months ago

I agree, @djaglowski, that 'we should only fail to start if the problem is clearly permanent', but we saw instances where the collector crashed completely during restarts because a Kafka broker was down.

github-actions[bot] commented 5 months ago

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions[bot] commented 3 months ago

This issue has been closed as inactive because it has been stale for 120 days with no activity.

harshalschaudhari commented 2 months ago

Note: I have tried out Kafka using Docker with port 29092; the Kafka container started first, and the otel-collector container runs after Kafka.

Pod logs are below:

2024-07-18 16:25:51 Error: cannot start pipelines: kafka: client has run out of available brokers to talk to: dial tcp 172.20.0.9:29092: connect: connection refused
2024-07-18 16:25:51 2024/07/18 10:55:51 collector server run finished with error: cannot start pipelines: kafka: client has run out of available brokers to talk to: dial tcp 172.20.0.9:29092: connect: connection refused

Additional information: I am able to connect to Kafka using Offset Explorer 3.0.

OTEL Configuration

exporters:
  kafka:
    brokers:
      - kafka:9092
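
Applying the workaround mentioned earlier in the thread to this setup would look roughly like the following sketch (whether the reachable address is kafka:9092 or kafka:29092 depends on the listener configuration of the Docker setup, which isn't shown here):

exporters:
  kafka:
    brokers:
      - kafka:9092     # use whichever advertised listener is actually reachable from the collector container
    metadata:
      full: false      # avoid the hard metadata dependency at startup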