elastic / beats

:tropical_fish: Beats - Lightweight shippers for Elasticsearch & Logstash
https://www.elastic.co/products/beats

Kafka output only logs errors in debug level #37275

Open belimawr opened 11 months ago

belimawr commented 11 months ago

The Kafka output currently logs connection issues only at debug level.

All the log entries I saw were from this file: https://github.com/elastic/beats/blob/main/libbeat/outputs/kafka/client.go

Steps to reproduce

  1. Start Filebeat using the Kafka output.
  2. Create any sort of connection issue; even bringing down the whole Kafka cluster works.
  3. Observe that the log entries reporting the error appear only at debug level.

Configuration files

Bear in mind that you will have to add your local IP address in these files.

filebeat.yml

```yaml
filebeat.inputs:
  - id: my-log-input
    paths:
      - /tmp/flog.log
    type: log

output:
  kafka:
    broker_timeout: 30
    compression: none
    hosts:
      - :9091
    partition:
      random:
        group_events: 1
    required_acks: 1
    timeout: 30
    topics:
      - topic: my-topic-three
    type: kafka
    version: 2.6.0

queue.mem:
  flush.timeout: 2s

logging:
  level: debug
  selectors:
    - kafka
```

docker-compose.yml

```yaml
version: '3'
services:
  zookeeper:
    image: zookeeper:3.4.9
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zookeeper:2888:3888
    volumes:
      - ./data/zookeeper/data:/data
      - ./data/zookeeper/datalog:/datalog

  kafka1:
    image: confluentinc/cp-kafka:5.3.0
    hostname: kafka1
    ports:
      - "9091:9091"
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka1:19091,LISTENER_DOCKER_EXTERNAL://:9091
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - ./data/kafka1/data:/var/lib/kafka/data
    depends_on:
      - zookeeper

  kafka2:
    image: confluentinc/cp-kafka:5.3.0
    hostname: kafka2
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka2:19092,LISTENER_DOCKER_EXTERNAL://:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: 2
    volumes:
      - ./data/kafka2/data:/var/lib/kafka/data
    depends_on:
      - zookeeper

  kafka3:
    image: confluentinc/cp-kafka:5.3.0
    hostname: kafka3
    ports:
      - "9093:9093"
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka3:19093,LISTENER_DOCKER_EXTERNAL://:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_BROKER_ID: 3
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - ./data/kafka3/data:/var/lib/kafka/data
    depends_on:
      - zookeeper

  kafdrop:
    image: obsidiandynamics/kafdrop
    restart: "no"
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: "kafka1:19091,kafka2:19092,kafka3:19093"
    depends_on:
      - kafka1
      - kafka2
      - kafka3
```

Tutorial on running a Kafka cluster with Docker: https://betterprogramming.pub/a-simple-apache-kafka-cluster-with-docker-kafdrop-and-python-cf45ab99e2b9

Example log entries

```json
{"log.level":"debug","@timestamp":"2023-12-01T17:29:16.714+0100","log.logger":"kafka","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/outputs/kafka.(*msgRef).dec","file.name":"kafka/client.go","file.line":406},"message":"Kafka publish failed with: kafka: couldn't fetch broker metadata (check that your client and broker are using the same encryption and authentication settings)","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2023-12-01T17:29:44.528+0100","log.logger":"kafka","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/outputs/kafka.(*msgRef).dec","file.name":"kafka/client.go","file.line":406},"message":"Kafka publish failed with: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)","service.name":"filebeat","ecs.version":"1.6.0"}
```
belimawr commented 10 months ago

Some of the error messages, like `Kafka publish failed with: kafka:`, can be logged for every event, so just raising their log level would flood our logs. We'll have to find a better way to report connection issues with Kafka.