confluentinc / librdkafka

The Apache Kafka C/C++ library

Error "Disconnected while requesting ApiVersion: might be caused by incorrect security.protocol configuration..." connecting to kafka broker 3.8.0 #4831

Open VLorz opened 2 months ago

VLorz commented 2 months ago

Description

I'm trying to connect to a Kafka broker with librdkafka, but the producer always fails with the following error:

%6|1724670994.540|FAIL|us-od.kafka-producer-1#producer-1| [thrd:127.0.0.1:9092/1]: 127.0.0.1:9092/1: Disconnected while requesting ApiVersion: might be caused by incorrect security.protocol configuration (connecting to a SSL listener?) or broker version is < 0.10 (see api.version.request) (after 0ms in state APIVERSION_QUERY, 3 identical error(s) suppressed)

Broker version is 3.8.0 (Docker image: bitnami/kafka, sha256:ed3c7264b110293d565cbe4ab479631f8b56196e98d19d4ab4fba689a142f176).

I run my client against librdkafka version 2.5.0, installed in an Alpine (3.19.0) Docker container. I installed librdkafka from the edge/community repository using apk add --no-cache librdkafka-dev --repository=https://dl-cdn.alpinelinux.org/alpine/edge/community. I also installed glib-dev, lz4-dev, pkgconfig, openssl-dev and all the build and debug tools I need, as this is a development container.

The broker is configured with the following settings:

KAFKA_BROKER_ID=1
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_CFG_LISTENERS=CLIENT://:9093,EXTERNAL://:9092
KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9093,EXTERNAL://127.0.0.1:9092
KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT
KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
ALLOW_PLAINTEXT_LISTENER=yes
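As context for how these settings interact (a sketch in Python, not broker code): Kafka maps each named listener to a security protocol, and a client that bootstraps through a listener is then redirected to that listener's advertised address for all further connections. Parsing the values above:

```python
# Sketch: how Kafka interprets the listener settings above. Each named
# listener has a security protocol and an advertised address; a client
# bootstrapping via a listener reconnects to its advertised address.
protocol_map = dict(
    entry.split(":", 1)
    for entry in "CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT".split(","))
advertised = dict(
    entry.split("://", 1)
    for entry in "CLIENT://kafka:9093,EXTERNAL://127.0.0.1:9092".split(","))

# A client bootstrapping through the EXTERNAL listener is told to use
# 127.0.0.1:9092 for all subsequent connections:
print(protocol_map["EXTERNAL"])  # PLAINTEXT
print(advertised["EXTERNAL"])    # 127.0.0.1:9092
```

So whatever address the client bootstraps with, it will end up reconnecting to 127.0.0.1:9092, which only works if the broker is reachable there from the client's point of view.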

I create the client (producer) with this:

// Configuration
// (parameter values are read from a YAML file; shown inline here)
char errstr[512];
rd_kafka_conf_t *kafka_conf = rd_kafka_conf_new();
rd_kafka_conf_set(kafka_conf, "bootstrap.servers", "localhost:9092", errstr, sizeof(errstr));
rd_kafka_conf_set(kafka_conf, "security.protocol", "PLAINTEXT", errstr, sizeof(errstr));
rd_kafka_conf_set(kafka_conf, "log_level", "7", errstr, sizeof(errstr));
rd_kafka_conf_set(kafka_conf, "api.version.request", "true", errstr, sizeof(errstr));

// Producer (rd_kafka_new() takes ownership of kafka_conf on success)
producer = rd_kafka_new(RD_KAFKA_PRODUCER, kafka_conf, errstr, sizeof(errstr));

// Messages are sent to the queue using:
err = rd_kafka_producev(
    producer,
    RD_KAFKA_V_TOPIC(topic_name),
    RD_KAFKA_V_MSGFLAGS(RD_KAFKA_MSG_F_COPY),
    RD_KAFKA_V_VALUE((void *) message_content.c_str(), message_content.size()),
    RD_KAFKA_V_OPAQUE(NULL),
    RD_KAFKA_V_END);

If I use a client developed in Kotlin, which makes use of the Java client, I can connect to the broker and publish or consume without issues.

The same holds for a test Python application with default settings; I can connect and send messages:

from kafka import KafkaProducer  # kafka-python

producer = KafkaProducer(bootstrap_servers="localhost:9092")
fut = producer.send(
    topic="my-topic",
    value=value,
)
res = fut.get(timeout=10)
producer.flush()

Things noted

I captured network traffic using Wireshark and found something that caught my attention. The C++ client goes through a long list of Metadata requests, and it seems it can't get any further than that point (screenshot: kafka-1).

The Python client, which I assume may be using a possibly outdated rdkafka library version, does not go through that long list of requests (screenshot: kafka-2).

Is there any configuration I'm missing? Is there any other component I need to install for the client to operate as expected?

BR, V.


anchitj commented 2 months ago

Can you capture debug logs with debug='all' in your config and provide them?

VLorz commented 2 months ago

I've now tested against librdkafka 2.5.0-2, built and installed from source, configured with --enable-zlib --enable-zstd --enable-ssl --enable-gssapi --enable-curl --disable-lz4-ext.

Config parameters are: bootstrap.servers=192.168.1.106:9092, log_level=7, debug=all, allow.auto.create.topics=true.

Logs are attached: log-librdkafka-1.txt. BR.

VLorz commented 2 months ago

Any findings, @anchitj?

VLorz commented 2 months ago

Any updates on this, @anchitj?

VLorz commented 1 month ago

Hi, were you able to take a look at this?

emasab commented 1 month ago

It seems that in your test the calls (ApiVersions, Metadata) succeed when connecting to the bootstrap server 192.168.1.106:9092/bootstrap, but fail when connecting to the advertised listener 127.0.0.1:9092. Could you check whether port forwarding was enabled in this test so that the Docker container is reachable at that address?
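One way to test this hypothesis is to check, from the machine or container where the librdkafka client runs, whether the advertised listener address is reachable at all. A small hypothetical helper using only the Python standard library (not part of librdkafka):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this where the librdkafka client runs. If the bootstrap address is
# reachable but the advertised one is not, the client's reconnect after the
# Metadata response is exactly where things will break, e.g.:
#   can_connect("192.168.1.106", 9092)  # bootstrap server
#   can_connect("127.0.0.1", 9092)      # advertised listener
```

If the second check fails while the first succeeds, the fix is on the broker side (advertise an address the client can actually reach), not in the client configuration.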

VLorz commented 4 weeks ago

I've created a new Compose file; the test application now runs inside the same Docker Compose application as ZooKeeper and Kafka. The error is different now: Disconnected while requesting ApiVersion: might be caused by incorrect security.protocol configuration (connecting to a SSL listener?) or broker version is < 0.10 (...etc...). I've attached the log: log-librdkafka-2.txt

Below are the Compose sections for ZooKeeper and Kafka.

  zookeeper:
    container_name: oddev_zookeeper
    image: 'bitnami/zookeeper:latest'
    ports:
      - '12181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - monitor

  kafka:
    container_name: oddev_kafka
    image: 'bitnami/kafka:latest'
    networks:
      - monitor
    ports:
      - '19092:9092'
      - '19093:9093'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
      - KAFKA_CFG_LISTENERS=CLIENT://:9093,EXTERNAL://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9093,EXTERNAL://127.0.0.1:9092
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT
      - KAFKA_CFG_ZOOKEEPER_CONNECT=oddev_zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper

  kafka-visualizer:
    container_name: oddev_kafka-visualizer
    image: 'provectuslabs/kafka-ui:latest'
    networks:
      - monitor
    ports:
      - '18080:8080'
    environment:
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9093