wurstmeister / kafka-docker

Dockerfile for Apache Kafka
http://wurstmeister.github.io/kafka-docker/
Apache License 2.0

Kafka can't be connected after restart on docker swarm #646

Open hustlibraco opened 3 years ago

hustlibraco commented 3 years ago

I have prepared two nodes for Kafka. The Kafka deploy mode is global and the network mode is host, so each node runs a single Kafka instance. Other applications connect to Kafka via the INSIDE listener, without authentication.

Most of the time Kafka in Docker Swarm works well. But when I restart Kafka, or Kafka restarts automatically, the applications connected to it fail and raise a host exception. This is because the Kafka container's hostname changes on restart, so the INSIDE listener address also changes. The other applications cannot detect the change to the INSIDE listener, which causes this error.

I have tried the following solutions:

  1. Change the INSIDE listener to a fixed name, such as INSIDE://kafka0:9092. But ZooKeeper does not allow different brokers to register the same listener:

    java.lang.IllegalArgumentException: requirement failed: Configured end points kafka0:9092 in advertised listeners are already registered by broker 1002
        at kafka.server.KafkaServer.$anonfun$createBrokerInfo$3(KafkaServer.scala:415)
        at kafka.server.KafkaServer.$anonfun$createBrokerInfo$3$adapted(KafkaServer.scala:413)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at kafka.server.KafkaServer.createBrokerInfo(KafkaServer.scala:413)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:272)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
        at kafka.Kafka$.main(Kafka.scala:82)
        at kafka.Kafka.main(Kafka.scala)
  2. Fix the hostname of the Kafka container. This gives me a fixed INSIDE listener, but the other apps cannot resolve that hostname. It does not seem easy to configure the hosts entries of every service in Docker Swarm just for Kafka.

  3. The last solution is to modify the Kafka clients of the other apps to catch the host exception and force a reconnect to the new address. I think this is too troublesome, so I have not tried it.
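For what it's worth, the reconnect idea in solution 3 does not have to touch the Kafka client library itself; it can be a small wrapper around client creation. Below is a minimal, hedged sketch of that pattern. `connect` here stands for any hypothetical factory that builds a fresh client (with kafka-python it might be `lambda: KafkaConsumer("test", bootstrap_servers=...)`); rebuilding the client on failure is what forces the new broker address to be resolved:

```python
import time


def with_reconnect(connect, work, retries=3, backoff=0.1):
    """Run work(client); on any error, rebuild the client and retry.

    connect: zero-arg factory returning a fresh client (re-resolves the
             possibly changed broker address on each call).
    work:    function taking the client and doing the actual produce/consume.
    """
    last_error = None
    for attempt in range(retries):
        try:
            client = connect()  # fresh client -> fresh address resolution
            return work(client)
        except Exception as exc:  # e.g. a host/DNS exception after a restart
            last_error = exc
            time.sleep(backoff * (2 ** attempt))  # simple exponential backoff
    raise last_error
```

This keeps the Kafka client code of each app unchanged except at the point where the client object is created.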

I cannot connect to Kafka via the OUTSIDE listener (there are some strange problems), so for now I can do nothing. I hope someone can help; I would be grateful!

Here is my docker swarm config:

  kafka0:
    image: wurstmeister/kafka:2.12-2.5.0
    depends_on:
      - zookeeper
    ports:
      - target: 9093
        published: 9093
        protocol: tcp
        mode: host
    environment:
      HOSTNAME_COMMAND: "docker info -f '{{`{{.Swarm.NodeAddr}}`}}'"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:SASL_PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://_{HOSTNAME_COMMAND}:9093
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9093
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_OPTS: "-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf -Dzookeeper.sasl.client=false"
      KAFKA_SASL_ENABLED_MECHANISMS: 'PLAIN,SCRAM-SHA-256,SCRAM-SHA-512'
      KAFKA_SUPER_USERS: User:admin
      KAFKA_CREATE_TOPICS: "test:1:1,firewall_log_raw:10:2,firewall_log_parse:10:2,firewall_log_correlate:10:2"
      KAFKA_LOG_RETENTION_BYTES: 107374182400
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
      KAFKA_LOG_DIRS: /kafka/kafka-logs
      CUSTOM_INIT_SCRIPT: /opt/kafka/init_script.sh
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /data00/kafka:/kafka
    networks:
      - ember
    deploy:
      mode: global
      placement:
        constraints:
          - "node.labels.type==kafka"
    configs:
      - source: kafka_init_script
        target: /opt/kafka/init_script.sh
        mode: 0755
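A hedged variation on solution 1, given the config above (untested in this setup): rather than one shared fixed name, each broker could advertise its own node IP on the INSIDE listener, reusing the image's existing `_{HOSTNAME_COMMAND}` substitution that the OUTSIDE listener already relies on. Because the network mode is host and a Swarm node's address survives container restarts, the advertised endpoint would be unique per broker (avoiding the "already registered" error) yet stable across restarts:

```yaml
    environment:
      HOSTNAME_COMMAND: "docker info -f '{{`{{.Swarm.NodeAddr}}`}}'"
      # Advertise the stable node IP on INSIDE as well: unique per broker,
      # and unchanged when the container restarts on the same node.
      KAFKA_ADVERTISED_LISTENERS: INSIDE://_{HOSTNAME_COMMAND}:9092,OUTSIDE://_{HOSTNAME_COMMAND}:9093
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9093
```

This assumes the client services can reach the node IP on port 9092, which should hold here since the Kafka containers use host networking.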