IBM / sarama

Sarama is a Go library for Apache Kafka.

[Docker] Can't connect to Kafka Server with container's name in docker network #1683

Closed · dinopuguh closed 8 months ago

dinopuguh commented 4 years ago

Versions

| Sarama | Kafka | Go |
| --- | --- | --- |
| v1.26.1 | v2.4.1 | v1.14.1 |
Configuration

This is my application, called my-app:


```go
// Excerpt from main(); imports used: fmt, log, os, strings, github.com/Shopify/sarama.
brokers := fmt.Sprintf("%v:%v", os.Getenv("KAFKA_HOST"), os.Getenv("KAFKA_PORT"))
group := os.Getenv("CONSUMER_GROUP")
topics := os.Getenv("TOPICS")
version := os.Getenv("KAFKA_VERSION")

// SASL credentials (env var names assumed here; they are not set in the compose file below)
username := os.Getenv("KAFKA_USERNAME")
password := os.Getenv("KAFKA_PASSWORD")

kafkaConfig := sarama.NewConfig()
kafkaConfig.Consumer.Offsets.Initial = sarama.OffsetOldest
kafkaConfig.Consumer.Group.Rebalance.Strategy = sarama.BalanceStrategyRoundRobin

// Config.Version is a sarama.KafkaVersion, so the env string has to be parsed first.
kafkaVersion, err := sarama.ParseKafkaVersion(version)
if err != nil {
    log.Panicf("Error parsing Kafka version: %v", err)
}
kafkaConfig.Version = kafkaVersion

if username != "" && password != "" {
    kafkaConfig.Net.SASL.Enable = true
    kafkaConfig.Net.SASL.User = username
    kafkaConfig.Net.SASL.Password = password
}

client, err := sarama.NewConsumerGroup(strings.Split(brokers, ","), group, kafkaConfig)
if err != nil {
    log.Panicf("Error creating consumer group client: %v", err)
}
```
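Sarama's package-level debug logger is the quickest way to see which broker addresses the client actually dials. A minimal, self-contained sketch (only the sarama.Logger line matters; the import path was github.com/Shopify/sarama for v1.26.x and is github.com/IBM/sarama today):

```go
package main

import (
    "log"
    "os"

    "github.com/Shopify/sarama" // module path for v1.26.x; now github.com/IBM/sarama
)

func main() {
    // Route Sarama's internal debug output to stdout so `docker logs my-app`
    // shows every broker address the client tries to connect to.
    sarama.Logger = log.New(os.Stdout, "[sarama] ", log.LstdFlags)

    // ... build kafkaConfig and create the consumer group as in the excerpt above ...
}
```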

docker-compose.yml file:

```yaml
version: "3.7"

services:
  zookeeper:
    container_name: zookeeper
    image: zookeeper:3.6
    ports:
      - 2181:2181
    environment:
      ALLOW_ANONYMOUS_LOGIN: 1
      ZOOKEEPER_SERVER_ID: 1
    volumes:
      - /zookeeper/data:/data
      - /zookeeper/log:/datalog
    networks:
      - my-network
  kafka:
    container_name: kafka
    image: wurstmeister/kafka:2.12-2.4.1
    ports:
      - 9093:9093
    expose:
      - 9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LOG_DIRS: /logs
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9092,OUTSIDE://localhost:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://kafka:9092,OUTSIDE://0.0.0.0:9093
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
    volumes:
      - /kafka/data:/logs
      - /kafka/docker.sock:/var/run/docker.sock
    depends_on:
      - zookeeper
    networks:
      - my-network
  my-app:
    container_name: my-app
    image: my-app:3.0
    ports:
      - 9001:9090
    depends_on:
      - kafka
    environment:
      KAFKA_HOST: kafka
      KAFKA_PORT: 9092
      CONSUMER_GROUP: my-group
      TOPICS: reviews
      KAFKA_VERSION: 2.4.1
    networks:
      - my-network

networks:
  my-network:
    name: my-network
```
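With this listener layout, containers on my-network are expected to reach the broker through the INSIDE listener at kafka:9092, while the host machine uses localhost:9093. A hedged sketch of a quick reachability check that can be run from inside the my-app container, using only the standard library and the KAFKA_HOST/KAFKA_PORT variables defined above:

```go
package main

import (
    "fmt"
    "net"
    "os"
    "time"
)

func main() {
    host := os.Getenv("KAFKA_HOST") // "kafka" in the compose file above
    port := os.Getenv("KAFKA_PORT") // "9092"

    // Does the service name resolve inside my-network?
    addrs, err := net.LookupHost(host)
    fmt.Printf("lookup %s -> %v (err: %v)\n", host, addrs, err)

    // Can we open a TCP connection to the INSIDE listener?
    conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), 5*time.Second)
    if err != nil {
        fmt.Printf("dial failed: %v\n", err)
        return
    }
    defer conn.Close()
    fmt.Println("dial ok")
}
```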
Logs

When I set KAFKA_HOST: kafka in docker-compose.yml:


```
Error creating consumer group client: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
```

Problem Description

Sarama can't create a consumer group with that configuration. But if I set KAFKA_HOST to the Kafka container's IP address, it works well. So I think Sarama cannot accept brokers defined by hostname.
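One way to narrow this down: the address used for the first (bootstrap) connection and the addresses the broker advertises back in its metadata are two different things, and the client reconnects using the advertised ones. A hedged sketch that prints both, assuming the same env vars and Sarama version as the excerpt above:

```go
package main

import (
    "fmt"
    "log"
    "os"

    "github.com/Shopify/sarama" // now github.com/IBM/sarama
)

func main() {
    bootstrap := fmt.Sprintf("%s:%s", os.Getenv("KAFKA_HOST"), os.Getenv("KAFKA_PORT"))

    client, err := sarama.NewClient([]string{bootstrap}, sarama.NewConfig())
    if err != nil {
        log.Fatalf("bootstrap connection to %s failed: %v", bootstrap, err)
    }
    defer client.Close()

    // These are the advertised listener addresses the cluster reports back;
    // they must also be resolvable from inside the my-app container.
    for _, b := range client.Brokers() {
        fmt.Println("advertised broker:", b.Addr())
    }
}
```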

Please help, thank you.

NyanProgrammer commented 4 years ago

+1 using TLS, and inside a k8s cluster

ghost commented 3 years ago

Thank you for taking the time to raise this issue. However, it has not had any activity on it in the past 90 days and will be closed in 30 days if no updates occur. Please check if the master branch has already resolved the issue since it was raised. If you believe the issue is still valid and you would like input from the maintainers then please comment to ask for it to be reviewed.

josenavarrotbc commented 3 years ago

+1, using docker-compose in a similar way as documented in this ticket. Interestingly enough, the same error reproduces when implementing the Kafka consumer with confluent-kafka-go, but with the segmentio client the consumer is able to connect to the broker.

Also, with services running from a similar Docker Compose setup, we are able to run docker exec my-app ping kafka and the IP is resolved correctly.

ankurjha7 commented 3 years ago

+1, I am facing a similar issue. The app works fine on the local network as well as when deployed on a server, but when deployed on k8s it starts giving the error "cluster not reachable". Using TLS.

AlmogBaku commented 2 years ago

Has anyone succeeded with this? I've spent the last 2 days on this issue 🤦‍♂️

AlmogBaku commented 2 years ago

More information:

  1. I'm using this docker-compose.yml, and adding a shared network to the containers
  2. When trying to access the cluster from a container with a Java client, it works, e.g. docker run --network mynetwork -it taion809/kafka-cli:2.2.0 /bin/bash
  3. This is the full log of the sarama error:
    [sarama] 2021/10/13 15:09:25 client.go:138: Initializing new client
    [sarama] 2021/10/13 15:09:25 config.go:544: ClientID is the default of 'sarama', you should consider setting it to something application-specific.
    [sarama] 2021/10/13 15:09:25 config.go:544: ClientID is the default of 'sarama', you should consider setting it to something application-specific.
    [sarama] 2021/10/13 15:09:25 client.go:871: client/metadata fetching metadata for all topics from broker localhost:9092
    [sarama] 2021/10/13 15:09:25 broker.go:160: Failed to connect to broker localhost:9092: dial tcp [::1]:9092: connect: connection refused
    [sarama] 2021/10/13 15:09:25 client.go:914: client/metadata got error from broker -1 while fetching metadata: dial tcp [::1]:9092: connect: connection refused
    [sarama] 2021/10/13 15:09:25 client.go:925: client/metadata no available broker to send metadata request to
    [sarama] 2021/10/13 15:09:25 client.go:661: client/brokers resurrecting 1 dead seed brokers
    [sarama] 2021/10/13 15:09:25 client.go:855: client/metadata retrying after 250ms... (3 attempts remaining)
    [sarama] 2021/10/13 15:09:25 config.go:544: ClientID is the default of 'sarama', you should consider setting it to something application-specific.
    [sarama] 2021/10/13 15:09:25 client.go:871: client/metadata fetching metadata for all topics from broker localhost:9092
    [sarama] 2021/10/13 15:09:25 broker.go:160: Failed to connect to broker localhost:9092: dial tcp [::1]:9092: connect: connection refused
    [sarama] 2021/10/13 15:09:25 client.go:914: client/metadata got error from broker -1 while fetching metadata: dial tcp [::1]:9092: connect: connection refused
    [sarama] 2021/10/13 15:09:25 client.go:925: client/metadata no available broker to send metadata request to
    [sarama] 2021/10/13 15:09:25 client.go:661: client/brokers resurrecting 1 dead seed brokers
    [sarama] 2021/10/13 15:09:25 client.go:855: client/metadata retrying after 250ms... (2 attempts remaining)
    [sarama] 2021/10/13 15:09:25 config.go:544: ClientID is the default of 'sarama', you should consider setting it to something application-specific.
    [sarama] 2021/10/13 15:09:25 client.go:871: client/metadata fetching metadata for all topics from broker localhost:9092
    [sarama] 2021/10/13 15:09:25 broker.go:160: Failed to connect to broker localhost:9092: dial tcp [::1]:9092: connect: connection refused
    [sarama] 2021/10/13 15:09:25 client.go:914: client/metadata got error from broker -1 while fetching metadata: dial tcp [::1]:9092: connect: connection refused
    [sarama] 2021/10/13 15:09:25 client.go:925: client/metadata no available broker to send metadata request to
    [sarama] 2021/10/13 15:09:25 client.go:661: client/brokers resurrecting 1 dead seed brokers
    [sarama] 2021/10/13 15:09:25 client.go:855: client/metadata retrying after 250ms... (1 attempts remaining)
    [sarama] 2021/10/13 15:09:26 config.go:544: ClientID is the default of 'sarama', you should consider setting it to something application-specific.
    [sarama] 2021/10/13 15:09:26 client.go:871: client/metadata fetching metadata for all topics from broker localhost:9092
    [sarama] 2021/10/13 15:09:26 broker.go:160: Failed to connect to broker localhost:9092: dial tcp [::1]:9092: connect: connection refused
    [sarama] 2021/10/13 15:09:26 client.go:914: client/metadata got error from broker -1 while fetching metadata: dial tcp [::1]:9092: connect: connection refused
    [sarama] 2021/10/13 15:09:26 client.go:925: client/metadata no available broker to send metadata request to
    [sarama] 2021/10/13 15:09:26 client.go:661: client/brokers resurrecting 1 dead seed brokers
    [sarama] 2021/10/13 15:09:26 client.go:234: Closing Client
    Unable to get cluster admin: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
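Worth noting: the log above shows the client bootstrapping against localhost:9092 rather than the kafka service name, which cannot work from inside a separate container. A hedged sketch of a fail-fast guard (the env var name KAFKA_BROKERS is assumed and is not from the compose file earlier in the thread) that surfaces this kind of misconfiguration before Sarama burns through its retries:

```go
package main

import (
    "log"
    "os"
    "strings"
)

// brokerList reads the bootstrap addresses and refuses to silently fall back
// to localhost, which is effectively what the retry loop above ends up doing.
func brokerList() []string {
    raw := os.Getenv("KAFKA_BROKERS") // assumed name, e.g. "kafka:9092"
    if strings.TrimSpace(raw) == "" {
        log.Fatal("KAFKA_BROKERS is empty; refusing to default to localhost:9092")
    }
    return strings.Split(raw, ",")
}

func main() {
    log.Println("bootstrap brokers:", brokerList())
}
```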
AlmogBaku commented 2 years ago

I also tried resetting Docker to factory settings (macOS), and multiple configurations and Docker images for Kafka. It seems this problem only occurs with Sarama, and not with other clients.

This is really odd, since the code just parses the metadata response.

ankurjha7 commented 2 years ago

Yes, the Confluent Go Kafka client worked fine for this. If you are not using any admin commands you are better off with another library, or you may have to switch to a library in a different language if you need admin functions. It appears Sarama is not able to resolve hostnames within a cluster and hence is not able to connect.
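For reference, the "cluster admin" call from the failing log above is created like this; a minimal hedged sketch, with the broker address assumed to be the in-network service name from the compose setup earlier in the thread:

```go
package main

import (
    "fmt"
    "log"

    "github.com/Shopify/sarama" // now github.com/IBM/sarama
)

func main() {
    cfg := sarama.NewConfig()
    cfg.Version = sarama.V2_4_0_0 // ClusterAdmin requires Config.Version >= V0_10_0_0; match the 2.4.1 broker here

    // The admin client uses the same bootstrap/metadata path as the consumer
    // group, so it fails with the same "out of available brokers" error when
    // the advertised addresses are not reachable from this container.
    admin, err := sarama.NewClusterAdmin([]string{"kafka:9092"}, cfg)
    if err != nil {
        log.Fatalf("unable to get cluster admin: %v", err)
    }
    defer admin.Close()

    topics, err := admin.ListTopics()
    if err != nil {
        log.Fatalf("list topics: %v", err)
    }
    for name := range topics {
        fmt.Println("topic:", name)
    }
}
```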

AlmogBaku commented 2 years ago

my bad, it works.

github-actions[bot] commented 1 year ago

Thank you for taking the time to raise this issue. However, it has not had any activity on it in the past 90 days and will be closed in 30 days if no updates occur. Please check if the main branch has already resolved the issue since it was raised. If you believe the issue is still valid and you would like input from the maintainers then please comment to ask for it to be reviewed.

dnwe commented 1 year ago

It's not immediately obvious from the provided logs why you are hitting errors with the Kafka image and docker-compose pair that you mention. Sarama can certainly connect to Docker-based setups with a network defined; the FVT suite runs exactly this way. I'd recommend the docker-compose.yml and Dockerfile.kafka in the root folder of this repo as your starting point.

github-actions[bot] commented 9 months ago

Thank you for taking the time to raise this issue. However, it has not had any activity on it in the past 90 days and will be closed in 30 days if no updates occur. Please check if the main branch has already resolved the issue since it was raised. If you believe the issue is still valid and you would like input from the maintainers then please comment to ask for it to be reviewed.