Closed hznuyyh closed 4 months ago
Hi!
Normally a "TLS handshake error" has to do with an incorrect or untrusted certificate setup. How did you create the certs?
I used the script mentioned in the README, kafka-generate-ssl.sh.
I set up the cluster according to the steps in the README, and the cluster runs normally now.
When I used the demo provided by Sarama, I was able to connect correctly and complete produce and consume. But when I use kafka-console-producer.sh, Kafka returns the above error.
Hi @hznuyyh ,
I can see two things in your docker-compose:
hostname: kafak.example.com (note the typo; the hostname should match the certificate's kafka.example.com)
If you are facing the issue with the upstream kafka image, you should probably ask for support there. In case it only happens with the bitnami image, please let us know.
You could also take a look at this post in case it helps.
Hi @dgomezleon, thank you for your reply.
kafka:
  image: 'kafka:3.7'
The image labels:
"Labels": {
  "com.vmware.cp.artifact.flavor": "sha256:c50c90cfd9d12b445b011e6ad529f1ad3daea45c26d20b00732fae3cd71f6a83",
  "org.opencontainers.image.base.name": "docker.io/bitnami/minideb:bookworm",
  "org.opencontainers.image.created": "2024-05-04T09:25:23Z",
  "org.opencontainers.image.description": "Application packaged by VMware, Inc",
  "org.opencontainers.image.documentation": "https://github.com/bitnami/containers/tree/main/bitnami/kafka/README.md",
  "org.opencontainers.image.licenses": "Apache-2.0",
  "org.opencontainers.image.ref.name": "3.7.0-debian-12-r4",
  "org.opencontainers.image.source": "https://github.com/bitnami/containers/tree/main/bitnami/kafka",
  "org.opencontainers.image.title": "kafka",
  "org.opencontainers.image.vendor": "VMware, Inc.",
  "org.opencontainers.image.version": "3.7.0"
},
I re-generated the certificate and tried again.
I used
openssl s_client -debug -connect kafka.example.com:9092 -tls1_2
to check the certificate. It returned:
Start Time: 1716953619
Timeout   : 7200 (sec)
Verify return code: 19 (self signed certificate in certificate chain)
Extended master secret: yes
It seems the broker is presenting my own self-signed cert; I think the problem is caused by this.
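For what it's worth, verify code 19 just means the chain ends in a CA that openssl does not trust by default. A self-contained way to see the same class of failure with a throwaway cert (the CN here is only illustrative):

```shell
# Create a throwaway self-signed certificate (illustrative CN).
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=kafka.example.com"

# Without a CA file, verification fails: the cert is self-signed.
openssl verify cert.pem || true

# Supplying the certificate itself as the CA makes verification pass,
# which is what handing the generated truststore to the client achieves.
openssl verify -CAfile cert.pem cert.pem   # prints: cert.pem: OK
```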
I tried adding *ssl.endpoint.identification.algorithm=* to consumer.properties:
I have no name!@kafka:/$ cat /opt/bitnami/kafka/config/consumer.properties
bootstrap.servers=localhost:9092
group.id=test-consumer-group
ssl.keystore.type=JKS
ssl.truststore.type=JKS
ssl.key.password=IQ65KHcr4VsS0TLO
ssl.keystore.location=/opt/bitnami/kafka/config/certs/kafka.keystore.jks
ssl.truststore.location=/opt/bitnami/kafka/config/certs/kafka.truststore.jks
ssl.keystore.password=IQ65KHcr4VsS0TLO
ssl.truststore.password=IQ65KHcr4VsS0TLO
ssl.endpoint.identification.algorithm=
But it did not take effect.
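(For a SASL_SSL listener, the console tools also need SASL settings in the properties file, not just the SSL ones. A sketch of a more complete consumer.properties; the username, password, and truststore password here are placeholders, not this cluster's real values:)

```shell
# Write a consumer.properties that covers both SSL and SASL settings.
# Credentials and paths below are placeholders.
cat > consumer.properties <<'EOF'
bootstrap.servers=kafka.example.com:9092
group.id=test-consumer-group
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="user" password="password";
ssl.truststore.type=JKS
ssl.truststore.location=/opt/bitnami/kafka/config/certs/kafka.truststore.jks
ssl.truststore.password=changeit
# Disable hostname verification only if the cert CN does not match the host:
ssl.endpoint.identification.algorithm=
EOF
```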
But it still does not take effect; the console consumer fails and the Kafka log shows the same errors:
I have no name!@kafka:/$ kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --consumer.config /opt/bitnami/kafka/config/consumer.properties
[2024-05-29 03:39:43,744] WARN [Consumer clientId=console-consumer, groupId=test-consumer-group] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2024-05-29 03:39:43,920] WARN [Consumer clientId=console-consumer, groupId=test-consumer-group] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2024-05-29 03:39:44,189] WARN [Consumer clientId=console-consumer, groupId=test-consumer-group] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2024-05-29 03:39:44,510] WARN [Consumer clientId=console-consumer, groupId=test-consumer-group] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2024-05-29 03:39:45,057] WARN [Consumer clientId=console-consumer, groupId=test-consumer-group] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2024-05-29 03:28:05,010] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /127.0.0.1 (channelId=127.0.0.1:9092-127.0.0.1:49458-5) (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-05-29 03:28:06,024] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /127.0.0.1 (channelId=127.0.0.1:9092-127.0.0.1:49474-5) (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-05-29 03:28:07,050] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /127.0.0.1 (channelId=127.0.0.1:9092-127.0.0.1:49482-5) (SSL handshake failed) (org.apache.kafka.common.network.Selector)
The steps I used to generate my certificate are as follows:
1. Ran bash kafka-generate-ssl.sh
2. Entered *password* at every password prompt
3. Typed *y* at every yes/no prompt
4. Pressed Enter for the other prompts
5. Set common name and first/last name to *kafka.example.com*
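The interactive steps above can be sketched as keytool commands like these (the aliases, validity, and filenames are my assumptions, not the script's exact values):

```shell
PASS=password
DNAME="CN=kafka.example.com, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown"

# Broker key pair; the CN must match the hostname clients connect to.
keytool -genkeypair -keystore kafka.keystore.jks -alias kafka -keyalg RSA \
  -validity 365 -storepass "$PASS" -keypass "$PASS" -dname "$DNAME"

# Export the self-signed certificate...
keytool -exportcert -rfc -keystore kafka.keystore.jks -alias kafka \
  -storepass "$PASS" -file kafka.crt

# ...and import it into the truststore so clients will trust it.
keytool -importcert -noprompt -keystore kafka.truststore.jks -alias kafka \
  -storepass "$PASS" -file kafka.crt
```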
Hi @hznuyyh ,
I noticed that KAFKA_CLIENT_LISTENER_NAME=SASL_SSL is missing in your docker-compose.yaml (we will update the README.md with this). Also, note that KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=SCRAM-SHA-512 is duplicated (but this is not taking effect).
This docker-compose.yaml worked for me:
version: '2'
services:
  kafka:
    image: 'bitnami/kafka:3.7'
    hostname: kafka.example.com
    ports:
      - '9092:9092'
    environment:
      # KRaft
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka.example.com:9093
      # Listeners
      - KAFKA_CFG_LISTENERS=CLIENT://:9092,CONTROLLER://:9093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:SASL_PLAINTEXT,CLIENT:SASL_SSL
      - KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://:9092
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT
      - KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
      # SSL
      - KAFKA_CERTIFICATE_PASSWORD=my_pass
      - KAFKA_TLS_TYPE=JKS
      - KAFKA_CFG_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=
      # SASL
      - KAFKA_CLIENT_USERS=user
      - KAFKA_CLIENT_PASSWORDS=password
      - KAFKA_CLIENT_LISTENER_NAME=SASL_SSL
      - KAFKA_CONTROLLER_USER=controller_user
      - KAFKA_CONTROLLER_PASSWORD=controller_password
      - KAFKA_INTER_BROKER_USER=controller_user
      - KAFKA_INTER_BROKER_PASSWORD=controller_password
    volumes:
      - './certs/kafka.keystore.jks:/opt/bitnami/kafka/config/certs/kafka.keystore.jks:ro'
      - './certs/kafka.truststore.jks:/opt/bitnami/kafka/config/certs/kafka.truststore.jks:ro'
I hope it helps
@dgomezleon
Thank you very much. It worked. This parameter must also be specified when kafka-console-consumer.sh is used:
--bootstrap-server kafka.example.com:9092
I think this is related to the CommonName specified when the certificate was generated. In any case, setting KAFKA_CLIENT_LISTENER_NAME=SASL_SSL is a valid fix.
Beyond that, I don't quite understand what KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=SCRAM-SHA-512 does. It seems to apply to port 9093, but when I set it to SCRAM-SHA-512 the Kafka cluster won't start properly; it has to be set to PLAIN.
I see there is another issue #41415; I'm not sure if it is the same problem.
I'm glad it worked for you.
As my mate @migruiz4 mentioned here, SCRAM is not supported.
Hi @hznuyyh,
As mentioned in #41415 , controller-to-controller communications do not support SCRAM at the moment.
This issue has been reported to the Kafka upstream project, so meanwhile only PLAIN mechanism can be used.
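In the compose file, that means keeping the controller listener mechanism on PLAIN (this is the same line the working docker-compose.yaml above uses):

```yaml
environment:
  # Controller-to-controller traffic does not support SCRAM yet,
  # so this must stay PLAIN until the upstream Kafka issue is resolved:
  - KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
```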
@migruiz4 @dgomezleon Thank you very much, I understand. This issue can be closed.
Name and Version
bitnami/kafka:3.7
What architecture are you using?
amd64
What steps will reproduce the bug?
Use the bitnami/kafka:3.7 image with Docker Compose.
Docker compose configuration:
Kafka is running.
I used the Sarama Go client to connect; it worked and returned a success message to me.
But when I use kafka-console-producer.sh, it returns:
Kafka Log:
Looking at the file /opt/bitnami/kafka/config/producer.properties, I'm not sure if the problem is caused by the absence of the KAFKA_CLIENT_USERS and KAFKA_CLIENT_PASSWORDS settings.
And here is my Sarama Go client demo:
Sarama source
Also, note that in lines 98-106 of the demo, the SCRAM-SHA-512 algorithm does not take effect. I have looked at other issues and others have encountered this problem.
What is the expected behavior?
The ability to connect directly to the Kafka cluster using the scripts inside the container.
What do you see instead?
Here is my full config: