levissfuture closed this issue 1 year ago.
Hi,
Could you enter one of the containers and check the rendered configuration? To do so, you could override the entrypoint and cmd with a sleep, and then run inside the container: /opt/bitnami/scripts/kafka/entrypoint.sh /opt/bitnami/scripts/kafka/run.sh
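For example, with docker-compose the override could look like this (a sketch; the service name and image tag are taken from this thread and may differ in your setup):

  kafka-1:
    image: bitnami/kafka:3.3.2-debian-11-r4
    entrypoint: ["sleep"]
    command: ["infinity"]

Then, from the host, start Kafka manually and inspect the rendered file (in the Bitnami image it should end up at /opt/bitnami/kafka/config/server.properties):

docker exec -it kafka-1 /opt/bitnami/scripts/kafka/entrypoint.sh /opt/bitnami/scripts/kafka/run.sh
docker exec -it kafka-1 cat /opt/bitnami/kafka/config/server.properties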
Do you mean check server.properties? Here it is:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This configuration file is intended for use in ZK-based mode, where Apache ZooKeeper is required.
# See kafka.server.KafkaConfig for additional details and defaults
#
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
############################# Socket Server Settings #############################
# The address the socket server listens on. If not configured, the host name will be equal to the value of
# java.net.InetAddress.getCanonicalHostName(), with PLAINTEXT listener name, and port 9092.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
listeners=CONTROLLER://:9093,EXTERNAL://:9092,INTERNAL://:29092
# Listener name, hostname and port the broker will advertise to clients.
# If not set, it uses the value for "listeners".
advertised.listeners=INTERNAL://kafka-1:29092,EXTERNAL://kafka-01.domain.local:9092
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=CONTROLLER:PLAINTEXT,EXTERNAL:SASL_SSL,INTERNAL:PLAINTEXT
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/bitnami/kafka/data
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
log.retention.bytes=-1
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000
############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
allow.everyone.if.no.acl.found=true
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
auto.create.topics.enable=True
auto.leader.rebalance.enable=True
broker.rack=DC1
compression.type=producer
controller.listener.names=CONTROLLER
controller.quorum.voters=1@kafka-1:9093,2@kafka-2:9093,3@kafka-3:9093
default.replication.factor=1
early.start.listeners=CONTROLLER
inter.broker.listener.name=INTERNAL
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
log.roll.hours=168
max.partition.fetch.bytes=1048576
max.request.size=1048576
metadata.log.dir=/bitnami/kafka/metadata/
min.insync.replicas=1
process.roles=broker,controller
sasl.enabled.mechanisms=PLAIN
sasl.inter.broker.protocol=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
security.protocol=SASL_SSL
ssl.keystore.type=PEM
ssl.key.password=somepassword
super.users=User:CN=user;
ssl.truststore.type=PEM
ssl.keystore.key=keyinformation
Hi @levissfuture,
I can see allow.everyone.if.no.acl.found=true is set.
Did you try other versions from the 3.3.x branch? The property still seems to be valid (docs).
I can confirm that the issue happens on all 3.3.x releases and on 3.4.0. On 3.2.3 everything works fine.
edit:
set KAFKA_CFG_SUPER_USERS: "User:ANONYMOUS"
Thanks for the feedback. I have forwarded this to the team, since we plan to work on KRaft support now that the feature is becoming production-ready. We will update this ticket when there are updates on this.
I can confirm this issue with 3.4.0 using KRaft mode. The setup works perfectly fine until StandardAuthorizer is enabled and super.users is set with the CN of the hosts.
I tried two ways of getting the CN:
1. Looking in the logs and copy-pasting the principal I found there (unescaped, with spaces).
2. Running "openssl x509 -noout -in <certificate> -subject" and copy-pasting the escaped version of the CN from there (using the file I used to populate the keystore and truststore).
Maybe try with: openssl x509 -in /path/to/your/certificate.pem -noout -subject -nameopt RFC2253
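On a hypothetical certificate, that prints the exact RFC 2253 form of the subject to copy into super.users, for example:

openssl x509 -in /path/to/your/certificate.pem -noout -subject -nameopt RFC2253
# subject=CN=kafka-01.domain.local,O=Example   (illustrative output)
# -> super.users=User:CN=kafka-01.domain.local,O=Example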
I got the same error in Kubernetes after I set authorizerClassName: "org.apache.kafka.metadata.authorizer.StandardAuthorizer". If I set it to empty, there are no errors anymore.
image:
  registry: docker.io
  repository: bitnami/kafka
  tag: 3.4.1-debian-11-r4
  pullPolicy: IfNotPresent
  debug: true
authorizerClassName: "org.apache.kafka.metadata.authorizer.StandardAuthorizer"
allowEveryoneIfNoAclFound: true
auth:
  clientProtocol: tls
  externalClientProtocol: tls
  interBrokerProtocol: tls
  controllerProtocol: tls
  sasl:
    mechanisms: plain,scram-sha-256,scram-sha-512
    interBrokerMechanism: plain
    jaas:
      clientUsers:
        - user
      clientPasswords: []
      interBrokerUser: admin
      interBrokerPassword: ""
      zookeeperUser: ""
      zookeeperPassword: ""
      existingSecret: ""
  tls:
    type: pem
    pemChainIncluded: false
    existingSecrets: []
    autoGenerated: true
    endpointIdentificationAlgorithm: https
Hi there!
I have been investigating this issue, and I think I have been able to narrow it down to controller_user having to be included in the super.users list. In my case, I was able to fix the issue by setting super.users to something similar to:
super.users=User:controller_user,User:admin;
Otherwise, the Controller process does not have permission to write to the __cluster_metadata topic, although allow.everyone.if.no.acl.found=true is set:
out-kafka-2-1 | [2023-06-21 08:59:40,044] ERROR [ControllerApis nodeId=2] Unexpected error handling request RequestHeader(apiKey=VOTE, apiVersion=0, clientId=raft-client-1, correlationId=68, headerVersion=2) -- VoteRequestData(clusterId='abcdefghijklmnopqrstug', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1, candidateId=1, lastOffsetEpoch=0, lastOffset=0)])]) with context RequestContext(header=RequestHeader(apiKey=VOTE, apiVersion=0, clientId=raft-client-1, correlationId=68, headerVersion=2), connectionId='172.29.0.3:9093-172.29.0.4:54942-0', clientAddress=/172.29.0.4, principal=User:controller_user, listenerName=ListenerName(CONTROLLER), securityProtocol=SASL_SSL, clientInformation=ClientInformation(softwareName=apache-kafka-java, softwareVersion=3.4.1), fromPrivilegedListener=false, principalSerde=Optional[org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder@762e532c]) (kafka.server.ControllerApis)
This may be a bug in Kafka itself and not in the container image.
I'm aware that the current bitnami/kafka image does not have built-in support for SASL on the Controller listener; this was tested with a current WIP version of the image which includes some major changes and new features.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
It is not a Kafka issue. I switched to the confluentinc Kafka image and enabled KRaft ACLs. I did not get any errors and the cluster started successfully.
Hi @post-human-world,
Could you please provide more details? What properties is your Kafka server running with?
Our image does not contain any logic related to ACLs; env variables such as KAFKA_CFG_SUPER_USERS are directly mapped to Kafka server properties such as super.users.
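For example, the mapping is mechanical: KAFKA_CFG_SUPER_USERS becomes super.users, and KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND becomes allow.everyone.if.no.acl.found (strip the KAFKA_CFG_ prefix, lowercase, underscores to dots).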
I don't have deep knowledge of Kafka, but when using the following deployment:
version: "2"
services:
  kafka-0:
    image: bitnami/kafka:3.5
    ports:
      - "9092"
    environment:
      # KRaft settings
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka-0:9093,1@kafka-1:9093,2@kafka-2:9093
      - KAFKA_KRAFT_CLUSTER_ID=abcdefghijklmnopqrstuv
      # Listeners
      - KAFKA_CFG_LISTENERS=SASL://:9092,CONTROLLER://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=SASL://:9092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=SASL:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=SASL
      # SASL
      - KAFKA_CLIENT_USERS=user
      - KAFKA_CLIENT_PASSWORDS=password
      - KAFKA_CLIENT_LISTENER_NAME=SASL
      - KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
      - KAFKA_CONTROLLER_USER=controller_user
      - KAFKA_CONTROLLER_PASSWORD=controller_password
      - KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
      - KAFKA_INTER_BROKER_USER=inter_broker_user
      - KAFKA_INTER_BROKER_PASSWORD=inter_broker_password
      # ACL
      - KAFKA_CFG_SUPER_USERS=User:user;
      - KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND="true"
      - KAFKA_CFG_AUTHORIZER_CLASS_NAME=org.apache.kafka.metadata.authorizer.StandardAuthorizer
      - KAFKA_CFG_EARLY_START_LISTENERS=CONTROLLER
    volumes:
      - kafka_0_data:/bitnami/kafka
  kafka-1:
    image: bitnami/kafka:3.5
    ports:
      - "9092"
    environment:
      # KRaft settings
      - KAFKA_CFG_NODE_ID=1
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka-0:9093,1@kafka-1:9093,2@kafka-2:9093
      - KAFKA_KRAFT_CLUSTER_ID=abcdefghijklmnopqrstuv
      # Listeners
      - KAFKA_CFG_LISTENERS=SASL://:9092,CONTROLLER://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=SASL://:9092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=SASL:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=SASL
      # SASL
      - KAFKA_CLIENT_USERS=user
      - KAFKA_CLIENT_PASSWORDS=password
      - KAFKA_CLIENT_LISTENER_NAME=SASL
      - KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
      - KAFKA_CONTROLLER_USER=controller_user
      - KAFKA_CONTROLLER_PASSWORD=controller_password
      - KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
      - KAFKA_INTER_BROKER_USER=inter_broker_user
      - KAFKA_INTER_BROKER_PASSWORD=inter_broker_password
      # ACL
      - KAFKA_CFG_SUPER_USERS=User:user;
      - KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND="true"
      - KAFKA_CFG_AUTHORIZER_CLASS_NAME=org.apache.kafka.metadata.authorizer.StandardAuthorizer
      - KAFKA_CFG_EARLY_START_LISTENERS=CONTROLLER
    volumes:
      - kafka_1_data:/bitnami/kafka
  kafka-2:
    image: bitnami/kafka:3.5
    ports:
      - "9092"
    environment:
      # KRaft settings
      - KAFKA_CFG_NODE_ID=2
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka-0:9093,1@kafka-1:9093,2@kafka-2:9093
      - KAFKA_KRAFT_CLUSTER_ID=abcdefghijklmnopqrstuv
      # Listeners
      - KAFKA_CFG_LISTENERS=SASL://:9092,CONTROLLER://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=SASL://:9092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=SASL:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=SASL
      # SASL
      - KAFKA_CLIENT_USERS=user
      - KAFKA_CLIENT_PASSWORDS=password
      - KAFKA_CLIENT_LISTENER_NAME=SASL
      - KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
      - KAFKA_CONTROLLER_USER=controller_user
      - KAFKA_CONTROLLER_PASSWORD=controller_password
      - KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
      - KAFKA_INTER_BROKER_USER=inter_broker_user
      - KAFKA_INTER_BROKER_PASSWORD=inter_broker_password
      # ACL
      - KAFKA_CFG_SUPER_USERS=User:user;
      - KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND="true"
      - KAFKA_CFG_AUTHORIZER_CLASS_NAME=org.apache.kafka.metadata.authorizer.StandardAuthorizer
      - KAFKA_CFG_EARLY_START_LISTENERS=CONTROLLER
    volumes:
      - kafka_2_data:/bitnami/kafka
volumes:
  kafka_0_data:
    driver: local
  kafka_1_data:
    driver: local
  kafka_2_data:
    driver: local
The Kafka KRaft cluster will fail with the following error:
[2023-08-01 07:06:22,946] ERROR [ControllerApis nodeId=0] Unexpected error handling request RequestHeader(apiKey=VOTE, apiVersion=0, clientId=raft-client-2, correlationId=2879, headerVersion=2) -- VoteRequestData(clusterId='abcdefghijklmnopqrstug', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=23, candidateId=2, lastOffsetEpoch=0, lastOffset=0)])]) with context RequestContext(header=RequestHeader(apiKey=VOTE, apiVersion=0, clientId=raft-client-2, correlationId=2879, headerVersion=2), connectionId='172.25.0.2:9093-172.25.0.4:53688-0', clientAddress=/172.25.0.4, principal=User:controller_user, listenerName=ListenerName(CONTROLLER), securityProtocol=SASL_PLAINTEXT, clientInformation=ClientInformation(softwareName=apache-kafka-java, softwareVersion=3.5.0), fromPrivilegedListener=false, principalSerde=Optional[org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder@68ee6739]) (kafka.server.ControllerApis)
But, as previously mentioned, by adding the Controller SASL user to the list of super.users, the cluster is deployed successfully.
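For reference, in the compose file above that change would be a one-line sketch (using the controller user configured there):

- KAFKA_CFG_SUPER_USERS=User:user;User:controller_user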
Actually, I did not fully use the confluentinc Kafka image; I rewrote the startup script so that I can pass some unique settings or files. It may be a bit complicated, but this way I avoided several bugs that occur with the default startup script.
I successfully ran this manifest under minikube.
Servers
Configs
@migruiz4 This is almost my full test example; you can try it.
Hi @post-human-world,
But in your server.properties file you have added the controller user to the list of super.users, which is the fix I suggested to OP:
super.users=User:admin;User:interbroker;User:controller;User:superuser
I mentioned it could be a Kafka bug because the documentation does not mention it.
Therefore, we cannot tell whether this is a workaround for a bug (later versions of Kafka may make the Controller listener privileged, as fromPrivilegedListener=false may suggest), or whether adding the controller user to the list of super.users is a permanent KRaft+ACL requirement.
Configuring super.users is not enough to make them pass authentication; listener.name.inter_broker.plain.sasl.jaas.config and listener.name.controller.plain.sasl.jaas.config are also necessary. They can use the same username for authentication, e.g. all of them can use admin.
https://kafka.apache.org/documentation/#security_jaas_broker
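For illustration, a sketch of such per-listener JAAS properties (the listener names and credentials here are placeholders, not taken from this thread):

listener.name.controller.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" \
  password="admin-secret" \
  user_admin="admin-secret";
listener.name.internal.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" \
  password="admin-secret" \
  user_admin="admin-secret";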
Our images configure JAAS based on the input env variables KAFKA_USERS/KAFKA_PASSWORDS, KAFKA_INTER_BROKER_USER/KAFKA_INTER_BROKER_PASSWORD, and KAFKA_CONTROLLER_USER/KAFKA_CONTROLLER_PASSWORD.
In previous versions, our images created a kafka_jaas.conf file containing the JAAS configuration, but in the latest release we switched to a listener.name.<name>.<mechanism>.sasl.jaas.config approach.
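For reference, the old file-based form looked roughly like this kafka_jaas.conf (a sketch with placeholder credentials):

KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret"
  user_admin="admin-secret";
};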
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Using Bitnami Kafka Helm chart 23.0, Kafka 3.5.0.
I have set KAFKA_CFG_AUTHORIZER_CLASS_NAME=org.apache.kafka.metadata.authorizer.StandardAuthorizer and KAFKA_CFG_SUPER_USERS=User:user;User:controller_user;User:ANONYMOUS.
I have added one user "test_producer" to a topic "test-acl" with "Read" permission, but I am still able to write to the "test-acl" topic with the test_producer user, and it works for any user.
Any fix for this? I am not seeing any errors in the broker logs.
Could you please create a separate issue for that describing your specific use case? Thanks
EDIT: Nevermind, I just saw https://github.com/bitnami/charts/issues/18997
Hi everyone, if you are trying to deploy Confluent-based Kafka (7.6.1, the latest) with ACLs over SASL_PLAINTEXT, here is the working solution (it works for the Bitnami image too; just rename the relevant env vars to the KAFKA_CFG_ prefixed form):
kafka1:
  image: confluentinc/cp-kafka:${CONFLUENT_VERSION}
  hostname: kafka1
  container_name: kafka1
  user: 0:0 # for volume binding permissions, because by default /var/docker_data is not writable
  ports:
    - "9094:9094"
    - "9092:9092"
    - "9997:9997"
  volumes:
    - "/var/docker_data/kafka1_data:/var/lib/kafka/data"
    - "./etc/secrets/:/etc/kafka/jaas/"
  environment:
    # KRaft settings
    TZ: "Asia/Tashkent"
    KAFKA_NODE_ID: 1
    CLUSTER_ID: 'ciWo7IWazngRchmPES6q5A=='
    KAFKA_KRAFT_MODE: "true"
    KAFKA_PROCESS_ROLES: broker,controller
    KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka1:9093
    # Listeners
    KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
    KAFKA_INTER_BROKER_LISTENER_NAME: 'SASL_PLAINTEXT'
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:SASL_PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_HOST:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT # when I was having errors my Controller was on PLAINTEXT -> converted it to SASL_PLAINTEXT
    KAFKA_LISTENERS: CONTROLLER://kafka1:9093,SASL_PLAINTEXT://kafka1:29092,SASL_HOST://:9092,EXTERNAL://:9094
    KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://kafka1:29092,SASL_HOST://172.23.0.3:9092,EXTERNAL://192.168.100.161:9094 # here 172.23.0.3 is my broker container IP
    KAFKA_JMX_PORT: 9997
    KAFKA_JMX_HOSTNAME: localhost
    # SASL
    KAFKA_AUTHORIZER_CLASS_NAME: org.apache.kafka.metadata.authorizer.StandardAuthorizer
    KAFKA_SASL_ENABLED_MECHANISMS: 'PLAIN'
    KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: 'PLAIN'
    KAFKA_SASL_MECHANISM_CONTROLLER_PROTOCOL: 'PLAIN' # added this line
    KAFKA_SECURITY_PROTOCOL: 'SASL_PLAINTEXT'
    # ACL
    KAFKA_SUPER_USERS: User:admin,User:controller;User:ANONYMOUS # added the last two users; I am not sure whether User:ANONYMOUS is necessary, I just left it here because the error logs showed some ANONYMOUS user issues too
    KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: 'true' # added this line
    KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/jaas/kafka_server_jaas.conf"
    KAFKA_EARLY_START_LISTENERS: CONTROLLER # added this line; not sure if it is also necessary
    # SETTINGS
    KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
    KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
    KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
    KAFKA_AUTO_CREATE_TOPICS_ENABLE: true
    # KAFKA_HEAP_OPTS: ${KAFKA_BROKER_HEAP_OPTS}
  # deploy:
  #   resources:
  #     limits:
  #       memory: ${KAFKA_BROKER_MEM_LIMIT}
# .env file to pass $ variables:
# Sets environment variables used in docker-compose.yml
# Set to specific version needed
CONFLUENT_VERSION=7.6.1
# Limit JVM Heap Size
KAFKA_BROKER_HEAP_OPTS="-XX:MaxRAMPercentage=70.0"
# Limit container resources
KAFKA_BROKER_MEM_LIMIT=512m
# ./etc/secrets/kafka_server_jaas.conf file:
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin159"
  user_controller="controller159"
  user_admin="admin159";
};
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  user_admin="admin159";
};
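For completeness, a client could then connect with a properties file along these lines (a sketch; the bootstrap address matches the EXTERNAL listener above and the credentials come from the JAAS file):

# client.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin159";

kafka-console-producer --bootstrap-server 192.168.100.161:9094 --topic test --producer.config client.properties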
Name and Version
bitnami/kafka:3.3.2-debian-11-r4
What steps will reproduce the bug?
x-kafka-deploy: &kafka-deploy
  mode: replicated
  replicas: 1
  update_config:
    parallelism: 1
    order: stop-first
    failure_action: rollback
    delay: 10s
  rollback_config:
    parallelism: 1
    order: stop-first
  restart_policy:
    condition: any
    delay: 10s
    window: 30s

x-kafka-service: &kafka-service
  networks:

x-kafka-evn: &kafka-env
  ALLOW_PLAINTEXT_LISTENER: "yes"
  BITNAMI_DEBUG: "true"
  KAFKA_ENABLE_KRAFT: "yes"
  KAFKA_KRAFT_CLUSTER_ID: "MFdlOAZlYmM9YzE2NDQxZA"
  KAFKA_CFG_PROCESS_ROLES: "broker,controller"
  KAFKA_CFG_CONTROLLER_LISTENER_NAMES: "CONTROLLER"
  KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: "CONTROLLER:PLAINTEXT,EXTERNAL:SASL_SSL,INTERNAL:PLAINTEXT"
  KAFKA_CFG_INTER_BROKER_LISTENER_NAME: "INTERNAL"
  KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: "1@kafka-1:9093,2@kafka-2:9093,3@kafka-3:9093"
  KAFKA_CFG_METADATA_LOG_DIR: "/bitnami/kafka/metadata/"
  KAFKA_CFG_LISTENERS: "CONTROLLER://:9093,EXTERNAL://:9092,INTERNAL://:29092"
  KAFKA_CFG_SECURITY_PROTOCOL: "SASL_SSL"
  KAFKA_CFG_SASL_ENABLED_MECHANISMS: "PLAIN"
  KAFKA_CFG_SASL_INTER_BROKER_PROTOCOL: "PLAIN"
  KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL: "PLAIN"
  KAFKA_CFG_SUPER_USERS: 'User:CN=user;'
  KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
  KAFKA_CFG_AUTHORIZER_CLASS_NAME: "org.apache.kafka.metadata.authorizer.StandardAuthorizer"
  KAFKA_CFG_EARLY_START_LISTENERS: "CONTROLLER"
  KAFKA_CFG_SSL_KEYSTORE_TYPE: "PEM"
  KAFKA_TLS_TYPE: "PEM"
  KAFKA_TLS_CLIENT_AUTH: "requested"

services:
  kafka-1:
    image: bitnami/kafka:3.3.2-debian-11-r4
    deploy:
      <<: *kafka-deploy
      placement:
        constraints:
          - node.labels.kafka_id_node == 1
    environment:
      KAFKA_BROKER_ID: "1"
      KAFKA_CFG_BROKER_RACK: "DC1"
      KAFKA_CFG_ADVERTISED_LISTENERS: "INTERNAL://kafka-1:29092,EXTERNAL://kafka-01.domain.local:9092"
      KAFKA_CFG_SSL_KEY_PASSWORD: "somepassword"
      <<: *kafka-env
    <<: *kafka-service
  kafka-2:
    image: bitnami/kafka:3.3.2-debian-11-r4
    deploy:
      <<: *kafka-deploy
      placement:
        constraints:
          - node.labels.kafka_id_node == 2
    environment:
      KAFKA_BROKER_ID: "2"
      KAFKA_CFG_BROKER_RACK: "DC1"
      KAFKA_CFG_ADVERTISED_LISTENERS: "INTERNAL://kafka-2:29092,EXTERNAL://kafka-02.domain.local:9092"
      KAFKA_CFG_SSL_KEY_PASSWORD: "somepassword"
      <<: *kafka-env
    <<: *kafka-service
  kafka-3:
    image: bitnami/kafka:3.3.2-debian-11-r4
    deploy:
      <<: *kafka-deploy
      placement:
        constraints:
[2023-02-07 14:25:27,115] ERROR [RaftManager nodeId=1] Unexpected error UNKNOWN_SERVER_ERROR in VOTE response: InboundResponse(correlationId=640, data=VoteResponseData(errorCode=-1, topics=[]), sourceId=2) (org.apache.kafka.raft.KafkaRaftClient)
[2023-02-07 14:25:27,115] ERROR [ControllerApis nodeId=1] Unexpected error handling request RequestHeader(apiKey=VOTE, apiVersion=0, clientId=raft-client-2, correlationId=1572) -- VoteRequestData(clusterId='MFdlOAZlYmM9YzE2NDQxZA', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=29, candidateId=2, lastOffsetEpoch=11, lastOffset=12)])]) with context RequestContext(header=RequestHeader(apiKey=VOTE, apiVersion=0, clientId=raft-client-2, correlationId=1572), connectionId='10.0.1.100:9093-10.0.1.93:44994-0', clientAddress=/10.0.1.93, principal=User:ANONYMOUS, listenerName=ListenerName(CONTROLLER), securityProtocol=PLAINTEXT, clientInformation=ClientInformation(softwareName=apache-kafka-java, softwareVersion=3.3.2), fromPrivilegedListener=false, principalSerde=Optional[org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder@141f7b8e]) (kafka.server.ControllerApis)
What is the expected behavior?
I expect the cluster to start, and that I will then be able to create ACLs for topics.
What do you see instead?
The cluster does not start, due to the error shown in the reproduction steps above.
Additional information
I think the problem is that this option does not work: KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true". In bitnami/kafka:3.3.2-debian-11-r5 the problem is the same. In bitnami/kafka:3.2.3-debian-11-r48 everything works well.