Closed cpockrandt closed 11 months ago
Name and Version

bitnami/kafka:3.6.0-debian-11-r2

What architecture are you using?

amd64

What steps will reproduce the bug?
Deploy the chart with the following values:

controller:
  replicaCount: 3
kraft:
  enabled: true
extraConfig: |-
  auto.create.topics.enable=true
  authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
  super.users=User:${SASL_ADMIN_USER}
sasl:
  enabledMechanisms: PLAIN,SCRAM-SHA-512
  interBrokerMechanism: SCRAM-SHA-512
  controllerMechanism: PLAIN # NOTE: SCRAM-SHA-512 is not yet supported by KRaft
  client:
    users:
      - "${SASL_ADMIN_USER}"
      - "${AKHQ_USER}"
    passwords:
      - "${SASL_ADMIN_PASS}"
      - "${AKHQ_PASSWORD}"
externalAccess:
  enabled: true
  controller:
    service:
      loadBalancerNames:
        - name1
        - name2
        - name3
      loadBalancerIPs:
        - 10.0.0.63
        - 10.0.0.64
        - 10.0.0.65

The resulting Kafka configuration in the container looks like this:

# Listeners configuration
listeners=CLIENT://:9092,INTERNAL://:9094,EXTERNAL://:9095,CONTROLLER://:9093
advertised.listeners=CLIENT://foo.bar:9092,INTERNAL://foo.bar:9094,EXTERNAL://foo-ext.bar:9094
listener.security.protocol.map=CLIENT:SASL_PLAINTEXT,INTERNAL:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
# KRaft process roles
process.roles=controller,broker
node.id=0
controller.listener.names=CONTROLLER
controller.quorum.voters=0@foo1.bar:9093,1@foo2.bar:9093,2@foo3.bar:9093
# KRaft controller listener SASL settings
sasl.mechanism.controller.protocol=PLAIN
listener.name.controller.sasl.enabled.mechanisms=PLAIN
listener.name.controller.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="controller_user" password="very-secret" user_controller_user="very-secret";
log.dir=/bitnami/kafka/data
sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-512
# Inter-broker configuration
inter.broker.listener.name=INTERNAL
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
# Listeners SASL JAAS configuration
listener.name.client.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required user_admin="admin" user_akhq="very-secret3";
listener.name.client.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required;
listener.name.internal.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="inter_broker_user" password="very-secret2" user_inter_broker_user="very-secret2" user_admin="admin" user_akhq="very-secret3";
listener.name.internal.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="inter_broker_user" password="very-secret2";
listener.name.external.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required user_admin="admin" user_akhq="very-secret3";
listener.name.external.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required;
# End of SASL JAAS configuration
auto.create.topics.enable=true
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
super.users=User:admin

What is the expected behavior?

As soon as I added authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer, the container wouldn't start anymore.

What do you see instead?
Defaulted container "kafka" out of: kafka, jmx-exporter, kafka-init (init)
kafka 22:16:42.73 INFO  ==> 
kafka 22:16:42.73 INFO  ==> Welcome to the Bitnami kafka container
kafka 22:16:42.73 INFO  ==> Subscribe to project updates by watching https://github.com/bitnami/containers
kafka 22:16:42.74 INFO  ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
kafka 22:16:42.74 INFO  ==> 
kafka 22:16:42.74 INFO  ==> ** Starting Kafka setup **
kafka 22:16:42.86 INFO  ==> Initializing KRaft storage metadata
kafka 22:16:42.86 INFO  ==> Adding KRaft SCRAM users at storage bootstrap
kafka 22:16:42.88 INFO  ==> Formatting storage directories to add metadata...
Formatting /bitnami/kafka/data with metadata.version 3.6-IV2.
kafka 22:16:46.05 INFO  ==> ** Kafka setup finished! **
kafka 22:16:46.06 INFO  ==> ** Starting Kafka **
{"level": "ERROR", "timestamp": "2023-11-24 22:16:55.334", "logger": "kafka.server.ControllerApis", "thread": "data-plane-kafka-request-handler-2", "NDC": "", "message": "[ControllerApis nodeId=0] Unexpected error handling request RequestHeader(apiKey=VOTE, apiVersion=0, clientId=raft-client-1, correlationId=0, headerVersion=2) -- VoteRequestData(clusterId='bLXpjluIRFijcRwYddYWrw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1, candidateId=1, lastOffsetEpoch=0, lastOffset=0)])]) with context RequestContext(header=RequestHeader(apiKey=VOTE, apiVersion=0, clientId=raft-client-1, correlationId=0, headerVersion=2), connectionId='x.x.x.144:9093-x.x.x.145:33006-0', clientAddress=/x.x.x.145, principal=User:controller_user, listenerName=ListenerName(CONTROLLER), securityProtocol=SASL_PLAINTEXT, clientInformation=ClientInformation(softwareName=apache-kafka-java, softwareVersion=3.6.0), fromPrivilegedListener=false, principalSerde=Optional[org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder@6a79325e])"}
org.apache.kafka.common.errors.AuthorizerNotReadyException

Additional information

I omitted some minor details from the yaml file (such as log4j).

Setting super.users=User:controller_user;User:${SASL_ADMIN_USER} made it work.
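
For reference, this is roughly how the fix looks in the chart values, in the same extraConfig block as above. My reading of the failure (not confirmed by the Bitnami docs): StandardAuthorizer keeps its ACLs in the KRaft metadata log, and while it is still loading only principals listed in super.users are allowed through, so the controller principal (User:controller_user in the error above, the chart's default controller SASL user) has to be a super user as well:

extraConfig: |-
  auto.create.topics.enable=true
  authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
  # controller_user matches the principal shown in the error log above;
  # keep ${SASL_ADMIN_USER} so the admin client remains a super user too
  super.users=User:controller_user;User:${SASL_ADMIN_USER}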
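
Not part of the original report, but once the brokers come up with the extended super.users, a quick sanity check that the authorizer and the KRaft quorum are healthy could look like this (the bootstrap address and credentials are placeholders; the admin user authenticates over SCRAM-SHA-512 as configured above):

# client.properties (placeholder credentials)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="admin" \
  password="changeme";

# list ACLs; this only succeeds when an authorizer is configured
kafka-acls.sh --bootstrap-server foo.bar:9092 --command-config client.properties --list

# check the KRaft quorum status
kafka-metadata-quorum.sh --bootstrap-server foo.bar:9092 --command-config client.properties describe --status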